Real-time flow analytics on VyOS

VyOS with Host sFlow agent described the streaming sFlow telemetry support added to the open source VyOS router operating system. This article shows how to install sFlow analytics software on a VyOS router by configuring a container.
vyos@vyos:~$ add container image sflow/ddos-protect
First, download the sflow/ddos-protect image.
vyos@vyos:~$ mkdir -m 777 /config/sflow-rt
Create a directory to store persistent container state.
set container name sflow-rt image sflow/ddos-protect
set container name sflow-rt allow-host-networks
set container name sflow-rt arguments '-Dhttp.hostname=10.0.0.240'
set container name sflow-rt environment RTMEM value 200M
set container name sflow-rt memory 0
set container name sflow-rt volume store source /config/sflow-rt
set container name sflow-rt volume store destination /sflow-rt/store
Configure a container to run the image. The RTMEM environment variable limits the amount of memory that the container will use to 200M bytes. The -Dhttp.hostname argument sets the internal web server to listen on the management address, 10.0.0.240, assigned to eth0 on this router. The container has no built-in authentication, so access needs to be limited using an ACL or a reverse proxy - see Download and install.
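
Once the container is up, a quick way to confirm the analytics engine is reachable is to query its REST API (a sketch; this assumes sFlow-RT's default HTTP port, 8008):

curl http://10.0.0.240:8008/version
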
set system sflow interface eth0
set system sflow interface eth1
set system sflow interface …

Continue reading
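
The excerpt is cut off mid-command. For reference, the remaining sFlow settings presumably point the samples at the local sFlow-RT container, along these lines (an assumption based on the VyOS 1.4 CLI, not taken from the article):

set system sflow server 127.0.0.1 port 6343
set system sflow sampling-rate 1000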

What’s New: Cloud Automation with amazon.cloud 0.3.0

Last year, we made available an experimental alpha Ansible Content Collection of generated modules that use the AWS Cloud Control API to interact with AWS services. Although the Collection is not yet intended for production, we are constantly working to improve and extend its functionality, with the goal of making it fully supported in the future.

In this blog post, we will go over what has changed and highlight what’s new in the 0.3.0 release of this Ansible Content Collection.

Forward-looking Changes

Much of our work in release 0.3.0 focused on releasing several new enhancements, clarifying supportability policies, and extending the automation umbrella by generating new modules. Let’s deep dive into it!

New boto3/botocore Versioning

The amazon.cloud Collection has dropped support for botocore<1.28.0 and boto3<1.25.0. Most modules will continue to work with older versions of the AWS Software Development Kit (SDK); however, compatibility with older versions is not guaranteed and will not be tested.
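
For example, the new SDK floor can be satisfied with a pip upgrade (a minimal sketch; adjust for your Python environment):

python -m pip install --upgrade 'boto3>=1.25.0' 'botocore>=1.28.0'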

New Ansible Support Policy

This Collection release drops support for ansible-core<2.11. In particular, Ansible Core 2.10 and Ansible 2.9 are not supported. For more information, visit the Ansible release documentation.
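
Before upgrading, it is worth confirming the control node meets the new floor and pinning the Collection release (a sketch; the version-pin syntax assumes a recent ansible-galaxy):

ansible --version
ansible-galaxy collection install amazon.cloud:==0.3.0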

Continue reading

Oxy: the journey of graceful restarts

Any software under continuous development and improvement will eventually need a new version deployed to the systems running it. This can happen in several ways, depending on how much you care about things like reliability, availability, and correctness. When I started out in web development, I didn’t think about any of these qualities; I simply blasted my new code over FTP directly to my /cgi-bin/ directory, which was the style at the time. Those of us producing desktop software often sidestep this entirely by having the user save their work, close the program and install an update – but the user usually gets to decide when this happens.

At Cloudflare we have to take this seriously. Our software is in constant use and cannot simply be stopped abruptly. A dropped HTTP request can cause an entire webpage to load incorrectly, and a broken connection can kick you out of a video call. Taking away reliability creates a vacuum filled only by user frustration.

The limitations of the typical upgrade process

There is no one right way to upgrade software reliably. Some programming languages and environments make it easier than others, but in a Turing-complete language few things are impossible.

Continue reading

Oracle plans second cloud region in Singapore to meet growing demand

Oracle on Tuesday said it is planning to add a second cloud region in Singapore to meet the growing demand for cloud services across Southeast Asia.

“Our upcoming second cloud region in Singapore will help meet the tremendous upsurge in demand for cloud services in South East Asia,” Garrett Ilg, president, Japan & Asia Pacific at Oracle, said in a statement.

The public cloud services market across Asia Pacific, excluding Japan, is expected to reach $153.6 billion in 2026, up from $53.4 billion in 2021, growing at a compound annual growth rate of 23.5%, according to a report from IDC.

To read this article in full, please click here

netlab Release 1.5.1: VLAN and VRF Links

netlab release 1.5.1 makes it easier to create topologies with lots of VRF or VLAN access links, or topologies with numerous similar links. It also includes support for the D2 diagram scripting language, in case you prefer its diagrams over those generated by Graphviz.

Even if you don’t find those features interesting (more about them later), you might want to upgrade to fix a nasty container-related behavior I discovered in recently-upgraded Ubuntu servers.

AWS to invest $8.9 billion across its regions in Australia by 2027

Within months of adding a second region in Melbourne, Amazon Web Services (AWS) on Tuesday said it would invest $8.93 billion (AU$13.2 billion) to spruce up infrastructure across its cloud regions in Australia through 2027.

The majority of the investment, about $7.45 billion, will go to the company’s cloud region in Sydney over that period. The remaining $1.49 billion will be used to expand data center infrastructure in Melbourne, the company said.

The $8.93 billion includes $495 million for network infrastructure to extend AWS cloud and edge infrastructure across Australia, including partnerships with telecom providers to facilitate high-speed fiber connectivity between Availability Zones, AWS said.

To read this article in full, please click here

IBM targets edge, AI use cases with new z16 mainframes

IBM has significantly reduced the size of some of its Big Iron z16 mainframes and given them a new operating system that emphasizes AI and edge computing.

The new configurations include Telum processor-based, 68-core IBM z16 Single Frame and Rack Mounted models, as well as new IBM LinuxONE Rockhopper 4 and LinuxONE Rockhopper Rack Mount boxes. They are expected to offer customers better data-center configuration options while reducing energy consumption. Both new Rack Mount boxes are 18U, compared to the current smallest Single Frame models, which are 42U.

To read this article in full, please click here

10 things to know about data-center outages

The severity of data-center outages appears to be falling, while the cost of outages continues to climb. Power failures are “the biggest cause of significant site outages.” Network failures and IT system glitches also bring down data centers, and human error often contributes. Those are some of the problems pinpointed in the most recent Uptime Institute data-center outage report, which analyzes types of outages, their frequency, and what they cost both in money and consequences.

Unreliable data is an ongoing problem

Uptime cautions that data relating to outages should be treated skeptically given the lack of transparency of some outage victims and the quality of reporting mechanisms. “Outage information is opaque and unreliable,” said Andy Lawrence, executive director of research at Uptime, during a briefing about Uptime’s Annual Outages Analysis 2023.

To read this article in full, please click here

Kubernetes Security And Networking 5: Installing A Service Mesh – Video

This video walks through installing a service mesh. We use Linkerd, but there are many other options. We show how to install Linkerd in your cluster and add sidecars to pods. You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a diverse mix of content from […]
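
The commands themselves aren’t shown in this excerpt, but Linkerd’s documented CLI flow looks roughly like the following (a sketch; the deployment and namespace names are hypothetical):

# Install the linkerd CLI and validate the cluster
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
linkerd check --pre

# Install the control plane (recent releases install the CRDs first)
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -

# Add the sidecar proxy to an existing deployment and verify
kubectl get deploy webapp -n demo -o yaml | linkerd inject - | kubectl apply -f -
linkerd check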

The post Kubernetes Security And Networking 5: Installing A Service Mesh – Video appeared first on Packet Pushers.

Network Break 424: Amazon Invites Devs To Its Sidewalk Wireless Network; OneWeb Readies Global Satellite Internet Service

On today's Network Break podcast we cover Amazon opening its Sidewalk low-power IoT wireless network to developers, Cisco setting an expiration date for Prime Infrastructure, HAProxy adding QUIC support in its enterprise load balancer, Huawei touting revenue stability, and more IT news.

The post Network Break 424: Amazon Invites Devs To Its Sidewalk Wireless Network; OneWeb Readies Global Satellite Internet Service appeared first on Packet Pushers.

Recording your commands on the Linux command line

Recording the commands that you run on the Linux command line can be useful for two important reasons. For one, the recorded commands provide a way to review your command line activity, which is extremely helpful if something didn't work as expected and you need to take a closer look. In addition, capturing commands can make it easy to repeat the commands or to turn them into scripts or aliases for long-term reuse. This post examines two ways that you can easily record and reuse commands.

Using history to record Linux commands

The history command makes it extremely easy to record commands that you enter on the command line because it happens automatically. The only thing you might want to check is the setting that determines how many commands are retained and, therefore, how long they're going to stay around for viewing and reusing. The command below will display your command history buffer size. If it's 1,000 like that shown, it will retain the last 1,000 commands that you entered.

To read this article in full, please click here
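
The excerpt ends before the command it references; the buffer-size check is presumably something like this (a sketch, assuming bash):

# Show how many commands the shell's history buffer retains
echo $HISTSIZE

# Review the most recent entries (re-run entry N later with !N)
history | tail -20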