Oxy: the journey of graceful restarts

Any software under continuous development and improvement will eventually need a new version deployed to the systems running it. This can happen in several ways, depending on how much you care about things like reliability, availability, and correctness. When I started out in web development, I didn’t think about any of these qualities; I simply blasted my new code over FTP directly to my /cgi-bin/ directory, which was the style at the time. Those producing desktop software often sidestep this entirely by having the user save their work, close the program, and install an update – though the user usually gets to decide when that happens.

At Cloudflare we have to take this seriously. Our software is in constant use and cannot simply be stopped abruptly. A dropped HTTP request can cause an entire webpage to load incorrectly, and a broken connection can kick you out of a video call. Taking away reliability creates a vacuum filled only by user frustration.

The limitations of the typical upgrade process

There is no one right way to upgrade software reliably. Some programming languages and environments make it easier than others, but in a Turing-complete language few things are impossible.
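
As background (and not a description of Oxy's own mechanism), one classic way to upgrade a network daemon without dropping connections is the signal-driven binary upgrade popularized by nginx: the running master starts a new master from the new binary alongside itself, then drains and retires the old workers. A rough sketch, where /run/app.pid is a hypothetical pidfile path:

# Signal-driven graceful binary upgrade (nginx-style), sketched as general
# background rather than Cloudflare's approach; /run/app.pid is a placeholder
OLD_PID=$(cat /run/app.pid)
kill -USR2 "$OLD_PID"    # old master launches a new master running the new binary
kill -WINCH "$OLD_PID"   # old workers finish in-flight requests and exit
kill -QUIT "$OLD_PID"    # once drained, the old master shuts down gracefully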

Oracle plans second cloud region in Singapore to meet growing demand

Oracle on Tuesday said it is planning to add a second cloud region in Singapore to meet the growing demand for cloud services across Southeast Asia. “Our upcoming second cloud region in Singapore will help meet the tremendous upsurge in demand for cloud services in South East Asia,” Garrett Ilg, president of Japan & Asia Pacific at Oracle, said in a statement. The public cloud services market across Asia Pacific, excluding Japan, is expected to reach $153.6 billion in 2026, up from $53.4 billion in 2021, growing at a compound annual growth rate of 23.5%, according to a report from IDC.

netlab Release 1.5.1: VLAN and VRF Links

netlab release 1.5.1 makes it easier to create topologies with lots of VRF or VLAN access links, or topologies with numerous similar links. It also includes support for the D2 diagram scripting language, in case you prefer its diagrams over those generated by Graphviz.

Even if you don’t find those features interesting (more about them later), you might want to upgrade to fix a nasty container-related behavior I discovered in recently-upgraded Ubuntu servers.

AWS to invest $8.9 billion across its regions in Australia by 2027

Within months of adding a second region in Melbourne, Amazon Web Services (AWS) on Tuesday said it would invest $8.93 billion (AU$13.2 billion) to expand infrastructure across its cloud regions in Australia through 2027. The majority of the investment, about $7.45 billion, will go to the company’s cloud region in Sydney over that period. The remaining $1.49 billion will be used to expand data center infrastructure in Melbourne, the company said. The $8.93 billion total includes a $495 million investment in network infrastructure to extend AWS cloud and edge infrastructure across Australia, including partnerships with telecom providers to facilitate high-speed fiber connectivity between Availability Zones, AWS said.

IBM targets edge, AI use cases with new z16 mainframes

IBM has significantly reduced the size of some of its Big Iron z16 mainframes and given them a new operating system that emphasizes AI and edge computing. The new configurations—which include Telum processor-based, 68-core IBM z16 Single Frame and Rack Mounted models and new IBM LinuxONE Rockhopper 4 and LinuxONE Rockhopper Rack Mount boxes—are expected to offer customers better data-center configuration options while reducing energy consumption. Both new Rack Mount boxes are 18U, compared to the current smallest Single Frame models, which are 42U.

10 things to know about data-center outages

The severity of data-center outages appears to be falling, while the cost of outages continues to climb. Power failures are “the biggest cause of significant site outages.” Network failures and IT system glitches also bring down data centers, and human error often contributes. Those are some of the problems pinpointed in the most recent Uptime Institute data-center outage report, which analyzes types of outages, their frequency, and what they cost both in money and consequences.

Unreliable data is an ongoing problem: Uptime cautions that data relating to outages should be treated skeptically given the lack of transparency of some outage victims and the quality of reporting mechanisms. “Outage information is opaque and unreliable,” said Andy Lawrence, executive director of research at Uptime, during a briefing about Uptime’s Annual Outages Analysis 2023.

Kubernetes Security And Networking 5: Installing A Service Mesh – Video

This video walks through installing a service mesh. We use Linkerd, but there are many other options. We show how to install Linkerd in your cluster and add sidecars to pods. You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a diverse mix of content from […]
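
For reference, the usual Linkerd install flow from its CLI looks roughly like the sketch below (not necessarily the exact steps shown in the video; assumes a Linkerd 2.12+ CLI, kubectl access to the cluster, and a hypothetical myapp namespace):

# Sketch of a typical Linkerd install plus sidecar injection
linkerd check --pre                          # confirm the cluster meets prerequisites
linkerd install --crds | kubectl apply -f -  # install the Linkerd CRDs
linkerd install | kubectl apply -f -         # install the control plane
linkerd check                                # verify the control plane is healthy

# Inject the proxy sidecar into existing workloads by re-applying their manifests
kubectl get deploy -n myapp -o yaml | linkerd inject - | kubectl apply -f -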

Network Break 424: Amazon Invites Devs To Its Sidewalk Wireless Network; OneWeb Readies Global Satellite Internet Service

On today's Network Break podcast we cover Amazon opening its Sidewalk low-power IoT wireless network to developers, Cisco putting the expiration date on Prime Infrastructure, HAProxy adding QUIC support in its enterprise load balancer, Huawei touting revenue stability, and more IT news.

Recording your commands on the Linux command line

Recording the commands that you run on the Linux command line can be useful for two important reasons. For one, the recorded commands provide a way to review your command line activity, which is extremely helpful if something didn't work as expected and you need to take a closer look. In addition, capturing commands can make it easy to repeat the commands or to turn them into scripts or aliases for long-term reuse. This post examines two ways that you can easily record and reuse commands.

Using history to record Linux commands: The history command makes it extremely easy to record commands that you enter on the command line because it happens automatically. The only thing you might want to check is the setting that determines how many commands are retained and, therefore, how long they're going to stay around for viewing and reusing. The command below will display your command history buffer size. If it's 1,000 like that shown, it will retain the last 1,000 commands that you entered.
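
The excerpt cuts off before showing the command itself. On bash, the in-memory history size lives in the HISTSIZE variable, so a likely equivalent (an assumption here, not necessarily the article's exact command) is:

# Show how many commands bash keeps in its in-memory history
echo $HISTSIZE

# HISTFILESIZE controls how many lines persist in ~/.bash_history across sessions
echo $HISTFILESIZE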

Dropped packet reason codes in VyOS

The article VyOS with Host sFlow agent describes how to use industry-standard sFlow telemetry to monitor network traffic flows and statistics in the latest VyOS rolling releases. VyOS dropped packet notifications describes how sFlow also provides visibility into network packet drops, and Dropped packet reason codes in Linux 6+ kernels describes how newer kernels can report the specific reason for each dropped packet.
vyos@vyos:~$ uname -r
6.1.22-amd64-vyos

The latest VyOS rolling release runs on a Linux 6.1 kernel and now provides enhanced visibility into dropped packets using kernel reason codes.
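
These reason codes come straight from the kernel: in recent kernels (roughly 5.17 onward) the skb:kfree_skb tracepoint carries an enum skb_drop_reason field, which is the information the dropped-packet notifications build on. A quick way to confirm the field is present on a 6.x kernel (an illustrative check, not taken from the VyOS articles; requires root, and tracefs may instead be mounted under /sys/kernel/debug/tracing on some systems):

# List the fields of the kfree_skb tracepoint; on a 6.x kernel the output
# includes a "reason" field of type enum skb_drop_reason
sudo cat /sys/kernel/tracing/events/skb/kfree_skb/format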

vyos@vyos:~$ show version
Version:          VyOS 1.4-rolling-202303310716
Release train:    current

Built by:         [email protected]
Built on:         Fri 31 Mar 2023 07:16 UTC
Build UUID:       1a7448d9-d53c-48a0-8644-ed1970c1abb8
Build commit ID:  75c9311fba375e

Architecture:     x86_64
Boot via:         installed image
System type:      guest

Hardware vendor:  innotek GmbH
Hardware model:   VirtualBox
Hardware S/N:     0
Hardware UUID:    da75808d-ff60-1d4c-babd-84a7fa341053

Copyright:        VyOS maintainers and contributors
Verify that the version of VyOS is 1.4-rolling-202303310716 or later.
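
With a recent enough image, sFlow export (including the dropped-packet notifications) is configured from the VyOS CLI. The sketch below is modeled on the configuration walked through in the earlier VyOS with Host sFlow agent article; treat the option names as something to verify against your release, and note that eth0 and the collector address 10.0.0.30 are placeholders:

# Minimal sFlow configuration sketch (run inside configuration mode);
# interface name and collector address are placeholders
configure
set system sflow interface eth0
set system sflow sampling-rate 1000
set system sflow drop-monitor-limit 50   # dropped-packet notifications (verify option name)
set system sflow server 10.0.0.30
commit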

In the previous article, VyOS dropped packet notifications, two tests were performed, the first a failed attempt to connect to the VyOS router using telnet (telnet has been disabled in …