Category Archives for "Networking"

Incrementing and decrementing numeric variables in bash

When preparing scripts that will run in bash, it’s often critical to be able to set up numeric variables that you can then increment or decrement as the script proceeds. The only surprising part is how many options you have to choose from to make the increment or decrement operation happen.

Incrementing numeric variables

To increment a variable, you first need to set it up. While the example below sets the variable $count to 1, there is no need to start at 1.

$ count=1

This would also work:

$ count=111

Regardless of the initial setting, you can then increment your variable using any of the following commands. Just replace $count with your variable name.
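
The excerpt stops before listing the actual commands, but the standard bash arithmetic forms are well known. A minimal sketch, using the same $count variable as above:

count=1
((count++))            # arithmetic evaluation: add 1
count=$((count + 1))   # arithmetic expansion
let count++            # the let builtin
((count--))            # decrementing works the same way
count=$((count - 1))
echo $count            # prints 2: three increments, two decrements

Note that a plain count=$count+1 simply appends the characters “+1” to the value rather than adding, which is why one of the arithmetic forms above is needed.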

3 types of incremental forever backup

A traditional backup starts with an initial full backup and is followed by a series of incremental or cumulative incremental backups (also known as differential backups). After some period of time, you will perform another full backup and more incremental backups. However, the advent of disk-based backup systems has given rise to the concept of the incremental forever approach, where only one full backup is performed, followed by a series of incremental backups. Let’s take a look at the different ways to do this.

File-level incremental forever

The first type of incremental forever backup is the file-level incremental forever backup product. This approach has actually been around for quite some time, with early versions available in the ’90s. It is called a file-level incremental because the decision to back up an item happens at the file level. If anything within a file changes, its modification date (or archive bit in Windows) will change, and the entire file will be backed up. Even if only one byte of data was changed within the file, the entire file will be included in the backup.
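
As a rough illustration (mine, not the article’s) of what file-level change detection means in practice, a script-based incremental could simply pick up every file modified since a timestamp written by the previous run; the paths below are placeholders:

# Hypothetical sketch: back up every file changed since the last run
# /data and /var/backup/last-run are placeholder paths
find /data -type f -newer /var/backup/last-run -print0 |
  tar -czf "incr-$(date +%F).tar.gz" --null --files-from=-
touch /var/backup/last-run   # record when this incremental ran

Even a one-byte change to a multi-gigabyte file makes the whole file show up in that find output, which is exactly the limitation described above.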

Random Thoughts on Zero-Trust Architecture

When preparing the materials for the Design Clinic section describing Zero-Trust Network Architecture, I wondered whether I was missing something crucial. After all, I couldn’t find anything new when reading the NIST documents – we saw everything they’re describing 30 years ago (remember Kerberos?).

In late August I dropped by the fantastic Roundtable and Barbecue event organized by Gabi Gerber (running Security Interest Group Switzerland) and used the opportunity to join the Zero Trust Architecture roundtable. Most other participants were seasoned IT security professionals with a level of skepticism approaching mine. When I mentioned I failed to see anything new in the now-overhyped topic, they quickly expressed similar doubts.

On Infrastructure as Code and Bit Rot

The architecture of the infrastructure-as-code (IaC) tooling you use will determine the level to which your IaC definitions are exposed to bit rot.

This is a maxim I have arrived at after working with multiple IaC tool sets, both professionally and personally, over the last few years. In this blog post, I will explain how I arrived at this maxim by describing three architectural patterns for IaC tools, each with differing levels of risk for bit rot.

Connection coalescing with ORIGIN Frames: fewer DNS queries, fewer connections

This blog post summarizes a Cloudflare research paper, presented at the ACM Internet Measurement Conference, that measures and prototypes connection coalescing with ORIGIN Frames.

Some readers might be surprised to hear that a single visit to a web page can cause a browser to make tens, sometimes even hundreds, of web connections. Take this very blog as an example. If it is your first visit to the Cloudflare blog, or it has been a while since your last visit, your browser will make multiple connections to render the page. The browser will make DNS queries to find IP addresses corresponding to blog.cloudflare.com and then subsequent requests to retrieve any necessary subresources on the web page needed to successfully render the complete page. How many? At the time of writing, 32 different hostnames are used to load the Cloudflare Blog. That means 32 DNS queries and at least 32 TCP (or QUIC) connections, unless the client is able to reuse (or coalesce) some of those connections.
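
As a rough illustration (my sketch, not the paper’s): without an ORIGIN frame, a browser will typically only coalesce a second hostname onto an existing HTTP/2 or HTTP/3 connection if that hostname resolves to the same IP address and is covered by the certificate already presented. A manual version of that check, using two example hostnames, might look like this:

# Do the two example hostnames resolve to a shared IP address?
dig +short blog.cloudflare.com
dig +short developers.cloudflare.com

# Does the certificate on the existing connection also cover the second name?
openssl s_client -connect blog.cloudflare.com:443 -servername blog.cloudflare.com </dev/null 2>/dev/null |
  openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If both conditions hold, the browser can fold those requests onto a connection it already has; the ORIGIN frame measured in the paper lets the server assert that authority directly, removing the dependency on the extra DNS lookups.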

Each new web connection introduces additional load on a server’s processing capabilities, potentially leading to scalability challenges during peak usage hours.

Microsoft blames Aussie data center outage on staff strength, failed automation

Microsoft has blamed staff strength and failed automation for a data center outage in Australia that took place on August 30, preventing users from accessing Azure, Microsoft 365, and Power Platform services for over 24 hours.

In a post-incident analysis report, Microsoft said the outage occurred due to a utility power sag in Australia’s East region, which in turn “tripped a subset of the cooling units offline in one data center, within one of the Availability Zones.”

As the cooling units were not working properly, the rise in temperature forced an automated shutdown of the data center in order to preserve data and infrastructure health, affecting compute, network, and storage services.

Arm unveils project to rapidly develop server processors

Arm Holdings unveiled a program that it says will simplify and accelerate the adoption of Arm Neoverse-based technology into new compute solutions. The program, called Arm Neoverse Compute Subsystems (CSS), was introduced at the Hot Chips 2023 technical conference held at Stanford University.

Neoverse is Arm’s server-side technology meant for high performance while still offering the power efficiency that Arm’s mobile parts are known for. CSS enables partners to build specialized silicon more affordably and quickly than previous discrete IP solutions.

The first-generation CSS product, Arm CSS N2, is based on the Neoverse N2 platform first introduced in 2020. CSS N2 provides partners with a customizable compute subsystem, allowing them to focus on features like memory, I/O, acceleration, and so on.

BGP Labs: Simple Routing Policy Tools

The first set of BGP labs covered the basics; the next four will help you master simple routing policy tools (BGP weights, AS-path filters, prefix filters) using real-life examples.

The labs are best used with netlab (it supports BGP on almost 20 different devices), but you could use any system you like (including GNS3 and CML/VIRL). If you’re stubborn enough, it’s possible to make them work with the physical gear, but don’t ask me for help. For more details, read the Installation and Setup documentation.
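
If you want to try the labs with netlab, a minimal session might look roughly like this (my sketch; the topology file name and device name are just examples, so follow the Installation and Setup documentation for the real procedure):

# hypothetical quick start; consult the netlab docs for specifics
pip3 install networklab        # install the netlab CLI
netlab up topology.yml         # build and start the lab from a topology file
netlab connect r1              # open a session to a lab device named r1
netlab down                    # destroy the lab when you're done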

Valley-free Routing in Leaf and Spine Topology

Valley-free routing is a concept that may not be well known but that is relevant to data center design. In this post, we’ll look at valley-free routing based on a leaf and spine topology.

There are many posts about leaf and spine topology and its benefits. To summarize, some of the most prominent advantages are:

  • Predictable number of hops between any two nodes.
  • All links are available for use, providing a high amount of bisection bandwidth (ECMP).
  • The architecture is easy to scale out.
  • Redundant and resilient.

Now, what does this have to do with valley-free routing? To understand what valley-free routing is, first let’s take a look at the expected traffic flow in a leaf and spine topology:

For traffic between Leaf1 and Leaf4, the two expected paths are:

  • Red path – Leaf-1 to Spine-1 to Leaf-4.
  • Blue path – Leaf-1 to Spine-2 to Leaf-4.

This means that there is only one intermediate hop between Leaf1 and Leaf4. Let’s confirm with a traceroute:

Leaf1# traceroute 203.0.113.4
traceroute to 203.0.113.4 (203.0.113.4), 30 hops max, 48 byte packets
 1  Spine2 (192.0.2.2)  1.831 ms  1.234 ms  1.12 ms
 2  Leaf4 (203.0.113.4)