Tech Bytes: Assembling A SASE Architecture With Fortinet (Sponsored)

Today on the Tech Bytes podcast we pull back the covers on SASE, or Secure Access Service Edge. Fortinet is our sponsor. One important concept to grasp around SASE is that it’s as much an architecture as it is a product. It requires planning and foresight to put the pieces together and operate them. We talk with Fortinet about the elements of its SASE offering and what a typical customer engagement with SASE looks like.

The post Tech Bytes: Assembling A SASE Architecture With Fortinet (Sponsored) appeared first on Packet Pushers.

How doNotTrack policies work in the Calico eBPF dataplane

Almost all modern network systems, including stateful firewalls, use connection tracking (“conntrack”) because it reduces per-packet processing and simplifies operations. However, there are use cases where connection tracking has a negative impact, as we described in Linux Conntrack: Why it breaks down and avoiding the problem. Distributed Denial of Service (DDoS) mitigation, which defends against volumetric network attacks, is a well-known example, since it needs to drop malicious packets as fast as possible. Connection tracking is also a potential attack vector in its own right, because the conntrack table is a limited resource. Finally, some applications generate such huge numbers of short-lived connections per second that tracking them adds more processing than it saves, defeating its intended purpose. These use cases show why a firewall sometimes needs to not track connections, also known as stateless firewalling.

In this blog post, we will explain how Project Calico uses eXpress Data Path (XDP) in its eBPF dataplane (also in its iptables dataplane but not the focus of this post) to improve the performance of its stateless firewall. XDP is an eBPF hook that allows a program to Continue reading
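In Calico, untracked handling is expressed as an ordinary policy with the `doNotTrack` flag set. The fragment below is a sketch only; the selector, policy name, and port are hypothetical, and field details may vary by Calico version:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-ntp-untracked        # hypothetical example policy
spec:
  selector: has(host-endpoint)     # hypothetical label on host endpoints
  doNotTrack: true                 # bypass conntrack for matching traffic
  applyOnForward: true             # required for doNotTrack policies
  ingress:
    - action: Allow
      protocol: UDP
      destination:
        ports: [123]
```

Note that because no conntrack state is created, reply traffic is not automatically allowed; untracked policies need explicit rules in both directions.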

MGX: Nvidia Standardizes Multi-Generation Server Designs

Whenever a compute engine maker also does motherboards as well as system designs, those companies that make motherboards (there are dozens who do) and create system designs (the original design manufacturers and the original equipment manufacturers) get a little bit nervous as well as a bit relieved.

MGX: Nvidia Standardizes Multi-Generation Server Designs was written by Timothy Prickett Morgan at The Next Platform.

Qualcomm doubles down on its pivot to AI

Qualcomm has announced it is shifting its focus from providing chips exclusively for communications devices and doubling down on its efforts to support AI workloads.

The company is transitioning to becoming an “intelligent edge computing” firm, Alex Katouzian, a senior vice president at Qualcomm, said during a keynote speech at the Computex show in Taipei Tuesday.

AI workloads require a lot of compute power, and in February Qualcomm announced the Snapdragon X75, its latest 5G modem component, which the company said will be the world’s first modem-RF system for 5G-Advanced — a set of specifications designed to improve speed, maximize coverage, and enhance mobility and power efficiency for mobile devices. The X75 is also reportedly able to process AI workloads 2.5 times faster than its predecessor, the X70.

To read this article in full, please click here

Inside Nvidia’s new AI supercomputer

With Nvidia’s Arm-based Grace processor at its core, the company has introduced a supercomputer designed to perform AI processing powered by a CPU/GPU combination.

The new system, formally introduced at the Computex tech conference in Taipei, is the DGX GH200 supercomputer, powered by 256 Grace Hopper Superchips: a combination of Nvidia’s Grace CPU, a 72-core Arm processor designed for high-performance computing, and the Hopper GPU. The two are connected by Nvidia’s proprietary NVLink-C2C high-speed interconnect.

To read this article in full, please click here

Nvidia’s new Grace Hopper superchip to fuel its DGX GH200 AI supercomputer

Nvidia has unveiled a new DGX GH200 AI supercomputer, underpinned by its new Grace Hopper superchip and targeted toward developing and supporting large language models.

“DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies to expand the frontier of AI,” Nvidia CEO Jensen Huang said in a blog post.

The supercomputer, according to Huang, combines the company’s GH200 Grace Hopper superchip with Nvidia’s NVLink Switch System to allow the development of large language models for generative AI language applications, recommender systems, and data analytics workloads.

To read this article in full, please click here

Path Failure Detection on Multi-Homed Servers

TL&DR: Installing an Ethernet NIC with two uplinks in a server is easy. Connecting those uplinks to two edge switches is common sense. Detecting physical link failure is trivial in the Gigabit Ethernet world. Deciding between two independent uplinks or a link aggregation group is interesting. Detecting path failure and disabling the useless uplink that causes traffic blackholing is a living hell (more details in this Design Clinic question).

Want to know more? Let’s dive into the gory details.
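The distinction matters because a NIC can report link-up while the path beyond the first switch is blackholing traffic; catching that requires active probing plus a decision rule with hysteresis. Here is a minimal sketch of such a rule in Python (thresholds and names are our own illustration, not from the original article):

```python
from dataclasses import dataclass


@dataclass
class UplinkHealth:
    """Track active-probe results for one uplink.

    Link-up alone is not enough: the path beyond the first-hop switch can
    fail silently, so we count consecutive probe results and flip the
    uplink's usability with hysteresis to avoid flapping.
    """
    fail_threshold: int = 3   # consecutive failures before declaring the path dead
    ok_threshold: int = 2     # consecutive successes before reusing the uplink
    _fails: int = 0
    _oks: int = 0
    usable: bool = True

    def record_probe(self, success: bool) -> bool:
        """Record one probe result; return whether the uplink is usable."""
        if success:
            self._oks += 1
            self._fails = 0
            if not self.usable and self._oks >= self.ok_threshold:
                self.usable = True
        else:
            self._fails += 1
            self._oks = 0
            if self.usable and self._fails >= self.fail_threshold:
                # Caller would now down the uplink or withdraw its routes.
                self.usable = False
        return self.usable
```

In practice the `usable` transition would trigger shutting the interface or withdrawing routes, so the server stops blackholing traffic over the dead path while the other uplink carries the load.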

Goodbye Twitter. It Was Fun While It Lasted

I joined Twitter in October 2008 (after noticing everyone else was using it during a Networking Field Day event), and eventually figured out how to automate posting the links to my blog posts in case someone uses Twitter as their primary source of news – an IFTTT applet that read my RSS feed and posted links to new entries to Twitter.

This week, I got a nice email from IFTTT telling me they had to disable the post-to-Twitter applet. Twitter started charging for the API, and I was using their free service – obviously the math didn’t work out.

That left me with three options:

VPP MPLS – Part 4

About this series

Special Thanks: Adrian vifino Pistol for writing this code and for the wonderful collaboration!

Ever since I first saw VPP - the Vector Packet Processor - I have been deeply impressed with its performance and versatility. For those of us who have used Cisco IOS/XR devices, like the classic ASR (aggregation service router), VPP will look and feel quite familiar as many of the approaches are shared between the two.

In the last three articles, I thought I had described “all we need to know” to perform MPLS using the Linux Controlplane in VPP:

  1. In the [first article] of this series, I took a look at MPLS in general.
  2. In the [second article] of the series, I demonstrated a few special case labels (such as Explicit Null, and Implicit Null, which enables the fabled Penultimate Hop Popping behavior of MPLS).
  3. Then, in the [third article], I worked with @vifino to implement the plumbing for MPLS in the Linux Control Plane plugin for VPP. He did most of the work, I just watched :)
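For flavor, the VPP side of such a setup is driven through the vppctl CLI. The following is a rough sketch only: the interface name, next-hop, and label values are made up here, and the exact command syntax can differ between VPP releases.

```shell
# Create the default MPLS FIB table and enable MPLS on an interface
vppctl mpls table add 0
vppctl set interface mpls GigabitEthernet10/0/0 enable

# Accept local label 100 (end-of-stack) and forward it towards a
# next-hop, swapping to outgoing label 200
vppctl mpls local-label add 100 eos via 192.0.2.1 GigabitEthernet10/0/0 out-labels 200
```

The Linux Control Plane plugin's job, as described in the series, is to program entries like these automatically from the routes a routing daemon installs in the Linux kernel.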

As if in a state of premonition, I mentioned:

Caveat emptor, outside of a modest functional and Continue reading

Worth Reading: Cargo Cult AI

Before we managed to recover from the automation cargo cults, a tsunami wave of cargo cult AI washed over us as Edlyn V. Levine explained in an ACM Queue article. Enjoy ;)

Also, a bit of a historical perspective is never a bad thing:

Impressive progress in AI, including the recent sensation of ChatGPT, has been dominated by the success of a single, decades-old machine-learning approach called a multilayer (or deep) neural network. This approach was invented in the 1940s, and essentially all of the foundational concepts of neural networks and associated methods—including convolutional neural networks and backpropagation—were in place by the 1980s.
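To make the historical point concrete, here is a minimal multilayer network with backpropagation in plain Python, with no frameworks, trained on XOR (the classic function a single-layer network cannot learn). All names and hyperparameters are our own illustration:

```python
import math
import random

random.seed(1)


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


# A 2-2-1 multilayer network: two hidden units (weights w_h, each row is
# [w_x0, w_x1, bias]) feeding one sigmoid output unit (weights w_o).
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table


def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y


def train_epoch(lr=0.5):
    """One pass of per-sample gradient descent; returns total squared error."""
    loss = 0.0
    for x, t in data:
        h, y = forward(x)
        loss += (y - t) ** 2
        # Backpropagation: apply the chain rule from the output error back
        # through the output unit to each hidden-layer weight.
        d_y = (y - t) * y * (1 - y)
        for j in range(2):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
        w_o[0] -= lr * d_y * h[0]
        w_o[1] -= lr * d_y * h[1]
        w_o[2] -= lr * d_y
    return loss


first = train_epoch()
for _ in range(2000):
    last = train_epoch()
```

Every ingredient here, including the sigmoid units, the hidden layer, and the backpropagated error terms, was indeed in place by the 1980s; what changed since is scale, data, and hardware.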