What is CXL, and why should you care?

If you purchase a server in the next few months featuring Intel’s Sapphire Rapids generation of Xeon Scalable processors or AMD’s Genoa generation of Epyc processors, it will come with a notable new function called Compute Express Link (CXL), an open interconnect standard you may find useful, especially in future iterations. CXL is supported by virtually every hardware vendor and is built on top of PCI Express for coherent memory access between a CPU and a device, such as a hardware accelerator, or between a CPU and memory. PCIe is meant for point-to-point communications such as SSD to memory, but CXL will eventually support one-to-many communication over coherent protocols. So far, CXL is capable of simple point-to-point communication only.

Network Break 395: Broadcom Ships 51.2Tbps ASIC; Extreme’s New AP Goes Outdoors; Lloyd’s Rethinks Cyber Insurance Policies

This week's Network Break podcast drills into features in Broadcom's newest Tomahawk ASIC, a new Wi-Fi 6E access point from Extreme for outdoor use, and a $262 million infusion for the startup DriveNets. We also cover serious Apple vulnerabilities, why Lloyd's is rethinking cyber insurance for state-sponsored attacks, Cisco financial results, and more.

BrandPost: What’s the Difference Between SASE, SD-WAN, and SSE?

By: Derek Granath, Senior Director, SD-WAN Product and Technical Marketing at Aruba, a Hewlett Packard Enterprise company.

A Quick History Lesson

Believe it or not, the term Software-defined Wide Area Network (SD-WAN) was first introduced back in 2014, practically ancient history when it comes to networking at the edge. It’s now well recognized and increasingly adopted as the cloud-first way to transform WAN architecture, improving application performance, enabling more efficient connectivity, and reducing network complexity. Secure Access Service Edge, known as SASE, describes the cloud-first architecture for both WAN and security functions, all delivered and managed in the cloud. In short, SASE is a blend of SD-WAN and cloud-delivered security.

Internet Edge IP SLA Deep Dive

It is a common design to have an internet Edge router connected to two different internet service providers to protect against the failure of an ISP bringing the office down. The topology may look something like this:

Internet Edge HA scenario

The two ISPs are used in an active/standby fashion using static routes. This is normally implemented by using two default routes where one of the routes is a floating static route. It will look something like this:

ip route 0.0.0.0 0.0.0.0 203.0.113.1 name PRIMARY
ip route 0.0.0.0 0.0.0.0 203.0.113.9 200 name SECONDARY

With this configuration, if the interface to ISP1 goes down, the floating static route, which has an administrative distance (AD) of 200, will be installed, and traffic will flow via ISP2. The drawback to this configuration is that it only works if the physical interface goes down. What happens if ISP1’s CPE has the interface towards the customer up but the interface towards the ISP core goes down? What happens if there is a failure in another part of the ISP’s network? What if all interfaces are up but Continue reading
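As a sketch of the approach the article's title points to, IP SLA object tracking can tie the primary default route to the reachability of a probe target deeper in the ISP's network, rather than to local interface state alone. The probe target and track number below are illustrative choices, not taken from the article; the next hops match the earlier example:

ip sla 1
 icmp-echo 8.8.8.8 source-interface GigabitEthernet0/0
 frequency 5
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
!
ip route 0.0.0.0 0.0.0.0 203.0.113.1 name PRIMARY track 1
ip route 0.0.0.0 0.0.0.0 203.0.113.9 200 name SECONDARY

If the ICMP probe via ISP1 fails, track 1 goes down, the primary route is withdrawn, and the floating static route via ISP2 takes over, even though the physical interface to ISP1 is still up.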

Managing a VMware Template Lifecycle with Ansible

When we manage a large number of virtual machines (VMs), we want to reduce the differences between them and create a standard template. By using a standard template, it becomes easier to manage and propagate the same operation on the different nodes. When using VMware vSphere, it is a common practice to share a standardized VM template and use it to create new VMs. This template is often called a golden image. Its creation involves a series of steps that can be automated with Ansible. In this blog, we will see how one can create and use a new golden image.

 

Prepare the golden image

We use image builder to prepare a new image. The tool provides a user interface that allows users to define custom images. In this example, we include the SSH server and tmux. The resulting image is a file in the VMDK 4 format, which is not fully supported by VMware vSphere 7; this is why we use a .vmdk-4 suffix.

We upload the image using the uri module. Uploading large files using this method is rather slow. If you can, you may want to drop the file on the datastore directly (e. Continue reading
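A minimal sketch of such an upload task, assuming basic-auth access to the vCenter datastore file endpoint; the host name, datacenter, datastore, paths, and variables here are illustrative, not from the original post:

- name: Upload the golden image VMDK to the datastore (illustrative names)
  ansible.builtin.uri:
    url: "https://vcenter.example.com/folder/images/golden.vmdk-4?dcPath=DC1&dsName=datastore1"
    method: PUT
    headers:
      Content-Type: application/octet-stream
    src: /tmp/golden.vmdk-4
    url_username: "{{ vcenter_username }}"
    url_password: "{{ vcenter_password }}"
    force_basic_auth: true
    validate_certs: false
    status_code: [200, 201]

Because uri reads the whole request through the Ansible controller, this is the slow path the post mentions; copying the file onto the datastore directly avoids that hop.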

Cisco DevNet certifications explained

The enterprise network is undergoing a fundamental transition from manual to automated, from hardware to software-defined, from tightly controlled to sprawling across SaaS, multi-cloud, remote work and IoT environments. Network professionals are expected not only to extend existing functionality across all of those environments, but also to elevate the capabilities of the network to enable digital transformation. That means building a network that’s more agile, resilient, secure, scalable, observable and intelligent.

Building High-Available Web Services: Open Source Load Balancing Based on HAProxy + FRR and Origin Web Server Based on NGINX Connected to Arista EVPN/VXLAN. Part 1.

Hello my friend,

Recently we’ve been working on an interesting (at least for me) project: an MVP of a highly available infrastructure for web services. There are multiple approaches to creating such a solution, including “simply” putting everything in Kubernetes. However, in our case we are building a solution for a telco cloud, which is traditionally not the best candidate for a cloud-native world. Moreover, putting it in Kubernetes would first require building a Kubernetes cluster, which is a problem of a completely different magnitude. Originally we were planning to write this blogpost last weekend, but it took us a little bit longer to put everything together properly. Let’s see what we have to share with you.


No part of this blogpost could be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.

You Are Not Talking about Automation Today, Are You?

Yes, today’s blogpost is dedicated to the network technologies (to a huge mix of different network and infrastructure technologies, to be honest). That’s why there Continue reading

Privacy and Networking Part 6: Essential Questions For Privacy Best Practices

Thus far, I’ve concluded that IP addresses and other information that network operators handle are personally identifiable information (PII) and covered under privacy and security regulations. I’ve also looked at the data lifecycle and user rights related to private data. What are some best practices network operators can follow to reduce their risk? The simplest way to […]

The post Privacy and Networking Part 6: Essential Questions For Privacy Best Practices appeared first on Packet Pushers.

The Puzzle of Peering with Kentik

If you’ve worked at an ISP, or even just closely with one, you’ve probably heard the term peering quite a bit. Peering is essentially a reciprocal agreement to provide access to networks between two providers. Provider A agrees to allow Provider B to send traffic over and through their network in exchange for the same access in the other direction. Sounds easy, right? On a technical level it is pretty easy. You simply set up a BGP session with the partner provider, make sure all the settings match, and you’ve got things rolling.
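In router-config terms, that "set up a BGP session" step looks roughly like the sketch below. The AS numbers, addresses, and prefix-list name are placeholders for illustration, not from any real agreement:

router bgp 64500
 neighbor 192.0.2.2 remote-as 64501
 neighbor 192.0.2.2 description PEER-PROVIDER-B
 address-family ipv4 unicast
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 prefix-list CUSTOMER-ROUTES out

The outbound prefix list matters even in a simple peering: each side typically advertises only its own and its customers' routes, so the peer does not get free transit beyond the agreed scope.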

The technical part isn’t usually where peering gets complicated. Instead, it’s almost always related to the business side of things. The policy and negotiations that have to happen for a good peering agreement take way more time than hammering out some BGP configuration stanzas. The amount of traffic to be sent, the latency requirements, and even the cost of the agreement are all things that have to be figured out before the first hello packet can be exchanged. This agreement is always up for negotiation too, since the traffic patterns can change before you realize it and put you at a disadvantage.

Peerless Data Collection

If Continue reading