There is a continued push to go even “faster.” Lowering port-to-port latency while maintaining features, increasing link speeds, and raising system density is a significant technology challenge for designers, one ultimately bounded by the laws of physics. Since the first release of Arista’s 7100 and 7150 switch families, the company has been a partner in building best-in-class low-latency trading networks that are today deployed in global financial institutions and trading locations.
Cutting-edge customers took the approach of disaggregating network functions into pools of functionality: extremely fast Layer 1 switching operating as low as 5 ns, and FPGA-driven trading pipelines running at under 40 ns, with the Arista 7130 family. This approach allowed more sophisticated Layer 2/Layer 3 networking functionality, such as the ability to tap any flow or enable routing protocols, to run on general-purpose systems, including the full-featured Arista 7050X, 7060X and 7170 platforms, built on merchant silicon that forwards billions of packets per second at low latency.
Multi-Chassis Link Aggregation (MLAG) – the ability to terminate a Port Channel/Link Aggregation Group on multiple switches – is one of the more convoluted bridging technologies. After all, it’s not trivial to persuade two boxes to behave like one and handle the myriad corner cases correctly.
In this series of deep dive blog posts, we’ll explore the intricacies of MLAG, starting with the data plane considerations and the control plane requirements resulting from the data plane quirks. If you wonder why we need all that complexity, remember that Ethernet networks still try to emulate the ancient thick yellow cable that could lose some packets but could never reorder packets or deliver duplicate packets.
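Before getting into the MLAG-specific quirks, it helps to recall how a single-chassis LAG already satisfies that no-reordering, no-duplication contract: the switch hashes flow-identifying header fields and pins every packet of a flow to one member link. The minimal Python sketch below (generic, not any vendor's hashing algorithm; the port names are hypothetical) illustrates the mechanism that MLAG then has to extend across two chassis.

```python
# Minimal sketch of per-flow hashing on a LAG: packets of the same flow always
# hash to the same member link, so they can never be reordered across links.
import hashlib

def pick_member_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str,
                     src_port: int, dst_port: int, member_links: list) -> str:
    """Return the LAG member link used for this flow.

    The hash input is identical for every packet of the flow, so the same
    member link is always chosen and per-flow ordering is preserved.
    """
    flow_key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}{src_port}{dst_port}".encode()
    digest = hashlib.sha256(flow_key).digest()
    index = int.from_bytes(digest[:4], "big") % len(member_links)
    return member_links[index]

# Example: every packet of this TCP flow lands on the same (hypothetical) uplink.
uplinks = ["Ethernet49/1", "Ethernet50/1"]
print(pick_member_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                       "192.0.2.10", "192.0.2.20", 51512, 443, uplinks))
```

MLAG keeps this per-flow pinning, but because the member links now terminate on two different switches, the peers must agree on MAC learning and on which of them forwards a given frame, which is where the control-plane machinery covered later in the series comes in.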
Juniper Networks has several products that can run on a virtualization platform (hypervisor), such as KVM […]
The post Juniper vMX on GNS3 first appeared on Brezular's Blog.
The following post is by Sehjung Hah at VMware. We thank VMware for being a sponsor. Catch up and listen to VMware’s latest podcast with Packet Pushers, in which Ethan Banks and Ned Bellavance introduce vRealize Network Insight Universal on Day 2 Cloud 145: Tech Bytes: Flexible Cloud Migration Using VMware vRealize Network Insight Universal. More details are available in […]
The post Easier Network Visibility Using SaaS appeared first on Packet Pushers.
Russ White’s BGP course moves on to the concept of BGP communities, including the three basic types of communities, as well as the no_export and no_advertise communities. You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a diverse mix of content from Ethan and Greg, plus selected […]
The post Learning BGP Module 2 Lesson 5: BGP Communities – Video appeared first on Packet Pushers.
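As a quick companion to the BGP communities lesson above, here is a minimal Python sketch (generic, not tied to any particular BGP implementation) of how RFC 1997 standard communities are encoded: a 32-bit value conventionally written ASN:VALUE, with a few well-known values such as NO_EXPORT and NO_ADVERTISE reserved for filtering behavior.

```python
# Well-known standard community values defined in RFC 1997.
NO_EXPORT = 0xFFFFFF01            # do not advertise outside the local AS/confederation
NO_ADVERTISE = 0xFFFFFF02         # do not advertise to any peer at all
NO_EXPORT_SUBCONFED = 0xFFFFFF03  # do not advertise outside the local member AS

def encode_community(asn: int, value: int) -> int:
    """Pack an ASN:VALUE pair into a single 32-bit standard community."""
    return (asn << 16) | value

def decode_community(community: int) -> str:
    """Render a 32-bit community in the familiar ASN:VALUE notation."""
    return f"{community >> 16}:{community & 0xFFFF}"

print(decode_community(encode_community(65000, 100)))  # -> "65000:100"
print(hex(NO_ADVERTISE))                               # -> "0xffffff02"
```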
Today we welcome sponsor Nokia back to the Tech Bytes podcast to get more information about its Digital Sandbox and how this software, part of Nokia’s Fabric Services System, helps enable a continuous integration/continuous delivery (CI/CD) framework for network engineers. Our guest is Erwan James, Product Line Manager at Nokia.
The post Tech Bytes: Enhancing CI/CD Pipelines With Nokia’s Digital Sandbox (Sponsored) appeared first on Packet Pushers.