Today marks the 20th anniversary of Arista!
Over that time, our company has grown from nothing to #1 in Data Center Ethernet, a highly profitable S&P 500 company with a market capitalization of over $100 billion and more than $6 billion in annual revenue.
As we all recover from NVIDIA’s exhilarating GTC 2024 in San Jose last week, state-of-the-art AI news is coming fast and furious. NVIDIA’s latest Blackwell GPU announcement and Meta’s blog validating Ethernet for its pair of 24,000-GPU clusters used to train the Llama 3 large language model (LLM) made the headlines. Networking has come a long way, accelerating pervasive compute, storage, and AI workloads for the next era of AI. Our large customers across every market segment, as well as the cloud and AI titans, recognize the rapid improvements in productivity and the unprecedented insights and knowledge that AI enables. At the heart of many of these AI clusters is the flagship Arista 7800R AI spine.
Welcome to the digital age, where the marvels of self-driving cars and sophisticated AI like ChatGPT grace our everyday lives. Yet amidst these advancements, a battleground often goes unnoticed, hidden within the layers of our network infrastructure. It's a world where network teams are the unsung heroes, tirelessly working behind the scenes to keep our digital lifelines seamless and uninterrupted. Today, I want to take you on a journey through Network Observability, a beacon of hope in the relentless quest to avoid outages, understand the impact of change, and quickly and accurately root-cause complex situations.
AWS Cloud WAN Tunnel-less Connect and Arista CloudEOS integrate to accelerate cloud onramp
As cloud and multicloud adoption continues to evolve, public cloud providers like AWS keep introducing more tools for enterprise IT to choose from. For example, customers can deploy a virtual router in a Transit VPC and BGP-peer with AWS Cloud WAN to interconnect on-premises networks and AWS VPCs. However, GRE or IPsec tunnels are often required for that BGP peering, adding network complexity and increasing operational costs.
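To make that concrete, here is a minimal sketch (my illustration, not part of the original announcement) of requesting a Cloud WAN Connect attachment with the tunnel-less (NO_ENCAP) option using the AWS SDK for Python. All IDs, ARNs, addresses and the ASN are placeholders, and the CloudEOS-side BGP configuration would follow Arista's and AWS's documentation.

```python
# Hypothetical sketch: request an AWS Cloud WAN Connect attachment that uses
# Tunnel-less Connect (NO_ENCAP), so a CloudEOS virtual router can BGP-peer
# with Cloud WAN without building GRE or IPsec tunnels.
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")

# The Connect attachment rides on top of an existing VPC ("transport")
# attachment that contains the CloudEOS instance.
connect = nm.create_connect_attachment(
    CoreNetworkId="core-network-0123456789abcdef0",        # placeholder
    EdgeLocation="us-west-2",
    TransportAttachmentId="attachment-0123456789abcdef0",  # placeholder VPC attachment
    Options={"Protocol": "NO_ENCAP"},                       # Tunnel-less Connect
)
attachment_id = connect["ConnectAttachment"]["Attachment"]["AttachmentId"]

# Register the CloudEOS router as a Connect peer. With NO_ENCAP the peer is
# identified by its VPC subnet and interface address rather than tunnel
# inside-CIDRs (parameter choice here is an assumption; verify against the
# current NetworkManager API reference).
peer = nm.create_connect_peer(
    ConnectAttachmentId=attachment_id,
    PeerAddress="10.1.1.10",                                # CloudEOS interface IP (placeholder)
    BgpOptions={"PeerAsn": 65001},                          # CloudEOS BGP ASN (placeholder)
    SubnetArn="arn:aws:ec2:us-west-2:111122223333:subnet/subnet-0123456789abcdef0",
)
print(peer["ConnectPeer"]["ConnectPeerId"])
```

Once the peer is up, the CloudEOS router exchanges routes with Cloud WAN over native VPC networking, which is the operational simplification the integration is after.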
Recently I attended the 50th (golden) anniversary celebration of Ethernet at the Computer History Museum. It was a reminder of how familiar and widely deployed Ethernet is and how it has evolved by orders of magnitude. Since the 1970s, it has progressed from a 2.94-megabit shared collision network in the file/print/share era to the promise of Terabit Ethernet switching in the AI/ML era. Legacy Ethernot* alternatives such as Token Ring, FDDI, and ATM were generally subsumed by Ethernet. I believe history is going to repeat itself for AI networks.
The AI industry has taken us by storm, bringing supercomputers, algorithms, data processing and training methods into the mainstream. The rapid ramp of large language inference models, combined with OpenAI's ChatGPT, has captured the interest and imagination of people worldwide. Generative AI applications promise benefits to just about every industry. New types of AI applications are expected to improve productivity on a wide range of tasks, be it marketing image creation for ads, video games or customer support. These generative large language models with over 100 billion parameters are advancing the power of AI applications and deployments. Furthermore, Moore's Law is shrinking the silicon geometries of TPU/GPU processors, which now connect at 100, 400 and 800 gigabits of network throughput, with parallel processing and bandwidth capacity to match.
A pioneer in cloud networking for the last decade, Arista has become synonymous with elastic scaling and programmable provisioning through a modern data-driven software stack. Legacy networks with manual box-by-box configurations for production and testing have led to cumbersome and complex practices. Arista leads the industry in cloud automation built on an open foundation.
In 2022 we are witnessing many forms of the cloud networking pioneered a decade ago by Arista. Cloud models continue to take precedence in the new network as organizations address the growing scale of machines and network traffic.
There is a continued push to go even “faster.” Lowering port-to-port latency while maintaining features and increasing link speeds and system density is a significant technology challenge for designers, one bounded by the laws of physics. Since the first releases of Arista’s 7100 and 7150 switch families, the company has been a partner in building best-in-class low-latency trading networks that are today deployed in global financial institutions and trading locations.
Cutting-edge customers took the approach of disaggregating network functions into pools of functionality: extremely fast Layer 1 switching operating as low as 5 ns, and FPGA-driven trading pipelines running at under 40 ns on the Arista 7130 family. This approach allowed more sophisticated L2/L3 networking functionality, such as the ability to tap any flow or enable routing protocols, to run on general-purpose systems, including the full-featured Arista 7050X, 7060X and 7170 platforms, using merchant silicon that forwards billions of packets per second at low latency.
As we enter 2022, there is much discussion on the “post-pandemic” world of campus and how it’s changing. Undoubtedly, the legacy 2000-era campus was mired in complexity, with proprietary features, siloed designs, and fragile software, and was ripe for change. This oversubscribed campus is riddled with challenges, including critical outages that cause risk-averse behaviors and labor-intensive roll-outs that hamper improvements. The future of the campus has changed as the lines between corporate headquarters, home, remote and transit workers blur, creating distributed workspaces. Before the pandemic, the most common network designs were rigidly hierarchical, based upon a manual model developed in the mid-1990s. As the demand for scale increased, the end-user experience degraded and the cost per connected host continued to escalate.
Are we ready to evolve the legacy campus into a new cognitive edge for the new, dispersed class of users, devices and IoT/OT? I think so, and the time to recalibrate and redesign the campus is now!
Over the last few years, we have seen an age of edgeless, multi-cloud, multi-device collaboration for hybrid work give rise to a new network that transcends traditional perimeters. As hybrid work models gain precedence on this new network, organizations must address a cascading attack surface. Reactive, bolt-on security measures are simply too tactical and expensive.
The power and potential of the next-generation cognitive campus are transformative as the industry undergoes a massive transition to hybrid work in the post-pandemic era. A key underpinning of successful campus networking deployments has been our very first acquisition, Mojo Networks, for cognitive Wi-Fi. Arista’s entry into wireless is only in its third year, yet the advances in this space will be profound over the next decade.
In the past decade, the emergence of cloud networks has blurred the line between switching and routing, and between cloud switches and traditional routers. Today the industry is at an inflection point, where the adoption of cloud principles for routing intersects with the rapidly expanding feature set and scale of merchant silicon, disrupting legacy routing architectures.
The rise of cloud migration for enterprises with mission critical applications is redefining the data center. The reality for any enterprise: a systematic approach balancing workloads in the cloud and premises while securing data. Data and applications must be managed as critical assets in the 21st century.
Over a decade ago, we entered the high-speed switching market with our low-latency switches. Our fastest switch then, the 7124, could forward L2/L3 traffic in 500 ns, a big improvement over store-and-forward switches that had 10x higher latency. Combined with Arista EOS®, our products were well received by financial trading and HPC customers.
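As a rough, back-of-the-envelope illustration (mine, not from the original post) of why store-and-forward switching carries that latency penalty: a store-and-forward switch must receive the entire frame before it can transmit, so frame serialization time is added at every hop, while a cut-through switch begins forwarding once the header is read. The numbers below assume 10 Gb/s ports and the 7124's roughly 500 ns forwarding latency; older store-and-forward platforms also added processing and queueing delay on top of serialization, which is how the gap reached ~10x.

```python
# Back-of-the-envelope: serialization delay that a store-and-forward switch
# must pay before it can start transmitting a frame.
def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock a frame onto a link, in nanoseconds."""
    return frame_bytes * 8 / link_gbps  # bits divided by Gb/s yields ns

CUT_THROUGH_NS = 500  # approximate 7124 L2/L3 port-to-port latency

for frame in (64, 512, 1500):  # frame sizes in bytes
    store_and_forward = serialization_ns(frame, 10) + CUT_THROUGH_NS
    print(f"{frame:>5}B frame @10G: store-and-forward >= {store_and_forward:.0f} ns "
          f"vs cut-through ~{CUT_THROUGH_NS} ns")

# A 1500-byte frame takes 1200 ns just to serialize at 10 Gb/s, so even before
# any processing overhead a store-and-forward device is several times slower
# than a ~500 ns cut-through switch for large frames.
```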
Every CIO needs to adopt a cloud strategy, typically moving some e-commerce workloads to the public cloud. Yet the migration path for the modern enterprise can be constrained by legacy barriers. With mission-critical applications running on everything from legacy mainframes to helpdesk systems to IoT devices, how does one get started, and what does this entail?
The reality for any enterprise whose core business relies on corporate-owned technology infrastructure with strict ownership of critical assets is that it operates with many constraints. Cloudification and a multi-cloud strategy require a more pragmatic and systematic approach, balancing workloads between the cloud and on-premises enterprise networks.
Arista is trusted to power the world’s largest data centers and cloud providers based on the quality, support and performance of its products. The experience gained from working with over 7,000 customers has helped redefine software-defined networking, and many of our customers have asked us how we plan to address security. To us, security must be a holistic and inherent part of the network. Our customers have been subjected to the fatigue of point products, reactive solutions, proprietary vendor lock-in and, most of all, the operational silos created between CloudOps, NetOps, DevOps and SecOps. By leveraging cloud principles, Arista’s cloud network architectures bring these disparate operations together to secure all digital assets, from client to IoT, campus, data center and cloud, protecting them from threats, theft and compromise.