In the previous blog post in this series, we explored some of the reasons IP uses per-interface (and not per-node) IP addresses. That model worked well when routers had few interfaces and mostly routed between a few LAN segments (often large subnets of a Class A network assigned to an academic institution) and a few WAN uplinks. In those days, the WAN networks were often implemented with non-IP technologies like Frame Relay or ATM (with an occasional pinch of X.25).
The first sign of trouble in paradise probably appeared when someone wanted to use a dial-up modem to connect to a LAN segment. What subnet (and IP address) do you assign to the dial-up connection, and how do you tell the other end what to use? And what do you do when you want a bank of modems with dozens of people dialing in?
Over a decade ago, we entered the high-speed switching market with our low-latency switches. Our fastest switch at the time, the 7124, could forward L2/L3 traffic in 500ns, a big improvement over store-and-forward switches with 10x higher latency. Combined with Arista EOS®, our products were well received by financial trading and HPC customers.
There are halls and corridors in Cloudflare engineering, dangerous places for innocent wanderers, filled with wild project ideas, experiments that we should do, and extremely convincing proponents. A couple of months ago, John Graham-Cumming, our CTO, bumped into me in one of those places and asked: "What if we ported Doom multiplayer to work with our edge network?". He fatally nerd-sniped me.
Aside by John: I nerd-sniped him because I wanted to show how Cloudflare Workers and Durable Objects are a new architectural paradigm where, rather than choosing between two places to write code (the client, the browser or app, and the server, perhaps in a cloud provider availability zone), there’s a third way: put code on the edge.
Writing code that runs on a client (such as JavaScript in a browser or a native app on a phone) has advantages. Because the code runs close to the end-user it can be highly interactive, with almost no latency, since it's running on the device itself. But client-side code has security problems: it's literally in the hands of the end-user and thus can be reverse engineered or modified. And client-side code can be slow to update as it […]
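To make John's "third way" concrete, here is a minimal sketch of edge-resident code in the Cloudflare Workers module style. The route and response text are purely illustrative assumptions, not taken from the Doom port described in the post:

```javascript
// Minimal sketch of code that lives on the edge rather than on the client
// or in an origin server. The "/ping" route and its response are hypothetical.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    // This logic runs at the edge location nearest the client: interactive
    // like client-side code, but not reverse-engineerable or modifiable by
    // the end-user, and updated globally without an app-store release cycle.
    if (url.pathname === "/ping") {
      return new Response("pong from the edge");
    }
    return new Response("not found", { status: 404 });
  },
};

export default worker;
```

The handler shape is the standard Workers module entry point; everything latency- or trust-sensitive (say, multiplayer game state) would sit in code like this instead of in the browser or a distant availability zone.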
Internet resilience is the ability of a network to maintain an acceptable level of service at all times. The Internet plays a critical role in society and the COVID-19 pandemic has reinforced the importance of reliable and stable Internet connectivity. However, not all countries have Internet infrastructure that is robust enough to provide an acceptable […]
The post MIRA Project to Provide Overview of Internet’s Resiliency in Africa appeared first on Internet Society.
Tekion builds cloud-based applications for the automotive retail industry. The company uses Alkira’s cloud networking product to connect a network of automobile dealerships to an array of cloud-hosted services. In this video, the Packet Pushers’ Ethan Banks talks to Tekion’s Jamie Fox. They discuss how the Alkira platform enables Tekion to leverage multiple clouds with […]
The post Born In The Cloud Enterprise Case Study With Tekion – Packet Pushers LiveStream With Alkira (Video 2) appeared first on Packet Pushers.
When I wrote about the (non)impact of switching latency, I was (also) thinking about packet bursts jamming core data center fabric links when I mentioned the elephants in the room… but when I started writing about them, I realized they might be yet another red herring (together with the supposed need for large buffers in data center switches).
Here’s how it looks from my ignorant perspective when considering a simple leaf-and-spine network like the one in the following diagram. Please feel free to set me straight; I honestly can’t figure out where I went astray.
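A back-of-the-envelope calculation shows why bursts and buffers may matter less than feared. All numbers below are illustrative assumptions (not from the post): several leaf ports bursting at line rate toward a single same-speed egress port, with the shared buffer absorbing the excess:

```javascript
// Sketch: how long a switch buffer can absorb a worst-case incast burst.
// With `senders` ports transmitting at line rate into one egress port of the
// same speed, the buffer fills at (senders - 1) x link rate.
function burstAbsorptionMicros(bufferBytes, senders, linkGbps) {
  const fillRateBytesPerSec = ((senders - 1) * linkGbps * 1e9) / 8;
  return (bufferBytes / fillRateBytesPerSec) * 1e6; // microseconds
}

// Hypothetical example: a 12 MB shared buffer with 4 ports bursting at
// 100 Gbps into one egress port absorbs the burst for only ~320 microseconds.
const micros = burstAbsorptionMicros(12e6, 4, 100);
```

The point of the sketch: at modern link speeds even a multi-megabyte buffer rides out only very short bursts, so sustained congestion has to be handled by the transport protocol (or the fabric design), not by buffering alone.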
Today we’re talking about automating network troubleshooting. We’re sponsored by PathSolutions, maker of the TotalView network monitoring software. TotalView pulls in network and device data and then runs it through a heuristics engine to identify problems such as cabling faults, QoS misconfigurations, VLAN tagging faults, and others. The engine surfaces issues automatically to help network engineers identify and resolve problems. Our guest is Tim Titus, CTO at PathSolutions.
The post Tech Bytes: Automating Network Troubleshooting With PathSolutions (Sponsored) appeared first on Packet Pushers.