Author Archives: Russ
Google fascinates network engineers because of the sheer scale of their operations, and their obvious influence over the way networks are built and operated. In this episode of the History of Networking, Richard Hay joins Donald Sharp and Russ White to talk about some past designs and stories of failure and success in one of the world’s largest operating networks.
My next live webinar on Safari Books is How the Internet Really Works. This was originally a three-hour webinar; I’ve added enough material that it now runs four hours… I’m working with Pearson to add further material and break this into two three-hour webinars, so this will probably be the last four-hour version of this I teach.
Token Ring, in its original form, was—on paper—a very capable physical transport. For instance, because a station transmits only while holding the token, there are no collisions, and the ring can make use of more than 90% of the available bandwidth. In contrast, Ethernet systems, particularly early Ethernet systems using a true “single wire” broadcast domain, cannot achieve nearly that kind of utilization—hence “every device on its own switch port.” The Fiber Distributed Data Interface, or FDDI, is like Token Ring in many ways. For instance, FDDI uses a token to determine when a station connected to the ring can transmit, enabling efficient use of bandwidth.
And yet, Ethernet is the common carrier of almost all wired networks today, and even wireless standards mimic Ethernet’s characteristics. What happened to FDDI?
FDDI is a 100Mbit/second optical standard based on token bus, which used an improved, timed-token version of the token passing scheme developed for token ring networks. This physical layer protocol standard had a number of unique features. It was a 100Mbit/second standard at a time when Ethernet offered a top speed of 10Mbits/second. Since FDDI could operate over single-mode fiber, it could support distances of 200 kilometers, or around 120 miles. Because it was …
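To see why a timed-token scheme can push utilization so high, here is a toy model in Python: stations transmit only while holding the token, so the only overhead in a rotation is the token’s trip around an idle ring. The numbers and the simplified rotation cap are assumptions for illustration, not values taken from the FDDI standard.

```python
# Toy model of a timed-token ring, in the spirit of FDDI's timed token
# protocol. All names and numbers are illustrative assumptions.

RING_LATENCY_US = 50        # time for the token to circle an idle ring
STATIONS = 20               # stations attached to the ring
TTRT_US = 8000              # target token rotation time (the FDDI-style knob)
LINK_RATE_MBPS = 100        # FDDI line rate

def utilization(frames_per_station: int, frame_time_us: float) -> float:
    """Fraction of time the ring carries data rather than the token.

    Each station may hold the token long enough to send its queued
    frames (bounded by the TTRT in real FDDI); the only "wasted" time
    is the token's trip around the ring.
    """
    data_time = STATIONS * frames_per_station * frame_time_us
    # Cap each rotation at the target token rotation time.
    rotation = min(data_time + RING_LATENCY_US, TTRT_US)
    return (rotation - RING_LATENCY_US) / rotation

# A 1500-byte frame at 100 Mb/s takes 120 microseconds to transmit.
frame_time = 1500 * 8 / LINK_RATE_MBPS   # = 120 us
print(f"Utilization with 3 frames/station: {utilization(3, frame_time):.1%}")
```

With even a few frames queued per station, this model lands well above 90% utilization—there is simply no collision or backoff time to waste, which is the property the paragraph above describes.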
Safari Books Online just published my latest LiveLesson—eight hours of technical yet fundamental training on network transports. I co-authored this one with Ethan Banks (yes, that Ethan Banks). I consider this video series a complement to the transport chapters in Computer Networking Problems and Solutions.
If you ever wanted to really understand transport protocols … this is your chance.
In the argument between OSPF and BGP in the data center fabric over at Justin’s blog, I am decidedly in the camp of IS-IS. Rather than lay my reasons out here, however (a topic for another blog post?), I want to focus on something else Justin said that I think is incredibly important for network engineers to understand.
I think whiteboards are the most important tool for network design currently available, which makes me sad. I wish that wasn’t true, I want much better tools. I can’t even tell you the number of disasters averted by 2-3 great network engineers arguing over a whiteboard.
I remember—way back—when I was working on the problems around making a link-state protocol work well in a Mobile Ad Hoc Network (MANET), we had two competing solutions presented to the IETF. The first solution was primarily based on whiteboarding through various options and coming up with one that should reduce flooding to an acceptable level. The second was less optimal on the whiteboard but supported by simulations showing it should reduce flooding more effectively.
Which solution “won?” I don’t know what “winning” might mean here, but the solution designed on the whiteboard has …
Chris Romeo is a famous application security expert who has spent the last several years building a consulting and training company called Security Journey. Chris joins Tom and Russ to talk about the state of security and what network engineers need to know about security from an application perspective.
Dawit Bekele began his journey with the Internet while at college—but on returning to Africa, he discovered there was very little connectivity. While he was not involved in the initial stages of engineering the Internet in Africa, he began as an early user and proponent of connecting his home continent, and is now part of the Internet Society, helping to grow connectivity.
A short while back, the Linux Foundation (Networking), or LFN, published a white paper about the open source networking ecosystem. Rather than review the paper, or try to find a single theme, I decided to just write down “random thoughts” as I read through it. This is the (rather experimental) result.
The paper lists five goals of the project, which can be reduced to three: reducing costs, increasing operators’ control over the network, and increasing security (by increasing code inspection). One interesting bit is the pairing of cost reduction with increased control. Increasing control over a network generally means treating it less like an opaque box and more like a disaggregated set of components, each of which can be tuned in some way to improve the fit between network services, network performance, and business requirements. The less a network is an opaque box, however, the more time and effort is required to manage it. This only makes sense—tuning a network to perform better requires time and talent, both of which cost money.
The offsetting point here is that disaggregation and using open source can save money—although in my experience it never does. Again, running disaggregated software and hardware requires time and talent …
The fundamental technologies for creating digital clones of people — text, audio, and video that sound and look like a specific person — have rapidly advanced and are within striking distance of a future in which digital avatars can sound and act like specific people, Tamaghna Basu, co-founder and chief technology officer of neoEYED, a …
For those not following the current state of the ITU, a proposal has been put forward to (pretty much) reorganize the standards body around “New IP.” Don’t be confused by the name—it’s exactly what it sounds like: a proposal for an entirely new set of transport protocols to replace the current IPv4/IPv6/TCP/QUIC/routing protocol stack that nearly 100% of the networks in operation today run on. Ignoring, for the moment, the problem of replacing the entire IP infrastructure, what can we learn from this proposal?
What I’d like to focus on is deterministic networking. Way back in the days when I was in the USAF, one of the various projects I worked on was called PCI. The PCI network was a new system designed to unify the personnel processing across the entire USAF, so there were systems (Z100s, 200s, and 250s) to be installed in every location across the base where personnel actions were managed. Since the only wiring we had on base at the time was an old Strowger mainframe, mechanical crossbars at a dozen or so BDFs, and varying sizes of punch-downs at BDFs and IDFs, everything for this system needed to be punched- or wrapped-down as physical circuits.
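For contrast with the statistical multiplexing IP relies on, here is a toy sketch of what “deterministic” service means: each flow owns a fixed slot in every transmission cycle, much the way those punched-down circuits owned their copper. The slot count and flow names are made up for illustration; this is not how any actual deterministic networking standard schedules traffic.

```python
# Toy contrast between a deterministic, circuit-like schedule and
# statistical sharing. Everything here is an illustrative assumption.

CYCLE_SLOTS = 8                      # fixed slots per transmission cycle

def deterministic_schedule(flows: list[str]) -> list[str]:
    """Each flow owns the same slot in every cycle, like a punched-down
    circuit: its latency and bandwidth are fixed regardless of load."""
    slots = ["idle"] * CYCLE_SLOTS
    for i, flow in enumerate(flows[:CYCLE_SLOTS]):
        slots[i] = flow
    return slots

print(deterministic_schedule(["voice", "telemetry"]))
# -> ['voice', 'telemetry', 'idle', 'idle', ...]
# Unused slots are simply wasted—the classic cost of determinism
# versus statistical sharing of the link.
```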
In this episode of the Hedge, Scott Burleigh joins Alvaro Retana and Russ White to discuss the Bundle Protocol, which is designed to support delay-tolerant data delivery over intermittently available or “stressed” networks. Examples include interplanetary communication, email transmission over networks where access points move around (carrying data with them), and similar situations. You can learn more about delay tolerant networking here, and read the most recent draft specification here.
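The core mechanism is store-and-forward with custody: a node keeps bundles queued—possibly for hours or days—until a contact to the next hop comes up, rather than dropping traffic because no end-to-end path exists. A minimal sketch of that idea follows; the class names and fields are hypothetical and do not reflect the Bundle Protocol’s actual wire format.

```python
# Minimal sketch of delay-tolerant store-and-forward, the core idea
# behind the Bundle Protocol. Classes and fields are illustrative
# assumptions, not the BPv7 wire format.

from collections import deque
from dataclasses import dataclass

@dataclass
class Bundle:
    source: str
    destination: str
    payload: bytes
    lifetime_s: int      # bundles eventually expire rather than wait forever

class DTNNode:
    def __init__(self, name: str):
        self.name = name
        self.store = deque()          # custody queue (persistent in practice)

    def receive(self, bundle: Bundle) -> None:
        # Take custody: keep the bundle even if no path exists yet.
        self.store.append(bundle)

    def contact(self, next_hop: "DTNNode") -> None:
        # A link has come up (a satellite pass, a bus carrying a radio...);
        # forward everything queued toward the next hop.
        while self.store:
            next_hop.receive(self.store.popleft())

# Usage: a ground station queues data, then forwards when a relay appears.
ground, relay = DTNNode("ground"), DTNNode("relay")
ground.receive(Bundle("ground", "rover", b"telemetry request", 86400))
ground.contact(relay)                 # intermittent link becomes available
print(len(relay.store))               # -> 1
```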
GRE was the first tunneling protocol ever designed and deployed—and although it has largely been overtaken by VXLAN and other tunneling protocols, it is still in widespread use today. For this episode of the History of Networking, Stan Hanks, the inventor of GRE—and hence the inventor of the concept of tunneling in packet-switched networks—joins us to describe how and why GRE tunneling was invented.
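For the curious, the GRE shim itself is tiny. The sketch below builds the basic four-byte header from RFC 2784—a flags/version word followed by the passenger packet’s EtherType—and prepends it to a payload. The outer delivery IP header, which a real tunnel endpoint would add next, is omitted here.

```python
# Minimal sketch of GRE encapsulation (RFC 2784): the tunnel prepends
# a 4-byte GRE header to the passenger packet, then hands the result
# to the delivery protocol (typically IPv4). Only the GRE shim is shown.

import struct

ETHERTYPE_IPV4 = 0x0800   # protocol type of the passenger packet

def gre_encapsulate(payload: bytes, proto: int = ETHERTYPE_IPV4) -> bytes:
    # First 16 bits: checksum-present bit (0), reserved bits, version 0.
    flags_and_version = 0x0000
    header = struct.pack("!HH", flags_and_version, proto)
    return header + payload

def gre_decapsulate(packet: bytes) -> tuple[int, bytes]:
    flags_and_version, proto = struct.unpack("!HH", packet[:4])
    if flags_and_version & 0x8000:
        raise ValueError("checksum-present GRE not handled in this sketch")
    return proto, packet[4:]

inner = b"\x45\x00..."                     # stand-in for a full IP packet
tunneled = gre_encapsulate(inner)
proto, restored = gre_decapsulate(tunneled)
assert proto == ETHERTYPE_IPV4 and restored == inner
```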
I think we can all agree networks have become too complex—and this complexity is a result of the network often becoming the “final dumping ground” for every problem that seems like it might impact more than one system, or for anything no one else can figure out how to solve. It’s rather humorous, in fact, to see a lot of server and application folks sitting around saying “this networking stuff is so complex—let’s design something better and simpler in our bespoke overlay…” and then falling into the same complexity traps as they start facing the real problems of policy and scale.
This complexity cannot be “automated away.” It can be smeared over with intent, but we’re going to find—soon enough—that smearing intent on top of complexity just makes for a dirty kitchen and a sub-standard meal.
While this is always “top of mind” in my world, what brings it to mind this particular week is a paper by Jen Rexford et al. (I know Jen isn’t in the lead position in the author list, but still…) called A Clean Slate 4D Approach to Network Control and Management. Of course, I can appreciate the paper in part because I agree with a …
While many network engineers think about getting a certification, not many think about going after a degree. Is there value in getting a degree for the network engineer? If so, what is it? What kinds of things do you learn in a degree program for network engineering? Eric Osterweil, a professor at George Mason University, joins Jeremy Filliben and Russ White on this episode of the Hedge to consider degrees for network engineers.