Author Archives: Russ
In the argument between OSPF and BGP in the data center fabric over at Justin’s blog, I am decidedly in the camp of IS-IS. Rather than lay my reasons out here, however (a topic for another blog post?), I want to focus on something else Justin said that I think is incredibly important for network engineers to understand.
I think whiteboards are the most important tool for network design currently available, which makes me sad. I wish that wasn’t true, I want much better tools. I can’t even tell you the number of disasters averted by 2-3 great network engineers arguing over a whiteboard.
I remember—way back—when I was working on the problems around making a link-state protocol work well in a Mobile Ad Hoc Network (MANET), we had two competing solutions presented to the IETF. The first solution was primarily based on whiteboarding through various options and coming up with one that should reduce flooding to an acceptable level. The second was less optimal on the whiteboard but supported by simulations showing it should reduce flooding more effectively.
Which solution “won”? I don’t know what “winning” might mean here, but the solution designed on the whiteboard has Continue reading
Chris Romeo is a famous application security expert who has spent the last several years building a consulting and training company called Security Journey. Chris joins Tom and Russ to talk about the state of security and what network engineers need to know about security from an application perspective.
Dawit Bekele began his journey with the Internet while at college—but on returning to Africa, he discovered there was very little connectivity. While he was not involved in the initial stages of engineering the Internet in Africa, he began as an early user and proponent of connecting his home continent, and is now part of the Internet Society, helping to grow connectivity.
A short while back, the Linux Foundation (Networking), or LFN, published a white paper about the open source networking ecosystem. Rather than review the paper, or try to find a single theme, I decided to just write down “random thoughts” as I read through it. This is the (rather experimental) result.
The paper lists five goals of the project, which can be reduced to three: reducing costs, increasing operators’ control over the network, and increasing security (by increasing code inspection). One interesting bit is the pairing of cost reduction with increasing control. Increasing control over a network generally means treating it less like an opaque box and more like a disaggregated set of components, each of which can be tuned in some way to improve the fit between network services, network performance, and business requirements. The less a network is an opaque box, however, the more time and effort it requires to manage. This only makes sense—tuning a network to perform better requires time and talent, both of which cost money.
The offsetting point here is that disaggregation and using open source can save money—although in my experience it never does. Again, running disaggregated software and hardware requires time and talent, Continue reading
The fundamental technologies for creating digital clones of people — text, audio, and video that sound and look like a specific person — have rapidly advanced and are within striking distance of a future in which digital avatars can sound and act like specific people, Tamaghna Basu, co-founder and chief technology officer of neoEYED, a Continue reading
For those not following the current state of the ITU, a proposal has been put forward to (pretty much) reorganize the standards body around “New IP.” Don’t be confused by the name—it’s exactly what it sounds like, a proposal for an entirely new set of transport protocols to replace the current IPv4/IPv6/TCP/QUIC/routing protocol stack that nearly 100% of the networks in operation today run on. Ignoring, for the moment, the problem of replacing the entire IP infrastructure, what can we learn from this proposal?
What I’d like to focus on is deterministic networking. Way back in the days when I was in the USAF, one of the various projects I worked on was called PCI. The PCI network was a new system designed to unify the personnel processing across the entire USAF, so there were systems (Z100s, 200s, and 250s) to be installed in every location across the base where personnel actions were managed. Since the only wiring we had on base at the time was an old Strowger mainframe, mechanical crossbars at a dozen or so BDFs, and varying sizes of punch-downs at BDFs and IDFs, everything for this system needed to be punched- or wrapped-down as physical circuits.
In this episode of the Hedge, Scott Burleigh joins Alvaro Retana and Russ White to discuss the Bundle Protocol, which is designed to support delay tolerant data delivery over intermittently available or “stressed” networks. Examples include interstellar communication, email transmission over networks where access points move around (carrying data with them), etc. You can learn more about delay tolerant networking here, and read the most recent draft specification here.
GRE was the first tunneling protocol ever designed and deployed—and although it has largely been overtaken by VXLAN and other tunneling protocols, it is still in widespread use today. For this episode of the History of Networking, Stan Hanks, the inventor of GRE—and hence the inventor of the concept of tunneling in packet-switched networks—joins us to describe how and why GRE tunneling was invented.
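To give a sense of how small the mechanism Stan describes actually is on the wire, here is a minimal sketch (in Python, assuming the basic RFC 2784 header with no checksum or key): the entire tunnel overhead is a four-byte GRE header prepended to the original packet, which is then carried inside a new outer IP header using protocol number 47. The usage at the bottom is purely hypothetical, just to show the framing.

```python
# Minimal sketch of GRE encapsulation per RFC 2784 (no checksum, no key):
# a four-byte GRE header is prepended to the original packet, and the
# result is carried inside a new outer IP header (IP protocol 47)
# addressed to the far tunnel endpoint.
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType of the encapsulated payload (IPv4 here)

def gre_encapsulate(inner_packet: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    """Prepend a basic GRE header to an inner packet."""
    flags_and_version = 0x0000  # C bit clear, reserved bits zero, version 0
    gre_header = struct.pack("!HH", flags_and_version, proto)
    # The outer IP header is added by the sending stack or tunnel interface.
    return gre_header + inner_packet

# Hypothetical usage: any complete IPv4 packet rides inside unchanged.
inner = bytes(20)                     # placeholder for an original IPv4 packet
tunneled = gre_encapsulate(inner)
print(len(tunneled))                  # 4 bytes of GRE overhead plus the inner packet
```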
I think we can all agree networks have become too complex—and this complexity is a result of the network often becoming the “final dumping ground” for every problem that seems like it might impact more than one system, or everything no one else can figure out how to solve. It’s rather humorous, in fact, to see a lot of server and application folks sitting around saying “this networking stuff is so complex—let’s design something better and simpler in our bespoke overlay…” and then falling into the same complexity traps as they start facing the real problems of policy and scale.
This complexity cannot be “automated away.” It can be smeared over with intent, but we’re going to find—soon enough—that smearing intent on top of complexity just makes for a dirty kitchen and a sub-standard meal.
While this is always “top of mind” in my world, what brings it to mind this particular week is a paper by Jen Rexford et al. (I know Jen isn’t in the lead position in the author list, but still…) called A Clean Slate 4D Approach to Network Control and Management. Of course, I can appreciate the paper in part because I agree with a Continue reading
While many network engineers think about getting a certification, not many think about going after a degree. Is there value in getting a degree for the network engineer? If so, what is it? What kinds of things do you learn in a degree program for network engineering? Eric Osterweil, a professor at George Mason University, joins Jeremy Filliben and Russ White on this episode of the Hedge to consider degrees for network engineers.
Should the network be dumb or smart? Network vendors have recently focused on making the network as smart as possible because there is a definite feeling that dumb networks are quickly becoming a commodity—and it’s hard to see where and how steep profit margins can be maintained in a commodifying market. Software vendors, on the other hand, have been encroaching on the network space by “building in” overlay network capabilities, especially in virtualization products. VMware and Docker come immediately to mind; both are either able to, or working towards, running on a plain IP fabric, reducing the number of services provided by the network to a minimum level (of course, I’d have a lot more confidence in these overlay systems if they were a lot smarter about routing … but I’ll leave that alone for the moment).
How can this question be answered? One way is to think through what sorts of things need to be done in processing packets, and then think through where it makes most sense to do those things. Another way is to measure the accuracy or speed at which some of these “packet processing things” can be done so you can decide in a more Continue reading
Blame it on the webinar yesterday… I’m late again.
However, there have been multiple reported attempts to exploit the transfer market as part of malicious operations — such as the hijacking of dormant address space and spamming — which, judging by exchanges between operators in mailing Continue reading
Certifications are a perennial topic (like weeds, perhaps) in the world of network engineering. While we often ask whether you should get a certification or a degree, or whether you should get a certification at all, we don’t often ask—now that you have the certification, how long should you keep it? Do you keep recertifying “forever,” or is there a limit? Join us as Mike Bolitho, Eyvonne Sharp, Tom Ammon, and Russ White discuss when you should give up on that certification.
Scott Bradner was given his first email address in the 1970s, and his workstation was the gateway for all Internet connectivity at Harvard for some time. Join Donald Sharp and Russ White as Scott recounts the early days of networking at Harvard, including the installation of the first Cisco router, the origins of comparative performance testing and Interop, and the origins of the SHOULD, MUST, and MAY as they are used in IETF standards today.