I’ve started publishing in the Public Discourse on topics of technology and culture; the following is the first article they’ve accepted. Note that some readers may find the contents a little controversial.
Please note that I do not necessarily agree with everything contained in the articles linked here, nor do I necessarily endorse any of the sites I link to. I gather these links because I think they present interesting points of view worth hearing.
Mark Zuckerberg, Sundar Pichai, and Jack Dorsey will appear before Congress today, as each has multiple times since last October, to testify about the spread of misinformation (gossip, mistakes, and lies). Continue reading
On April 6 at 9 am PDT I’m moderating the second part of a discussion on the evolution of wide area networks. This time we’re going to focus more on the future than the past, relying on our guests, Jeff Tantsura, Brooks Westbrook, and Nick Buraglio, to answer questions about putting new WAN technologies to use and how to choose between private and public wide area options.
When the interests of the end user, the operator, and the vendor come into conflict, who should protocol developers favor? According to RFC 8890, the answer should be the needs and desires of the end user. As the RFC puts it:
Mark Nottingham joins Alvaro Retana and Russ White on this episode of the Hedge to discuss why the Internet is for end users.
Why are networks so insecure?
One reason is we don’t take network security seriously. We just don’t think of the network as a serious target of attack. Or we think of security as a problem “over there,” something that exists in the application realm and needs to be solved by application developers. Or we think of the consequences of a network security breach as “well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we’re already taking care of our security issues.” Or we put our trust in the firewall, which sits there like some magic box solving all our problems.
The problem is that none of this is true. In any system where overall security is important, defense-in-depth is the key to building a secure system. No single part of the system bears the “primary responsibility” for “security.” The network is certainly a part of any defense-in-depth scheme that is going to work.
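As a rough illustration of the principle, here is a minimal Python sketch of defense-in-depth; all of the checks and names are hypothetical, made up purely to show the shape of the idea: several independent layers each inspect a request, and any single layer can reject it.

```python
# A minimal sketch of defense-in-depth: every layer must pass, and any
# single layer can block a request. All checks here are hypothetical.

def firewall_check(request):
    # Network edge: only accept traffic from expected sources.
    return request.get("src") in {"trusted-peer", "mgmt-station"}

def transport_check(request):
    # Protocol layer: require an authenticated session.
    return request.get("authenticated", False)

def application_check(request):
    # Application layer: the app still validates its own inputs.
    return request.get("payload_valid", False)

def allow(request):
    # No single layer bears "primary responsibility" for security.
    return all(check(request) for check in
               (firewall_check, transport_check, application_check))

print(allow({"src": "trusted-peer", "authenticated": True,
             "payload_valid": True}))   # True
print(allow({"src": "trusted-peer", "authenticated": False,
             "payload_valid": True}))   # False: one failing layer is enough
```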
Which means network protocols need to be secure, at least in some sense, as well. I don’t mean “secure” in the sense of privacy—routes are not (generally) personally identifiable information (there are always Continue reading
Decision making, especially in large organizations, fails in many interesting ways. Understanding these failure modes can help us cope with seemingly difficult situations and learn how to make decisions better. On this episode of the Hedge, Federico Lucifredi, Ethan Banks, and Russ White discuss Federico’s thoughts on developing a taxonomy of indecision. You can find his presentation on this topic here.
While those working in the network engineering world are quite familiar with the expression “it is always something!”, defining this (often exasperated) declaration is a little trickier. The wise folks in the IETF, however, have provided a definition in RFC 1925. Rule 7, “it is always something,” is quickly followed by a corollary, rule 7a, which says: “Good, Fast, Cheap: Pick any two (you can’t have all three).”
You can either quickly build a network that works well and is therefore expensive, or quickly build a cheap network that does not work well, or… Well, you get the idea. There are many other instances of these sorts of three-way tradeoffs in the real world, such as the (in)famous CAP theorem, which states a distributed database can provide at most two of three properties: consistency, availability, and partition tolerance. Choosing availability and partition tolerance gives you eventual consistency, bringing problems ranging from microloops to surprise package deliveries (when you thought you ordered one thing, but another was placed in your cart because of a database inconsistency). Another form of this three-way tradeoff is the much less famous, but equally real, state, optimization, and surface tradeoff in network design.
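To make the “surprise package” failure mode concrete, here is a toy Python sketch of eventual consistency producing a stale read; it is a deliberate oversimplification (no real database behaves this simply), with a write landing on one replica while a read hits another before replication catches up.

```python
# A toy model of eventual consistency: two replicas of a shopping-cart
# record, with replication lagging behind writes. Reading from the stale
# replica returns the wrong item -- the "surprise package" problem.

replica_a = {"cart": "red shoes"}
replica_b = {"cart": "red shoes"}

def write(key, value):
    # The write lands on replica A immediately...
    replica_a[key] = value
    # ...but replication to replica B lags (here it never happens,
    # modeling the window of inconsistency).

def read(key):
    # Reads are load-balanced; this one happens to hit the stale replica.
    return replica_b[key]

write("cart", "blue shoes")   # the user changes their order
print(read("cart"))           # still "red shoes": a stale read
```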
It is possible, however, to build a system Continue reading
Jack of all trades, master of none.
This singular saying—a misquote of Benjamin Franklin (more on this in a moment)—is the defining statement of our time. An alternative form might be the fox knows many small things, but the hedgehog knows one big thing.
The rules for success in the modern marketplace, particularly in the technical world, are simple: start early, focus on a single thing, and practice hard.
But when I look around, I find these rules rarely define actual success. Consider my life. I started out with three different interests, taking up jazz piano lessons when I was twelve and continuing with music through high school, college, and for many years after. At the same time, I was learning electronics—just about everyone in my family is in electronic engineering (or computers, when those came along) in one way or another.
I worked on airfield electronics for a few years in the US Air Force (one of the reasons I tend to be calm is I’ve faced death up close and personal multiple times, an experience that tends to center your mind), including radar, radio, and instrument landing systems. Besides these two, I was highly interested in art and illustration, getting Continue reading
The global pandemic has sent companies scrambling to support lots of new remote workers, which has meant changes in processes, application development, application deployment, connectivity, and even support. Mike Parks joins Eyvonne Sharp and Russ White to discuss these changes on this episode of the Hedge.
While identity is not directly a networking technology, it is closely adjacent to networking and a critical part of the Internet’s architecture. In this episode of the History of Networking, Pamela Dingle joins Donald Sharpe and Russ White to discuss the humble beginnings of modern identity systems, including NDS and StreetTalk.
What percentage of business-impacting application outages are caused by networks? According to a recent survey by the Uptime Institute, 29% of the roughly 300 operators surveyed have experienced network-related outages in the last three years—the highest percentage among causes of IT failure across the period.
A secondary question on the survey attempted to “dig a little deeper” to understand the reasons for network failure; the chart below shows the result.
We can be almost certain the third-party failures, if the providers were queried, would break down along the same lines. Is there a pattern among the reasons for failure?
Configuration change—while this could be somewhat managed through automation, these kinds of failures are more generally the result of complexity. Firmware and software failures? The more complex a piece of software, the more likely it is to have mission-impacting errors of some kind—so again, complexity related. Corrupted policies and routing tables are also complexity related. The only item among the top preventable causes that does not seem, at first, to relate directly to complexity is network overload and/or congestion problems. Many of these cases, however, might also be complexity related.
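As one example of how automation might catch configuration-change failures before they become outages, here is a minimal Python sketch of pre-deployment validation; the invariants and configuration format are hypothetical, for illustration only.

```python
# A minimal sketch of pre-deployment validation: check an intended
# configuration change against simple invariants before pushing it to
# any device. The invariants and config format are hypothetical.

def validate_change(config):
    errors = []
    # Invariant: every BGP neighbor must carry an authentication key.
    for neighbor in config.get("bgp_neighbors", []):
        if not neighbor.get("auth_key"):
            errors.append(f"neighbor {neighbor['ip']} has no auth key")
    # Invariant: the management ACL must never be removed.
    if "mgmt-acl" not in config.get("acls", []):
        errors.append("management ACL missing from change")
    return errors

change = {
    "bgp_neighbors": [{"ip": "192.0.2.1", "auth_key": None}],
    "acls": ["mgmt-acl"],
}

problems = validate_change(change)
if problems:
    print("change rejected:", problems)   # caught before deployment
```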
The Uptime Institute draws this same lesson, though Continue reading
I’ll be joining Jeff Tantsura, Nick Buraglio, and Brooks Westbrook for a roundtable on March 16 at 9 am PST (that’s tomorrow if you’re reading this the day it publishes) about the development of wide area networking technologies up until today. This is the first part of a two-part series on changes in the wide area network.
Crossing from the domain of test pilots to the domain of network engineering might seem like a large leap indeed—but user interfaces and their tradeoffs are common across physical and virtual spaces. Join Brian Keys, Eyvonne Sharp, Tom Ammon, and Russ White as they start with user interfaces and move into a wider discussion around attitudes and beliefs in the network engineering world.
Many within the network engineering community have heard of the OSI seven-layer model, and some may have heard of the Recursive Internet Architecture (RINA) model. The truth is, however, that while protocol designers may talk about these things and network designers study them, very few networks today are built using any of these models. What is often used instead is what might be called the Infinitely Layered Functional Indirection (ILFI) model of network engineering. In this model, nothing is solved at a particular layer of the network if it can be moved to another layer, whether successfully or not.
For instance, Ethernet is the physical and data link layer of choice over almost all types of physical medium, including optical and copper. No new type of physical transport layer (other than wireless) can succeed unless it can be described as “Ethernet” in some regard or another, much like almost no new networking software can succeed unless it has a Command Line Interface (CLI) similar to the one a particular vendor developed some twenty years ago. It’s not that these things are necessarily better, but they are well-known.
Ethernet, however, goes far beyond providing physical layer connectivity. Because many applications rely Continue reading
A while back, I was sitting in a meeting where the presenter described switching from a “traditional, hierarchical data center fabric” to a spine-and-leaf (while drawing CLOS, in all capital letters, on the whiteboard). He pointed out that the spine-and-leaf design is simpler because it only has two tiers rather than three.
There is so much wrong with this I almost winced in physical pain. Traditional hierarchical designs are not fabrics. Spine-and-leaf fabrics are not CLOS, but Clos, fabrics. Clos fabrics have three stages, not two—even if we draw them “folded” so you only see two apparent levels to the fabric. In fact, spine-and-leaf fabrics always have an odd number of stages, and they are stages, not tiers.
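To see why the “two-tier” drawing hides a stage, here is a toy Python sketch (topology and names are illustrative only) counting the stages a packet crosses between hosts on two different leaves of a folded Clos fabric.

```python
# A toy folded Clos: two apparent levels (leaf and spine), but a packet
# crossing the fabric still passes through three stages. Topology and
# names are illustrative only.

def path(src_leaf, dst_leaf, spine):
    # Ingress leaf (stage 1) -> spine (stage 2) -> egress leaf (stage 3).
    if src_leaf == dst_leaf:
        return [src_leaf]
    return [src_leaf, spine, dst_leaf]

p = path("leaf1", "leaf3", "spine2")
print(p, "->", len(p), "stages")
# ['leaf1', 'spine2', 'leaf3'] -> 3 stages
```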
More recently, I heard someone talking about an operating system that was built using microservices. I thought—“that would be a neat trick.” To build something with microservices does not just mean a piece of software using modules—this would be modular application (or operating system) design. Microservices architectures break the application up into the most basic components possible and then scale each kind of component out (rather than up) by spinning up new copies of each service as needed. I cannot imagine Continue reading
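To make the distinction concrete, here is a toy Python sketch of the scale-out idea; it is entirely illustrative (not any real orchestration API): when demand for one kind of component rises, you spin up more copies of just that service rather than making any one copy bigger.

```python
# A toy sketch of scale-out: add copies of a single service to meet
# demand, rather than growing one copy. Entirely illustrative; this is
# not any real orchestration API.

from collections import defaultdict

instances = defaultdict(list)   # service name -> running copies

def spin_up(service):
    instances[service].append(f"{service}-{len(instances[service]) + 1}")

def scale_to_demand(service, load, per_copy=100):
    # Scale out: add copies until capacity covers the offered load.
    while len(instances[service]) * per_copy < load:
        spin_up(service)

scale_to_demand("auth", 250)      # needs three copies at 100 req/copy
scale_to_demand("billing", 80)    # one copy is enough
print(dict(instances))
# {'auth': ['auth-1', 'auth-2', 'auth-3'], 'billing': ['billing-1']}
```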