Author Archives: Russ
The Internet Society exists to support the growth of the global ‘net by working with stakeholders, building local connectivity such as IXs and community-based networks, and encouraging the use of open standards. On this episode of the Hedge, Dan York joins us to talk about the Open Standards Everywhere project, which is part of the Internet Society. More information about Open Standards Everywhere can be found—
Ivan Pepelnjak was a founding member of the first IX in Slovenia twenty-five years ago. He joins us on this episode of the History of Networking to describe the origins of the Internet in Slovenia, from the first dial-up circuits to the founding of the first IX and local DNS services. Ivan is an independent consultant and trainer; his work can be found at https://ipspace.net.
QUIC is a relatively new data transport protocol developed by Google, and currently in line to become the default transport for the upcoming HTTP standard. Because of this, it behooves every network engineer to understand a little about this protocol, how it operates, and what impact it will have on the network. We did record a History of Networking episode on QUIC, if you want some background.
In a recent Communications of the ACM article, a group of researchers (Kakhki et al.) used a modified implementation of QUIC to measure its performance under different network conditions, directly comparing it to TCP’s performance under the same conditions. Since the current implementations of QUIC use the same congestion control as TCP—Cubic—the only differences in performance should be code tuning in estimating the round-trip timer (RTT) for congestion control, QUIC’s ability to form a session in a single RTT, and QUIC’s ability to carry multiple streams in a single connection. The researchers asked two questions in this paper: how does QUIC interact with TCP flows on the same network, and does QUIC perform better than TCP in all situations, or only some?
To answer the first question, the authors tried running QUIC… Continue reading
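Since the congestion controller is the same, much of the “code tuning” the paper points to lives in how each stack estimates the RTT. As a reference point, here is a minimal sketch of the standard smoothed-RTT estimator from RFC 6298, which TCP uses and QUIC implementations adapt; the class and variable names are mine, not drawn from the paper or from any particular QUIC stack.

```python
class RttEstimator:
    """Smoothed RTT estimation in the style of RFC 6298.

    QUIC stacks keep a similar estimator; the constants below are the
    RFC's suggested values, not taken from any specific implementation.
    """

    ALPHA = 1 / 8  # gain applied to each new sample for the smoothed RTT
    BETA = 1 / 4   # gain applied to each new sample for the RTT variance

    def __init__(self):
        self.srtt = None    # smoothed RTT (seconds)
        self.rttvar = None  # RTT variance (seconds)

    def on_sample(self, rtt: float) -> None:
        """Feed one new RTT measurement into the estimator."""
        if self.srtt is None:
            # First sample: seed both estimates directly.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Per RFC 6298, update the variance before the smoothed RTT.
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    def rto(self, granularity: float = 0.001) -> float:
        """Retransmission timeout: SRTT plus four times the variance."""
        return self.srtt + max(granularity, 4 * self.rttvar)


est = RttEstimator()
for sample in (0.100, 0.120, 0.095, 0.300):  # synthetic RTT samples, in seconds
    est.on_sample(sample)
print(f"srtt={est.srtt:.3f}s rto={est.rto():.3f}s")
```

One structural advantage QUIC has when feeding an estimator like this: every QUIC packet carries a new packet number, even a retransmission, so an acknowledgment is never ambiguous the way a TCP acknowledgment can be after a retransmit.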
Memory isolation is a cornerstone security feature in the construction of every modern computer system. By allowing multiple mutually distrusting applications to execute simultaneously on the same hardware, it is the basis for the secure execution of multiple processes on the same machine or in the cloud. The operating system is in charge of enforcing this isolation, as well… Continue reading
Personal branding and marketing are two key topics that surface from time to time, but very few people talk about how to actually do these things. For this episode of the Hedge, Evan Knox from Caffeine Marketing joins us to talk about the importance of personal marketing and branding, and to offer some tips and tricks network engineers can follow to improve their personal brand.
When you are building a data center fabric, should you run a control plane all the way to the host? This is a question I encounter more often as operators deploy EVPN-based spine-and-leaf fabrics in their data centers (for those who are actually deploying scale-out spine-and-leaf—I see a lot of people deploying hybrid sorts of networks designed as “mini-hierarchical” designs and just calling them spine-and-leaf fabrics, but this is probably a topic for another day). Three reasons are generally given for extending the control plane to the hosts attached to the fabric: faster down detection, load sharing, and traffic engineering. Let’s consider each of these in turn.
Faster Down Detection. There’s no simple way for a ToR switch to determine when its connection to a host has failed, whether the host is single- or dual-homed. Somehow the set of routes reachable through the host must be tied to the interface state, or to some underlying fast hello state (such as BFD), so that if a link fails the ToR knows to pull the correct set of routes from the routing table. It’s simpler to just let the host itself advertise the correct reachability information; when the link fails, the routing session will… Continue reading
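To make the difference concrete, here is a toy sketch of the second approach; it is illustrative only (the class and session names are invented, and no real BGP implementation is this simple). The point is the coupling: because reachability is learned over the session itself, the failure of the session and the withdrawal of the routes cannot drift out of sync.

```python
from collections import defaultdict


class ToR:
    """Toy ToR routing table: routes are remembered per routing session,
    so tearing down a session withdraws everything learned over it."""

    def __init__(self):
        self.routes = defaultdict(set)  # session name -> set of prefixes

    def advertise(self, session: str, prefix: str) -> None:
        """Record a prefix learned from a peer over the named session."""
        self.routes[session].add(prefix)

    def session_down(self, session: str) -> set:
        """Called when BFD (or the transport session) declares the peer
        dead; returns the prefixes that must be pulled from the RIB."""
        return self.routes.pop(session, set())


tor = ToR()
# The host advertises its own reachability over its session to the ToR...
tor.advertise("host-a", "10.1.1.10/32")
tor.advertise("host-a", "2001:db8::10/128")
# ...so when the session fails, the ToR knows exactly what to withdraw.
withdrawn = tor.session_down("host-a")
print(f"withdrawn on failure: {sorted(withdrawn)}")
```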
In this episode of the Hedge, Stephane Bortzmeyer joins Alvaro Retana and Russ White to discuss draft-ietf-dprive-rfc7626-bis, which “describes the privacy issues associated with the use of the DNS by Internet users.” Not many network engineers think about the privacy implications of DNS, an important part of the infrastructure we all rely on to make the Internet work.
LINX was one of the first Internet Exchanges created in Europe. Keith Mitchell joins the History of Networking to talk about the origins of LINX and the important decisions that shaped its success and the IX community throughout Europe.
Many years ago I attended a presentation by David Meyer on network complexity—which set off an entire line of thinking about how we build networks that are just too complex. While it might be interesting to dive into our motivations for building networks that are just too complex, I started thinking about how to classify and understand the complexity I was seeing in all the networks I touched. Of course, my primary interest is in how to build networks that are less complex, rather than just understanding complexity…
This led me to do a lot of reading, write some drafts, and then write a book. During this process, I ended up coining what I call the complexity triad—State, Optimization, and Surface. If you read the book on complexity, you can see that my views on what the triad consisted of changed during the writing—I started out with volume (of state), speed (of state), and optimization. Somehow, though, interaction surfaces needed to play a role in the complexity puzzle.
First, you create an interaction surface when you modularize anything—and you modularize to control state (the scope to set apart failure domains, the speed and volume to enable scaling). Second, adding interaction surfaces… Continue reading
The Living Computers History Museum and Labs was founded by Paul Allen to collect early computer systems and keep alive the constrained-resource coding practices used on those systems. Over time it has developed into a living museum and lab, with hands-on access to some of the earliest examples of computing history. Rich Alderson joins us for this episode of the Hedge to describe the museum and its exhibits.
Post-mortem reviews seem to be quite common in the software engineering and application development sides of the IT world—but I do not recall a lot of post-mortems in network engineering across my 30 years. This puzzling observation sprang to mind while I was reading a post over at the ACM this last week about how to effectively learn from the post-mortem exercise.
The common pattern seems to be setting aside a one-hour meeting, inviting a lot of people, trying to shift blame while not actually saying you are shifting blame (because we are all supposed to live in a blame-free environment now—fix the problem, not the blame!), and then … a list is created on a whiteboard, pictures are taken, and everyone walks away with a rock-solid plan to never do that again.
In a few months’ time, the same team will be in the same room, draw the same drawings, and say the same things all over again. At least that is the way it seems to me. If there is an effective post-mortem process in use by a company someplace, I do not think I have seen it.
From the article—
When you think of new Ethernet standards, you probably think about faster and optical. There is, however, an entire world of buildings out there with older copper cabling, particularly in the industrial realm, that could see dramatic improvements in productivity if their control and monitoring systems could be moved to IP. In these cases, what is needed is an Ethernet standard that runs over a single copper pair, and yet offers enough speed to support industrial use cases. Peter Jones joins Jeremy Filliben and Russ White to discuss single pair Ethernet.
Way back in the day, when telephone lines were first being installed, running the physical infrastructure was quite expensive. The first attempt to maximize the infrastructure was the party line. In modern terms, the party line is just an Ethernet segment for the telephone. Anyone can pick up and talk to anyone else who happens to be listening. In order to schedule things, a user could contact an operator, who could then “ring” the appropriate phone to signal another user to “pick up.” CSMA/CA, in essence, with a human scheduler.
This proved to be somewhat unacceptable to everyone other than various intelligence agencies, so the operator’s position was “upgraded.” A line was run to each structure (house or business) and terminated at a switchboard. Each line terminated into a jack, and patch cables were supplied to the operator, who could then connect two telephone lines by inserting a jumper cable between the appropriate jacks.
An important concept: this kind of operator-driven system is nonblocking. If Joe calls Susan, then Joe and Susan cannot also talk to someone other than one another for the duration of their call, but their call does not prevent any other pair of idle lines from being connected. If Joe’s line is tied up, when someone tries to… Continue reading
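A toy model makes the property easier to see: in an operator-driven switchboard the only resource that can be busy is a line itself, never the path between lines, so any two idle lines can always be patched together. A minimal sketch, with invented names:

```python
class Switchboard:
    """Toy operator switchboard: any two idle lines can always be
    patched together, so one call never blocks an unrelated pair."""

    def __init__(self, lines):
        self.idle = set(lines)  # lines not currently in a call
        self.calls = []         # list of (caller, callee) patches

    def connect(self, a: str, b: str) -> bool:
        """Patch two lines together if both are idle."""
        if a in self.idle and b in self.idle:
            self.idle -= {a, b}
            self.calls.append((a, b))
            return True
        return False  # one of the lines is already in use


board = Switchboard(["joe", "susan", "mary", "bill"])
print(board.connect("joe", "susan"))  # True: both lines are idle
print(board.connect("joe", "mary"))   # False: Joe's line is busy
print(board.connect("mary", "bill"))  # True: Joe's call does not block them
```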
I’ve created a history of networking page which lists all the current episodes of the History of Networking podcast by technology. I have a long list of guests still to schedule, so the new page will be updated as new episodes are recorded and released.
The longer you work on one system or application, the deeper the attachment. For years you have been investing in it—adding new features, updating functionality, fixing bugs and corner cases, polishing, and refactoring. If the product serves a need, you likely reap satisfaction for a job well done (and maybe you even received some raises or promotions as a result of your great work).
Attachment is a two-edged sword—without some form of attachment, it seems there is no way to have pride in your work. On the other hand, attachment leads to poorly designed solutions. For instance, we all know the hyper-certified person who knows every in and out of a particular vendor’s solution, and hence solves every problem in terms of that vendor’s products. Or the person who knows a particular network automation system and, as a result, solves every problem through automation.
The most pernicious forms of attachment in the network engineering world are to a single technology or vendor. One of the cycles I have seen play out many times across the last… Continue reading