Category Archives for "Russ White"

Technologies that Didn’t: Network Operating Systems

For those with a long memory—no, even longer than that—there were once things called Network Operating Systems (NOSs). These were not the kinds of NOSs we have today, like Cisco IOS Software, Arista EOS, or even SONiC. Rather, these were designed for servers. The most common example was Novell's NetWare. These operating systems were the "bread and butter" of the networking world for many years. I was a Certified NetWare Engineer (CNE) on version 4.0, and then 4.11, before I moved into the routing and switching world. I also deployed Banyan's VINES, IBM's OS/2, and a much simpler system called LANtastic, among others.

What were these pieces of software? They were largely built around providing a complete environment for the network user. These systems began with file and print sharing, and included a small driver that needed to be installed on each host accessing the file share. This small driver was actually a network stack for a proprietary set of protocols: for VINES, this was VIP; for NetWare, it was IPX. Over time, these systems began to include email, and then, as a natural outgrowth of file sharing and email, directory services. For some time, there Continue reading

Hints and Principles: Applied to Networks

While software design is not the same as network design, there is enough overlap for network designers to learn from software designers. A recent paper published by Butler Lampson, updating a paper he wrote in 1983, is a perfect illustration of this principle. The paper is called Hints and Principles for Computer System Design. I’m not going to write a full review here–you should really go read the paper for yourself–but rather just point out a few of the useful bits of the paper.

The first really useful point of this paper is that Lampson breaks down the entire field of software design into three basic questions: What, How, and When (or Who)? Each of these corresponds to the goals, techniques, and processes used to design and develop software. These same questions and answers apply to network design–if you are missing one of these three areas, then there is probably an important set of questions you have not yet answered. Each of the three is also represented by an acronym: what? is STEADY, how? is AID, and when? is ART. Let’s look at a couple of these in a little more detail to see how Lampson’s system works.

STEADY stands for simple, timely, efficient, Continue reading

History of Networking: Mark Nottingham and HTTP

The HyperText Transfer Protocol (HTTP) carries the vast majority of all the traffic on the Internet today, and even the vast majority of traffic carried on private networks. How did this protocol originate, and what was the interplay between standards organizations in its creation, curation, and widespread deployment? Mark Nottingham joins Donald and me on this episode of the History of Networking to answer our questions.

download: https://historyofnetworking.s3.amazonaws.com/Mark-N_HTTP.mp3

Underhanded Code and Automation

So, software is eating the world—and you thought this was going to make things simpler, right? If you haven’t found the tradeoffs, you haven’t looked hard enough. I should trademark that or something! While code quality and the supply chain are common concerns, there are a lot of little “side trails” organizations do not tend to think about. One such side trail was recently covered in a paper on underhanded code, which is code designed to pass a standard review but which can be used to harm the system later on. For instance, you might see something like this at some spot—

if (buffer_size=REALLYLONGDECLAREDVARIABLENAMEHERE) {
  /* do some stuff here */
} /* end of if */

Can you spot what the problem might be? In C, the = is different from the ==. Which should it really be here? Even astute reviewers can easily miss this kind of detail—not least because it could be an intentional construction. Using a strongly typed language, like Rust, can help prevent this kind of thing (listen to this episode of the Hedge for more information on Rust), but nothing beats having really good code formatting rules, even if they are apparently arbitrary, for catching Continue reading
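As a minimal sketch of why the assignment even compiles (the variable names here are just placeholders, not anything from the paper): in C, an assignment is an expression whose value is the value assigned, so the "condition" is simply whether that value is non-zero. Most compilers will flag it when warnings are enabled (GCC's -Wall includes -Wparentheses, for instance), which is one more argument for building with warnings treated as errors.

#include <stdio.h>

int main(void) {
    int buffer_size = 0;
    int REALLYLONGDECLAREDVARIABLENAMEHERE = 4096;

    /* The "underhanded" version: = assigns, and the branch runs whenever
       the assigned value is non-zero, silently overwriting buffer_size
       as a side effect. */
    if (buffer_size = REALLYLONGDECLAREDVARIABLENAMEHERE) {
        printf("buffer_size is now %d\n", buffer_size);
    }

    /* The comparison a reviewer almost certainly assumed was intended. */
    if (buffer_size == REALLYLONGDECLAREDVARIABLENAMEHERE) {
        printf("sizes match\n");
    }
    return 0;
}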

RFC1925 Rule 2

According to RFC1925, the second fundamental truth of networking is: No matter how hard you push and no matter what the priority, you can’t increase the speed of light.

However early this problem was first observed in the world of network engineering (see, for instance, Tanenbaum’s “station wagon” example in Computer Networks), human impatience is forever trying to overcome the limitations of the physical world and push more data down the pipe than mother nature intended (or Shannon’s theory allows).
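To put rough numbers on this, here is a back-of-the-envelope sketch; the path length and link speed are illustrative assumptions, not anything from the RFC. The point is that the propagation component, which is bounded by the speed of light in the fiber, dwarfs the serialization component, and no amount of extra bandwidth will shrink it.

#include <stdio.h>

int main(void) {
    const double path_km    = 6000.0;     /* illustrative long-haul path          */
    const double fiber_km_s = 200000.0;   /* roughly 2/3 the speed of light in glass */
    const double link_bps   = 10e9;       /* 10 Gb/s interface                    */
    const double frame_bits = 1500 * 8.0; /* one full-size Ethernet frame         */

    double propagation_ms   = path_km / fiber_km_s * 1000.0;
    double serialization_us = frame_bits / link_bps * 1e6;

    printf("one-way propagation delay: %.1f ms\n", propagation_ms);   /* ~30 ms  */
    printf("serialization delay:       %.2f us\n", serialization_us); /* ~1.2 us */
    return 0;
}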

One attempt at solving this problem is the description of an infinitely fat pipe (helpfully called an “infan(t)”) described in RFC5984. While packets would still need to be clocked onto such a network, incurring serialization delay, the ability to clock an infinite number of packets onto the network at the same moment in time would represent a massive gain in a network’s ability, potentially reaching speeds faster than the speed of light. The authors of RFC5984 describe several attempts to build such a network, including black fiber, on which the lack of light implies data transmission. This is problematic, however, because a lack of information can be interpreted differently depending on the context. A pregnant pause has far different meaning Continue reading

Reducing Complexity through Interaction Surfaces

A recent paper on network control and management (which includes Jennifer Rexford on the author list—anything with Jennifer on the author list is worth reading) proposes a clean-slate 4D approach to solving much of the complexity we encounter in modern networks. While the paper is interesting, it is very unlikely we will ever see a clean-slate design like the one described, not least because there will always be differences of opinion about what the proper splits are—what should go where.

There is one section of the paper that eloquently speaks to current architecture, however. The authors describe a situation where routing and packet filters are used together to prevent one set of hosts from reaching another set of hosts. Changes in the network, however, cause the packet filters to be bypassed, opening up communications between these two sets of hosts.

This is exactly the problem we so often face in network engineering today—overlapping systems used to solve a single problem do not pay attention to the same signals or information to do their jobs. So here’s a thought about an obvious way to reduce the complexity of your network—try to use one tool to do one job. Before the days of automation, this was much harder to do. There was no way to distribute QoS configurations or access lists, for instance, much less an easy way to do so. Because of this, it made some kind of sense to use routing protocols as a sort of distributed database and policy engine to move filters and the like around.

Today, however, we have automation. Because of this, it makes more sense to use automation to manage as much data plane policy as you can, leaving the routing protocol to do its job—provide reachability across an ever-changing network. There are still things, like traffic steering and prefix distribution rules, which should stay inside routing. But when you put routing filters in place to solve a data plane problem, it might be worth thinking about whether that is the right thing to do any longer.

Automation, in this case, can change everything.

Random Thoughts

This week is very busy for me, so rather than writing a single long post, I’m throwing together some things that have been sitting in my pile to write about for a long while.

From Dalton Sweeny:

A physicist loses half the value of their physics knowledge in just four years whereas an English professor would take over 25 years to lose half the value of the knowledge they had at the beginning of their career. . . Software engineers with a traditional computer science background learn things that never expire with age: data structures, algorithms, compilers, distributed systems, etc. But most of us don’t work with these concepts directly. Abstractions and frameworks are built on top of these well studied ideas so we don’t have to get into the nitty-gritty details on the job (at least most of the time).

This is precisely the way network engineering is. There is value in the kinds of knowledge that expire, such as individual product lines—but the closer you are to the configuration, the more ephemeral the knowledge is. This is one of the entire points of “rule 11 is your friend.” Learn the foundational things that make learning the ephemeral things Continue reading

The Hedge Podcast #53: Deprecating Interdomain ASM

Interdomain Any-Source Multicast has proven to be an unscalable solution, and is actually blocking the deployment of other solutions. To move interdomain multicast forward, Lenny Giuliano, Tim Chown, and Toerless Eckert wrote RFC 8815, BCP 229, recommending providers “deprecate the use of Any-Source Multicast (ASM) for interdomain multicast, leaving Source-Specific Multicast (SSM) as the recommended interdomain mode of multicast.”

download

Technologies that Didn’t: CLNS

Note: RFC1925, rule 11, reminds us that: “Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.” Understanding the past not only helps us to understand the future, it also helps us to take a more balanced and realistic view of the technologies being created and promoted for current and future use.

The Open Systems Interconnection (OSI) model is the most often taught model of data transmission—although it is not all that useful in terms of describing how modern networks work. What many engineers who have come into network engineering more recently do not know is that there was an entire protocol suite that went with the OSI model. Each of the layers within the OSI model, in fact, had multiple protocols specified to fill the functions of that layer. For instance, X.25, while older than the OSI model, was adopted into the OSI suite to provide point-to-point connectivity over some specific kinds of physical circuits. Moving up the stack a little, there were several protocols that provided much the same service as the widely used Internet Protocol (IP).

The Connection Oriented Network Service, or CONS, ran on top Continue reading

Reducing RPKI Single Point of Takedown Risk

The RPKI, for those who do not know, ties the origin AS to a prefix using a certificate (the Route Origin Authorization, or ROA) signed by a third party. The third party, in this case, is validating that the AS in the ROA is authorized to advertise the destination prefix in the ROA—if ROAs were self-signed, the security would be no better than simply advertising the prefix in BGP. Who should be able to sign these ROAs? The assigning authority makes the most sense—the Regional Internet Registries (RIRs), since they (should) know which company owns which set of AS numbers and prefixes.
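To make the mechanics a bit more concrete, here is a minimal sketch of the origin-validation check described in RFC 6811, which a router or validator performs against a set of ROAs. The structure and function names are illustrative assumptions rather than any particular RPKI implementation: no covering ROA means the route is not found, a covering ROA with a matching origin AS and an acceptable prefix length means valid, and anything else means invalid.

#include <stdint.h>
#include <stdio.h>

struct roa {
    uint32_t prefix;      /* ROA prefix, host byte order          */
    uint8_t  prefix_len;  /* length of the ROA prefix             */
    uint8_t  max_len;     /* longest announcement the ROA permits */
    uint32_t origin_as;   /* the AS authorized to originate it    */
};

enum validity { RPKI_NOT_FOUND, RPKI_INVALID, RPKI_VALID };

/* Does this ROA's prefix cover the announced prefix? */
static int covers(const struct roa *r, uint32_t pfx, uint8_t len) {
    if (len < r->prefix_len)
        return 0;
    uint32_t mask = r->prefix_len ? ~0u << (32 - r->prefix_len) : 0;
    return (pfx & mask) == (r->prefix & mask);
}

static enum validity check_origin(const struct roa *roas, size_t count,
                                  uint32_t pfx, uint8_t len, uint32_t origin_as) {
    enum validity result = RPKI_NOT_FOUND;
    for (size_t i = 0; i < count; i++) {
        if (!covers(&roas[i], pfx, len))
            continue;
        /* A covering ROA exists, so the answer is now valid or invalid,
           never "not found". */
        if (roas[i].origin_as == origin_as && len <= roas[i].max_len)
            return RPKI_VALID;
        result = RPKI_INVALID;
    }
    return result;
}

int main(void) {
    /* One ROA: 192.0.2.0/24, max length /24, authorized origin AS 64500
       (documentation prefix and AS numbers, purely illustrative). */
    struct roa roas[] = { { 0xC0000200u, 24, 24, 64500u } };

    printf("%d\n", check_origin(roas, 1, 0xC0000200u, 24, 64500u)); /* 2: valid     */
    printf("%d\n", check_origin(roas, 1, 0xC0000200u, 24, 64501u)); /* 1: invalid   */
    printf("%d\n", check_origin(roas, 1, 0x0A000000u,  8, 64500u)); /* 0: not found */
    return 0;
}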

The general idea makes sense—you should not accept routes from “just anyone,” as they might be advertising the route for any number of reasons. An operator could advertise routes to source spam or phishing emails, or some government agency might advertise a route to redirect traffic, or block access to some web site. But … if you haven’t found the tradeoffs, you haven’t looked hard enough. Security, in particular, is replete with tradeoffs.

Every time you deploy some new security mechanism, you create some new attack surface—sometimes more than one. Deploy a stateful packet filter to protect a Continue reading
