
Author Archives: Russ

The CORD Architecture

Edge provider networks, which deliver DSL, voice, and other services to consumers and small businesses, tend to be heavily bound to vendor-specific equipment and hardware-centric standards. These networks are built around the more closed telephone standards, rather than the more open internetworking standards, and hence they tend to be more expensive to operate and manage. As one friend said about a company that supplies this equipment, “they just print money.” The large edge providers, such as AT&T and Verizon, however, are not endless pools of money. These providers are squeezed between the content providers, who are taking large shares of revenue, and consumers, who are always looking for a lower-priced option for communications along with new and interesting services.

If this seems like an area that’s ripe for virtualization to you, you’re not alone. AT&T has been working on a project called CORD for a few years in this area; they published a series of papers on the topic that make for interesting reading:

Reaction: Should routing react to the data plane?

Over at Packet Pushers, there’s an interesting post asking why we don’t use actual user traffic to detect network failures, and hence to drive routing protocol convergence; or rather, asking why routing doesn’t react to the data plane.

I have been looking at convergence from a user perspective, in that the real aim of convergence is to provide a stable network for the users traversing the network, or more specifically the user traffic traversing the network. As such, I found myself asking this question: “What is the minimum diameter (or radius) of a network so that the ‘loss’ of traffic from a TCP/UDP ‘stream’ seen locally would indicate a network outage FASTER than a routing update?”

This is, indeed, an interesting question, and one that’s highly relevant in our current software defined/driven world. So why not react to the data plane? Let me give you two lines of thinking that might be used to answer this question.

First, let’s consider the larger problem of fast convergence. Anyone who’s spent time in any of my books, or sat through any of my presentations, should know the four steps to convergence—but just in case, let’s cover them again, using a slide from my forthcoming LiveLesson on IS-IS:

Convergence Steps

…
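To make the question concrete, here is a rough back-of-the-envelope sketch in Python (not from the original post) comparing the two signals. Routing convergence is modeled here as the sum of failure detection, per-hop flooding of the change, route calculation, and route installation, while the local data-plane signal is modeled as a TCP retransmission timeout, which RFC 6298 floors at one second. Every timer value below is an assumption chosen only for illustration.

```python
# Rough comparison: how long does routing take to converge across N hops
# versus how long a local TCP sender takes to notice loss through its
# retransmission timer? All timer values are illustrative assumptions.

def routing_convergence_time(hops,
                             detection=0.050,         # assumed failure detection (fast hellos or BFD)
                             per_hop_flooding=0.010,  # assumed propagation + processing per hop
                             calculation=0.100,       # assumed route (SPF) calculation time
                             installation=0.100):     # assumed RIB-to-FIB installation time
    """Model convergence as detect + flood across 'hops' routers + calculate + install."""
    return detection + hops * per_hop_flooding + calculation + installation


def tcp_loss_signal(rtt=0.040, rto_min=1.0):
    """A TCP sender's first strong loss signal is roughly its retransmission
    timeout; RFC 6298 keeps the RTO at a minimum of one second."""
    return max(rto_min, 2 * rtt)


if __name__ == "__main__":
    for hops in (2, 5, 10, 50):
        rc, tcp = routing_convergence_time(hops), tcp_loss_signal()
        faster = "routing update" if rc < tcp else "local TCP signal"
        print(f"{hops:>3} hops: routing ~{rc:.3f}s, TCP RTO ~{tcp:.3f}s -> {faster} is faster")
```

Under these assumed numbers the routing update stays well ahead of the one-second RTO floor even across fifty hops, which suggests the comparison hinges more on how quickly the failure is detected than on the network’s diameter.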

The Design Mindset (3)

So you’ve spent time asking what, observing the network as a system, and considering what has actually been done in the past. And you’ve spent time asking why, trying to figure out the purpose (or lack of purpose) behind the configuration and design choices made in the past. You’ve followed the design mindset to this point, so now you can jump in and make like a wrecking ball (or a bull in a china shop), changing things so they’re better, and the new requirements you have can fit right in. Right?

Wrong.

As an example, I want to take you back to another part of a story I told here about my early days in the networking world. Before losing the war over Banyan Vines, I actually encountered an obstacle that should have been telling, but I was too much of a noob at the time to recognize it for the warning it really was. At the time, I had written a short paper comparing Vines to NetWare; the paper was, perhaps, ten pages long, and I thought it did a pretty good job of comparing the two network operating systems. Heck, I’d even put together a page showing how Vines…

CAP Theorem and Routing

In 2000, Eric Brewer was studying and discussing the various characteristics of database systems. Through this work, he observed that a database generally has three characteristics:

  • Consistency, which means the database will produce the same result for any two readers who happen to read a value at the same moment in time.
  • Availability, which means the database can be read by any reader at any moment in time.
  • Partitionability, which means the database can continue operating even when it is split into parts that cannot communicate with one another.

Brewer, in explaining the relationship between the three in a 2012 article, says—

The easiest way to understand CAP is to think of two nodes on opposite sides of a partition. Allowing at least one node to update state will cause the nodes to become inconsistent, thus forfeiting C (consistency). Likewise, if the choice is to preserve consistency, one side of the partition must act as if it is unavailable, thus forfeiting A (availability).

The CAP theorem, therefore, represents a two out of three situation: yet another two out of three ‘set’ we encounter in the real world, probably grounded someplace in the larger space of complexity. We’ll leave the relationship to complexity to one side for the moment, however, and just look at how…
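To make the two-out-of-three choice concrete, here is a small, purely illustrative Python sketch (the TinyStore class and its behavior are invented for this example, not taken from the post): two replicas of a single value, with a flag standing in for a network partition between them. In “CP” mode the cut-off side acts as if it is down, staying consistent but giving up availability; in “AP” mode both sides keep answering and the copies diverge, staying available but giving up consistency.

```python
# Toy illustration of the CAP trade-off: two replicas of a single value,
# with a flag simulating a network partition between them. The class names
# and behavior are invented purely for illustration, not any real system.

class Replica:
    def __init__(self, name):
        self.name = name
        self.value = None


class TinyStore:
    def __init__(self, mode):
        self.a = Replica("A")   # treat A as the primary side of the partition
        self.b = Replica("B")
        self.partitioned = False
        self.mode = mode        # "CP" prefers consistency, "AP" prefers availability

    def _cut_off(self, replica):
        # During a partition, replica B is the side cut off from the primary.
        return self.partitioned and replica is self.b

    def write(self, replica, value):
        if self.mode == "CP" and self._cut_off(replica):
            # Forfeit availability: the cut-off side acts as if it is down.
            raise RuntimeError("partitioned: %s unavailable to preserve consistency" % replica.name)
        replica.value = value
        if not self.partitioned:
            other = self.b if replica is self.a else self.a
            other.value = value  # normal case: synchronous replication keeps copies consistent
        # In AP mode during a partition the write is accepted locally,
        # so the two copies can diverge: availability kept, consistency forfeited.

    def read(self, replica):
        if self.mode == "CP" and self._cut_off(replica):
            raise RuntimeError("partitioned: %s unavailable to preserve consistency" % replica.name)
        return replica.value     # in AP mode this may be a stale value


if __name__ == "__main__":
    for mode in ("AP", "CP"):
        store = TinyStore(mode)
        store.write(store.a, "v1")   # replicated to both sides before the partition
        store.partitioned = True
        store.write(store.a, "v2")   # the primary side keeps accepting writes
        try:
            print(mode, "A sees", store.read(store.a), "/ B sees", store.read(store.b))
        except RuntimeError as err:
            print(mode, "A sees", store.read(store.a), "/", err)
```

The point of the sketch is only the forced choice Brewer describes above: once the partition exists, each operation either answers, and risks inconsistency, or refuses, and gives up availability.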

Book Winners!

Lots of good suggestions in my inbox—thanks to all who gave me some great design ideas to blog about. I eventually chose two winners, as I uncovered another copy of the book to give away! The two winners are Patrick Watson and Matthew Sabin. I’m going to try and run something like this every three or four months, so look for another one in the future.

