Author Archives: Russ

Open19: A New Step for Data Centers

While most network engineers do not spend a lot of time thinking about environmentals, like power and cooling, physical space problems are actually one of the major hurdles to building truly large-scale data centers. Consider this: a typical 1RU rack-mount router weighs in at around 30 pounds, including the power supplies. Centralizing rack power, and removing the sheet metal, can probably reduce this by about 25% (if not more). By extension, centralizing power and removing the sheet metal from an entire data center’s worth of equipment could reduce the weight on the floor by about 10-15%—or rather, allow about 10-15% more equipment to be stacked into the same physical space. Cooling, cabling, and other considerations are similar—even paying for the sheet metal around each box to be formed and shipped adds cost.
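
To put rough numbers behind that back-of-envelope math, here is a minimal sketch in Python; every value in it (devices per rack, rack and cabling weight, the 25% per-box savings) is an assumption chosen for illustration, not a measurement.

```python
# Rough sketch of the floor-loading argument above.
# Every number here is an illustrative assumption, not a measured value.

units_per_rack = 40              # 1RU devices in a full rack
device_weight_lbs = 30           # typical 1RU router, power supplies included
device_reduction = 0.25          # assumed savings from central power and no sheet metal
rack_and_cabling_lbs = 900       # assumed weight of the rack, PDUs, cabling, etc.

before = units_per_rack * device_weight_lbs + rack_and_cabling_lbs
after = units_per_rack * device_weight_lbs * (1 - device_reduction) + rack_and_cabling_lbs

print(f"per-rack weight before: {before} lbs")
print(f"per-rack weight after:  {after:.0f} lbs")
print(f"overall floor-weight reduction: {(before - after) / before:.0%}")

# With these assumptions the overall reduction comes out around 14%, in the
# 10-15% range mentioned above; the per-box savings is larger, but the rack,
# PDUs, and cabling do not get any lighter.
```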

What about blade-mount systems? Most of these are designed for rather specialized environments, or for a single vendor’s blades. In the routing space, most of these solutions are actually chassis-based systems, which are fraught with problems in large-scale data center buildouts. The solution? Some form of open, foundation-based standard that can be used by all vendors to build equipment Continue reading

Anycast and Latency

One of the things I hear from time to time is that smaller Internet-facing service deployments, with just a few instances, cannot really benefit from anycast. Particularly in the active-active data center use case, where customers can connect to one data center or another, the cost of advertising the service as an anycast address, and the resulting requirement to keep the backend databases tightly synchronized, is often portrayed as eating a lot of complexity in exchange for the simplicity of having a single address in the DNS system, and hence not losing customer interaction time while the DNS records time out and the customer reconnects to the service.
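
To make that tradeoff concrete, here is a minimal sketch of the worst-case reconnect time with and without anycast; the TTL, DNS repoint, and BGP reconvergence times are all assumptions for illustration only.

```python
# A minimal sketch of the tradeoff described above: roughly how long a customer
# might wait to reconnect after one data center fails, with and without anycast.
# All timing values are illustrative assumptions, not measurements.

dns_ttl_seconds = 300          # assumed TTL on the service's DNS record
dns_repoint_seconds = 60       # assumed time to detect the failure and update DNS
bgp_reconverge_seconds = 30    # assumed time for the failed site's route to be withdrawn

# Separate per-data-center addresses in DNS: a customer holding the failed
# site's address keeps trying it until the operator repoints the record and
# the cached copy ages out of resolvers.
dns_worst_case_seconds = dns_repoint_seconds + dns_ttl_seconds

# Anycast: both data centers advertise the same address, so once the failed
# site's advertisement is withdrawn, traffic simply follows BGP to the
# surviving site; no DNS change is needed.
anycast_worst_case_seconds = bgp_reconverge_seconds

print(f"DNS-based failover, worst case: ~{dns_worst_case_seconds}s")
print(f"anycast failover, worst case:   ~{anycast_worst_case_seconds}s")

# The price of the faster failover is the complexity described above: both
# sites serve the same address at the same time, so the backend databases must
# stay tightly synchronized.
```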

There is, in fact, some interesting recent research in this area. The research is directed at the DNS root servers themselves, probably because they are publicly accessible and form a well-known system that has relied on anycast for many years (so the operators of the root DNS servers are probably well versed in the ways of anycast). One interesting chart from the post over at APNIC’s blog compares the C and L root servers.

The C root has 8 servers, while the L root has around 144 (according to the article pointed to above). Why is it Continue reading

Network Slices

There has been a lot of chatter recently in the 5G wireless world about network slices. A draft was recently published in the IETF on network slices—draft-gdmb-netslices-intro-and-ps-02. But what, precisely, is a network slice?

Perhaps it is better to begin with a concept most network engineers already know (and love)—a virtual topology. A virtual topology is a set of links, along with some subset of connected devices (either virtual or real), that acts as a subset of the network. Isn’t such a subset of the network a “slice” if you look at it from a different angle? To ask the question in a different way: how are network slices different from virtual network overlays?

To begin, consider the control plane. In the world of virtual topologies, there is generally one control plane that provides reachability, as well as sorting that reachability into each virtual topology. For instance, BGP carries a route target and a route distinguisher to indicate which virtual topology any particular destination belongs to. A network slice, by contrast, actually has multiple control planes—one for each slice. There will still be one “supervisor control plane,” of course, much like there is a hypervisor that manages the resources of each Continue reading
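
As a rough illustration of that distinction (a sketch under simplified, assumed semantics, not any real BGP or 5G implementation), the overlay model below keeps one shared route table and sorts routes into per-VRF views by route target, while the slice model simply runs an independent table per slice under a thin supervisor.

```python
# A minimal sketch of the overlay-versus-slice distinction drawn above.
# This is not a real BGP implementation; the semantics are simplified.
from collections import defaultdict

class OverlayControlPlane:
    """One control plane carries every route; the route distinguisher (RD)
    keeps overlapping prefixes unique, and the route target (RT) decides
    which virtual topology (VRF) imports each route."""
    def __init__(self):
        self.routes = []                      # one shared table of (RD, prefix, RTs)

    def advertise(self, rd, prefix, route_targets):
        self.routes.append((rd, prefix, set(route_targets)))

    def build_vrf(self, import_rts):
        # Each virtual topology is just a filtered view of the single table.
        return [(rd, prefix) for rd, prefix, rts in self.routes
                if rts & set(import_rts)]

class SliceSupervisor:
    """Each slice gets its own control-plane instance; the supervisor only
    creates slices and hands out resources, much like a hypervisor."""
    def __init__(self):
        self.slices = defaultdict(list)       # slice name -> that slice's own routes

    def control_plane(self, name):
        return self.slices[name]              # an independent table per slice

overlay = OverlayControlPlane()
overlay.advertise("65000:1", "10.1.0.0/24", ["target:65000:1"])
overlay.advertise("65000:2", "10.1.0.0/24", ["target:65000:2"])
print(overlay.build_vrf(["target:65000:1"]))  # only routes tagged for this VRF

supervisor = SliceSupervisor()
supervisor.control_plane("slice-a").append("10.1.0.0/24")
supervisor.control_plane("slice-b").append("10.2.0.0/24")
print(supervisor.slices["slice-a"])           # slice A never sees slice B's routes
```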
