
Author Archives: Russ

Open19: A New Step for Data Centers

While most network engineers do not spend a lot of time thinking about environmentals, like power and cooling, physical space is actually one of the major hurdles to building truly large scale data centers. Consider this: a typical 1RU rack-mount router weighs in at around 30 pounds, including the power supplies. Centralizing rack power and removing the sheet metal can probably reduce this by about 25% (if not more). By extension, centralizing power and removing the sheet metal from an entire data center's worth of equipment could reduce the weight on the floor by about 10-15%, or rather, allow about 10-15% more equipment to be stacked into the same physical space. Cooling, cabling, and other considerations are similar; even paying for the sheet metal around each box to be formed and shipped adds cost.
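As a rough illustration of the arithmetic above, here is a back-of-the-envelope sketch. Only the 30-pound and 25% figures come from the paragraph itself; the per-rack and rack counts are invented for the example.

```python
# Back-of-the-envelope sketch: how much floor weight could centralized
# power and sheet-metal removal save? The device counts below are
# illustrative assumptions, not measured values.

BOX_WEIGHT_LBS = 30.0    # typical 1RU router, including power supplies
PER_BOX_SAVINGS = 0.25   # ~25% saved by removing PSUs and sheet metal
BOXES_PER_RACK = 40      # assumed 1RU devices per rack
RACKS = 500              # assumed data center size

total_weight = BOX_WEIGHT_LBS * BOXES_PER_RACK * RACKS
saved_weight = total_weight * PER_BOX_SAVINGS

# The 25% applies per box; racks, cabling, and the centralized power
# shelves claw some of it back, which is why the floor-level figure
# lands closer to 10-15% than the full 25%.
print(f"total equipment weight: {total_weight:,.0f} lbs")
print(f"weight saved at 25% per box: {saved_weight:,.0f} lbs")
print(f"extra 1RU devices in the same floor-weight budget: "
      f"{saved_weight / BOX_WEIGHT_LBS:,.0f}")
```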

What about blade mount systems? Most of these are designed for rather specialized environments, or for a single vendor's blades. In the routing space, most of these solutions are actually chassis-based systems, which are fraught with problems in large scale data center buildouts. The solution? Some form of open, foundation-based standard that can be used by all vendors to build equipment… Continue reading

Anycast and Latency

One of the things I hear from time to time is that smaller Internet facing service deployments, with just a few instances, cannot really benefit from anycast. This comes up particularly in the active-active data center use case, where customers can connect to one data center or another. The cost of advertising the service as an anycast, and the resulting requirement to keep the backend databases tightly synchronized, is often portrayed as taking on a lot of complexity just for the simplicity of having a single address in the DNS system, so customers do not lose interaction time waiting for DNS records to time out before they can reconnect to the service.
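To make the tradeoff concrete, here is a minimal sketch of the failover arithmetic, assuming illustrative values for the DNS TTL, failure-detection time, and BGP reconvergence time; none of these numbers come from the post.

```python
# Minimal sketch of the failover-time tradeoff described above.
# All numbers are illustrative assumptions, not measurements.

DNS_TTL_SECONDS = 300          # assumed TTL on the service's A record
HEALTH_CHECK_SECONDS = 30      # assumed time to notice a site is down
BGP_RECONVERGENCE_SECONDS = 5  # assumed time for the anycast prefix to
                               # be withdrawn and traffic to reroute

# Unicast + DNS failover: a client that cached the record just before
# the failure keeps resolving to the dead site until the cached TTL
# expires, on top of the time it takes to detect the failure.
dns_worst_case = HEALTH_CHECK_SECONDS + DNS_TTL_SECONDS

# Anycast: the same address is advertised from both data centers, so
# once routing reconverges, clients reach the surviving site with no
# DNS change at all.
anycast_worst_case = BGP_RECONVERGENCE_SECONDS

print(f"DNS-based failover, worst case: {dns_worst_case}s")
print(f"anycast failover, worst case:   {anycast_worst_case}s")
```

The price of the shorter anycast number, as the post notes, is keeping the backend databases in both data centers tightly synchronized.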

There is, in fact, some interesting recent research in this area. The research is directed at the DNS root servers themselves, probably because they are publicly accessible, and a well-known system that has relied on anycast for many years (so the operators of the root DNS servers are probably well versed in the ways of anycast). One interesting chart from the post over at APNIC's blog:

[chart from the APNIC blog post not reproduced here]

The C root has 8 servers, while the L root has around 144 (according to the article pointed to above). Why is it… Continue reading
