We are witnessing a major shift from traditional enterprise data centers to much larger warehouse-scale cloud data centers. This shift is driven by economies of scale and the benefits of cloud computing, and is happening for both public and private clouds.
These large data centers need much higher-performance networks that bear little resemblance to traditional enterprise networks. A cloud data center network needs to interconnect many thousands of servers with predictable bandwidth and low latency.
Our original goal was a switch that could connect 10,000 servers with a simple two-stage network, deliver predictable Gigabit performance to each server, and do so at a price point compatible with web and cloud business models. Just to be clear, such a network requires 10 Terabits/second of throughput (10,000 x 1 Gbps), active-active load-sharing redundancy to avoid any single point of failure, and the ability to run 24×7, since there are no maintenance windows in the cloud world.
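The throughput figure above is simple arithmetic, and a quick sketch makes the back-of-the-envelope check explicit (variable names are illustrative):

```python
# Back-of-the-envelope check of the aggregate throughput figure:
# 10,000 servers, each guaranteed 1 Gbps, require 10 Tbps in aggregate.
servers = 10_000
per_server_gbps = 1  # predictable 1 Gigabit/second per server

aggregate_gbps = servers * per_server_gbps
aggregate_tbps = aggregate_gbps / 1_000  # 1 Tbps = 1,000 Gbps

print(f"{aggregate_tbps:.0f} Tbps")  # → 10 Tbps
```

Note this is the non-blocking worst case: it assumes every server can drive its full gigabit simultaneously, which is exactly the "predictable performance" requirement stated above.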
I am very pleased with the product that resulted from this development, the Arista 7500 data center switch. It turned out really great, even better than we originally anticipated.
The Arista 7500 switch is the highest throughput 10G Ethernet switch in Continue reading
This is probably one of the most difficult entries I have ever written. I have decided to leave my job at Oracle. I don't have a destination yet, but I intend to take some time thinking about it before I take the next step. I am leaving Oracle, but I will still be involved with Solaris and OpenSolaris in some form or other. Having spent 14 years writing a million-plus lines of code and architecting some of the most complex subsystems, I don't intend to just walk away.
The last 2-3 days have been a very emotional journey for me. I thought I was a very strong-willed person, but it was amazing how many times I came close to tears as so many people stopped by. All I can say is that I am so grateful the community feels I have done something useful (both personally and professionally) for Solaris. The journey has been nothing but wonderful, and I will surely miss everyone. But I have learned one thing in the last several years: never say goodbye, because our paths will cross again!
Best of luck to everyone in the Solaris community who help it Continue reading
It’s one of those technologies that many have only a cursory awareness of. It is certainly not a ‘mainstream’ technology in comparison to IP, Ethernet, or even Fibre Channel. Those who are aware of it know InfiniBand as a high-performance compute clustering technology typically used for very short interconnects within the data center. While this is true, its uses and capacity have expanded into many areas once thought to be out of its realm. In addition, many of the distance limitations that have prevented its wider use are being overcome, in some instances reaching rather amazing distances that rival the more Internet-oriented networking technologies. This article will look closely at this networking technology from both historical and evolutionary perspectives. We will also look at some of the unique solutions its use offers.
Not your mother’s Infiniband
The InfiniBand (IB) specification defines the methods and architecture of the interconnect between the I/O subsystems of next-generation servers, otherwise known as compute clustering. The architecture is based on a serial, switched fabric that currently defines link bandwidths between 2.5 and 120 Gbit/s. It effectively resolves the Continue reading
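The 2.5 to 120 Gbit/s range quoted above falls out of how IB composes a link: a per-lane signaling rate (2.5 Gbit/s for SDR, doubled for DDR, quadrupled for QDR) multiplied by the link width (1x, 4x, or 12x lanes). A minimal sketch of that arithmetic, with the function name being my own illustration rather than anything from the spec:

```python
# InfiniBand signaling rate = per-lane rate x number of lanes.
# Per-lane rates: SDR 2.5 Gbit/s, DDR 5 Gbit/s, QDR 10 Gbit/s.
LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

def link_rate_gbps(speed: str, width: int) -> float:
    """Signaling rate of a link with the given speed grade and lane count."""
    return LANE_GBPS[speed] * width

print(link_rate_gbps("SDR", 1))   # 2.5 Gbit/s -- the low end cited above
print(link_rate_gbps("QDR", 12))  # 120 Gbit/s -- the high end cited above
```

These are raw signaling rates; with the 8b/10b encoding used at these speed grades, the usable data rate is 80% of the signaled figure (e.g. a 4x SDR link signals 10 Gbit/s but carries 8 Gbit/s of data).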