
Author Archives: David Sinn

Improve Your Open Networking Experience with Cumulus® VX™

In past jobs, when I was responsible for the architecture and engineering of networks, my peers and I would often spend considerable time in the lab, testing out new network designs or approaches that we were looking to implement.

As anyone who has had to build a lab will attest, you never have enough gear, power or space to do all of the testing you would like. From having to build the network out of gear that's been cast off from the production network to not being able to run the latest software, you can end up questioning your test results. Between the limits on cooling and power and having to find and run the cables to connect it all together, it can be a lot of work that still may not answer everything you need for production.

In the compute space, this has been less of an issue in recent years. With the introduction of accessible virtualization, application teams could simulate entire solution stacks on their desktops. While you wouldn't want to run your production environment on many of them, you could at least simulate all of the components in the solution and verify that what you were doing differently was viable. Continue reading

Open Networking for the Whole of Your Data Center Network


In the past, I’ve designed, deployed and operated networks of various sizes, needs and scopes. One of the perennial design points common to all of them is how to approach the out-of-band (OOB) network. When it comes to making sure your production network operates in the face of issues, the OOB network is often a critical component. But it also raises the question of how to build it, what components to use and how much they affect the “day job” of running the production network. These decisions haven’t always been easy.

Generally, there is a spectrum of approaches. On one end is the choice to go with the same gear that you are deploying in the production network. On the other end is the decision to build the OOB network out of whatever you can get from the local or online electronics superstore. One can cause you budget problems; the other raises the question of whether your OOB network will be there when you most need it. All too often the most frugal design wins, and that can leave you troubleshooting the OOB network before you can troubleshoot the production network. So the issue is more than just the initial acquisition cost, Continue reading

Accelerating Hadoop With Cumulus Linux

One of the questions I’ve encountered in talking to our customers has been “What environments are a good example of working on top of the Layer-3 Clos design?” Most engineers are familiar with the classic Layer-2-based Core/Distribution/Access/Edge model for building a data center. While that has served us well with the older client-server, north-south traffic flows and in smaller deployments, modern distributed applications stress the approach to its breaking point. Since L2 designs normally need to be built around pairs of devices, relying on individual platforms to carry 50% of your data center traffic presents a risk at scale. On top of this, you need a long list of protocols, which can result in a brittle and operationally complex environment as you deploy tens of devices.
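To make that scaling argument concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the post) comparing the share of fabric capacity lost when a single device fails in a two-switch aggregation pair versus a wider spine layer. The spine width used below is an assumed value purely for illustration.

```python
# Back-of-the-envelope comparison: capacity lost when a single device fails.
# Topology sizes below are illustrative assumptions, not a reference design.

def capacity_lost_on_failure(parallel_devices: int) -> float:
    """Fraction of east-west capacity lost when one of N equal-cost devices fails."""
    return 1.0 / parallel_devices

# Classic L2 design: traffic is split across a pair of aggregation switches.
l2_pair = capacity_lost_on_failure(2)

# L3 Clos design: traffic is ECMP-balanced across a wider spine layer.
clos_spines = 16  # assumed spine width for illustration
clos = capacity_lost_on_failure(clos_spines)

print(f"L2 pair: one failure removes {l2_pair:.0%} of capacity")
print(f"L3 Clos: one failure removes {clos:.1%} of capacity "
      f"(with {clos_spines} spines)")
```

The exact numbers depend on the spine width and oversubscription you choose; the point is simply that the blast radius of any single failure shrinks as the fabric gets wider.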

Hence the rise of the L3 Clos approach, which combines many small boxes, each carrying only a subset of your traffic, with industry-standard protocols that have a long history of operational stability and ease of troubleshooting. And while the approach can be applied to many different problems, building a practical implementation is the best way to show Continue reading
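The excerpt above cuts off, but as a rough illustration of what “industry standard protocols” can look like in an L3 Clos fabric, here is a minimal sketch of one common pattern: a unique private eBGP ASN per leaf, peering toward a shared spine ASN over unnumbered uplinks. This is my own assumption of a typical design, not the post’s implementation; the ASN ranges, interface names, and FRR/Quagga-style stanzas are illustrative only.

```python
# Illustrative sketch: per-leaf eBGP ASN assignment and skeleton BGP stanzas
# for a small leaf-spine Clos fabric. Syntax loosely follows an FRR/Quagga
# "router bgp" style; all names and numbering are assumptions.

SPINE_ASN = 65000                       # assumed: all spines share one ASN
LEAF_ASN_BASE = 65101                   # assumed: each leaf gets its own private ASN
NUM_LEAVES = 4
UPLINKS_PER_LEAF = ["swp51", "swp52"]   # assumed uplink interface names

def leaf_bgp_config(leaf_index: int) -> str:
    """Return a skeleton BGP stanza for one leaf switch."""
    asn = LEAF_ASN_BASE + leaf_index
    lines = [
        f"router bgp {asn}",
        f"  bgp router-id 10.0.0.{leaf_index + 1}",
    ]
    # eBGP peering toward each spine, one session per uplink interface.
    for uplink in UPLINKS_PER_LEAF:
        lines.append(f"  neighbor {uplink} interface remote-as {SPINE_ASN}")
    lines.append("  address-family ipv4 unicast")
    lines.append("    redistribute connected")
    return "\n".join(lines)

if __name__ == "__main__":
    for i in range(NUM_LEAVES):
        print(f"! leaf{i + 1}")
        print(leaf_bgp_config(i))
        print()
```

A production design would add route filtering, ECMP tuning and per-rack addressing, but even this skeleton shows why the approach appeals operationally: every leaf is configured from the same small template, and the only protocol in play is plain BGP.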