Category Archives for "Plexxi Blog"

Resiliency in Controller-Based Network Architectures

Last week Ivan Pepelnjak wrote an article about the failure domains of controller-based network architectures. At the core of SDN solutions is the concept of a controller, which in most cases lives outside the network devices themselves. A controller, as a central entity controlling the network (hence its name), provides significant value and capabilities to the network. We have talked about these in this blog many times.

Centralized Control

When introducing a centralized entity into any inherently distributed system, the architecture of that system needs to carefully consider failure domains and scenarios. Networks have long been distributed entities, with each device more or less independent and a huge suite of protocols defined to manage the distributed state among them. When you think about it, the extent of distribution we have created in networks is actually quite impressive. We have built an extremely large distributed system with local decision making and control. I am not sure there are many other examples of complex distributed systems that truly run without some form of central authority.

It is exactly that last point that we networking folks tend to forget or ignore. Many control systems in Continue reading
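As a rough illustration of the failure-domain question, here is a minimal Python sketch of one common resiliency pattern for controller-based designs: the device caches the last known-good policy pushed by the controller and keeps forwarding with it if controller heartbeats stop. The class, method, and timeout names are hypothetical and do not represent Plexxi's implementation or any real controller API.

```python
# Minimal sketch, hypothetical names only -- not Plexxi's or any vendor's API.
# Idea: a controller failure should be a control-plane event, not a data-plane
# outage. The device keeps forwarding with the last policy it was given.

import time

HEARTBEAT_TIMEOUT_S = 3.0  # assumed threshold for declaring the controller unreachable


class SwitchAgent:
    """Hypothetical on-device agent in a controller-based design."""

    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()
        self.active_policy: dict = {}      # what is programmed into the data plane
        self.last_good_policy: dict = {}   # snapshot kept for headless operation

    def on_controller_update(self, policy: dict) -> None:
        """Controller pushed new state: apply it and cache it for failover."""
        self.last_heartbeat = time.monotonic()
        self.active_policy = policy
        self.last_good_policy = dict(policy)

    def tick(self) -> None:
        """Periodic liveness check run by the device itself."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            # Controller is gone; do NOT flush forwarding state. Keep running
            # on the cached policy until the central authority returns.
            self.active_policy = self.last_good_policy
```

The point of the sketch is only that a well-designed controller failure domain degrades to "no new changes" rather than "no forwarding."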

Nontraditional network integrations

If you listen to the chatter around the network industry, you are starting to see a lot more discussion about integration. While particularly clueful individuals have been hip to this for a while, it seems the industry at large is just awakening to the idea that the network does not exist in isolation. Put differently, the idea that the network can be oblivious to everything around it (and everything around it oblivious to the network) is losing steam as orchestration frameworks like OpenStack take deeper root.

Having glue between the major infrastructure components is critical to having seamless operation across all the resources required to satisfy an application or tenant workload. But there is additional (potentially greater!) advantage to be had by performing some less traditional integrations.

Where do integrations matter?

There are two primary reasons to have integrated infrastructure: to cut down on time and to cut down on mistakes. Integration is almost always in support of automation. Depending on the exact integration, that automation is aimed at making things faster and cheaper, or at making them less prone to Layer 8 issues.

The point here is that integration is rarely done just for the sake of integration. Companies need Continue reading
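To make the idea of glue between infrastructure components concrete, here is a small hypothetical Python sketch of an integration hook: an orchestration event for a new tenant workload is translated automatically into calls against a network controller client. The event fields and the network_client methods are invented for illustration; they are not a real OpenStack or Plexxi API.

```python
# Hypothetical integration sketch -- invented event shape and client methods,
# not a real OpenStack or controller SDK. It shows the general pattern: an
# orchestration event drives network configuration with no human in the loop.

from dataclasses import dataclass


@dataclass
class WorkloadEvent:
    """Assumed shape of an orchestration notification for a new workload."""
    tenant: str
    vm_name: str
    host: str
    bandwidth_mbps: int


def on_workload_scheduled(event: WorkloadEvent, network_client) -> None:
    """Translate the orchestration event into network intent automatically."""
    # network_client stands in for whatever SDN controller client is in use.
    network_client.create_tenant_segment(tenant=event.tenant)
    network_client.attach_host(tenant=event.tenant, host=event.host)
    network_client.reserve_bandwidth(tenant=event.tenant, mbps=event.bandwidth_mbps)
```

The payoff is exactly the two reasons above: the workflow is faster because nothing waits on a ticket, and it is less error-prone because nobody re-keys the request by hand.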

After cheap, what is important for cloud services?

Amazon is indisputably the biggest name in cloud service providers. They have built up a strong market presence primarily on the argument that access to cheap compute and storage resources is attractive to companies looking to shed IT costs as they move from on-premises solutions to the cloud. But after the initial push for cheap resources, how will this market develop?

Is cheap really cheap?

Amazon has cut prices on its cloud offering more than 40 times since introducing the service in 2006. The way this gets translated in press circles is that cloud services pricing is approaching some floor. But is that true?

In October 2013, Ben Kepes over at Forbes wrote an interesting article that included a discussion of AWS pricing. In the article, he quotes some work done by Profitbricks that shows AWS pricing relative to Moore’s Law. The article is here, and the chart from the article, plotting AWS price cuts against Moore’s Law, is below.

Moore’s Law tells us that performance will roughly double every two years. Of course it is not really a law but more of a principle that is useful in forecasting how generalized compute and storage performance will track over time. The other side of this law is that we have Continue reading
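As a back-of-the-envelope check on what that framing implies, here is a small Python sketch that works out the price trajectory you would expect if price/performance doubles every two years. The dollar figure is illustrative only, not an actual AWS price.

```python
# Rough arithmetic only: if price/performance doubles every two years, the cost
# of a fixed amount of compute should roughly halve every two years. The $1.00
# starting price is an assumption for illustration, not an AWS list price.

DOUBLING_PERIOD_YEARS = 2.0


def moores_law_cost(initial_cost: float, years: float) -> float:
    """Expected cost of a fixed workload after `years` if Moore's Law sets the pace."""
    return initial_cost * 0.5 ** (years / DOUBLING_PERIOD_YEARS)


# A $1.00/hour workload in 2006, tracked forward to 2013 (7 years):
print(round(moores_law_cost(1.00, 7), 3))   # ~0.088 -> roughly 9 cents
# Implied annual decline for a fixed workload:
print(round(1 - 0.5 ** (1 / DOUBLING_PERIOD_YEARS), 2))  # ~0.29 -> about 29% per year
```

That roughly 29 percent annual decline is the baseline the chart above compares AWS's price cuts against.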

Cracking the cloud code

The cloud is one of those technology trends that seems to be perpetually on the cusp of becoming ubiquitous. But if recent analyst reports are any indication, cloud’s breakthrough moment is imminent. Late last year, Gartner predicted that in 2016, the bulk of new IT spend would shift to the public cloud, and that by the end of 2017, nearly half of all enterprises would have hybrid cloud deployments.

But if cloud has been around for so long, why is it taking this long to become the dominant destination for IT spend?

Psychology vs Technology

The determinant for most change is the underlying psychology that drives individuals and organizations. The IT industry as a whole has been underpinned by a deep-seated need for control. The reason that most companies keep expertise in-house is that they want to maintain control: over their data, over the integration with their business workflows, over their schedules, and over their spend.

Of course, control is under constant attack from cost. While traffic is booming, IT spend in most organizations continues to trend flat to down. This means that organizations need to constantly provide more compute resources, more storage, and faster interconnect while working with Continue reading
