Author Archives: mike.bushong

IT’s march towards mass customization

[Unbeknownst to me, Matt Oswalt (@Mierdin on Twitter) published a thematically similar post a few days before mine. While I did not see that post, it seems disingenuous not to reference it, so please read his thoughts here: http://keepingitclassless.net/2014/11/mass-customization/]

IT is constantly evolving, from mainframes to disaggregated components to an integrated set of infrastructure elements working in support of applications. But that description is more about how individual infrastructure is packaged and less about the role that these solutions play. There is a more subtle but perhaps more profound change in IT that is simultaneously taking place: a shift in how IT architectures are actually being designed.

So what is at the heart of this change?

Single purpose infrastructure

IT was born with the mainframe. Mainframes were basically entire IT ecosystems in a box. They included compute, storage, networking and applications. But what is most notable about these solutions is that the entire system was aimed at providing a single outcome. That is to say that the mainframe itself was a composed system that was designed with a single purpose in mind: deliver some application.

In the early days of IT, there was no need for systems to run different Continue reading

Dealing with vs. Celebrating failure

There’s a meme that has been making the rounds through leadership circles for some time around celebrating failure. If you aren’t failing, you aren’t pushing the boundaries. The original premise of this line of thinking is that failure is not something to be feared. But there is a difference between using failure to learn well-earned lessons and declaring success after blowing up on the launchpad.

The failure cliches

It’s worth starting with some of the most common cliches around failure:

  • I have not failed. I’ve just found 10,000 ways that won’t work. — Thomas Edison
  • Success is not final, failure is not fatal: it is the courage to continue that counts. — Winston Churchill
  • There is only one thing that makes a dream impossible to achieve: the fear of failure. — Paulo Coelho
  • Only those who dare to fail greatly can ever achieve greatly. — Robert F Kennedy

Doing a simple web search for failure quotes yields hundreds more. The basic gist of the resulting tome of sayings? Anything worth doing is difficult, and achieving anything great is unlikely to happen on the first try.

The side note no one mentions

It is absolutely true that forging a new path Continue reading

Networking’s UX victims

Our perception of nirvana is impacted mightily by current conditions. For people who live in third world countries, for example, merely having running water or reliable electricity can be a life-altering boon. Meanwhile, those of us who are more accustomed to the creature comforts of life consider slow internet or a poorly seasoned meal worthy of public scorn (even if we add the hashtag #firstworldproblems).

So how is the current state of networking impacting its user base?

A new normal

Perhaps the most insidious effect of poor conditions is that prolonged exposure can actually cause us to reset our baseline for normal. When we are subjected to extended periods of greatness or even long periods of suck, we adjust our expectations.

In networking, this means that our current normal has been forged through diligent neglect of actual user experience for decades. It's not so much purposeful behavior by the incumbent networking players as it is a matter of focus being placed elsewhere. For at least the last few decades, the future of networking has always been defined by the next protocol or knob. That is to say that the focus for product development has always been about bolstering device capability.

With the focus Continue reading

The requirements for IT’s Third Platform

With the blurring of technology lines, the rise of competitive companies, and a shift in buying models all before us, it would appear we are at the cusp of ushering in the next era in IT—the Third Platform Era. But as with the other transitions, it is not the technology or the vendors that trigger a change in buying patterns. There must be fundamental shifts in buying behavior driven by business objectives.

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends: the proliferation of data (read: Big Data) and the need for additional performance and scale. In many regards, the first begets the second. As data becomes more available—via traditional datacenters, and both public and private cloud environments—applications look to use that data, which means the applications themselves have to go through an evolution to account for the scale and performance required.

Scale up or scale out?

When the industry talks about scale, people typically trot out Moore’s Law to explain how capacity doubles every 18 months. Strictly speaking, Moore’s Law is more principle than law, and it was initially applied to the number of transistors Continue reading

The power of Clustering Illusion when managing image

As humans, we are predisposed to finding order out of otherwise random data. When we look at clouds or even a mountain ridge, we find shapes that are familiar to us. When we see data, we instinctively search for patterns to help make sense of what might appear to be random information. It might be our inherent need for understanding. Or maybe we are just programmed to compare things to stuff we already know. Whatever the underlying cause, it’s a powerful trait that virtually all of us share.

Understanding that people want to put information into buckets and draw conclusions, are there things that we can be doing to help manage our own image?

Walking a Vegas game floor

Maybe you have walked a gaming floor in Las Vegas, turning your head as you are assaulted by the lights and noise that accompany the gambling experience. While perusing the various games, have you ever spotted a roulette table and noticed that the last 6 spins have all come up black? The next spin is bound to be red!

Of course we all know that the likelihood of a red on the next spin is statistically the same, regardless of what Continue reading
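The independence claim is easy to check numerically. Here is a minimal simulation sketch (a European wheel is assumed: 18 red, 18 black, 1 green zero):

```python
import random

random.seed(7)

# European wheel assumed: 18 red, 18 black, 1 green zero.
POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"]

def spin():
    return random.choice(POCKETS)

# Baseline: probability of red on any single spin.
N = 300_000
baseline = sum(spin() == "red" for _ in range(N)) / N

# Conditional: probability of red immediately after six blacks in a row.
streak_count = reds_after_streak = 0
for _ in range(600_000):
    seq = [spin() for _ in range(7)]
    if all(c == "black" for c in seq[:6]):
        streak_count += 1
        reds_after_streak += seq[6] == "red"

print(f"P(red)            ~ {baseline:.3f}")
print(f"P(red | 6 blacks) ~ {reds_after_streak / streak_count:.3f}")
# Both hover around 18/37 ~ 0.486 -- the streak tells the wheel nothing.
```

Both estimates converge on the same value; the six-black streak carries no information about the next spin.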

SDN and legacy companies: laggards or pragmatists?

There was an interesting Twitter thread over the weekend initiated by Ethan Banks (@ecbanks). He commented that there was too much technique churn in SDN and NetOps (the networking equivalent of DevOps). His point was that all this change in how to do things leaves users in an impossible spot. How can you pick up a new technology if the frameworks around how to use it are constantly changing?

His conclusion was that we cannot herd these cats. But what is really going on?

No consensus on operating models

The most basic truth here is that there is no real consensus on operating models around any of the new technology. While there are rough agreements on a few architectural principles (and even there, far more is up in the air than well grounded), there really are not a lot of best practices to which companies can pin their operations.

Sure, it might be obvious to people that SDN is here to stay. But what exactly does that mean? And which SDN do I evaluate, purchase, and eventually deploy? Do I go with OpenFlow because ONF has convinced me that openness is the primary tenet? Do I Continue reading

Conformity as an inhibitor to strategy

Early in life, we are all made acutely aware of the power of peer pressure. Most of us probably attribute it to a deep need for belonging. But what if that deep sense of belonging is less about social acceptance and more about how we are psychologically wired? In fact, the pursuit of conformity goes beyond mere social dynamics; it is rooted in our cognitive makeup.

While this plays out in very obvious ways for individuals, the dynamics actually hold true for organizations. And for companies, the stakes might be even higher.

A guy named Solomon

In the 1950s, an American psychologist named Solomon Asch ran through a series of experiments to test the effects of conformity on individuals. His studies have been published several times, but one test in particular gives a fascinating look into how we operate.

Asch took a number of participants and asked them very simple perceptual questions. To conduct the study, Asch brought each participant into a room with seven other people. However, those seven people were actually part of the study. The eight individuals were shown a card with a line on it, followed by a card with three lines on it. The Continue reading

Networking’s atomic unit: Going small to scale up

The major IT trends are all being driven by what can probably best be summarized as more. Some of the stats are actually fairly eye-popping:

  • 40% of the world’s 7 billion people connected in 2014
  • 3 devices per person by 2018
  • Traffic will triple by 2018
  • 100 hours of YouTube video are uploaded every minute
  • Datacenter traffic alone will grow with a 25% CAGR

The point is not that things are growing, but that they are growing exceedingly fast. And trends like the Internet of Things and Big Data, along with the continued proliferation of media-heavy communications, are acting as further accelerant.
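Those traffic bullets are consistent with each other, as a quick compound-growth check shows (a five-year horizon is assumed here; the post doesn't state one):

```python
# A 25% CAGR roughly triples traffic over five years.
cagr = 0.25
years = 5
growth_multiple = (1 + cagr) ** years
print(f"{growth_multiple:.2f}x")  # 3.05x
```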

So how do we scale?

Taking a page out of the storage and compute play books

Storage and compute have gone through architectural changes to alleviate their initial limitations. While networking is not the same as storage or compute, there are interesting lessons to be learned. So what did they do?

The history lesson here is probably largely unnecessary, but the punch lines are fairly meaningful. From a storage perspective, the atomic unit shifted from the spinning disk down to a block. Ultimately, to scale up, what storage did was reduce the size of the useful atomic unit Continue reading

Outcome bias and the psychology that prevents sustained success

In psychology, there is a phenomenon called Outcome Bias, which basically means that we tend to judge the efficacy of a decision based primarily on how things turn out. After a decision is made, we rarely examine the conditions that existed at the time of the decision, choosing instead to evaluate performance based solely (or mostly) on whether the end result was positive or not.

But what happens as luck plays a role in outcomes? Did we actually make the best decision? Or was the result really a product of conditions outside of our control?

Understanding Outcome Bias

A relatively strong example of Outcome Bias can be found in the gambling world. Take poker, for instance. Many players will overplay the cards they are dealt. Imagine that you have four cards to a straight. There are two remaining cards to play. You might make bets that are statistically weak, but if the card you were looking for shows up, you will evaluate your own performance as strong for the hand. After all, you did win, right?

The challenge with Outcome Bias is that the fortuitous turn of events leads you to play other hands in a similar way. Despite the fact Continue reading
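For concreteness, the draw described above can be priced out exactly. This sketch assumes a hold'em-style game with an open-ended draw after the flop (eight outs, two cards to come); the post doesn't pin down the variant:

```python
from fractions import Fraction

outs = 8           # cards that complete an open-ended straight draw
unseen_turn = 47   # 52 minus 2 hole cards and 3 flop cards
unseen_river = 46  # one fewer unseen card after the turn

# P(hit) = 1 - P(miss on the turn) * P(miss on the river)
p_miss = Fraction(unseen_turn - outs, unseen_turn) * \
         Fraction(unseen_river - outs, unseen_river)
p_hit = 1 - p_miss

print(f"chance of completing the straight: {float(p_hit):.3f}")  # ~0.315
# You miss roughly two hands out of three; winning one such hand
# says nothing about the quality of the bet.
```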

SDN Market Sizing Redux

In April 2013, Plexxi teamed up with SDNCentral to take a look at how the SDN market might emerge. The original post along with supporting infographic and written analysis can be found here. At a high level, the major takeaway was that we predicted that between 30 and 40 percent of the networking market would be influenced by SDN by 2018. At the time, this was BY FAR the most aggressive take on SDN. IDC had been projecting a little more than $3B by 2018, which would put their estimates somewhere around 5% of the overall networking spend.

So 18 months later, how do I feel about the analysis?

SDN spend is largely substitutive

In the original analysis, I made the point that SDN spend is not likely to be net-new dollars coming into the networking industry but rather a shift in dollars from traditional networking equipment to SDN-enabled equipment.

How’d I do? I’d say that this was spot on. Of course, this was the easiest of the predictions at the time. It is rare that dollars are created; they are usually shifted from somewhere else. Here, all I was really predicting was that the somewhere else was other Continue reading

Theory of Constraints and common staffing mistakes

In many companies—both large and small—getting staffing right is a challenge. Critical teams are always starved for resources, and the common peanut butter approach to distributing headcount rarely maps to an explicit strategy. In fact, over time, some teams will grow disproportionately large, leaving other teams (system test, anyone?) struggling to keep pace.

But why is it that staffing is so difficult to get right?

Theory of Constraints

In yesterday’s blog post, I referenced Eliyahu Goldratt’s seminal business book The Goal, and highlighted the Theory of Constraints. I’ll summarize the theory here (if you read yesterday, you can safely skip to the next section).

The Theory of Constraints is basically a management paradigm that promotes attention to bottleneck resources above all else. In any complex system that has multiple subsystems required to deliver something of value, there will be one or more bottleneck resources. A bottleneck resource is the limiting subsystem in the overall system. To increase the overall output of the system as a whole, you have to address the bottlenecks.

The book focuses on a manufacturing plant. Basically, there are different stations in the manufacturing process. Each makes a different part of the widget. To Continue reading

Applying the Theory of Constraints to network transport

For those of you into expanding your experience through reading, there is a foundational reference at the core of many MBA programs. The book, Eliyahu Goldratt’s The Goal, introduces a concept called the Theory of Constraints. Put simply, the Theory of Constraints is the premise that systems will tend to be limited by a very small number of constraints (or bottlenecks). By focusing primarily on the bottlenecks, you can remove limitations and increase system throughput.

The book uses this theory to talk through management paradigms as the main character works through a manufacturing problem. But the theory actually applies to all systems, making its application useful in more scenarios than management or manufacturing.

Understanding the Theory of Constraints

Before we get into networking applications, it is worth walking through some basics about the Theory of Constraints. Imagine a simple set of subsystems strung together in a larger system. Perhaps, for example, software development requires two development teams, a QA team, and a regressions team before new code can be delivered.

If output relies on each of these subsystems, then the total output of the system as a whole is determined by the lowest-output subsystem. For instance, imagine that SW1 Continue reading
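The pipeline above can be reduced to a toy model (the throughput figures below are invented for illustration):

```python
# Features per sprint each stage can process (hypothetical numbers).
stages = {
    "dev_team_1": 14,
    "dev_team_2": 12,
    "qa": 8,
    "regression": 10,
}

def system_throughput(stages):
    # Total output is capped by the lowest-output subsystem.
    return min(stages.values())

bottleneck = min(stages, key=stages.get)
print(bottleneck, system_throughput(stages))  # qa 8

stages["dev_team_1"] = 25                     # speed up a non-bottleneck...
print(system_throughput(stages))              # ...output is still 8

stages["qa"] = 11                             # relieve the actual constraint
print(system_throughput(stages))              # 10 -- regression binds next
```

Note the second step: improving anything other than the bottleneck leaves total output unchanged, which is the theory's core management point.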

Dependency management and organic IT integrations

If the future of IT is about integrated infrastructure, where will this integration take place? Most people will naturally tend to integrate systems and tools that occupy adjacent spaces in common workflows. That is to say that where two systems must interact (typically through some manual intervention), integration will take place. If left unattended, integration will grow organically out of the infrastructure.

But is organic growth ideally suited for creating a sustainable infrastructure?

A with B with C

In the most basic sense, integration will tend to occur at system boundaries. If A and B share a boundary in some workflow, then integrating A with B makes perfect sense. And if B and C share a boundary in a different (or even further down the same) workflow, then it makes equal sense to integrate B with C.

In less abstract terms, if you use a monitoring application to detect warning conditions on the network, then integrating the monitoring application and the network makes good sense. If that system then flags issues that trigger some troubleshooting process, then integrating the tools with your help desk ticketing system might make sense to automatically open up trouble tickets as issues arise.

In Continue reading

On choice-supportive bias and the need for paranoid optimism

In cognitive science, choice-supportive bias is the tendency to view decisions you have made in the most favorable light. Essentially, we are all hardwired to subconsciously reinforce decisions we have made. We tend to retroactively ascribe positive characteristics to our selections, allowing us to actually strengthen our conviction after the point of decision. This is why we become more resolute in our positions after the initial choice is made.

In corporate culture, this is a powerful driver behind corporate conviction. But in rapidly-shifting landscapes, it can be a dangerous mindset. Consistently finding reasons to reinforce a decision can insulate companies from other feedback that might otherwise initiate a different response. A more productive mindset, especially for companies experiencing rapid evolution, is paranoid optimism.

The need for choice-supportive bias

Choice-supportive bias can actually be a powerful unifier in companies for whom the right path is not immediately obvious. Throughout most of the high-tech space, strategic direction is murky at best. Direction tends to be argued internally before some rough consensus is reached. But even then, the constantly changing technology that surrounds solutions in high-tech means that it can be difficult to stage a lasting rally around a particular direction.

Failing some Continue reading

OpEx savings and the ever-present emergence of SDN

Software-defined networking is fundamentally about two things: the centralization of network intelligence to make smarter decisions, and the creation of a single (or smaller number of) administrative touch points to allow for streamlined operations and to promote workflow automation. The former can potentially lead to new capabilities that make networks better (or create new revenue streams), and the latter is about reducing the overall operating costs of managing a network.

Generating revenue makes perfect sense for the service providers who use their network primarily as a means to drive the business. But most enterprises use the network as an enabling entity, which means they are more interested in the bottom line than the top. For these network technology consumers, the notion of reducing costs can be extremely powerful.

But how do those OpEx savings manifest themselves?

OpEx you can measure

When we consider OpEx, it’s easy to point to the things that are measurable: space, power and cooling. So as enterprise customers examine various solutions, they will look at how many devices are required, and then how those devices consume space, power, and cooling. It is relatively straightforward to do these calculations and line up competing solutions. Essentially, you calculate Continue reading
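The space/power/cooling math is straightforward to sketch. All figures below are invented for illustration, not vendor data:

```python
# Back-of-the-envelope annual power + cooling cost for two hypothetical solutions.
KWH_PRICE = 0.10          # $/kWh, assumed utility rate
HOURS_PER_YEAR = 24 * 365
COOLING_OVERHEAD = 1.5    # watts drawn per watt of equipment (rough PUE-style assumption)

def annual_power_cost(devices, watts_per_device):
    total_watts = devices * watts_per_device * COOLING_OVERHEAD
    return total_watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

solution_a = annual_power_cost(devices=40, watts_per_device=350)
solution_b = annual_power_cost(devices=24, watts_per_device=500)
print(f"A: ${solution_a:,.0f}/yr  B: ${solution_b:,.0f}/yr")
```

Line the competing solutions up the same way with space (rack units) and you have the measurable side of the OpEx comparison.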

Datacenter resource fragmentation

The concept of resource fragmentation is common in the IT world. In the simplest of contexts, resource fragmentation occurs when blocks of capacity (compute, storage, whatever) are allocated, freed, and ultimately re-allocated to create noncontiguous blocks. While the most familiar setting for fragmentation is memory allocation, the phenomenon plays itself out within the datacenter as well.
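That allocate/free/re-allocate cycle is easy to see in a toy first-fit allocator (block sizes are hypothetical):

```python
def alloc(blocks, size, tag):
    """First-fit: claim `size` contiguous free slots, or fail."""
    run = 0
    for i, slot in enumerate(blocks):
        run = run + 1 if slot is None else 0
        if run == size:
            for j in range(i - size + 1, i + 1):
                blocks[j] = tag
            return True
    return False

def free(blocks, tag):
    for i, slot in enumerate(blocks):
        if slot == tag:
            blocks[i] = None

capacity = [None] * 10          # ten equal-sized units of some resource
for tag in ("A", "B", "C"):
    alloc(capacity, 3, tag)     # fill slots 0-8 with three allocations
free(capacity, "A")             # free the ends...
free(capacity, "C")             # ...leaving "B" stranded in the middle

print(capacity.count(None))     # 7 units free in total...
print(alloc(capacity, 5, "D"))  # ...but no contiguous run of 5: False
```

Seven of ten units are free, yet a five-unit request fails: the capacity exists but is noncontiguous, which is fragmentation in a nutshell.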

But what does resource fragmentation look like in the datacenter? And more importantly, what is the remediation?

The impacts of virtualization

Server virtualization does for applications and compute what fragmentation and noncontiguous memory blocks did for storage. By creating virtual machines on servers, each with a customizable resource footprint, the once large contiguous blocks of compute capacity (each server) can be divided into much smaller subdivisions. And as applications take advantage of this architectural compute model, they become more distributed.

The result of this is an application environment where individual components are distributed across multiple devices, effectively occupying a noncontiguous set of compute resources that must be unified via the network. It is not a stretch to say that for server virtualization to deliver against its promise of higher utilization, the network must act as the Great Uniter.

Not just a virtual phenomenon

While Continue reading

Using frameworks for effective sales presos

Anyone who has ever delivered a presentation or even listened to one knows that the key to an effective presentation is telling a story. If you peruse even a few pages of any of the books about how to deliver a solid presentation, you will find references to storytelling and its role in passing along information throughout history. Yes, we must tell stories. But not all stories work.

So how do you pick a story or a framework for a presentation that will be effective?

Stories vs frameworks

Let me start off by saying that you need both stories and frameworks. When you think about the structure of the points you want to convey, think about frameworks. When you want to make a point real, use a story. When you are delivering a technical presentation especially, you are very unlikely to find a single story that can weave in all the points you want to make. You are, after all, a presenter, not a comedian. Don’t try to force all of your points into a long story.

So that leaves you searching for a framework. A framework is simply a way of organizing your points. It is ultimately the framework Continue reading

High availability in horizontally-scaled applications

The networking industry has a somewhat unique relationship with high availability. For compute, storage, and applications, failures are somewhat tolerable because they tend to be more isolated (a single server going down rarely impacts the rest of the servers). However, the network’s central role in connecting resources makes it harder to contain failures. Because of this, availability has been an exercise in driving uptime to near 100 percent.

It is absolutely good to minimize unnecessary downtime, but is the pursuit of perfect availability the right endeavor?

Device uptime vs application availability

We should be crystal clear on one thing: the purpose of the network is not about providing connectivity so much as it is about making sure applications and tenants have what they need. Insofar as connectivity is a requirement, it is important, but the job doesn’t end just because packets make it from one side to the other. Application availability and application experience are far more dominant in determining whether infrastructure is meeting expectations.

With that in mind, the focus on individual device uptime is an interesting but somewhat myopic approach to declaring IT infrastructure success. By focusing on building in availability at the device level, it is easy Continue reading
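For context, the device-uptime chase translates into concrete downtime budgets; this is standard availability arithmetic:

```python
# What each extra "nine" of uptime buys in yearly downtime.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = (1 - availability) * HOURS_PER_YEAR * 60
    print(f"{availability:.5f} uptime -> {downtime_min:8.1f} min/year")
# From roughly 3.7 days at two nines down to about 5 minutes at five nines.
```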

Nontraditional network integrations

If you listen to the chatter around the network industry, you are starting to see a lot more discussion about integration. While particularly clueful individuals have been hip to the fact for a while, it seems the industry at large is just awakening to the idea that the network does not exist in isolation. Put differently, the idea that the network can be oblivious to everything around it (and they to the network) is losing steam as orchestration frameworks like OpenStack take deeper root.

Having glue between the major infrastructure components is critical to having seamless operation across all the resources required to satisfy an application or tenant workload. But there is additional (potentially greater!) advantage to be had by performing some less traditional integrations.

Where do integrations matter?

There are two primary reasons to have integrated infrastructure: to cut down on time and to cut down on mistakes. Integration is almost always in support of automation. Depending on the exact integration, that automation is in support of making things faster and cheaper, or in making things less prone to Layer 8 issues.

The point here is that integration is rarely done just for the sake of integration. Companies need Continue reading

After cheap, what is important for cloud services?

Amazon is indisputably the biggest name in cloud service providers. They have built up a strong market presence primarily on the argument that access to cheap compute and storage resources is attractive to companies looking to shed IT costs as they move from on-premises solutions to the cloud. But after the initial push for cheap resources, how will this market develop?

Is cheap really cheap?

Amazon has cut prices on its cloud offering more than 40 times since introducing the service in 2006. The way this gets translated in press circles is that cloud services pricing is approaching some floor. But is that true?

In October 2013, Ben Kepes over at Forbes wrote an interesting article that included a discussion of AWS pricing. In the article, he quotes some work done by Profitbricks that shows AWS pricing relative to Moore’s Law. The article is here, and the image from the article is below.

Moore’s Law tells us that performance will roughly double every two years. Of course it is not really a law but more a principle useful in forecasting how generalized compute and storage performance will track over time. The other side of this law is that we have Continue reading
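The comparison rests on simple compound-halving arithmetic (a sketch; the doubling period and time span are taken from the text, not from Amazon's price list):

```python
doubling_period_years = 2.0
years = 2014 - 2006          # span since the AWS service launched

# If price-per-unit of compute tracked Moore's Law, it would halve
# once per doubling period.
price_ratio = 0.5 ** (years / doubling_period_years)
print(f"{price_ratio:.4f}x the 2006 price")  # 0.0625x, i.e. a 16x drop
```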