Category Archives for "Plexxi Blog"

The network won’t fit in your head anymore

Triggered by a discussion with a customer yesterday, it occurred to me (again?) that network engineers are creatures of habit and control. We have strong beliefs about how networks should be architected, designed and built. We have done this for a long time and we understand it well. We have tweaked our methods, our tools, our configuration templates. We understand our networks inside out. We have a very clear mental view of how they behave, how packets get forwarded, and how they should be forwarded. It’s comfort, it’s habit; we feel (mostly) in control of the network because we have a clear model of it in our heads.

I don’t believe this is a network engineering trait per se. Software engineers want to understand algorithms inside out; they want to understand the data modeling, the type structures and their relationships.

Uncomfortable

Many of us know the feeling. Something new comes around and it’s hard to wrap your head around it. It challenges the status quo, it changes how we do things, it changes what we (think we) know. When we are given responsibility for something new, there is a desire to understand “it” inside out, as a mechanism for controlling “it”.

I Continue reading

Certified Application to Network Isomorphism Engineer, anyone?

There has been a recurrent debate over the last few years on the future of CCIEs, or more broadly network engineers as we know (and love) them today. While calls for the “death of the CCIE” are certain to grab eyeballs, as always the more probable truth is more nuanced and tricky to predict. Change is certainly coming, but it is important to understand the true value of the present network engineer and how that maps into the future we expect.

Why would network engineers die?

Apart from a deadly virus outbreak transmitted by TCPDump, or for the true preppers out there – the distant alien race whose network engineers have all died out and who are coming to claim ours – the vast disappearance of all network engineers is most likely hyperbole. Yet it is probably fair to say that specific skills that network engineers have long used to compare or present their own value, attributes like the CCIE certification, are diminishing in the value they present to the market. The reason is pretty simple – while a CCIE (or JNCIE, etc) certification implies a thorough knowledge of overall network engineering theories and concepts and a detailed understanding Continue reading

Theory of Constraints and common staffing mistakes

In many companies—both large and small—getting staffing right is a challenge. Critical teams are always starved for resources, and the common peanut butter approach to distributing headcount rarely maps to an explicit strategy. In fact, over time, some teams will grow disproportionately large, leaving other teams (system test, anyone?) struggling to keep pace.

But why is it that staffing is so difficult to get right?

Theory of Constraints

In yesterday’s blog post, I referenced Eliyahu Goldratt’s seminal business book The Goal, and highlighted the Theory of Constraints. I’ll summarize the theory here (if you read yesterday, you can safely skip to the next section).

The Theory of Constraints is basically a management paradigm that promotes attention to bottleneck resources above all else. In any complex system that has multiple subsystems required to deliver something of value, there will be one or more bottleneck resources. A bottleneck resource is the limiting subsystem in the overall system. To increase the overall output of the system as a whole, you have to address the bottlenecks.

The book focuses on a manufacturing plant. Basically, there are different stations in the manufacturing process. Each makes a different part of the widget. To Continue reading

Applying the Theory of Constraints to network transport

For those of you into expanding your experience through reading, there is a foundational reference at the core of many MBA programs. The book, Eliyahu Goldratt’s The Goal, introduces a concept called the Theory of Constraints. Put simply, the Theory of Constraints is the premise that systems tend to be limited by a very small number of constraints (or bottlenecks). By focusing primarily on the bottlenecks, you can remove limitations and increase system throughput.

The book uses this theory to talk through management paradigms as the main character works through a manufacturing problem. But the theory actually applies to all systems, making its application useful in more scenarios than management or manufacturing.

Understanding the Theory of Constraints

Before we get into networking applications, it is worth walking through some basics about the Theory of Constraints. Imagine a simple set of subsystems strung together in a larger system. Perhaps, for example, software development requires two development teams, a QA team, and a regression team before new code can be delivered.

If output relies on each of these subsystems, then the total output of the system as a whole is determined by the lowest-output subsystem. For instance, imagine that SW1 Continue reading
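
The arithmetic behind that example is simple enough to sketch. A minimal illustration in Python, with invented throughput numbers, shows that the output of a serial pipeline is capped by its slowest stage, and that speeding up anything else changes nothing:

```python
# Theory of Constraints, minimally: the throughput of a serial pipeline
# is capped by its slowest stage. All numbers are invented for illustration.

stages = {
    "dev_team_1": 12,   # work items per week
    "dev_team_2": 10,
    "qa":          6,   # <-- the bottleneck
    "regression":  9,
}

def pipeline_throughput(stages):
    """Total output of a serial system is bounded by its slowest subsystem."""
    return min(stages.values())

print(pipeline_throughput(stages))   # 6 -- set by QA

# Speeding up a non-bottleneck stage does not help:
stages["dev_team_1"] = 20
print(pipeline_throughput(stages))   # still 6

# Only relieving the constraint raises system output (and moves the bottleneck):
stages["qa"] = 11
print(pipeline_throughput(stages))   # 9 -- regression is now the constraint
```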

PlexxiPulse—Mark Your Calendar: DemoFriday is 10/24

Plexxi is teaming up with SDNCentral to host DemoFriday on October 24 at 10 a.m. PST. Tune in to hear our own Ed Henry and Nils Stewart demonstrate how to build scalable and manageable Big Data fabrics that easily integrate with systems such as OpenStack and Cloudera. You can register to attend here.

In this week’s PlexxiTube of the week, Dan Backman explains how Plexxi’s Big Data fabric solution is applicable beyond Big Data.

SDN: Unshackling the Network Application Environment

Art Cole claims that SDN will enable the development of a robust ecosystem of network applications in a recent article for Enterprise Networking Planet. As we look at applications, it is worth making the distinction between network apps (things that run on the network) and business apps (apps the network enables). The real value in SDN is that it will permit the business apps to influence the network (whether that is automated or not is an interesting side conversation). To bring this to life there has to be a focus on policy abstraction. This is why Congress (part of OpenStack) and OpenDaylight are potentially powerful. If we can agree on policy abstraction, then the applications can interact with the network and Continue reading

Training Wheels and Protective Gear

Throughout the development cycle of new features and functions for any network platform (or probably most other products not targeted at the mass-market consumer), this one question will always come up: should we protect the user of our product from doing this? And “this” is always something that would allow the user of the product to really mess things up if not done right. As a product management organization you almost have to take a philosophical stand when it comes to these questions.

Protect the user

Sure enough, the question came up last week as part of the development of one of our features. When putting the finishing touches on a feature that allows very direct control over some of the fundamental portions of what creates a Plexxi fabric, our QA team (very appropriately) raised the concern: if the user does this, bad things can happen; should we not allow the user to change this portion of the feature?

This balancing act is part of what has made networking as complex as it has become. As an industry we have been extremely flexible in what we have exposed to our users. We have given access to portions of our products Continue reading

Dependency management and organic IT integrations

If the future of IT is about integrated infrastructure, where will this integration take place? Most people will naturally tend to integrate systems and tools that occupy adjacent spaces in common workflows. That is to say, where two systems must interact (typically through some manual intervention), integration will take place. If left unattended, integration will grow organically out of the infrastructure.

But is organic growth ideally suited for creating a sustainable infrastructure?

A with B with C

In the most basic sense, integration will tend to occur at system boundaries. If A and B share a boundary in some workflow, then integrating A with B makes perfect sense. And if B and C share a boundary in a different (or even further down the same) workflow, then it makes equal sense to integrate B with C.

In less abstract terms, if you use a monitoring application to detect warning conditions on the network, then integrating the monitoring application and the network makes good sense. If that system then flags issues that trigger some troubleshooting process, then integrating the tools with your help desk ticketing system might make sense to automatically open up trouble tickets as issues arise.

In Continue reading

On choice-supportive bias and the need for paranoid optimism

In cognitive science, choice-supportive bias is the tendency to view decisions you have made in the most favorable light. Essentially, we are all hardwired to subconsciously reinforce decisions we have made. We tend to retroactively ascribe positive characteristics to our selections, allowing us to actually strengthen our conviction after the point of decision. This is why we become more resolute in our positions after the initial choice is made.

In corporate culture, this is a powerful driver behind corporate conviction. But in rapidly-shifting landscapes, it can be a dangerous mindset. Consistently finding reasons to reinforce a decision can insulate companies from other feedback that might otherwise initiate a different response. A more productive mindset, especially for companies experiencing rapid evolution, is paranoid optimism.

The need for choice-supportive bias

Choice-supportive bias can actually be a powerful unifier in companies for whom the right path is not immediately obvious. Throughout most of the high-tech space, strategic direction is murky at best. Direction tends to be argued internally before some rough consensus is reached. But even then, the constantly changing technology that surrounds solutions in high-tech means that it can be difficult to stage a lasting rally around a particular direction.

Failing some Continue reading

OpEx savings and the ever-present emergence of SDN

Software-defined networking is fundamentally about two things: the centralization of network intelligence to make smarter decisions, and the creation of a single (or smaller number of) administrative touch points to allow for streamlined operations and to promote workflow automation. The former can potentially lead to new capabilities that make networks better (or create new revenue streams), and the latter is about reducing the overall operating costs of managing a network.

Generating revenue makes perfect sense for the service providers who use their network primarily as a means to drive the business. But most enterprises use the network as an enabling entity, which means they are more interested in the bottom line than the top. For these network technology consumers, the notion of reducing costs can be extremely powerful.

But how do those OpEx savings manifest themselves?

OpEx you can measure

When we consider OpEx, it’s easy to point to the things that are measurable: space, power and cooling. So as enterprise customers examine various solutions, they will look at how many devices are required, and then how those devices consume space, power, and cooling. It is relatively straightforward to do these calculations and line up competing solutions. Essentially, you calculate Continue reading
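
A back-of-the-envelope sketch of that calculation, with every rate and footprint assumed purely for illustration, might look like this:

```python
# Rough annual OpEx for space, power and cooling. All constants are
# assumptions for illustration, not vendor or utility data.

KWH_PRICE = 0.12          # assumed $ per kWh
COOLING_OVERHEAD = 0.5    # assume ~0.5 W of cooling per W of IT load
RACK_UNIT_COST = 150.0    # assumed $ per rack unit per year
HOURS_PER_YEAR = 24 * 365

def annual_opex(devices, watts_per_device, rack_units_per_device):
    power_kwh = devices * watts_per_device * HOURS_PER_YEAR / 1000.0
    power_cost = power_kwh * KWH_PRICE
    cooling_cost = power_cost * COOLING_OVERHEAD   # cooling scales with power
    space_cost = devices * rack_units_per_device * RACK_UNIT_COST
    return power_cost + cooling_cost + space_cost

# Two hypothetical solutions: more small switches vs. fewer denser ones.
print(f"Solution A: ${annual_opex(12, 350, 1):,.0f}/year")
print(f"Solution B: ${annual_opex(8, 450, 1):,.0f}/year")
```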

Plexxi Pulse—HadoopWorld 2014: Is your network ready for Big Data?

We are two short weeks away from HadoopWorld, one of the world’s largest Big Data conferences. October 15–17 our team will be in New York City to demo our Big Data fabric and answer questions about preparing networks for Big Data. Stop by booth 552 to catch up with our team and pick up a pair of Plexxi Socks. We look forward to seeing you there.

In this week’s PlexxiTube of the week, Dan Backman describes how Plexxi manages load balancing in Big Data networks.

Check out what we’ve been up to on social media this week. Have a great weekend!

Cables, Transceivers and 10GBASE-T

In the past few weeks at Plexxi we have spent probably an unreasonable amount of time talking about, discussing and even arguing over Ethernet cables and connectors. As mundane as it may sound, the range of options, variations, restrictions and cost differences for something that is usually an afterthought is mind-boggling. And as a buyer of Ethernet networks, you have probably felt that the choices you make will significantly change the price you pay for the total solution.

During our quarterly Product Management get together, my colleague Andre Viera took 25GbE as a trigger to walk the rest of the team through all the variations of cables and transceivers. As a vendor it is a rather complicated topic and as a customer I can only imagine how the choices may put you in a bad mood.

Most of today’s 10GbE switches ship with SFP+ cages and a handful of QSFP cages. Now comes the hard part: what do I plug into these cages? There are lots of choices, all with their own pros and cons.

Direct Attach Cable

The cheapest solution is a Direct Attach Cable or DAC. These are copper-based cables that have SFP+ transceivers molded onto the cable. It Continue reading

Datacenter resource fragmentation

The concept of resource fragmentation is common in the IT world. In the simplest of contexts, resource fragmentation occurs when blocks of capacity (compute, storage, whatever) are allocated, freed, and ultimately re-allocated to create noncontiguous blocks. While the most familiar setting for fragmentation is memory allocation, the phenomenon plays itself out within the datacenter as well.
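
A toy allocator, sketched below with made-up sizes and a first-fit policy, shows the classic memory version of the problem: plenty of total free capacity, yet no contiguous block big enough to satisfy a request.

```python
# Toy illustration of fragmentation: allocate, free, re-allocate until the
# free capacity is noncontiguous. Purely illustrative, not a real allocator.
from itertools import groupby

capacity = [None] * 10     # ten equal blocks of some resource

def allocate(name, size):
    """First fit: claim the first contiguous run of free blocks big enough."""
    run = 0
    for i, owner in enumerate(capacity):
        run = run + 1 if owner is None else 0
        if run == size:
            for j in range(i - size + 1, i + 1):
                capacity[j] = name
            return True
    return False

def free(name):
    for i, owner in enumerate(capacity):
        if owner == name:
            capacity[i] = None

allocate("A", 3); allocate("B", 3); allocate("C", 3)
free("B")                      # a hole opens in the middle

total_free = capacity.count(None)
largest_run = max((len(list(g)) for key, g in groupby(capacity) if key is None), default=0)
print(total_free, largest_run)  # 4 blocks free in total, but the largest run is 3
print(allocate("D", 4))         # False: enough capacity overall, but fragmented
```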

But what does resource fragmentation look like in the datacenter? And more importantly, what is the remediation?

The impacts of virtualization

Server virtualization does for applications and compute what fragmentation and noncontiguous memory blocks did for storage. By creating virtual machines on servers, each with a customizable resource footprint, the once large contiguous blocks of compute capacity (each server) can be divided into much smaller subdivisions. And as applications take advantage of this architectural compute model, they become more distributed.

The result of this is an application environment where individual components are distributed across multiple devices, effectively occupying a noncontiguous set of compute resources that must be unified via the network. It is not a stretch to say that for server virtualization to deliver against its promise of higher utilization, the network must act as the Great Uniter.

Not just a virtual phenomenon

While Continue reading

Using frameworks for effective sales presos

Anyone who has ever delivered a presentation or even listened to one knows that the key to an effective presentation is telling a story. If you peruse even a few pages of any of the books about how to deliver a solid presentation, you will find references to storytelling and its role in passing along information throughout history. Yes, we must tell stories. But not all stories work.

So how do you pick a story or a framework for a presentation that will be effective?

Stories vs frameworks

Let me start off by saying that you need both stories and frameworks. When you think about the structure of the points you want to convey, think about frameworks. When you want to make a point real, use a story. Especially when you are delivering a technical presentation, you are very unlikely to find a single story that can weave in all the points you want to make. You are, after all, a presenter, not a comedian. Don’t try to force all of your points into a long story.

So that leaves you searching for a framework. A framework is simply a way of organizing your points. It is ultimately the framework Continue reading

High availability in horizontally-scaled applications

The networking industry has a somewhat unique relationship with high availability. For compute, storage, and applications, failures are somewhat tolerable because they tend to be more isolated (a single server going down rarely impacts the rest of the servers). However, the network’s central role in connecting resources makes it harder to contain failures. Because of this, availability has been an exercise in driving uptime to near 100 percent.
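
A little arithmetic shows why. If an application path crosses several devices in series, their availabilities multiply (assuming, for the sake of a simple sketch, independent failures and invented uptime figures):

```python
# Availability math, illustratively: devices in series multiply, redundant
# paths fail only if every path fails. Assumes independent failures.

def serial_availability(*devices):
    a = 1.0
    for d in devices:
        a *= d
    return a

# Three "three nines" devices in the application's path:
path = serial_availability(0.999, 0.999, 0.999)
print(f"{path:.4%}")        # ~99.70% -- roughly 26 hours of downtime a year

# Two redundant paths: the application is down only if both are down.
redundant = 1 - (1 - path) ** 2
print(f"{redundant:.6%}")   # ~99.999102%
```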

It is absolutely good to minimize unnecessary downtime, but is the pursuit of perfect availability the right endeavor?

Device uptime vs application availability

We should be crystal clear on one thing: the purpose of the network is not about providing connectivity so much as it is about making sure applications and tenants have what they need. Insofar as connectivity is a requirement, it is important, but the job doesn’t end just because packets make it from one side to the other. Application availability and application experience are far more dominant in determining whether infrastructure is meeting expectations.

With that in mind, the focus on individual device uptime is an interesting but somewhat myopic approach to declaring IT infrastructure success. By focusing on building in availability at the device level, it is easy Continue reading

Plexxi Pulse—Preparing for Big Data

As enterprises launch Big Data platforms, it is necessary to tailor network infrastructure to support increased activity. Big Data networks must be constructed to handle distributed resources that are simultaneously working on a single task—a functionality that can be taxing on existing infrastructure. Our own Mike Bushong contributed an article to TechRadar Pro this week on this very subject, in which he outlines the necessary steps to prepare networks for Big Data deployments. He also identifies how software-defined networking can be used as a tool to alleviate bandwidth issues and support application requirements when scaling for Big Data. It’s definitely worth a read before you head out for the weekend.

In this week’s PlexxiTube of the week, Dan Backman explains how Plexxi’s Big Data fabric mitigates incast problems.

Check out what we’ve been up to on social media this September. Enjoy!

Policy versus ACLs, it’s those exposed implementation details again

In a blog week dedicated to applications and the policies that govern them, I wanted to add some detail on a discussion I have with customers quite often. It should be clear that we at Plexxi believe in application-policy-driven network behaviors. Our Affinities allow you to specify desired behavior between network endpoints, which will evolve with the enormous amount of policy work Mat described in his three-part article earlier this week.

ACL

Many times when I discuss Affinities and policies with customers, or more generically with network engineering types, the explanation almost always lands at Access Control Lists (ACLs). Cisco created the concept of ACLs (and their many variations used for other policy constructs) way back as a mechanism to instruct the switching chips inside their routers to accept or drop traffic. It started with a very simple “traffic from this source to this destination is dropped” and has evolved significantly since then, in Cisco’s implementation and those of many other router and switch vendors.

There are two basic components in an ACL (sketched in the snippet below):

1. What should a packet be matched on?

2. What action is taken once a match is found?

Both Continue reading
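
A minimal, hypothetical sketch of those two components (real ACLs match on far more fields and are compiled into hardware tables; the prefixes and first-match-wins evaluation here are simplified for illustration):

```python
# An ACL entry reduced to its two parts: a match clause and an action.
from dataclasses import dataclass

@dataclass
class AclEntry:
    src: str        # source prefix, "*" as a wildcard (simplified matching)
    dst: str        # destination prefix
    action: str     # "permit" or "deny"

    def matches(self, pkt_src, pkt_dst):
        return self.src in ("*", pkt_src) and self.dst in ("*", pkt_dst)

acl = [
    AclEntry("10.0.1.0/24", "10.0.2.0/24", "deny"),
    AclEntry("*", "*", "permit"),           # explicit permit-any at the end
]

def evaluate(acl, pkt_src, pkt_dst):
    for entry in acl:                       # first match wins
        if entry.matches(pkt_src, pkt_dst):
            return entry.action
    return "deny"                           # implicit deny if nothing matches

print(evaluate(acl, "10.0.1.0/24", "10.0.2.0/24"))  # deny
print(evaluate(acl, "10.0.3.0/24", "10.0.2.0/24"))  # permit
```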

It’s the Applications, Stupid (Part 3 of 3)!

If you missed the first 2 parts of this series, you can catch them here and here. The short version is that there are Enterprise customers that are actively seeking to automate the production deployment of their workloads, which leads them to discover that capturing business policy as part of the process is critical. We’ve arrived here at the point that once policy can be encapsulated in the process of application workload orchestration, it is then necessary to have infrastructure that understands how to enact and enforce that policy. This is largely a networking discussion, and to-date, networking has largely been about any-to-any all equal connectivity (at least in Data Centers), which in many ways means no policy. This post looks at how networking infrastructure can be envisioned differently in the face of applications that can express their own policy.

[As an aside, Rich Fichera over at Forrester wrote a great piece on this topic (which unfortunately is behind a pretty hefty paywall unless you're a Forrester client, but I'll provide a link anyway). Rich coins the term "Systems of Engagement" to describe new models for Enterprise applications that depart from the legacy "Systems of Record." If you have access Continue reading

It’s the Applications, Stupid (Part 2 of 3)!

In part 1 of this series, I mentioned a customer that was starting to understand how to build application policy into their deployment processes and in turn was building new infrastructure that could understand those policies. That’s a lot of usage of the word “policy” so it’s probably a good idea to go into a bit more detail on what that means.

In this context, policy refers to how specific IT resources are used in accordance with a business’s rules or practices. A much more detailed discussion of policy in the data center is covered in this most excellent networkheresy blog post (with great additional discussions here and here).  But suffice it to say that getting to full self-service IT nirvana requires that we codify business-centric policy and encapsulate the applications with that policy.

The goals of the previously mentioned customer were pretty simple, actually. They wanted to provide self-service compute, storage, networking, and a choice of application software stacks to their vast army of developers. They wanted this self-service capability to extend beyond development and test workloads to full production workloads, including fully automated deployment. They wanted to provide costs back to the business that were on par Continue reading

It’s the Applications, Stupid (Part 1 of 3)!

I remember when we first started talking to customers about the concept of applications driving networks, about 3 years ago. (This was a very different conversation from earlier networking eras, when we talked about ‘intelligent’ networks that could better understand and adapt to applications.) While most customers loved the concept of a scale-out network that leveraged dynamic photonic connections instead of hard-wired paths, most of them also told us that they “didn’t really know (or want to know)” about the applications at all. Some even said they didn’t want their networks to understand the applications at all!

Hmm… this was very strange. After all, we were talking to Data Center networking folks, and wasn’t the purpose of the data center network to provide connectivity solutions for applications? How could the folks in charge of these networks not know (and worse, not want to know!) about the whole purpose of their network in the first place?

But of course, it wasn’t really strange. After all, networking, like many IT disciplines, had developed into a nice neat silo that defined nice neat operational boundaries that allowed folks within those boundaries to say “I don’t know, and I don’t want Continue reading

Plexxi Pulse—Adding Flexibility to the Cloud

It’s been a busy week here at Plexxi. On Tuesday, we announced our partnership with Cari.net, whose high-performance, scalable and flexible hosting platform is based on Microsoft Cloud OS. CARI.net’s newly released CARIcloud service is powered by Plexxi and uses software-defined networking to allow companies to automatically adjust to conditions on their networks and make sure that the most important applications are never starved for performance. The platform enables customers to manage organizations and scale their data centers without being restricted to a single cloud service provider.

In this week’s PlexxiTube of the week, Dan Backman explains how Plexxi’s datacenter fabric transport solution is different from a more traditional WAN gateway.

Hardware Customization in a Software-Driven Universe

Art Cole contributed an interesting piece to Enterprise Networking Planet this week on customizing IT hardware in a “software-driven” universe. In my opinion, we tend to think about the discrete layers within information technology hardware—the boxes that make up the network, the servers that make up compute, and the devices that make up storage. Having flexibility in each layer of hardware is crucial, but we also want the same flexibility in the interconnect that ties them all together. We want programmability Continue reading