More exciting things happening at Plexxi’s offices this week. Wednesday marked a company milestone for Plexxi as we hosted the kick-off for our new partner program, the Plexxi Pulse Partner Summit. The day-long event covered the fast-growing networking market, scale-out applications and new architectural requirements of the 3rd Platform IT era.
Attendees saw presentations from Plexxi’s executive team, including CEO Rich Napolitano; SVP of Sales and Support Tim Lieto; Founder and EVP of Products / CTO Dave Husak; and myself. We were also honored to have Cloudera’s Big Data Evangelist Amy O’Connor present to our attendees.
Participants from around the country attended including channel partners, systems integrators, technology partners and distributors.
The post Introducing the Plexxi Pulse Partner Summit appeared first on Plexxi.
If you’ve been following our blog and/or keeping up with Plexxi on social media, you may have noticed that there is a lot going on here. We recently introduced new product starter kits, began construction on our new office expansion, are working on some exciting projects, and have grown by 20 percent in the past two months. Are you interested in being a part of the Plexxi team? If so, we would love to hear from you. Check out our careers page for more information.
Below you will find our top picks for stories in the networking space this week. Have a great weekend!
In this week’s PlexxiTube video of the week, Dan Backman explains how Plexxi’s datacenter transport fabric works with optical transport gear.
Light Reading: New Plexxi Chief Makes His Mark
By Mitch Wagner
As he enters his third month in the big chair, new Plexxi CEO Rich Napolitano is retooling the company’s messaging to focus more on the benefits of software networks — using Plexxi technology, of course — and less on the abstract benefits of SDN. Napolitano took over as CEO in November after 30 years in the technology industry, most recently at EMC…
We’re dodging scaffolding and flying paint cans around the office in our Nashua, N.H. headquarters this week as work crews knock down walls to expand our current office space to keep pace with Plexxi’s growth. Since Rich Napolitano was announced as CEO two months ago, the company has grown 20 percent (and we’re hiring across the board in development, sales, support and marketing!).
The office expansion will increase our headquarters footprint by 5,600 square feet to 23,000 square feet. The new space that’s being set up this week will house our growing sales, marketing and business operations. It will also feature customer meeting and demonstration areas that will be up and running soon.
We’ll keep you posted on our growth and promise to share more pictures once the space is complete. Now if I could only figure out where my desk was moved to…
The post Plexxi Is Growing Again appeared first on Plexxi.
Many years ago Gartner introduced its technology Hype Cycle, which maps visibility against maturity for new technologies. The Hype Cycle in essence states that many new technologies get a large amount of visibility early in their maturity cycle. That visibility and enthusiasm drop significantly when reality sets in: technologies early in their maturity cycle will have low adoption rates. The vast majority of technology customers are conservative in their choices, especially if a new technology is not (yet) fundamental to their business.
I call it common-sense reality; Gartner calls it the Trough of Disillusionment. Fine. It is the realization that the technology may hold lots of promise, but isn’t ready to be consumed.
That is where the real work starts: maturing the technology, driving solutions and use cases, creating the economic viability of the technology, and the tons of other work that needs to be done to get a customer base to actually buy into the technology. Not with words and attention, but with the only thing that ultimately matters: money. Gartner calls delivering these absolutely necessary components the Slope of Enlightenment.
Not every technology follows this cycle, and not every technology survives the downward turn after the inflated…
Happy New Year! The year is off to a great start and we are excited to see what 2015 will bring to the networking space. I have a few predictions of my own (think policy and disaggregation) that were recently published in Network World. What are your networking predictions for 2015? Below are our top picks for networking stories this week.
In this week’s PlexxiTube #FBF (Flashback Friday) video of the week, Dan Backman interviews VMworld 2014 attendees and asks what they’ve done to help their networks accommodate Big Data.
Network World: SDN, data center predictions for 2015
By Jim Duffy
The predictions for data center and SDN in 2015 are still rolling in. Technology Business Research says software will pervade the data center while start-up Plexxi believes policy and disaggregation will be front and center. Here’s the link to TBR’s 2015 Data Center Predictions. Some of the more interesting prognostications in it are the acceleration of SDN in the enterprise and the ability of hyperconvergence to converge.
Network Computing: 10 SDN Startups On The Cutting Edge
By Marcia Savage
Small companies flush with VC money have led the way in software-defined networking. Here are 10 of the hottest…
Whenever we get to the end of a year, we have a tendency to reflect on what has happened in the past year and how we can improve in the coming one. It’s natural to use the change of calendar year as a point in time to think back, even though practically speaking it is usually the most chaotic time of the year, between shopping, family, and year- and quarter-end at work.
Almost every industry goes through waves of change and transformation. Real change and transformation is driven by powerful market forces of demand, coupled with technology leaps that allow an escape from the incremental changes that drive day-to-day improvements. Networking has gone through several of these transformations. From dedicated mainframe-based connectivity, to coax-based shared Ethernet, to switched Ethernet in local area networks. From 1200-baud dialup serial connections through X.25 (yes, that’s the European in me) to leased T1, to ATM, to Frame Relay, to Packet over SONET, to MPLS and various flavors of wide-area Ethernet services. Some of these were incremental, some of them truly transformational.
When you look back, each of these changes in network technology was very much…
This week, we announced new product starter kits that will make it easier for companies to adopt software-defined networking in a way that fits their unique networking environments. The kits are designed for three distinct uses — agile datacenters, distributed cloud environments and Big Data applications — avoiding the “one-size-fits-all” starter kit approach of some other vendors. Visit our product page to learn more. Below are our top picks for networking stories this week.
In this week’s PlexxiTube of the week, Dan Backman discusses the benefits of Plexxi’s Big Data fabric beyond Hadoop applications.
eWEEK: Plexxi Launches SDN Starter Kits for Cloud, Big Data
By Jeff Burt
Plexxi officials want to make it easier for organizations to adopt software-defined networking. Plexxi, a startup in the increasingly crowded software-defined networking (SDN) space, is unveiling three starter kits aimed at agile data centers, cloud environments and big data applications. Company officials said the goal of the starter kits is to give businesses and service providers the tools they need to deploy SDN infrastructures that are tailored to their particular workloads, avoiding what they said is a more one-size-fits-all approach that other vendors are taking.
TechTarget: Networking pros describe their 2015 SDN projects
Continue reading
Recently Arista released a white paper around the idea that deeper buffers within the network can help alleviate the incast congestion patterns that arise when a large number of many-to-one connections occur within a network, also known as the TCP incast problem. They pointedly targeted Hadoop clusters, as the incast problem can rear its ugly head when a Hadoop cluster is used for MapReduce functions. The study used an example of 20 servers hanging off a single ToR switch with 40Gbps of uplink capacity in a leaf/spine network, presenting a 5:1 oversubscription ratio. A similar ratio appears in Facebook’s recently published description of the network used in its data centers, so it’s safe to assume these kinds of oversubscription ratios are seen in the wild. I know I’ve run my fair share of oversubscribed networks in the past.
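The arithmetic behind that 5:1 figure is worth making explicit. A minimal sketch, assuming each of the 20 servers attaches at 10Gbps (the implied server link speed; the function name here is mine, not from the paper):

```python
def oversubscription_ratio(num_servers: int, server_link_gbps: float, uplink_gbps: float) -> float:
    """Ratio of worst-case southbound demand to available uplink capacity."""
    total_downlink = num_servers * server_link_gbps
    return total_downlink / uplink_gbps

# 20 servers at 10 Gbps behind 40 Gbps of uplinks: 200/40 = 5, i.e. 5:1
ratio = oversubscription_ratio(20, 10, 40)
print(f"{ratio:.0f}:1")  # -> 5:1
```

The same helper shows why the incast problem gets worse as racks densify: doubling the server count without adding uplinks doubles the ratio.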
This particular study actually prods at the Achilles’ heel of the traditional leaf/spine network design. All nodes being within three switch hops (ToR <-> Spine <-> ToR) does provide predictable pathing in the minds of today’s network operators, but I posit that…
2014 was a busy year in networking, and our friend Marcia Savage did a great job of summarizing the industry highs and lows – from ACI to white box switches – this week in a slideshow for Network Computing. It’s definitely worth a read before you head out for the weekend. Check out Marcia’s year end wrap up below as well as other happenings in the networking space this week.
In this week’s PlexxiTube video of the week, Dan Backman explains how Plexxi incorporates optical transport into datacenter transport fabrics.
Computer Weekly: Cisco is missing the transition to software-defined networks
By Alex Scroxton
Little doubt remains that the future of networking will be defined by software, but market-watchers warn Cisco is missing this move. Cisco’s hardware forms the backbone of most enterprise networks around the world. But this world is changing and many buyers no longer see compute, storage and networking as distinct silos… Software-defined networking (SDN) company Plexxi, which recently appointed former EMC executive Richard Napolitano as its CEO, is one such company looking at the networking industry’s transition from networking towards an application and data-focused world. “We stand today at a transition point in the IT landscape,” says…
Via http://blog.ipspace.net I landed on this article on acm.org discussing the complexity of distributed systems. Through some good examples, George Neville-Neil makes it clear that creating and scaling distributed systems is very complex, and that “any one that tells you it is easy is either drunk or lying, and possibly both”.
Networks are of course inherently distributed systems. Most everyone who has managed a good-sized network knows that, like the example in the article, minor changes in traffic or connectivity can have huge implications for the overall performance of a network. In my time supporting some very large networks, I have seen huge chain reactions of events triggered by what appeared to be minor issues.
Very few networks are extensively modeled before they are implemented. Manufacturers of machines, cars and many other things go through extensive modeling to understand the behaviors of what they have created and of their design choices. Using modeling, they will look at all possible inputs and outputs, conditions, failure scenarios and anything else they can think of to see how their product behaves.
There are few if any true modeling tools for networks. We build networks with extensive distributed protocols to control connectivity…
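To make the modeling point concrete, here is a toy sketch, nowhere near a real modeling tool and with invented device names: represent the topology as a graph, then check what a single link failure does to reachability before you ever touch production.

```python
from collections import defaultdict, deque

def reachable(links, start):
    """Return the set of nodes reachable from `start` over bidirectional links (BFS)."""
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# A tiny two-leaf, two-spine topology; fail one link and compare reachability.
links = [("leaf1", "spine1"), ("leaf1", "spine2"),
         ("leaf2", "spine1"), ("leaf2", "spine2")]
before = reachable(links, "leaf1")
after = reachable([l for l in links if l != ("leaf1", "spine1")], "leaf1")
print(before - after)  # -> set(): redundancy absorbs this single failure
```

Real network behavior (queues, protocol convergence, traffic shifts) is far harder to model than simple connectivity, which is exactly the article’s point, but even this level of what-if checking is rarely done.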
The internet has been buzzing about Facebook’s redesigned datacenter architecture. Facebook, which is used by more than 1.35 billion people, recently restructured their infrastructure to increase flexibility and agility to rapidly adjust to application requirements. Our own Marten Terpstra shared his take on the redesigned infrastructure this week on the Plexxi blog—it’s definitely worth a read. Below we share some of the articles that covered Facebook’s new datacenter architecture, as well as other happenings in the networking space this week.
In this week’s PlexxiTube of the week, Dan Backman explains how much fiber is required to connect datacenters using Plexxi’s datacenter transport fabric solution.
Gigaom: Facebook Redesigned the Data Center Network: 3 Reasons It Matters
By Derrick Harris
Earlier this month, Facebook announced a new data center networking architecture that it calls, fittingly, “data center fabric.” The company explained the design and the rationale in an engineering blog post, and Gigaom’s Jonathan Vanian covered the news, but it’s a big enough deal that we had Facebook Director of Network Engineering Najam Ahmad on the Structure Show podcast this week to talk about the new fabric in more detail.
CIO: How (and Why) Facebook Excels at Data Center…
A few weeks ago Facebook announced their new datacenter architecture in a post on their network engineering blog. Facebook is one of the few large web scale companies that is fairly open about their network architecture and designs and it gives many others the opportunity to see how a network can be scaled, even though the scale is well beyond what most will need in the foreseeable future, if not forever.
In the post, Alexey walks through some of the thought process behind the architecture, which is ultimately the most important part of any architecture and design. Too often we simply build whatever seems popular or common, or whatever is mandated or pushed by a specific vendor. The network, however, is a product, a deliverable, and it has requirements like just about anything else we produce.
Facebook’s and the other web properties’ scale is at a different order of magnitude from most everyone else, but their requirements should sound pretty familiar to many:
One way or another, all data center networks exhibit at least six different functional areas that their operators need to engineer, implement, and operate, each with a differing set of needs and requirements. Similarly, in one way or another, most of the SDN and virtualized network solutions available today or in progress aim to deal with issues in one or more of these areas to improve their functional effectiveness, cost, automated-ness, or integrated-ness. Yet some areas receive an inordinate amount of focus and attention, and those areas may not necessarily have the most opportunity for improvement. Let’s take a look at these six areas in order of their opportunity to bring new levels of effectiveness to data centers.
Edge switching loosely covers the function of providing switching between end points, whether they be virtual servers, physical servers, storage devices, or terminating services devices (load balancers, firewalls, etc.). It is important to note that in a virtualized server environment, there are typically two layers of edge: a set of virtual switches that connect VMs together, and a set of physical switches that connect the physical hosts.
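Those two edge layers can be illustrated with a small sketch (all device names here are hypothetical): a frame from a VM crosses the virtual switch on its host before it ever reaches a physical edge switch.

```python
# Toy model of the two edge layers in a virtualized data center:
# VMs attach to a virtual switch on their host; hosts attach to a ToR switch.
vswitches = {"host1-vsw": ["vm1", "vm2"], "host2-vsw": ["vm3"]}
tor_ports = {"tor1": ["host1-vsw", "host2-vsw"]}

def edge_path(vm):
    """Hops a frame takes from a VM to the physical edge (ToR), or None if unknown."""
    for vsw, vms in vswitches.items():
        if vm in vms:
            for tor, attached in tor_ports.items():
                if vsw in attached:
                    return [vm, vsw, tor]
    return None

print(edge_path("vm3"))  # -> ['vm3', 'host2-vsw', 'tor1']
```

Note that vm1-to-vm2 traffic never leaves host1’s virtual switch at all, which is one reason the virtual edge deserves as much operational attention as the physical one.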
Much of the attention and…
I always enjoy reading the IPspace blog, and as Ivan has said about our blog, I don’t always agree with his opinions, but his posts are informative and cover just about everything networking. So this may come as a surprise, but in response to his “Do we have too many knobs” post from about a week ago I have one simple response: “Amen”.
Networking is unnecessarily complicated. We have written several blogs on this topic and related items. I used to run the sustaining organization for all data products at my previous company, and when you analyze the customer-reported issues that come in to the support organization, you find that a very large percentage stem from configuration mistakes.
Many of those mistakes are not typos. We often cite fat-fingered configurations as a reason to move to a more automated configuration and provisioning environment, but most configuration mistakes are made simply because we have made these devices so difficult to configure. Type something in the wrong order and it may not work right, or it may behave slightly differently. Simple checks across configurations that could avoid many problems are…
I recently received a note from a colleague pointing to a ZeroHedge article (http://www.zerohedge.com/news/2014-11-21/not-so-fab-1-billion-valuation-15-million-year) that was officially calling the beginning of the bubble bursting, based on the untimely (or timely, depending on your perspective) demise of the startup Fab. I had never heard of Fab, but according to ZeroHedge, Fab “started out as a dating site for the gay community and then relaunched as a flash sale site for home decor – raised $150 million just over a year ago (at a $1 billion valuation), but as TechCrunch reports today, multiple sources have confirmed that Fab is in talks to sell to PCH International for $15 million in a half cash and half stock deal. Pets.com?”
It’s a fair question indeed – are we seeing the same pattern we saw in the last bubble (i.e. Dot-Com 1.0) being repeated? Certainly, crazy valuations of equally crazy or non-existent business models are a cause for concern, but more important than that are the fundamentals of what is driving the speculation in the first place. In Dot-Com 1.0 we saw simultaneous speculative investment across at least 3 major areas: internet backbone infrastructure, internet edge/access…
As many of you know, it’s been a busy month here at Plexxi. It’s hard to believe that November is coming to a close and that Thanksgiving is next week. We have a lot to be thankful for this year, particularly our new CEO Rich Napolitano and for the support of our skilled and dynamic team members – both new and old. Wishing everyone a safe and happy Thanksgiving holiday!
In this week’s PlexxiTube of the week, our own Dan Backman explains how Plexxi’s datacenter transport fabric can be used in a datacenter or on campus.
Check out what we’ve been up to over the past few weeks on social media!
The post PlexxiPulse— Plenty To Be Thankful For At Plexxi appeared first on Plexxi.
A few weeks ago I read this article from Craig Matsumoto on SDN Central.
At first I read it with a bit of a smile, but for some reason it has actually started to bother me a little. In this article, Craig summarizes a talk by Scott Shenker about SDN and a proposal for an SDNv2 that would fix the things that are wrong with SDNv1. In a way this represents what is wrong with our industry: we create a new version of, or a new name for, a concept that was not particularly well defined to begin with, and that in many interpretations is far broader than the version-prefixed term assumes.
Many folks still believe that OpenFlow defines SDN. And that all the limitations of a basic protocol invalidate or limit the capabilities of an evolving concept like SDN. Why do we feel such a need to increment a version of an undefined term to make it sound like we are creating something new and different?
In SDNv2, we would still have separation of control and data (at least all that work is not wasted), but there are three major differences between it and the “old” SDN concepts…
[Unbeknownst to me, Matt Oswalt (@mierdin on Twitter) posted a thematically similar post a few days before mine. While I did not see that post, it seems disingenuous not to reference it, so please read his thoughts here: http://keepingitclassless.net/2014/11/mass-customization/]
IT is constantly evolving, from mainframes to disaggregated components to an integrated set of infrastructure elements working in support of applications. But that description is more about how individual infrastructure is packaged and less about the role that these solutions play. There is a more subtle but perhaps more profound change in IT that is simultaneously taking place: a shift in how IT architectures are actually being designed.
So what is at the heart of this change?
IT was born with the mainframe. Mainframes were basically entire IT ecosystems in a box. They included compute, storage, networking and applications. But what is most notable about these solutions is that the entire system was aimed at providing a single outcome. That is to say that the mainframe itself was a composed system that was designed with a single purpose in mind: deliver some application.
In the early days of IT, there was no need for systems to run different…
There’s a meme that has been making the rounds through leadership circles for some time around celebrating failure. If you aren’t failing, you aren’t pushing the boundaries. The original premise of this line of thinking is that failure is not something to be feared. But there is a difference between using failure to learn well-earned lessons and declaring success after blowing up on the launchpad.
It’s worth starting with some of the most common cliches around failure:
Doing a simple web search for failure quotes yields hundreds more. The basic gist of the resulting tome of sayings? Anything worth doing is difficult, and achieving anything great is unlikely to happen on the first try.
It is absolutely true that forging a new path…
Our perception of nirvana is impacted mightily by current conditions. For people who live in third world countries, for example, merely having running water or reliable electricity can be a life-altering boon. Meanwhile, those of us who are more accustomed to the creature comforts of life consider slow internet or a poorly seasoned meal worthy of public scorn (even if we add the hashtag #firstworldproblems).
So how is the current state of networking impacting its user base?
Perhaps the most insidious effect of poor conditions is that prolonged exposure can actually cause us to reset our baseline for normal. When we are subjected to extended periods of great or even long periods of suck, we adjust our expectations.
In networking, this means that our current normal has been forged through decades of diligent neglect of actual user experience. It’s not so much purposeful behavior by the incumbent networking players as it is a matter of focus placed elsewhere. For at least the last few decades, the future of networking has always been defined by the next protocol or knob. That is to say, the focus of product development has always been on bolstering device capability.
With the focus…