Thanks to Tech Field Day, I fly a lot. As Southwest is my airline of choice and I have status, I tend to find myself sitting in the slightly more comfortable exit row seats. One of the things that any air passenger sitting in the exit row knows by heart is the exit row briefing. You must listen to the flight attendant brief you on the exit door operation and the evacuation plan. You are also required to answer with a verbal acknowledgment.
I know that the verbal acknowledgment is required by federal law. I’ve also seen some people blatantly disregard the need to verbally accept responsibility for their seating choice, leading to some hilarious stories. But it also made me think about why making people talk to you is the best way to make them understand what you’re saying.
Today’s society, full of distractions from auto-play videos on Facebook to Pokemon Go parks at midnight, is designed to capture a human’s attention for a few fleeting seconds. Even playing a mini-trailer before a movie trailer is designed to grab someone’s attention for a moment. That’s fine in a world where distraction is assumed and people try Continue reading
Networking Field Day 12 starts today. There are a lot of great presenters lined up. As I talk to more and more networking companies, it’s becoming obvious that simply moving packets is not the way to go now. Instead, the real sizzle is in telling you all about those packets. Not packet inspection but analytics.
Ask any networking professional and they’ll tell you that the systems they manage have a wealth of information. SNMP can give you monitoring data for a set of points defined in database files. Other protocols like NetFlow or sFlow can give you more granular data about a particular packet group or data flow in your network. Even more advanced projects like Intel’s Snap are building on the idea of telemetry: collecting disparate data sources and building collection methodologies to do something with them.
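To make that concrete, here’s a minimal sketch of what polling a single counter looks like with pysnmp in Python. The device address and community string are placeholders of my own, and the OID is the standard IF-MIB ifInOctets counter for interface index 1:

```python
# Minimal SNMP poll with pysnmp -- device details are hypothetical.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# IF-MIB::ifInOctets.1 -- total bytes received on interface index 1
OID_IF_IN_OCTETS = '1.3.6.1.2.1.2.2.1.10.1'

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public'),                 # assumed v2c community
           UdpTransportTarget(('192.0.2.1', 161)),  # assumed device address
           ContextData(),
           ObjectType(ObjectIdentity(OID_IF_IN_OCTETS)))
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))
```

Note that what comes back is just a raw, ever-increasing counter. It only turns into something useful, like a utilization rate, once you sample it repeatedly and do math on the samples. Now multiply that by every counter on every interface on every device.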
The concern that becomes quickly apparent is the overwhelming amount of data being received from all these sources. It reminds me a bit of this scene:
How can you drink from this firehose? Maybe the better question is whether you should be drinking from it at all.
Data is useless. We need to perform analysis Continue reading
It all comes back to people. People are the users of the system. They are the source of great imagination and great innovation. They are also the reason why security professionals pull their hair out day in and day out. Because computer systems don’t have the capability to bypass, invalidate, and otherwise screw up security quite like a living, breathing human being.
Security is designed to make us feel safe. Door locks keep out casual prowlers. Alarm systems alert us when our home or business is violated. That warm fuzzy feeling we get when we know the locks are engaged and we are truly secure is one of bliss.
But when security gets in our way, it’s annoying. Think of all the things in your life that would be easier if people just stopped trying to make you secure. Airport security is the first that comes to mind. Or the annoying habit of needing to show your ID when you make a credit card purchase. How about systems that scan your email for data loss prevention (DLP) purposes and kick back emails with sensitive data that you absolutely need to share?
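For what it’s worth, the core of that DLP scanning is usually nothing exotic. A toy version of the idea, my own illustration rather than any vendor’s engine, is just pattern matching on outbound text:

```python
# Toy DLP check: flag outbound text that looks like it contains a US SSN.
# Illustration only -- real DLP engines use many patterns, context, and more.
import re

SSN_PATTERN = re.compile(r'\b\d{3}-\d{2}-\d{4}\b')

def contains_sensitive_data(body: str) -> bool:
    return bool(SSN_PATTERN.search(body))

print(contains_sensitive_data('My SSN is 078-05-1120'))  # True -> email kicked back
print(contains_sensitive_data('Lunch at noon?'))         # False -> email delivered
```

The check itself is simple. The friction comes from the fact that it can’t know you absolutely need to send that data.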
Security only benefits us when it’s Continue reading
I’ve had a week to get over my Cisco Live hangover this year. I’ve been going to Cisco Live for ten years and been involved in the social community for five of them. And I couldn’t be prouder of what I’ve seen. As the picture above shows, the community is growing by leaps and bounds.
I was asked many, many times about Tom’s Corner. What was it? Why was it important? Did you really start it? The real answer is that I’m a bit curious. I want to meet people. I want to talk to them and learn their stories. I want to understand what drives people to learn about networking or wireless or fax machines. Talking to a person is one of the best parts of my job, whether it be my Bruce Wayne day job or my Batman night job.
Social media helps us all stay in touch when we aren’t face-to-face, but meeting people in real life is just as important. You know who likes to hug. You find out who tells good stories. Little things matter, like finding out how tall someone is in Continue reading
I stirred up quite the hornet’s nest last week, didn’t I? I posted about how I thought the CCIE Routing and Switching Written Exam needed to be fixed. I got 75 favorites on Twitter and 40 retweets of my post, not to mention the countless people that shared it on a variety of forums and other sites. Since I was at Cisco Live, I had a lot of people coming up to me saying that they agreed with my views. I also had quite a few people that weren’t thrilled with my perspective. Thankfully, I had the chance to sit down with Yusuf Bhaiji, head of the CCIE program, and chat about things. I wanted to share some thoughts here.
One of the biggest complaints that I’ve heard is that I was being “malicious” in my post with regards to the CCIE. I was also told that it was a case of “sour grapes” and even that the exam was as hard as it was on purpose because the CCIE is supposed to be hard. Mostly, I felt upset that people were under the impression that my post was designed to destroy, harm, or otherwise defame the Continue reading
I’m having a great time at Cisco Live this year talking to networking professionals about the state of things. Most are optimistic about where their jobs are going to fit in with networking and software and the new way of doing things. But there is an undercurrent of dissatisfaction with one of the most fundamental pieces of network training in the world. The discontent is palpable. From what I’ve heard around Las Vegas this week, it’s time to fix the CCIE Written Exam.
The CCIE written is the bellwether of network training. It’s a chance for network engineers who use Cisco gear to prove they have what it takes to complete a difficult regimen of training to connect networks of impressive size. It’s also a rite of passage to show others that you know how to study, prep, and complete a difficult practical examination without losing your cool. But all that hard work starts with a written test.
The CCIE written has always been a tough test. It’s the only barrier to entry to the CCIE lab. Because the CCIE has never had prerequisites and likely never will due to long-standing tradition, the only thing standing Continue reading
Complexity is the enemy of understanding. Think about how much time you spend in your day trying to simplify things. Complexity is the reason why things like Reddit’s Explain Like I’m Five exist. We strive in our daily lives to find ways to simplify the way things are done. Well, except in networking.
Networking hasn’t always been a super complex thing. Back when bridges tied together two sections of Ethernet, networking was fairly simple. We’ve spent years trying to make the network do bigger and better things faster with less input. Routing protocols have become more complicated. Network topologies grow and become harder to understand. Protocols do magical things with very little documentation beyond “Pure Freaking Magic”.
Part of this comes from applications. I’ve made my feelings on application development clear. Ivan Pepelnjak had some great comments on this post as well, from Steve Chalmers and Derick Winkworth (@CloudToad). I especially like this one:
Derick is right. The application Continue reading
The behemoth merger of Dell and EMC is nearing conclusion. The first week of August is the target date for the final wrap-up of all the financial and legal parts of the acquisition. After that is done, the long task of analyzing product lines and finding a way to reduce complexity and product sprawl begins. We’ve already seen the spin-out of Quest and Sonicwall into a separate entity to raise cash for the final stretch of the acquisition. No doubt other storage and compute products are going to face a go/no-go decision in the future. But one product line that is in real danger of disappearing is networking.
The first indicator of the problems with Dell and networking comes from whitebox switching. Dell released OS 10 earlier this year as a way to capitalize on the growing market of free operating systems running on commodity hardware. Right now, OS 10 can run on Dell equipment. In the future, they are hoping to spread it out to whitebox devices. The assumption is that soon you’ll see Dell-branded OSes running on switches purchased from non-Dell sources, booting with ONIE.
Once OS 10 pushes forward, what does that Continue reading
I got to spend a couple of days this week at DockerCon and learn a bit more about software containers. I’d always assumed that containers were a slightly different form of virtualization, but thankfully I’ve learned my lesson there. What I did find out about containers gives me a bit of hope about the future of applications and security.
One of the things that made me excited about Docker is that the process isolation idea behind building a container to do one thing has fascinating ramifications for application developers. In the past, we’ve spent our time building servers to do things. We build hardware, boot it with an operating system, and then we install the applications or the components thereof. When we started to virtualize hardware into VMs, the natural progression was to take the hardware resource and turn it into a VM. Thanks to tools that would migrate a physical resource to a virtual one in a single step, most of the first-generation VMs were just physical copies of servers. Right down to phantom drivers in the Windows Device Manager.
As we started building infrastructure around the idea of virtualization, we stopped migrating physical boxes Continue reading
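That one-process-per-container idea is easy to see from code. Here’s a minimal sketch using the Docker SDK for Python (assuming a local Docker daemon and `pip install docker`): the container’s entire life is a single process.

```python
# Minimal sketch: a container that runs exactly one process and exits.
# Assumes a local Docker daemon and the `docker` Python package.
import docker

client = docker.from_env()

# No operating system to babysit, no phantom drivers migrated from a
# physical box -- the container exists only to run this one command.
output = client.containers.run('alpine', 'echo hello from a container',
                               remove=True)
print(output.decode())
```

Compare that to a first-generation VM that dragged along an entire copy of a server just to run the same one process.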
The big announcement this week is that Barefoot Networks leaped out of stealth mode and announced that they’re working on a very, very fast datacenter switch. The Barefoot Tofino can do up to 6.5 Tbps of throughput. That’s a pretty significant number. But what sets the Tofino apart is that it also uses the open source P4 programming language to configure the device for everything, from forwarding packets to making routing decisions. Here’s why that may be bigger than just another fast switch.
Barefoot admits in their announcement post that one of the ways they were able to drive the performance of the Tofino platform higher was to remove a lot of the accumulated cruft that has been added to switch software for the past twenty years. For Barefoot, this is mostly about pushing P4 as the software component of their switch platform and driving adoption of it in a wider market.
Let’s take a look at what this really means for you. Modern network operating systems typically fall into one of two categories. The first is the “kitchen sink” system. This OS has every possible feature you could ever want built in at runtime. Sure, you get Continue reading
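To ground what “configuring everything through P4” means, the mental model behind P4 is the match-action table: you declare which header fields to match and what action to run on a hit, and the compiler maps those tables onto the chip. Here’s a rough illustration of the model in Python (deliberately simplified, and not actual P4 code):

```python
# Toy illustration of the match-action model behind P4 (not real P4 code).
# A table maps a match key to an action; unmatched packets hit the default.

def forward(port):
    return lambda pkt: print(f"send {pkt['dst']} out port {port}")

def drop():
    return lambda pkt: print(f"drop {pkt['dst']}")

# Exact-match table keyed on destination MAC address
mac_table = {
    'aa:bb:cc:00:00:01': forward(1),
    'aa:bb:cc:00:00:02': forward(2),
}
default_action = drop()

for pkt in [{'dst': 'aa:bb:cc:00:00:01'}, {'dst': 'ff:ff:ff:ff:ff:ff'}]:
    mac_table.get(pkt['dst'], default_action)(pkt)
```

The appeal is that you only define the tables and actions you actually need, instead of inheriting twenty years of features baked into someone else’s operating system.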
SDN may have made networking more exciting thanks to making hardware less important than it has been in the past, but that’s not to say that hardware isn’t important at all. The certainty with which new hardware will come out and make things a little bit faster than before is right there with death and taxes. One of the big announcements yesterday from Hewlett Packard Enterprise (HPE) during HPE Discover was support for a new 25GbE / 100GbE switch architecture built around the FlexFabric 5950 and 12900 products. This may be the tipping point for things.
I haven’t always been high on 25GbE. Almost two years ago I couldn’t see the point. Things haven’t gotten much different in the last 24 months from a speed perspective. So why the change now? What makes this 25GbE offering any different from the nascent ideas presented by Arista?
First and foremost, the 25GbE released by HPE this week is based on the Broadcom Tomahawk chipset. When 25GbE was first presented, it was a collection of vendors trying to convince you to upgrade to their slightly faster Ethernet. But in the past two years, most of the Continue reading
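The argument for 25GbE is mostly lane math. A quick back-of-the-napkin sketch (my arithmetic, not HPE’s marketing):

```python
# Back-of-the-napkin lane math for 25GbE vs 10GbE (my own arithmetic).
lane_10g, lane_25g = 10, 25  # Gbps per SerDes lane

# The same four-lane port form factor:
print(4 * lane_10g)   # 40  -> 40GbE from four 10G lanes (QSFP+)
print(4 * lane_25g)   # 100 -> 100GbE from four 25G lanes (QSFP28)

# Per-lane gain for a single-lane server connection:
print(lane_25g / lane_10g)  # 2.5x the bandwidth for one lane and one cable
```

Getting 2.5 times the bandwidth out of the same number of lanes is what makes the economics attractive, especially when one chipset can power both the server ports and the uplinks.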
Last week at Storage Field Day 10, I got a chance to see Pure Storage and their new FlashBlade product. Storage is an interesting creature, especially now that we’ve got flash memory technology changing the way we think about high performance. Flash transformed the industry from slow spinning gyroscopes of rust into a flat-out drag race to see who could provide enough input/output operations per second (IOPS) to get to the moon and back.
Take a look at this video about the hardware architecture behind FlashBlade:
It’s pretty impressive. Very fast flash storage on blades that can outrun just about anything on the market. But this post isn’t really about storage. It’s about transport.
Look at the backplane of the FlashBlade chassis. It’s not something custom or even typical for a unit like that. The key is when the presenter says that the architecture of the unit is more like a blade server chassis than a traditional SAN. In essence, Pure has taken the concept of a midplane and used it very effectively here. But their choice of midplane is interesting in this case.
Pure is using the Broadcom Trident II switch as their Continue reading
We have officially reached the point in our long and storied IT careers where we, as old fogies, have earned the right to complain about the next generation of users and professionals. Just as the gray beards before us complained about the way we did things, so too is it our turn to moan about the state of affairs. Today, I’d like to point out how driving IT to the point of pushing simple buttons is destroying the way we do things.
The fact that IT work can be distilled into a series of simple button-pushing exercises is very thrilling. We’ve spent a lot of time and effort building devices and frameworks that take the hard part out of building devices and frameworks. We no longer have to invent languages to build things or hardware to do things. Instead, we can refine our programming capabilities or use general-purpose hardware in new combinations to provide environments for our users.
That’s one of the things that is driving people to the cloud. Cloud isn’t just about exciting hardware or keeping your data in other places. It is just as much about predictable, repeatable frameworks and Continue reading
There was an interesting article last week from Fastly talking about using BGP to scale their network. This was but the latest in a long line of discussions around using BGP as a transport protocol between areas of the data center, even down to the Top-of-Rack (ToR) switch level. LinkedIn made a huge splash with it a few months ago with their Project Altair solution. Now it seems company after company is racing to implement BGP as the solution to their transport woes. And all because developers have finally pulled their heads out of the sand.
BGP is a very scalable protocol. It’s used the world over to exchange routes and keep the Internet running smoothly. But it has other power as well. It can be extended to operate in other ways beyond the original specification. Unlike rigid protocols like RIP or OSPF, BGP was designed in part to be extended and expanded as needs change. IS-IS is a very similar protocol in that respect. It can be upgraded and adjusted to work with both old and new systems at the same time. Both can be extended without the need to change protocol versions Continue reading
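Part of BGP’s appeal in the data center is how simple its core decision is once you strip away the knobs. A toy sketch of best-path selection (my illustration, ignoring local preference, MED, and the rest of the real decision process):

```python
# Toy BGP best-path selection: shortest AS_PATH wins. Illustration only --
# real BGP evaluates local preference, origin, MED, and more before this.
routes_to_prefix = [
    {'next_hop': '10.0.0.1', 'as_path': [65001, 65003, 65010]},
    {'next_hop': '10.0.0.2', 'as_path': [65002, 65010]},
]

best = min(routes_to_prefix, key=lambda r: len(r['as_path']))
print(f"best path via {best['next_hop']}, AS path {best['as_path']}")
```

Give every rack its own autonomous system number and that same simple logic can route a whole fabric.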
Networking has come a long way in the last few years. We’ve realized that hardware and ASICs aren’t the constant that we could rely on to make decisions in the next three to five years. We’ve thrown in with software and the quick development cycles that allow us to iterate and roll out new features weekly or even daily. But the hardware versus software battle has played out a little differently than we all expected. And the primary casualty of that battle was TRILL.
Transparent Interconnection of Lots of Links (TRILL) was proposed as a solution to the complexity of spanning tree. Radia Perlman realized that her bridging loop solution wouldn’t scale in modern networks. So she worked with the IETF to solve the problem with TRILL. We also received Shortest Path Bridging (SPB) from the IEEE along the way as an alternative solution to the layer 2 issues with spanning tree. The motive was sound, but the industry has rejected the premise entirely.
Large layer 2 networks have all kinds of issues. ARP traffic, broadcast amplification, and numerous other problems plague layer 2 when it tries to scale to multiple hundreds or a few thousand nodes. The general rule Continue reading
I’m at Interop this week talking all things networking with a great group of people. There are quite a few members of the community here presenting, listening and discussing. There’s a great exchange of ideas flowing back and forth. Yet one thing I keep hearing in quiet corners of the room is a hushed discussion of the continued viability of Interop as a conference. Is it time to write the Interop obituary?
Some of the arguments are as old as tech itself. People claim that getting vendors to interoperate today is an afterthought thanks to protocols like OSPF. All of the important bits in a network are standardized now. The use of APIs and other open technologies is driving vendors to play nice with each other. The need to show up in a faraway place and do the work has long passed.
There’s also the discussion around the bigger conferences out in the world. Vendor conferences like Cisco Live and VMworld draw tens of thousands. New product announcements are dropping left and right during these events. People also want to fracture into tool-specific events like OpenStack Summit or DockerCon. Or the various analyst events or company days that Continue reading
I’m at the OpenStack Summit this week, and there’s a lot of talk about building stacks and offering everything needed to get your organization ready for a shift toward service provider models and such. It’s a far cry from the battles over software networking and hardware dominance that I’m so used to seeing in my space. But one thing came to mind that made me think a little harder about architecture and how foundations are important.
The foundation for the modern cloud doesn’t live in fancy orchestration software or data modeling. It’s not there because a retailer built a self-service system or a search engine giant decided to build a cloud lab. The real reason we have a growing market for cloud providers today is Linux. Linux is the underpinning of so much technology today that it’s become nothing short of ubiquitous. Servers are built on it. Mobile operating systems use it. But no one knows that’s what they are using. It’s all just something running under the surface to enable applications to be processed on top.
Linux is the vodka of operating systems. It can run in a stripped-down manner on a variety Continue reading
A few recent conversations that I’ve seen and had with professionals about automation have been very enlightening. It all started with a post on StackExchange about an unsuspecting user who tried to automate a cleanup process with Ansible and accidentally erased an entire server farm at a service provider. The post was later determined to be a viral marketing hoax, but it was quite believable to the community precisely because of automation’s power to make bad ideas spread very quickly.
Everyone in networking has been in a place where they’ve typed in something they shouldn’t have. Maybe you removed the management network you were using to access the switch, or created an access list that locked you out of something. Or perhaps you typed an errant debug command that forced you to drive an hour to reboot a switch that was no longer responding. All of these things seem to happen to people as part of the learning process.
But how many times have we typed something in to create a change and found that it broke more than we expected? Like changing a native VLAN on a trunk and bringing down Continue reading
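The difference with automation is that the same fat-fingered change runs everywhere at once. One defensive habit, sketched here as my own illustration rather than anything from the StackExchange post, is to make destructive tooling validate its input and dry-run by default:

```python
# Guardrail sketch for destructive automation: validate the target and
# dry-run by default, so an empty variable can't expand into "erase everything".
import sys

FORBIDDEN = {'', '/', '/etc', '/var'}

def cleanup(target_dir: str, dry_run: bool = True) -> None:
    if target_dir.rstrip('/') in FORBIDDEN or not target_dir.startswith('/srv/'):
        sys.exit(f'refusing to clean up {target_dir!r}')
    if dry_run:
        print(f'would remove contents of {target_dir}')
        return
    print(f'removing contents of {target_dir}')  # real deletion would go here

cleanup('/srv/app/tmp')     # prints the plan, touches nothing
cleanup('', dry_run=False)  # exits loudly instead of erasing the server farm
```

It’s the automation equivalent of `reload in 5` before you commit a risky change: assume you’re wrong and keep the blast radius small.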
A couple of months ago, I was on a panel at TechUnplugged where we talked about scaling systems to large sizes. Here’s a link to the video of that panel:
One of the things that we discussed in that panel was applications. Toward the end of the discussion we got into a bit of a back-and-forth about applications and the systems they run on. I feel like it’s time to develop those ideas a bit more.
My comments about legacy applications are pointed. If a company is spending thousands of dollars and multiple hours of the engineering team’s time to reconfigure the network or the storage systems to support an old application, my response was simple: go out of business.
It does sound a bit flippant to think that a company making a profit should just close the shutters and walk away. But that’s just the problem that we’re facing in the market today. We’ve spent an inordinate amount of time creating bespoke, custom networks and systems to support applications that were written years, or even decades, ago in alien environments.
We do it every day without thinking. We have to install this specific Java version Continue reading
Networking is undergoing a huge transformation. Software is surely a huge driver for enabling technology to grow by leaps and bounds and increase functionality. But the hardware underneath is growing just as much. We don’t seem to notice as much because the port speeds we deal with on a regular basis haven’t gotten much faster than the specs we read about years ago. But the chips behind the ports are where the real action is right now.
Intel has jumped into networking with both feet and is looking to land on someone. Their work on the Data Plane Development Kit (DPDK) is helping developers write code that is highly portable across CPU architectures. We used to deal with specific microprocessors in unique configurations. A good example is Dynamips.
Most everyone is familiar with this program or the projects it spawned, Dynagen and GNS3. Dynamips worked at first because it emulated the MIPS processor found in Cisco 7200 routers. It just happened that the software used the same code for those routers all the way up to the first releases of the 15.x train. Dynamips allowed for the emulation of Cisco router software but it Continue reading
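Dynamips chased one specific CPU; DPDK goes the other direction and makes the packet path portable across general-purpose CPUs, largely by polling the NIC from userspace instead of waiting on interrupts. As a very loose analogy, and a toy in Python rather than anything resembling real DPDK code, the poll-mode shape looks like this:

```python
# Toy analogy for a poll-mode receive loop. Real DPDK is C, userspace NIC
# drivers, and hugepages -- this only shows the busy-poll shape of the idea.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 9999))
sock.setblocking(False)  # never sleep waiting for an interrupt

while True:
    try:
        pkt, addr = sock.recvfrom(2048)  # poll: is there a packet right now?
    except BlockingIOError:
        continue                         # nothing yet -- spin and ask again
    print(f'{len(pkt)} bytes from {addr}')
```

Burning a core to spin like this sounds wasteful until you remember the goal is moving millions of packets per second without ever taking an interrupt.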