Author Archives: packetmischief.ca
It seems appropriate to write an FFF post about Virtual Extensible LAN (VXLAN) now since VXLAN is the new hotness in the data center these days. With VMware's NSX using VXLAN (among other overlays) as a core part of its overall solution and the recent announcement of Cisco's Application Centric Infrastructure (ACI) and the accompanying Nexus 9000 switch, both of which leverage VXLAN for delivering a network fabric, it seems inevitable that network engineers will have to use and understand VXLAN in the not-too-distant future.
As usual, this post is not meant to be an introduction to the technology; I assume you have at least a passing familiarity with VXLAN. Instead, I will jump right into 5 operational/technical/functional aspects of the protocol.
For more information on VXLAN, check out the draft at the IETF.
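Before diving in, here's a minimal Python sketch of what the encapsulation looks like at the header level: the 8-byte VXLAN header (flags, reserved bits, and a 24-bit VNI) prepended to the original Ethernet frame and carried over UDP. The VNI value, the placeholder VTEP address 192.0.2.10, and the hand-built header are all just for illustration; this is not how a real VTEP is implemented.

```python
import socket
import struct

VXLAN_PORT = 4789   # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: an 8-bit flags field with the I bit
    set (VNI is valid), 24 reserved bits, the 24-bit VNI, and a final
    reserved byte."""
    flags_word = 0x08 << 24             # flags byte followed by 24 reserved bits
    vni_word = (vni & 0xFFFFFF) << 8    # VNI in the upper 24 bits
    return struct.pack("!II", flags_word, vni_word)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame; the outer
    IP/UDP headers are supplied by the sending socket/kernel."""
    return vxlan_header(vni) + inner_frame

if __name__ == "__main__":
    inner = bytes(64)                   # stand-in for a real Layer 2 frame
    packet = encapsulate(inner, vni=5000)
    # 192.0.2.10 is just a placeholder for a remote VTEP address.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("192.0.2.10", VXLAN_PORT))
```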
I was prompted to write this when I observed someone the other day who was sitting in the same training as me taking notes in a self-addressed email. No offense to people who do this, but W. T. F. How are you going to keep track of that email among the dozens/hundreds you receive every single day?
I take a lot of notes for research, certification study, and training. I use MediaWiki for almost all of these notes. Here's why.
Following on from my previous “triple-F” article (Five Functional Facts about FabricPath), I thought I would apply the same concept to the topic of Overlay Transport Virtualization (OTV). This post will not describe much of the foundational concepts of OTV, but will dive right into how it actually functions in practice. A reasonable introduction to OTV can be found in my series on Data Center Interconnects.
So without any more preamble, here are five functional facts about OTV.
Normally I talk about overlays in the context of data center/SDN/cloud but today I'm going out into left field and am going to talk about voice! :-)
I freely admit that I'm a noob when it comes to Cisco voice so I'm not sure if the behavior I'm about to describe is obvious or not. It wasn't obvious to me and I only figured it out after running into the issue for real and troubleshooting it to resolution.
The issue stems from my misunderstanding about how dual-line ephone-dns function when used in an overlay.
This post is about finding and fixing a memory leak I discovered in the SNMP daemon, snmpd(8), in OpenBSD. This sort of analysis is foreign territory for me; I'm not a software hacker by day. However, using instructions written by Otto Moerbeek as my Rosetta stone and Google to fill in the blanks when it came to usage of the GNU debugger, gdb(1), I was able to find and fix the memory leak.
I'm documenting the steps I used for my future self and for others.
Let's step back for a minute. So far in this series of blog posts on DCI, I've been focusing on extending the Layer 2 domain between data centers with the goal of supporting hot migrations, i.e., moving a virtual machine between sites while it's online and servicing users.
Is that the only objective with DCI?
I had an opportunity recently to sit in a Cisco onePK lab and it opened my eyes to exactly what Cisco is doing with onePK, why it's going to be so important as Software Defined Networking (SDN) continues to gain traction, and why onePK is different than what anyone else is doing in the industry.
onePK is a key element within Cisco's announced Open Network Environment SDN strategy. onePK is an easy-to-use toolkit for development, automation, rapid service creation and more. It enables you to access the valuable data inside your network via easy-to-use APIs.
Source: www.cisco.com/go/onepk
Since having my own eyes opened, I've been pondering how to explain my new-found understanding in a way that others will grasp, in particular business decision makers (BDMs) and technical decision makers (TDMs). I'm really, really struggling to come up with a good analogy for BDMs. I'm still working on that one. Surprisingly, I'm also struggling to come up with a sound analogy that will work with the majority of TDMs that I know. Maybe I shouldn't be so surprised at that since all the TDMs I deal with are on the infrastructure side of things (networks, storage, …)
We're halfway through 2013 and we have our second new member of the Nexus family of switches for the year: the Nexus 7700. Here are the highlights:
Here's a topic that comes up more and more now that FabricPath is getting more exposure and people are getting more familiar with the technology: Can FabricPath be used to interconnect data centers?
For a primer on FabricPath, see my previous article, Five Functional Facts about FabricPath.
FabricPath has some characteristics that make it appealing for DCI. Namely, it extends Layer 2 domains while maintaining Layer 3 (i.e., routing) semantics. End host MAC addresses are learned via a control plane, FabricPath frames carry a Time To Live (TTL) field that purges looping packets from the network, and there is no such thing as a blocked link: all links are forwarding and Equal Cost Multi-Pathing (ECMP) is used within the fabric. In addition, since FabricPath does not mandate a particular physical network topology, it can be used in spine/leaf architectures within the data center or on point-to-point connections between data centers.
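To illustrate just the TTL point, here's a small conceptual Python sketch (not actual FabricPath code) of how a hop-by-hop TTL decrement guarantees that a frame caught in a transient loop eventually gets purged. The switch IDs and starting TTL are made-up values.

```python
from dataclasses import dataclass

@dataclass
class FabricPathFrame:
    """Conceptual stand-in for a FabricPath-encapsulated frame; only the
    fields relevant to loop mitigation are modelled here."""
    dst_switch_id: int
    src_switch_id: int
    ttl: int

def forward(frame: FabricPathFrame) -> bool:
    """Decrement the TTL at each hop, as a FabricPath switch would.
    Returns False (drop) once the TTL is exhausted, so a frame caught in
    a transient loop cannot circulate forever."""
    frame.ttl -= 1
    if frame.ttl <= 0:
        return False   # purge the looping frame
    return True        # otherwise keep forwarding toward dst_switch_id

# Simulate a frame stuck in a loop: it is discarded after `ttl` hops.
frame = FabricPathFrame(dst_switch_id=10, src_switch_id=20, ttl=32)
hops = 0
while forward(frame):
    hops += 1
print(f"frame dropped after {hops} additional hops")
```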
Sounds great. Now what are the caveats?
I've had enough real life experience with replacing drives in the ZFS pool in my home NAS that I feel comfortable sharing this information with the community.
Wow, the title of this post is a mouthful.
Similar to my previous post on the Nexus 2000 (Nexus 2000 Model Number Cheat Sheet), this post will explain what the letters and numbers mean in the Nexus 7000 I/O module part numbers. This will allow you to quickly identify the characteristics of a card just by looking at its part number, which in turn should help you out as you're building BOMs and picking the right card for the job.
Update July 2, 2013: Updated to reflect release of the Nexus 7700 and F3 modules.
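As a rough illustration of the idea, here's a small Python sketch that splits a part number into its fields. The regex and, in particular, the meaning I've attached to each field in the comments are my own assumptions for the example; the post itself is the reference for what each letter and number actually means.

```python
import re

# Rough pattern for Nexus 7000/7700 I/O module part numbers, e.g.
# "N7K-M132XP-12" or "N7K-F248XP-25E". The field meanings below are my
# reading of the scheme and should be checked against the cheat sheet.
PART_RE = re.compile(
    r"^(?P<chassis>N7K|N77)-"      # N7K = Nexus 7000, N77 = Nexus 7700
    r"(?P<family>[MF])"            # M-series or F-series module
    r"(?P<generation>\d)"          # module generation (1, 2, 3, ...)
    r"(?P<ports>\d{2})"            # number of front-panel ports
    r"(?P<media>[A-Z]{2})"         # media/speed code, e.g. XP, FQ
    r"-(?P<suffix>\w+)$"           # trailing options/revision code
)

def decode(part_number: str) -> dict:
    match = PART_RE.match(part_number.upper())
    if not match:
        raise ValueError(f"unrecognized part number: {part_number}")
    return match.groupdict()

print(decode("N7K-F248XP-25E"))
# {'chassis': 'N7K', 'family': 'F', 'generation': '2', 'ports': '48',
#  'media': 'XP', 'suffix': '25E'}
```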
In the previous article in this DCI series (Why is there a “Wrong Way” to Interconnect Datacenters?) I explained the business case for having multiple data centers and then closed by warning that extending Layer 2 domains was a path to disaster and undermined the resiliency of having two data centers.
Why, then, is stretching Layer 2 a) needed and b) a go-to maneuver for DCI?
Let's look at it from two points of view: technology and business.
There's certainly a lot of focus on data center interconnection (DCI) right now. And understandably so since there are many trends in the industry that are making IT organizations look at data center redundancy. Among these trends are:
In this post I'm going to talk about how IT can address item #1 above (the business need) in a manner …
Anyway, I thought it would be neat to document the tools I'm using today. It'll be interesting to read this in a couple of years to see how things have changed again and maybe it'll give a fellow cert-chaser some ideas for today.
I've been working on something that at this point in my career I never thought I'd be doing: another Cisco Certified Network Associate (CCNA) certification. The CCNA Voice, to be exact. Now that I'm in a job role where I'm expected to be somewhat of a jack-of-all-trades, I can no longer avoid learning voice :-) For a long time I've focused on just the underlying network bits and left the voice “stuff” to others. Since I now need to talk intelligently about Cisco voice solutions, products, and architectures, I decided to go through the CCNA Voice curriculum as a way to establish some foundational knowledge.
This post is about the tools and methods I used to build a small lab to support my studies.
There's a new Nexus in the family, the Nexus 6000. Here are the highlights.
| | Nexus 6001 | Nexus 6004 |
| --- | --- | --- |
| Size | 1 RU | 4 RU |
| Ports | 48 x 10G + 4 x 40G | 48 x 40G fixed + 48 x 40G expansion |
| Interface type | SFP+ / QSFP+ | QSFP+ |
| Performance | Line rate Layer 2 and Layer 3 | |
| Latency | 1 μs port to port | |
| Scalability | 128K MAC + 128K ARP/ND (flexible config), 32K route table, 1024-way ECMP, 31 SPAN sessions | |
| Features | L2/L3, vPC, FabricPath/TRILL, Adapter FEX, VM-FEX | |
| Storage | FCoE | |
| Visibility | Sampled NetFlow, buffer monitoring, latency monitoring, microburst monitoring, SPAN on drop/high latency | |
I'm not sure why I've taken such an interest in mDNS, service discovery, and the Bonjour protocol, but I have. It probably has something to do with my not being able to use AirPlay at home for such a long time because, like any true network geek, I put my wireless devices on a separate VLAN from my home media devices. I mean, duh. So now I keep an eye out for different methods of enabling mDNS in the network, in anticipation of my own home-network experience becoming one of my customers' experiences in their enterprise networks.
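To show why this is even a problem, here's a rough Python sketch of the kind of query an AirPlay client sends: a DNS question for _airplay._tcp.local addressed to the link-local multicast group 224.0.0.251 on UDP port 5353 with a TTL of 1, which is exactly why it never crosses a routed VLAN boundary on its own. The service name and timeout are just example values.

```python
import socket
import struct

MDNS_GROUP = "224.0.0.251"   # link-local mDNS multicast group
MDNS_PORT = 5353

def encode_name(name: str) -> bytes:
    """Encode a dotted DNS name as length-prefixed labels."""
    out = b""
    for label in name.strip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def mdns_ptr_query(service: str) -> bytes:
    """Build a one-question mDNS query for PTR records of a service type
    such as '_airplay._tcp.local'."""
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)   # ID/flags 0, QDCOUNT 1
    # QTYPE 12 = PTR; QCLASS 1 = IN, with the top bit set to request a
    # unicast reply back to this socket (the RFC 6762 "QU" bit).
    question = encode_name(service) + struct.pack("!2H", 12, 0x8001)
    return header + question

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1: the query is link-local by design, so a router will never
    # forward it into another VLAN -- hence the AirPlay problem.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.settimeout(2.0)
    sock.sendto(mdns_ptr_query("_airplay._tcp.local"), (MDNS_GROUP, MDNS_PORT))
    try:
        data, peer = sock.recvfrom(4096)
        print(f"heard {len(data)} bytes of mDNS response from {peer[0]}")
    except socket.timeout:
        print("no AirPlay responders heard on this segment")
```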
I installed OmniOS on my home filer over the Christmas break. Jumping from a Solaris Nevada build to OmniOS meant figuring out which software packages are available in the OmniOS repositories, which third-party repos are available, and what software I would have to compile by hand. Given that this machine is only acting as a filer and isn't running any other services to speak of, the list of software to get up and running is small; however, a critical component is apcupsd, which talks to the Uninterruptible Power Supply (UPS) and cleanly powers down the filer if the power is out for an extended time.
The hangup for me is that my UPS connects to the filer via USB, not a serial connection. It took me some hours to figure out how to get apcupsd installed with USB support. Here's how.