A while back I wrote about the problems with using some of the newer third-generation blade hardware from Cisco with older generations of the chassis FEX/IOM. Because of the way the VIC and the chassis IOM interact, certain combinations yield different amounts of aggregate bandwidth, and certain combinations don't work at all, as that post demonstrated.
As a reminder, here are the valid combinations of FEX and blade VIC (still accurate to my knowledge, though that may change in a few weeks if any new tech is announced at Cisco Live):
Many of those who have supported a vSphere-based virtualization infrastructure for any length of time have probably heard of the Cisco Nexus 1000v. I've written a few posts that mention it, and I've been deploying the product quite successfully for the past few years. Even cooler, the Nexus 1000v is now available for Hyper-V as well.
For those who are not familiar with the idea of distributed switches in general, I'll give a brief overview of the concept.
About a year and a half ago, arguably well before the biggest of all the SDN hype that we’ve come to know and love, Stephen Foskett and company organized a fantastic OpenFlow Symposium aimed at getting deep into the state of the protocol at that time and what was being done with it at some of the leading tech companies like Google, Yahoo, Cisco, Brocade, and others.
For those keeping track, Dave Meyer was on the panel at the time representing Cisco but is now CTO and Chief Scientist with Brocade and getting to do some really cool stuff with OpenDaylight.
I have heard many sweeping statements in the past few weeks, like "network engineers' jobs are in danger" or "will my CCIE have any value when networking is run in the hypervisor?" The social media community is clearly preaching "software or bust" these days, leaving those who aren't used to this kind of talk, or who have been doing infrastructure the same way for years, feeling quite alienated. I want to make one thing extremely clear - it's okay to be an infrastructure person.
# ----------------------------------------------------------------------
# Name:        PowerOnUCSBlades.ps1
# Author:      Matthew Oswalt
# Created:     3/30/2012
# Revision:    v0.2 - BETA
# Rev. Date:   4/30/2013
# Description: A script that powers on blades in a UCS system.
#              Can be configured to boot all blades, or
#              only those associated to service profiles in a
#              given sub-organization.
# ----------------------------------------------------------------------

# Import the Cisco UCS PowerTool module
Import-Module CiscoUcsPs

# Enable Multiple System Config Mode
Set-UcsPowerToolConfiguration -SupportMultipleDefaultUcs $true

########################################################################
# AUTHENTICATION                                                       #
########################################################################

# Stored method of authentication - change the two values shown below
$user = "admin"
$password = "password" | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential($user, $password)
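For context, here is a minimal sketch of how the rest of a script like this might proceed: connecting to a UCS domain with the credential built above, then requesting power-on for the blades associated with service profiles in a sub-organization. The cmdlet names (Connect-Ucs, Get-UcsOrg, Get-UcsServiceProfile, Set-UcsServerPower, Disconnect-Ucs) come from the Cisco UCS PowerTool module as I understand it, and the fabric address and organization name are hypothetical placeholders - treat this as an illustration, not the script's actual contents.

# Connect to a UCS domain using the credential object built above.
# The address below is a hypothetical placeholder.
$handle = Connect-Ucs -Name "ucs-vip.example.com" -Credential $cred

# Find the service profiles under a given sub-organization (name is illustrative)
# and request power-on for each associated blade.
Get-UcsOrg -Name "Production" | Get-UcsServiceProfile -Type instance |
    Set-UcsServerPower -State "admin-up" -Force

# Tear down the session when finished.
Disconnect-Ucs -Ucs $handle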
I received a comment on an old post regarding how to identify the outgoing interface for routes learned through BGP. In fact, it's not the first time I've had a discussion in the comments about the interaction between the control plane and the forwarding plane.
So, let’s work backwards from the point where our packet leaves some interface on a router, which would be considered purely an act of the forwarding plane.
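As a quick illustration of where each plane's information lives, the IOS-style commands below (shown without output, against the hypothetical documentation prefix 203.0.113.0/24) are the usual places to look: BGP and the routing table reflect the control plane's decision, while the CEF table is what the forwarding plane actually consults when the packet leaves the box.

! Control plane: what BGP learned, and what made it into the RIB
show bgp ipv4 unicast 203.0.113.0/24
show ip route 203.0.113.0 255.255.255.0

! Forwarding plane: the CEF entry used to actually switch the packet,
! including the resolved outgoing interface and adjacency
show ip cef 203.0.113.0/24 detail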
Moving along in my “Virtual Routing” series, I’d like to switch gears and talk a little more “big picture”. In the previous posts, we’ve discussed a few different things:
Part 1 - A first look at the CSR 1000v from Cisco
Part 2 - An examination of using FHRPs in a virtual environment
Part 3 - A comparison of virtual routing redundancy options
Seeing as these were all pretty technical configuration-oriented posts, I wanted to take a step back and think about some of the reasons why one would want to perform routing in a virtual environment.
I would make the argument that the term "converged networks" is not really a buzzword the way it used to be, since the world now generally understands the concept. Rather than have isolated physical networks, let's make one very popular network topology more robust - not just in terms of capacity, but also features. After all, the networks and protocols we're combining have some pretty stringent requirements, and we want to make sure that this transition actually works.
The past two years have been nothing short of a whirlwind for me. I had the privilege of helping to create the Data Center practice for a technology startup in Cincinnati, and as a result, I've figuratively been drinking from a fire hydrant nonstop. In the past two years I've learned more about technology than I could have ever imagined - and part of that lesson was realizing that what I've learned only scratches the surface of what's likely in store for me in the rest of my career.
There are a million articles out there on ESXi vSwitch load balancing, many of which correctly point out that routing traffic based on IP hash is probably the best option if your upstream switch is running 802.3ad link aggregation to the ESXi hosts. It offers minimal complexity while providing the best load-balancing behavior for anything using a vSwitch (virtual machine OR vmkernel). So…this article will address a very specific problem.
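For reference, here is a minimal PowerCLI sketch of what switching a standard vSwitch to IP hash looks like; the vCenter, host, and vSwitch names are hypothetical, and the cmdlets (Get-VirtualSwitch, Get-NicTeamingPolicy, Set-NicTeamingPolicy) are used as I recall them from VMware PowerCLI, so verify against your own environment. The physical switch side still needs a matching port channel across the uplinks for IP hash to behave correctly.

# A minimal sketch with hypothetical names: set the NIC teaming policy
# on a standard vSwitch to route based on IP hash.
Connect-VIServer -Server "vcenter.example.com"

Get-VMHost -Name "esxi01.example.com" |
    Get-VirtualSwitch -Name "vSwitch0" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP

# Remember: the upstream physical switch ports must be bundled into a
# matching port channel, or IP hash will cause connectivity problems.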
Don’t ask me what kind of daydreaming it took to arrive at this, but….I just realized that the following was possible:
Which was ALREADY terrifying enough - but then my mind went here:
Have a great weekend.
I'm going to talk a little bit about performing QoS functions from within the Nexus 1000v. Since it's been a while since I made the last post in this series, a recap is in order:
In my first post, I explained what the different types of QoS policies were in the context of Cisco's MQC (the Modular QoS CLI).
In my second post, I went through the actual configuration on specific platforms like the Cisco Nexus and Unified Compute platforms, as well as a brief mention of vSphere's participation - though that was less about QoS and more about MTU. A short sketch of the MQC pattern follows this recap.
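To make the MQC reference concrete, here is a minimal, illustrative classify-and-mark sketch in Nexus 1000v style; the class, policy, and port-profile names are hypothetical, the match criteria will vary per environment, and the exact syntax should be checked against your NX-OS release.

! Classification: identify interesting traffic (names and criteria are illustrative)
class-map type qos match-any VOICE-TRAFFIC
  match dscp 46

! Marking: the policy-map ties an action to each class
policy-map type qos MARK-VM-TRAFFIC
  class VOICE-TRAFFIC
    set cos 5

! Application: on the Nexus 1000v, the policy is applied to a port-profile
port-profile type vethernet VM-VOICE
  service-policy type qos input MARK-VM-TRAFFIC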
There's been a ton of attention lately around the concept of using commodity hardware in an area of the industry that is currently dominated by proprietary ASIC-based solutions - networking. When it comes to crossing paths between open source and networking, the obvious low-hanging fruit has been software-based switching solutions like Open vSwitch, or cool ways to make virtual switching do bigger, better stuff for cloud platforms, like OpenStack Quantum (awesome, by the way).
My post a few weeks ago about the CSR 1000v made a pretty big splash - it’s clear that the industry is giving a lot of attention to IP routing within a virtual environment. No doubt, Vyatta is largely credited for this, as they’ve been pushing this idea for a long time. When Brocade announced that they were acquiring Vyatta, and Cisco announced they were working on a “Cloud Services Router”, this idea became all the more legitimate, and as you can tell from this series, it’s of particular interest to me.