I have heard so many sweeping statements in the past few weeks, like “network engineers’ jobs are in danger” or “will my CCIE have any value when networking is run in the hypervisor?” Clearly the social media community is preaching “software or bust” these days, leaving those who aren’t used to this kind of talk, or who have been doing infrastructure the same way for years, feeling quite alienated. I want to make one thing extremely clear: it’s okay to be an infrastructure person.
About a year and a half ago, arguably well before the SDN hype we’ve come to know and love hit its stride, Stephen Foskett and company organized a fantastic OpenFlow Symposium aimed at digging deep into the state of the protocol at the time and what was being done with it at some of the leading tech companies like Google, Yahoo, Cisco, Brocade, and others.
For those keeping track, Dave Meyer was on the panel at the time representing Cisco but is now CTO and Chief Scientist with Brocade and getting to do some really cool stuff with OpenDaylight.
    # ----------------------------------------------------------------------
    # Name:        PowerOnUCSBlades.ps1
    # Author:      Matthew Oswalt
    # Created:     3/30/2012
    # Revision:    v0.2 - BETA
    # Rev. Date:   4/30/2013
    # Description: A script that powers on blades in a UCS system.
    #              Can be configured to boot all blades, or only those
    #              associated to service profiles in a given sub-organization.
    # ----------------------------------------------------------------------

    # Import the Cisco UCS PowerTool module
    Import-Module CiscoUcsPs

    # Enable Multiple System Config Mode
    Set-UcsPowerToolConfiguration -SupportMultipleDefaultUcs $true

    #####################################################################
    # AUTHENTICATION                                                    #
    #####################################################################

    # Stored method of authentication - change the two values shown below
    $user = "admin"
    $password = "password" | ConvertTo-SecureString -AsPlainText -Force
    $cred = New-Object System.Management.Automation.PSCredential($user, $password)
I received a comment on an old post regarding how the outgoing interface is identified for routes learned through BGP. In fact, it’s not the first time I’ve had a discussion in the comment section about the interaction between the control plane and the forwarding plane.
So, let’s work backwards from the point where our packet leaves some interface on a router, which would be considered purely an act of the forwarding plane.
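The crux of that discussion is that a route learned through BGP carries only a next-hop IP address; the router has to recursively resolve that next-hop against its connected or IGP routes before it knows which interface a packet will actually leave on. Below is a minimal Python sketch of that recursive lookup; the prefixes, next-hop, and interface names are made up purely for illustration.

    # A simplified sketch (not router code) of how a BGP-learned prefix,
    # which carries only a next-hop address, gets resolved to an egress
    # interface. All routes and addresses below are made up.
    import ipaddress

    # Prefixes learned via BGP point at a next-hop IP, not an interface.
    bgp_routes = {
        "203.0.113.0/24": "192.0.2.1",
    }

    # Connected/IGP routes are what actually tie a prefix to an interface.
    interface_routes = {
        "192.0.2.0/30": "GigabitEthernet0/1",
        "10.0.0.0/24": "GigabitEthernet0/2",
    }

    def longest_match(dest, table):
        """Return the most specific prefix in 'table' covering 'dest', or None."""
        dest = ipaddress.ip_address(dest)
        candidates = [ipaddress.ip_network(p) for p in table
                      if dest in ipaddress.ip_network(p)]
        return str(max(candidates, key=lambda n: n.prefixlen)) if candidates else None

    def egress_interface(dest):
        """Recursively resolve a destination to an outgoing interface."""
        prefix = longest_match(dest, bgp_routes)
        if prefix:
            # BGP gave us a next-hop; recurse on it to find the interface.
            return egress_interface(bgp_routes[prefix])
        prefix = longest_match(dest, interface_routes)
        return interface_routes[prefix] if prefix else None

    print(egress_interface("203.0.113.10"))  # -> GigabitEthernet0/1

On a real router that recursion is resolved in the control plane when the FIB is built (CEF, in Cisco terms), so the forwarding plane only performs a single lookup per packet.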
Moving along in my “Virtual Routing” series, I’d like to switch gears and talk a little more “big picture”. In the previous posts, we’ve discussed a few different things:
Part 1 - A first look at the CSR 1000v from Cisco
Part 2 - An examination of using FHRPs in a virtual environment
Part 3 - A comparison of virtual routing redundancy options
Seeing as these were all pretty technical configuration-oriented posts, I wanted to take a step back and think about some of the reasons why one would want to perform routing in a virtual environment.
I would make the argument that the term “converged networks” is not really a buzzword the way it used to be, since the world now generally understands the concept. Rather than maintain isolated physical networks, let’s make one very popular network topology more robust, in terms of capacity but also features. After all, the networks and protocols we’re combining have some pretty stringent requirements, and we want to make sure that this transition actually works.
The past two years have been nothing short of a whirlwind for me. I had the privilege of helping to create the Data Center practice for a technology startup in Cincinnati, and as a result, I’ve figuratively been drinking from a fire hydrant nonstop. In the past two years I’ve learned more about technology than I could have ever imagined, and part of that was realizing that what I have learned only scratches the surface of what’s likely in store for the rest of my career.
There are a million articles out there on ESXi vSwitch load balancing, many of which correctly point out that routing traffic based on IP hash is probably the best option if your upstream switch is running 802.3ad link aggregation to the ESXi hosts. It offers minimal complexity while providing the best load-balancing behavior for anything using a vSwitch (virtual machine OR vmkernel traffic). So, this article is geared toward a very specific problem.
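As a quick refresher on what that policy actually does: with “Route based on IP hash,” the vSwitch hashes the source and destination IP addresses of each flow to pick an uplink, which is why a single VM can spread its conversations across every link in the bundle. The sketch below is a simplified Python illustration of that selection logic, not VMware’s exact implementation, and the uplink names and addresses are made up.

    # Simplified illustration of IP-hash uplink selection on a vSwitch team.
    # The real ESXi implementation differs in detail; this just shows why the
    # src/dst IP pair, not the VM, determines which physical NIC a flow uses.
    import ipaddress

    UPLINKS = ["vmnic0", "vmnic1"]  # hypothetical 2-port 802.3ad bundle

    def chosen_uplink(src_ip, dst_ip):
        """Pick an uplink from an XOR of the source and destination IPs."""
        src = int(ipaddress.ip_address(src_ip))
        dst = int(ipaddress.ip_address(dst_ip))
        return UPLINKS[(src ^ dst) % len(UPLINKS)]

    # The same VM lands on different uplinks depending on the destination.
    print(chosen_uplink("10.0.0.50", "10.0.0.1"))   # e.g. vmnic1
    print(chosen_uplink("10.0.0.50", "10.0.0.2"))   # e.g. vmnic0

The important takeaway is that the upstream switch has to treat those uplinks as one logical link (hence the 802.3ad requirement), because any given source MAC can now show up on any member port.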
Don’t ask me what kind of daydreaming it took to arrive at this, but… I just realized that the following was possible:
Which was ALREADY terrifying enough - but then my mind went here:
Have a great weekend.
I’m going to talk a little bit about performing QoS functions from within the Nexus 1000v. Since it’s been a while since I made the last post in this series, a recap is in order:
In my first post, I explained what the different types of QoS policies are in the context of Cisco’s MQC.
In my second post, I went through the actual configuration on specific platforms like Cisco Nexus and Unified Compute, along with a brief mention of vSphere’s participation, though less on the QoS aspects and more on MTU.
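For anyone just joining the series, the MQC model those posts were built on boils down to three steps: classify traffic with class-maps, decide what happens to each class in a policy-map, and attach the result somewhere with a service-policy. Purely as a conceptual sketch (Python standing in for CLI, with hypothetical class names and markings), that structure looks something like this:

    # Conceptual model of Cisco's MQC structure (classify -> act -> attach),
    # expressed as plain data. Class names, DSCP values, and CoS markings
    # here are hypothetical examples, not a recommended policy.

    # class-maps: how traffic is identified
    class_maps = {
        "VOICE":   lambda pkt: pkt.get("dscp") == 46,   # EF
        "STORAGE": lambda pkt: pkt.get("proto") == "fcoe",
    }

    # policy-map: what happens to each class (marking, in this case)
    policy_map = {
        "VOICE":   {"set_cos": 5},
        "STORAGE": {"set_cos": 3},
        "default": {"set_cos": 0},
    }

    def apply_service_policy(pkt):
        """Classify a packet and return it with the CoS marking applied."""
        for name, matches in class_maps.items():
            if matches(pkt):
                pkt["cos"] = policy_map[name]["set_cos"]
                return pkt
        pkt["cos"] = policy_map["default"]["set_cos"]
        return pkt

    print(apply_service_policy({"dscp": 46}))       # -> {'dscp': 46, 'cos': 5}
    print(apply_service_policy({"proto": "fcoe"}))  # -> {'proto': 'fcoe', 'cos': 3}

The Nexus 1000v follows the same model; the main wrinkle, and what makes it interesting here, is that policies are typically attached to port-profiles rather than to individual physical interfaces.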