A quick detour from the networking topics today to show off a little weekend tech project.
I recently ran into some overheating problems with my home BYO PC. Core Temp was showing upwards of 70 degrees Celsius during normal operation, and under load, the machine would sometimes just shut down completely.
Here’s the setup I had as of two days ago:
The rear fan, which takes air in, was not working due to a short.
I have always used the “network 0.0.0.0 0.0.0.0” statement to describe “all interfaces” when configuring a routing protocol like EIGRP. I know that it’s not correct, but I never stopped to wonder why my bad habit still worked.
Then I found a good article by @jdsilva that explains what’s going on: IOS just assumes you had a “brain fart” and meant to type the proper “network 0.0.0.0 255.255.255.255”.
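A quick illustration of the equivalence (EIGRP AS 100 is a hypothetical example; the behavior shown is as described in the article):

```
router eigrp 100
 ! The bad habit: a wildcard mask of all zeroes
 network 0.0.0.0 0.0.0.0
 ! ...which IOS treats as if you had typed the proper match-all statement:
 network 0.0.0.0 255.255.255.255
```

Checking “show running-config | section router eigrp” after entering the first form is an easy way to see what IOS actually stored.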
I’m studying for the CCIE, and it’s really valuable to identify these bad habits. In real life they may not be too harmful, especially ones like this where the result is the same, but on exams they can mean the difference between failure and success.
This is a specific video on Rapid Spanning Tree Synchronization.
See my full writeup on RSTP in general.
Forgive the low(er) quality of this particular video; it was recorded at an hour in the morning that frankly should not exist.
This post picks up where the previous one left off. Again, a CCNP-level knowledge of STP is recommended.
So…Spanning Tree didn’t converge quickly enough for some people, and enabling PortFast everywhere kind of defeats the purpose, so 802.1w Rapid Spanning Tree was born. In essence, RSTP puts into place some additional features to speed up STP reconvergence. With old-school 802.1D, you had to wait at least 30 seconds to move a port from blocking to forwarding, which means recovering from a failure takes at least that much time (sometimes more, depending on other factors).
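For reference, moving a Catalyst switch from legacy 802.1D-style PVST+ to the rapid flavor is a one-liner; a minimal sketch on a hypothetical switch:

```
Switch(config)# spanning-tree mode rapid-pvst
Switch(config)# end
Switch# show spanning-tree summary
```

The last command should report that the switch is now operating in rapid-pvst mode.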
This is a video on some of the “nerd knobs” that we get to play with in a traditional spanning tree environment.
I have done a full writeup on the topic that goes along with this video - you can watch the video on that page as well.
I wrote this post not only to put out some information on one of the least-understood facets of networking (especially in the data center, where most technology today is aimed at making STP irrelevant) but also to get something on paper for myself, seeing as I am going down the CCIE path full force now, and this has always been a weak area of mine. This post will assume you have CCNP-level knowledge of Spanning Tree Protocol (STP).
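As a preview of the knobs in question, here are a few of the classic tuning commands on a Catalyst switch (the VLAN number and values are hypothetical examples, not recommendations):

```
! Bid to become the root bridge for VLAN 10 (priority must be a multiple of 4096)
Switch(config)# spanning-tree vlan 10 priority 4096

! Influence path selection by overriding a port's cost for VLAN 10
Switch(config-if)# spanning-tree vlan 10 cost 10

! Edge-port behavior, plus protection against rogue BPDUs on that port
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
```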
In a previous post, I mentioned the connectivity options available to each blade if you’re using the appropriate hardware.
If you’re using a 2208 FEX, you have 8 upstream ports, each at 10GbE. This means the FEX can support up to 80 Gbps total. You can potentially provide 4:1 oversubscription (math later) to each blade by connecting a 2208 FEX into a blade chassis with blades that can also support 80 Gbps each.
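To sketch that math (assuming a UCS 5108 chassis fully populated with eight half-width blades): each 2208 IOM offers 32 x 10GbE host-facing ports, 4 per blade slot, or 320 Gbps toward the blades, against its 8 x 10GbE fabric uplinks, or 80 Gbps upstream. That works out to 320:80, or 4:1 oversubscription per fabric. With an IOM in each fabric and a VIC that can drive 40 Gbps to each one, a blade sees 80 Gbps total.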
Cisco UCS offers a few policies that are applied globally to all equipment in a given UCS domain. These policies are found by selecting the “Equipment” node under the “Equipment” tab. (You can also change them on an individual-chassis basis, but the default behavior is for all chassis to inherit the global policy.)
This is specifically referring to the connectivity between the Fabric Interconnects and the chassis FEX modules, or I/O Modules (IOMs).
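For what it’s worth, this can be scripted as well. A hedged sketch using UCS PowerTool, assuming the policy in question is the Chassis/FEX Discovery Policy; the cmdlet names and the “4-link” action value below are my recollection of PowerTool and worth verifying against your version:

```
# Set the global chassis discovery policy to expect 4 links per IOM
# (cmdlet names and the "4-link" value are assumptions; check Get-Help in your PowerTool build)
Get-UcsChassisDiscoveryPolicy | Set-UcsChassisDiscoveryPolicy -Action "4-link" -Force
```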
I was introduced by a colleague and mentor a few years ago to the concept of powerless words. Words like “try”, “but”, and “maybe/might”, among others, seem to be our mind’s way of protecting itself against the unknown. After all, we’re only human, right? We can’t control what the world throws at us, right?
I encourage you to read the article I linked to as well as this one, which the first article refers to.
To say that Ethernet as a L2 protocol is well-known is an understatement - it’s in every PC network card, and every network closet. Back during the inception of Ethernet, the world needed an open, efficient, standardized method of communicating between nodes on a LAN. Widely regarded as the “mother of the Internet” for many reasons - not the least of which is the invention of the Spanning Tree Protocol - Radia Perlman equated the wide proliferation of Ethernet to the same events that have made English such a popular language on Earth.
```
# ----------------------------------------------------------------------
# Name:            UltimateUCSBuild.ps1
# Author:          Matthew Oswalt
# Created:         6/10/2013
# Current Version: v0.2 (ALPHA)
# Revision Date:   6/18/2013
# Description:
#     --THIS SCRIPT IS VERY NEW, EXPECT FREQUENT CHANGES AND IMPROVEMENTS--
#     A script that starts with a completely blank UCS system and
#     configures it to completion.
#     This version of the script is very non-modular and static,
#     but that will change in future versions.
# ----------------------------------------------------------------------
```
My long-term vision for this script is to be simple, yet powerful. I want it to have the ability to provision lots of stuff very quickly, with minimal code changes.
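To give a flavor of what “provision lots of stuff very quickly” might look like, here is a minimal hedged sketch using UCS PowerTool (the hostname and VLAN range are hypothetical; Connect-Ucs, Get-UcsLanCloud, Add-UcsVlan, and Disconnect-Ucs are the PowerTool cmdlets I believe apply here, but verify against your version):

```
# Connect to the UCS Manager VIP (placeholder address; prompts for credentials)
$cred = Get-Credential
Connect-Ucs -Name "ucs-vip.example.com" -Credential $cred

# Provision a block of VLANs in one shot
$lanCloud = Get-UcsLanCloud
100..110 | ForEach-Object {
    $lanCloud | Add-UcsVlan -Name ("Prod-" + $_) -Id $_
}

Disconnect-Ucs
```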
A while back I wrote about the problems with using some of the newer 3rd generation blade hardware from Cisco with older generations of the chassis FEX/IOM. Because of the way that the VIC and the chassis IOM interact, certain combinations yield different amounts of aggregate bandwidth, and certain combinations don’t work at all, as was evidenced in that post.
As a reminder, here are the valid combinations (these are still accurate to my knowledge, but may change in a few weeks if any new tech is announced at Cisco Live) of FEX and blade VIC:
Many of those that have supported a vSphere-based virtualization infrastructure for any length of time have probably heard of the Cisco Nexus 1000v. I’ve written a few posts that mention it, and I’ve been deploying the product quite successfully for the past few years. Even cooler, the Nexus 1000v is now available for Hyper-V as well.
For those that are not familiar with the idea of distributed switches in general, I’ll briefly go over the concept.
I have heard so many sweeping statements in the past few weeks, like “network engineers’ jobs are in danger” or “will my CCIE have any value when networking is run in the hypervisor?” The social media community is clearly preaching “software or bust” these days, leaving those that are not used to this kind of talk, or have been doing infrastructure the same way for years, feeling quite alienated. I want to make one thing extremely clear - it’s okay to be an infrastructure person.
About a year and a half ago, arguably well before the biggest of all the SDN hype that we’ve come to know and love, Stephen Foskett and company organized a fantastic OpenFlow Symposium aimed at getting deep into the state of the protocol at that time and what was being done with it at some of the leading tech companies like Google, Yahoo, Cisco, Brocade, and others.
For those keeping track, Dave Meyer was on the panel at the time representing Cisco but is now CTO and Chief Scientist with Brocade and getting to do some really cool stuff with OpenDaylight.
```
# ----------------------------------------------------------------------
# Name:        PowerOnUCSBlades.ps1
# Author:      Matthew Oswalt
# Created:     3/30/2012
# Revision:    v0.2 - BETA
# Rev. Date:   4/30/2013
# Description: A script that powers on blades in a UCS system.
#              Can be configured to boot all blades, or
#              only those associated to service profiles in a
#              given sub-organization.
# ----------------------------------------------------------------------

# Import the Cisco UCS PowerTool module
Import-Module CiscoUcsPs

# Enable Multiple System Config Mode
Set-UcsPowerToolConfiguration -SupportMultipleDefaultUcs $true

#####################################################################
#                          AUTHENTICATION                           #
#####################################################################

# Stored method of authentication - change the two values shown below
$user = "admin"
$password = "password" | ConvertTo-SecureString -AsPlainText -Force
# Build the credential object from the values above (standard PSCredential pattern)
$cred = New-Object System.Management.Automation.PSCredential($user, $password)
```
I received a comment on an old post regarding how to identify the outgoing interface for routes learned through BGP. In fact, it’s not the first time I’ve had a discussion in the comment section about the interaction between the control plane and the forwarding plane.
So, let’s work backwards from the point where our packet leaves some interface on a router, which would be considered purely an act of the forwarding plane.
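On IOS, a quick way to compare the two perspectives for a BGP-learned prefix (198.51.100.0/24 is a hypothetical example):

```
! Control plane: the RIB entry typically shows only the BGP next-hop, not an interface
Router# show ip route 198.51.100.0 255.255.255.0

! Forwarding plane: CEF recursively resolves that next-hop down to an outgoing interface
Router# show ip cef 198.51.100.0/24
```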
Moving along in my “Virtual Routing” series, I’d like to switch gears and talk a little more “big picture”. In the previous posts, we’ve discussed a few different things:
Part 1 - A first look at the CSR 1000v from Cisco
Part 2 - An examination of using FHRPs in a virtual environment
Part 3 - A comparison of virtual routing redundancy options
Seeing as these were all pretty technical, configuration-oriented posts, I wanted to take a step back and think about some of the reasons why one would want to perform routing in a virtual environment.