This is a short one. I didn’t see much information about this on the internet, so I figured I’d put it out there.
I’m using a pair of Nexus 2K FEX switches (N2K-C2248TP-1GE) for 1GbE copper connectivity off a pair of Nexus 5548UP switches.
I needed to set one of the 2K ports to access mode and place it in a VLAN. Pretty simple. After configuring one of the 2K ports through the 5K CLI, though, I noticed that the port was listed as “down (inactive)”.
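For reference, here’s a minimal sketch of the configuration in question, with hypothetical interface and VLAN numbers. One NX-OS behavior worth knowing up front: an access port shows “down (inactive)” when the VLAN assigned to it doesn’t actually exist on the switch, so defining the VLAN globally is the first thing to check.

```
! Hypothetical FEX interface and VLAN numbers
N5K(config)# interface ethernet 101/1/10
N5K(config-if)# switchport mode access
N5K(config-if)# switchport access vlan 100
N5K(config-if)# no shutdown

! If VLAN 100 was never defined on the 5K, the port sits in "down (inactive)".
! Defining it globally brings the port up:
N5K(config)# vlan 100
```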
I had a great conversation with a coworker regarding the requirements for the In-Service Software Upgrade (ISSU) feature on Cisco switches. For this post, I’m using Nexus 5548UP switches as a distribution layer for my Cisco UCS environment, with a pair of Catalyst 6500s set up as a VSS pair sitting at the core.
For those unfamiliar with ISSU, it’s a way for Cisco devices to upgrade their running firmware without the disruptive reboot that has traditionally been required for upgrades to IOS, NX-OS, etc.
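To make that a little more concrete, here’s a rough sketch of what an ISSU run looks like on a 5548 from the CLI; the image filenames are hypothetical, so substitute your own. NX-OS will tell you up front whether the upgrade will be hitless, and the 5K is picky about spanning tree state (for example, the switch being STP root will block a non-disruptive upgrade):

```
! Check whether the upgrade would be disruptive (image names are hypothetical)
N5K# show install all impact kickstart bootflash:n5000-uk9-kickstart.5.1.3.N1.1.bin system bootflash:n5000-uk9.5.1.3.N1.1.bin

! Check for spanning tree conditions that would block ISSU
N5K# show spanning-tree issu-impact

! Run the actual upgrade; with ISSU this is non-disruptive to the data plane
N5K# install all kickstart bootflash:n5000-uk9-kickstart.5.1.3.N1.1.bin system bootflash:n5000-uk9.5.1.3.N1.1.bin
```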
I strongly believe that every route/switch engineer, even a highly experienced one, should have at least a fundamental understanding of DNS architectures and best practices. More importantly, they should understand how DNS is being used in today’s service providers and enterprises. DNS is one of those services that has been applied to many different use cases, such as a form of load balancing or even an additional layer of security.
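As a tiny example of the load balancing use case: plain round-robin DNS is nothing more than multiple A records on one name, with resolvers rotating the order of the answers. A hypothetical BIND-style zone fragment:

```
; Hypothetical zone fragment: one name, three web servers.
; Resolvers rotate the answer order, crudely spreading clients
; across the three addresses.
www   300   IN   A   192.0.2.10
www   300   IN   A   192.0.2.11
www   300   IN   A   192.0.2.12
```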
I was installing ESXi 5 on a Cisco UCS B440 M1 blade and ran into some local disk issues. I tried both the stock ESXi 5 image from VMware and the recently released image from Cisco that contains the latest UCS drivers; same issue with both.
The issue was that when I got to the disk selection screen of the ESXi installer, I did not see any disks:
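One diagnostic aside worth knowing (a general technique, not the fix for this particular issue): the ESXi installer has a console available via Alt+F1, and from there you can check whether a driver claimed the blade’s storage controller at all:

```
# List storage adapters and the driver that claimed each one.
# If the blade's RAID controller is missing here, it's a driver
# problem rather than a disk problem.
esxcfg-scsidevs -a

# List the logical disks/devices ESXi can actually see
esxcfg-scsidevs -l
```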
I am happy to say that I have officially started putting things together for my CCIE R/S studies. I have been, and will continue to be, pulled in many different directions, but since I completed my CCNP a few months ago and recently passed my VCP exam, I decided the time was now to begin the long journey ahead. I have a few other certifications in mind, and I will have to carefully weigh how they impact (or preferably do not impact) my CCIE studies, but this journey is important to me personally and professionally, so I’m pulling the trigger.
I know vXLAN has been around for a year now, but given the reviews it got from the community immediately upon announcement, I decided to let the idea mature before I got involved. Here are some of my thoughts after attending a vXLAN session by Cisco at VMworld 2012.
vXLAN really solves just one problem: most virtual infrastructures depend on L2 connectivity between hosts (vMotion is a good example), and vXLAN provides that L2 adjacency even when the hosts are separated by an L3 boundary.
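To make that concrete: vXLAN encapsulates the VM’s L2 frame inside UDP/IP, so a single segment can stretch across routed boundaries. On the Nexus 1000V of this era, the configuration looked roughly like the following; I’m writing this from memory with hypothetical names and numbers, so treat it as a sketch and verify against the configuration guide for your release:

```
! Enable the VXLAN feature and define a segment (names/IDs are hypothetical)
n1kv(config)# feature segmentation
n1kv(config)# bridge-domain tenant-red
n1kv(config-bd)# segment id 5001
n1kv(config-bd)# group 239.1.1.1

! Attach VM-facing ports to the segment via a port profile
n1kv(config)# port-profile type vethernet vm-red
n1kv(config-pp)# switchport mode access
n1kv(config-pp)# switchport access bridge-domain tenant-red
```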
I attended the VMworld 2012 session that covered the new networking features in vSphere 5.1. Many features were rolled out to both the VDS and the standard switch, and other features simply had improved functionality.
First off, apparently it’s now VDS, not vDS. This came hours after the announcement that VXLAN was being changed to vXLAN. Um…okay, I guess?
Anyway, the speaker pointed out at the beginning that a big change was that many of these features were being rolled out to both the standard and distributed switches.
I ran into an issue that presented itself two different ways, each at a different customer. I posted a while back about a customer that wanted to use only a single Nexus 5000, since that was all that was available. I wanted to bundle all four CNA ports on the NetApp storage array into a single port channel back to that 5K. However, after I created this port channel and bound the virtual fibre channel (VFC) interface to it, the VFC interface would not come up.
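For context, here’s roughly what the configuration looked like, with hypothetical VLAN/VSAN and interface numbers. One restriction worth checking in the FCoE configuration guide for your NX-OS release: a vFC bound to a port channel generally requires that port channel to have only a single member link, since FC logins can’t be spread across multiple Ethernet links.

```
! Hypothetical VLAN/VSAN and interface numbers
N5K(config)# vlan 100
N5K(config-vlan)# fcoe vsan 100

N5K(config)# interface port-channel 100
N5K(config-if)# switchport mode trunk
N5K(config-if)# switchport trunk allowed vlan 10,100

! Bind the virtual fibre channel interface to the port channel
N5K(config)# interface vfc 100
N5K(config-if)# bind interface port-channel 100
N5K(config-if)# no shutdown
```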
I had the opportunity this week to ascertain the feasibility of automating the provisioning of a full Flexpod. For reference, this considers a “vanilla” Flexpod build:
- Pair of Nexus 5Ks
- Pair of Cisco UCS Fabric Interconnects (with a few chassis)
- NetApp running ONTAP 7-Mode (I tested on a FAS6070)

Note that this also makes a few assumptions about the build:

- FC via the Nexus 5000, no MDS
- No existing vCenter integration or storage migration

So: pretty much a greenfield Flexpod build, pretty close to the specs laid out in the design guide.