I was troubleshooting an MTU-related issue with NFS connectivity in a FlexPod (Cisco UCS, Cisco Nexus, and NetApp storage with VMware vSphere, running the Nexus 1000v). Regular-sized frames were making it through, but jumbo frames were not. I verified the endpoints were set up correctly, then worked my way inward; in my experience, that's usually where the problem is.
The original design tagged all NFS traffic with CoS 2 so that it could be honored throughout the network and given jumbo-frame treatment.
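As a quick illustration of the kind of end-to-end test involved, here is a minimal sketch, assuming an ESXi host with a VMkernel port on the NFS network; the IP address and interface names are hypothetical:

```
# On the ESXi host: send a ping that is 9000 bytes on the wire
# (8972 ICMP payload + 8 ICMP header + 20 IP header) with the
# don't-fragment bit set. If standard pings work but this fails,
# something in the path is not passing jumbo frames.
vmkping -d -s 8972 192.168.50.20

# Verify the VMkernel interface itself is set to MTU 9000.
esxcli network ip interface list

# On each Nexus switch in the path, check the interface MTU.
show interface ethernet 1/1 | include MTU
```

Keep in mind that on platforms like the Nexus 5000 and the UCS fabric interconnects, jumbo MTU is typically configured per QoS system class rather than per interface, which is exactly why the CoS 2 marking matters: the class carrying NFS needs an MTU of 9216 at every hop.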
I saw this Engineers Unplugged video today and was reminded of a viewpoint I’ve been slowly developing over the last two years or so:
Essentially, the discussion is about convergence technologies like FCoE, where we rid ourselves of a completely separate network and converge FC storage traffic onto our standard Ethernet network. How does this technology shift impact the way the technology is administered? Do the teams have to converge as well?
Man, did I pick a tumultuous time to start a career in technology. There are so many great debates going on right now, with vendors working around the clock churning out new products for the general populace to chew on and talk about. I'm becoming more and more involved with the community nowadays, and on top of that, I'm a big nerd to start with. So it's easy for me to suffer from information overload, and I'd be lying if I said it didn't happen just about every week.
The role and features of the data center access layer have changed dramatically in a short time. Prior to virtualization, the DC access layer was still relatively simple. Now that the majority of workloads are virtualized, we're seeing some pretty crazy shifts. Simple network functions like routing and security, as well as more advanced functions like load balancing, are moving into software. This follows the general best practice of applying policy as close to the edge of the network as possible.
I recently had a need to deploy quite a few ESXi hosts on top of Cisco UCS B-Series blades (60+), back-ended by NetApp storage. I needed some method of doing this quickly so that I didn't have to spend days just installing ESXi.
Here were some of the design guidelines:
Needed an ESXi 5.5 installation with the Cisco enic and fnic drivers installed, as well as the Cisco Nexus 1000v VEM module; one way to bake these in is sketched below
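As a rough illustration (not necessarily the exact approach from the full post), here is a minimal kickstart sketch that scripts the install and pulls in the extra VIBs during first boot; the depot URLs, bundle filenames, and password are hypothetical:

```
# ks.cfg -- minimal scripted ESXi 5.5 install (sketch)
vmaccepteula
install --firstdisk --overwritevmfs
rootpw ExamplePass123!                 # example only
network --bootproto=dhcp --device=vmnic0
reboot

%firstboot --interpreter=busybox
# Pull the Cisco enic/fnic drivers and the 1000v VEM offline
# bundles from a local web server (hypothetical depot URLs).
esxcli software vib install -d http://172.16.1.10/depot/cisco-enic-offline-bundle.zip
esxcli software vib install -d http://172.16.1.10/depot/cisco-fnic-offline-bundle.zip
esxcli software vib install -d http://172.16.1.10/depot/cisco-vem-offline-bundle.zip
reboot
```

Combined with UCS service profile templates and PXE booting, something along these lines takes the per-host effort down to nearly zero.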
I was honored to be part of a round table discussion held at the Cisco ACI launch with a lot of smart folks. I recommend a watch; we got into some really cool topics, and it helped create the framework for some future blog posts of mine.
For more on Tech Field Day, head over to TechFieldDay.com
I attended the Cisco ACI launch event as a Tech Field Day delegate.
So, the industry is sufficiently abuzz about the Cisco ACI launch last week, and the stats on the introductory series I wrote tell me that, like it or not, this is having a pretty big impact.
The focus on the application is clearly the right approach. All of this talk about SDN and network virtualization is taking place because the current network model's complexity results in bad kludges and long provisioning times, and the application folks are always waiting on the network to respond.
I’m pleased to kick off my 3-part blog series regarding the VERY recently announced data center networking products by Insieme, now (or very soon) part of Cisco.
Nexus 9000 Overview
From a hardware perspective, the Nexus 9000 series seems to be a very competitively priced 40GbE switch. As (I think) everyone expected, the basic operation of the switch is to serve up an L3 fabric, using VXLAN as a foundation for overlay networks.
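To make that concrete, here is a rough sketch of what mapping a VLAN into a VXLAN overlay looks like in NX-OS terms; the VLAN, VNI, and multicast group are made up, and I'm not claiming this is the exact Nexus 9000 syntax at launch:

```
feature nv overlay
feature vn-segment-vlan-based

! Map a classic VLAN to a VXLAN network identifier (VNI)
vlan 100
  vn-segment 10100

! The NVE interface performs the VXLAN encapsulation; a multicast
! group handles broadcast/unknown-unicast/multicast replication
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100 mcast-group 239.1.1.100
```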
Introduction to Application-Centric Infrastructure
In the last post, we discussed the hardware being announced from Cisco's Insieme spin-in. While the hardware that makes up the new Nexus 9000 series is certainly interesting, it wouldn't mean nearly as much without some kind of integration at the application level.
Traditionally, Cisco networking has been relatively inaccessible to developers or even infrastructure folks looking to automate provisioning or configuration tasks. It looks like the release of ACI and the Nexus 9000 switch line is aiming to change that.
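As a taste of what "accessible" could mean here, this is a purely hypothetical sketch of driving a controller through a REST API from the shell; the hostname, credentials, and object names are assumptions on my part, not documented API:

```
# Authenticate to the controller; save the session cookie for reuse
curl -sk -X POST https://apic.example.com/api/aaaLogin.json \
  -c cookies.txt \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"example-password"}}}'

# Query the fabric for tenant objects using the saved session
curl -sk -b cookies.txt https://apic.example.com/api/class/fvTenant.json
```

Whether or not the details end up looking exactly like this, the point is that the policy model is meant to be consumed programmatically rather than configured box by box.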
Plexxi was a vendor that presented at Networking Field Day 6, and one that really got me excited about what's possible when you consider the metadata your data center contains, and what products like Plexxi's can do with that data once it's abstracted and normalized the right way.
I will be intentionally brief with respect to my thoughts on the hardware; others like Ivan (and more) have already done a better job with this than I ever will.