I’m going to talk a little bit about performing QoS functions from within the Nexus 1000v. Since it’s been a while since I made the last post in this series, a recap is in order:
In my first post, I explained what the different types of QoS policies were in the context of Cisco’s MQC.
In my second post, I went through the actual configuration on specific platforms like the Cisco Nexus and Unified Compute platforms, as well as a brief mention of vSphere’s participation, but less on the QoS aspects and more on MTU.
Don’t ask me what kind of daydreaming it took to arrive at this, but… I just realized that the following was possible:
Which was ALREADY terrifying enough - but then my mind went here:
Have a great weekend.
There’s been a ton of attention lately around the concept of using commodity hardware in an area of the industry that is currently dominated by proprietary ASIC-based solutions: networking. When it comes to the crossing of paths between open source and networking, the obvious low-hanging fruit has been software-based switching solutions like Open vSwitch, or cool ways to make virtual switching do bigger, better stuff for cloud providers, like OpenStack Quantum (awesome, by the way).
My post a few weeks ago about the CSR 1000v made a pretty big splash - it’s clear that the industry is giving a lot of attention to IP routing within a virtual environment. No doubt, Vyatta is largely credited for this, as they’ve been pushing this idea for a long time. When Brocade announced that they were acquiring Vyatta, and Cisco announced they were working on a “Cloud Services Router”, this idea became all the more legitimate, and as you can tell from this series, it’s of particular interest to me.
Just a little unicorn-y brainstorming today.
This post is in some ways an extension of my earlier post on the importance of QoS in a converged environment like FCoE or VoIP (or both). This will be a good example of a case where SDN very efficiently solves a policy problem that is present in an unfortunately large number of networks today.
For many organizations, large or small, the network is approached with a very siloed, “good enough” mentality - meaning that each portion of an organization’s technology implementation is typically allocated to those who have that particular skill set.
In preparation for an upcoming post, I was reminded of a commonly referenced feature in most IGPs - the concept of Equal-Cost Multipath, or simply ECMP. This is the idea that multiple routes with the same cost to a remote network should be placed in the routing table alongside each other, and packets should load-balance over them to use the additional available paths. After all, it’s the same cost, so why not?
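To make the load-balancing half of that idea concrete: most real implementations don’t spray individual packets round-robin across the equal-cost paths (that would reorder TCP flows). Instead they hash the flow’s 5-tuple and use the result to pick one next hop consistently. Here’s a minimal Python sketch of that per-flow hashing idea - the function and tuple layout are my own illustration, not any particular vendor’s implementation:

```python
import hashlib

def pick_next_hop(flow, next_hops):
    """Pick one of several equal-cost next hops for a flow.

    flow: a 5-tuple (src_ip, dst_ip, protocol, src_port, dst_port).
    next_hops: list of equal-cost next hops from the routing table.
    Hashing the whole 5-tuple means every packet of a given flow
    lands on the same path, avoiding out-of-order delivery.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

flow = ("10.0.0.1", "192.168.123.10", 6, 49152, 80)
paths = ["via 10.1.1.1", "via 10.1.2.1"]
chosen = pick_next_hop(flow, paths)
```

Because the hash is deterministic, the same flow always maps to the same path, while different flows spread statistically across all of them - which is exactly why ECMP helps aggregate throughput but never speeds up a single flow.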
For anyone keeping tabs on the storage industry these days, you might have noticed the news today regarding FusionIO’s acquisition of Nexgen - one of a myriad of storage startups that have cropped up in the past few years to address the ever-changing needs of the data center industry. After looking through some of the articles that were published today, I think we all understand the financial details behind the transaction.
This post was initiated by a side conversation I had about iSCSI. From time to time I’m required to implement an iSCSI-based solution, and this is not the first time I’ve heard the question: “So why can’t I route iSCSI traffic?” Most folks with any knowledge of storage protocols will have at some point picked up this informal best practice idea; some will vehemently defend this idea as the word of $deity and shun all those who contradict this truth.
While on my current kick with virtual routing, I stumbled across an interesting concept regarding OSPF, and the flexibility that vendors have in determining the best path through an OSPF network.
The following topology is what I’ve been staring at for the last few days:
Pretty simple, right? There’s a single network (192.168.123.0/24) down inside each virtual host where the VMs are to sit. Each host has a router on it (one a Cisco CSR 1000v, the other Vyatta Core 6).
I was working on a topology for another post regarding interoperability between the recently released Cisco Cloud Services Router (CSR 1000v) and Vyatta when I ran into an issue regarding vSphere network security policies and First Hop Redundancy Protocols (FHRPs) such as VRRP.
This post will serve as a precursor to that overall post, but I want to point out a key configuration piece when building redundant gateways with an FHRP like VRRP.