A friend was looking for some input on low latency queuing yesterday. I thought the exchange we had could be useful for others, so I decided to write a quick post.
The question was where the rule about limiting the priority queue to 33% came from. The follow-up question was how to handle dual priority queues.
This is one of those rules that is treated as a best practice and rarely gets challenged. The rule is based on Cisco internal testing within technical marketing. Their testing showed that data applications suffered when the LLQ was assigned too large a portion of the available bandwidth. The background to this rule is a converged network running voice, video and data. It is possible to break this rule if you are delivering a pure voice or pure video transport where the other traffic in place is not business critical. Other applications are likely to suffer if the LLQ gets too big, and if everything is priority then essentially nothing is priority. I have seen implementations using around 50-55% LLQ for VoIP circuits, which is a reasonable amount.
How should dual LLQs be deployed? The rule still applies. Continue reading
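To make the budgeting concrete, here is a minimal Python sketch (not from the original post; the link speed, class names and percentages are illustrative assumptions) that checks a dual-LLQ allocation against the 33% guideline before the numbers go into a QoS policy.

```python
# Hypothetical sketch: budgeting dual priority queues (voice + video)
# against the ~33% LLQ guideline on a converged link. The class names,
# link speed and percentages are illustrative, not a recommendation.

LINK_KBPS = 100_000          # example 100 Mbps circuit
LLQ_CAP_PCT = 33             # combined cap for all priority classes

priority_classes = {"VOICE": 10, "VIDEO": 23}      # percent of link per LLQ class
data_classes = {"CRITICAL-DATA": 25, "BULK": 10}   # CBWFQ bandwidth guarantees

llq_total = sum(priority_classes.values())
if llq_total > LLQ_CAP_PCT:
    raise ValueError(
        f"Combined LLQ is {llq_total}% of the link; "
        f"the guideline caps all priority queues at {LLQ_CAP_PCT}%."
    )

for name, pct in {**priority_classes, **data_classes}.items():
    kind = "priority" if name in priority_classes else "bandwidth"
    print(f"class {name:<14} {kind:<9} {pct:>2}%  ({LINK_KBPS * pct // 100} kbps)")
```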
The content comes mostly from Cisco Live & Cisco Validated Designs, but of course it's just an extract of that information.
Feel free to share your ideas; I'm always looking to improve this if I find other material to add here.
Just wanted to point you to two excellent blog posts recently published by Russ White.
Reaction: DevOps and Dumpster Fires
If teaching coders isn’t going to solve the problem, then what do we do? We need to go to where the money is. Applications aren’t bought by coders, just like networks aren’t. Read more ...
Since release 2.0, Multipod for ACI enables provisioning a more fault-tolerant fabric composed of multiple pods with isolated control-plane protocols. Multipod also provides more flexibility with regard to the full-mesh cabling between leaf and spine switches. When leaf switches are spread across different floors or different buildings, Multipod enables provisioning multiple pods per floor or building and providing connectivity between pods through the spine switches.
A new White Paper on ACI Multipod is now available
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737855.html?cachemode=refresh
Last week I was in Dublin on business, which happened to overlap with iNOG9 last Wednesday. As luck would have it, I had the opportunity to speak at iNOG9 about network automation.
You can watch the video if you want to see the presentation, but the three mini demos I gave were:
A few words about each.
Usually when the topic of network automation comes up, configuration management is assumed. It should not be, as there are so many other forms and types of automation. Here I showed how we can verify that cabling (via neighbors) is accurate on a Junos vMX topology. Of course, the hard part here is having the discipline to define the desired cabling topology first. Note: links to the sample playbooks can be found in the GitHub repo below.
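The demo itself used Ansible playbooks (linked in the GitHub repo below). As a rough standalone illustration of the same idea, the sketch below uses Junos PyEZ to pull LLDP neighbors and diff them against a declared topology; the hostnames, credentials and desired-link table are placeholders.

```python
# Rough illustration of the "verify cabling" idea: declare the desired
# topology first, then compare it with what LLDP actually reports.
# Hostnames, credentials and the desired-link table are placeholders.
from jnpr.junos import Device
from jnpr.junos.op.lldp import LLDPNeighborTable

# Desired topology declared up front: (local device, local port) -> remote device
DESIRED = {
    ("vmx1", "ge-0/0/0"): "vmx2",
    ("vmx1", "ge-0/0/1"): "vmx3",
}

def actual_neighbors(host, user="lab", password="lab123"):
    """Return {(host, local_port): remote_system} learned via LLDP."""
    with Device(host=host, user=user, password=password) as dev:
        lldp = LLDPNeighborTable(dev).get()
        return {(host, n.local_int): n.remote_sysname for n in lldp}

def verify(desired):
    seen = {}
    for host in {h for h, _ in desired}:
        seen.update(actual_neighbors(host))
    for link, want in desired.items():
        got = seen.get(link)
        status = "OK" if got == want else f"MISMATCH (got {got})"
        print(f"{link[0]} {link[1]} -> expected {want}: {status}")

if __name__ == "__main__":
    verify(DESIRED)
```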
I built a custom Ansible module around NETCONF (ncclient) that uses the OpenConfig YANG model for global BGP configuration. For example, this is the Continue reading
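Not the module from the talk, but as a rough sketch of the underlying idea, the snippet below pushes OpenConfig global BGP configuration over NETCONF with ncclient; the host, credentials, ASN and router-id are illustrative placeholders, and the device is assumed to support the OpenConfig BGP model and a candidate datastore.

```python
# Sketch only: push OpenConfig global BGP settings over NETCONF with ncclient.
# Host, credentials, ASN and router-id are placeholders; the target device is
# assumed to support openconfig-bgp and a candidate datastore.
from ncclient import manager

OC_BGP_GLOBAL = """
<config>
  <bgp xmlns="http://openconfig.net/yang/bgp">
    <global>
      <config>
        <as>{asn}</as>
        <router-id>{router_id}</router-id>
      </config>
    </global>
  </bgp>
</config>
"""

def set_bgp_global(host, asn, router_id, user="lab", password="lab123"):
    """Render the OpenConfig payload and push it to the candidate datastore."""
    payload = OC_BGP_GLOBAL.format(asn=asn, router_id=router_id)
    with manager.connect(host=host, port=830, username=user, password=password,
                         hostkey_verify=False) as m:
        m.edit_config(target="candidate", config=payload)
        m.commit()

if __name__ == "__main__":
    set_bgp_global("vmx1", asn=65001, router_id="10.0.0.1")
```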
Rami Rahim presents some big thinking at Juniper's Nxtwork.