We shouldn’t think of NFV as merely porting a cloud to the telcos. True NFV won’t happen until we connect all the entities involved in delivering a modern, virtualized service to the customer.
Networking vendor refocuses on security with its Software-Defined Secure Networks framework.
Managing traffic with DNS visibility.
Software-defined WAN is hot. Here are some key considerations before making a purchase.
CARRIER ETHERNET DEFINITION Carrier Ethernet is an attempt to expand Ethernet beyond the borders of the Local Area Network (LAN) into the Wide Area Network (WAN). With Carrier Ethernet, customer sites are connected through the Wide Area Network. In the past, carriers connected customers with ATM (Asynchronous Transfer Mode) and Frame Relay interfaces (User to Network Interface/UNI). […]
A friend was looking for some input on low latency queuing yesterday. I thought the exchange we had could be useful for others so I decided to write a quick post.
The query was where the rule about the priority queue being limited to 33% came from. The follow-up question was how you handle dual priority queues.
This is one of those rules that is known as a best practice and doesn’t really get challenged. The rule is based on Cisco internal testing within technical marketing, which showed that data applications suffered when the LLQ was assigned too large a portion of the available bandwidth. The background to this rule is a converged network running voice, video and data. It is possible to break this rule if you are delivering a pure voice or pure video transport where the other traffic in place is not business critical. Other applications are likely to suffer if the LLQ gets too big, and if everything is priority then essentially nothing is priority. I have seen implementations using around 50-55% LLQ for VoIP circuits, which is a reasonable amount.
How should dual LLQs be deployed? The rule still applies. Continue reading
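To make the guideline concrete, here is a minimal MQC sketch of a dual-LLQ policy; the class names, DSCP matches and percentages are illustrative assumptions, not taken from the post. The point is that even with two priority classes, their combined allocation stays near the 33% guideline.

! Illustrative dual-LLQ sketch: class names, DSCP values and percentages are assumptions.
class-map match-any VOICE
 match dscp ef
class-map match-any VIDEO
 match dscp af41
!
! Voice and video each get their own priority class, policed separately,
! but together they stay near the 33% guideline (20% + 13%).
policy-map WAN-EDGE-OUT
 class VOICE
  priority percent 20
 class VIDEO
  priority percent 13
 class class-default
  fair-queue
!
interface GigabitEthernet0/0
 service-policy output WAN-EDGE-OUT

Policing each priority class separately means a video burst is dropped by its own policer rather than starving voice, which is the usual motivation for running a dual LLQ in the first place.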
The content is sourced mostly from Cisco Live and Cisco Validated Designs, but of course it's just an extract of that information.
Feel free to share your ideas; I'm always looking to improve this if I find other material to add here.
Just wanted to point you to two excellent blog posts recently published by Russ White.
Reaction: DevOps and Dumpster Fires
If teaching coders isn’t going to solve the problem, then what do we do? We need to go to where the money is. Applications aren’t bought by coders, just like networks aren’t. Read more ...
Since release 2.0, ACI Multipod enables provisioning a more fault-tolerant fabric composed of multiple pods with isolated control-plane protocols. Multipod also provides more flexibility with regard to the full-mesh cabling between leaf and spine switches: when leaf switches are spread across different floors or buildings, Multipod allows provisioning one or more pods per floor or building, with connectivity between pods provided through the spine switches.
A new White Paper on ACI Multipod is now available
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737855.html?cachemode=refresh
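As a rough illustration of the inter-pod network (IPN) side of a Multipod design, here is a hypothetical NX-OS-style snippet for one IPN interface facing a Pod 1 spine; the interface, addresses, process names and relay target are assumptions, and the white paper above remains the authoritative reference. The IPN is expected to provide OSPF reachability between pods, PIM Bidir for BUM traffic, DHCP relay towards the APICs for remote-pod spine discovery, and jumbo MTU for the VXLAN-encapsulated traffic.

feature ospf
feature pim
feature dhcp

ip dhcp relay
router ospf IPN

! PIM Bidir RP covering the fabric multicast (GIPo) range; 225.0.0.0/15 is the ACI default.
ip pim rp-address 192.168.100.1 group-list 225.0.0.0/15 bidir

! Parent interface carries jumbo frames for VXLAN-encapsulated inter-pod traffic.
interface Ethernet1/1
 mtu 9216

! Spines expect a dot1q 4 sub-interface towards the IPN.
interface Ethernet1/1.4
 encapsulation dot1q 4
 ip address 192.168.10.1/30
 ip ospf network point-to-point
 ip router ospf IPN area 0.0.0.0
 ip pim sparse-mode
 ! Relay DHCP from remote-pod spines to an APIC in Pod 1 for auto-discovery.
 ip dhcp relay address 10.0.0.1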