This blog post was initially sent to subscribers of my SDN and Network Automation mailing list. Subscribe here.
I made a statement along these lines in an SD-WAN blog post and related email sent to our SDN and Network Automation mailing list:
The architecture of most SD-WAN products is thus much cleaner and easier to configure than traditional hybrid networks. However, do keep in mind that most of them use proprietary protocols, resulting in a perfect lock-in.
While reading that, one of my readers sent me a nice email with an interesting question:
Read more ...

Understanding lifecycle management complexity of datacenter topologies, Zhang et al., NSDI’19
There has been plenty of interesting research on network topologies for datacenters, with Clos-like tree topologies and expander-based graph topologies both shown to scale using widely deployed hardware. This research tends to focus on performance properties such as throughput and latency, together with resilience to failures. Important as these are, note that they’re also what’s right in front of you as a designer, and relatively easy to measure. The great thing about today’s paper is that the authors look beneath the surface to consider the less visible but still very important “lifecycle management” implications of topology design. In networking, this translates into how easy it is to physically deploy the network, and how easy it is to subsequently expand it. They find a way to quantify the associated lifecycle management costs, and then use this to help drive the design of a new class of topologies, called FatClique.
… we show that existing topology classes have low lifecycle management complexity by some measures, but not by others. Motivated by this, we design a new class of topologies, FatClique, that, while being performance-equivalent to existing topologies, is comparable to, or Continue reading
Time to have some fun in the lab with Inter-AS Option AB. Let’s get our geek on! Inter-AS Option AB – where the data traffic uses the VRF interfaces (or sub-interfaces) and the control plane (BGP VPNv4) uses the... Read More ›
The post Inter-AS Option AB: Fun in the Lab appeared first on Networking with FISH.
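For readers who want to visualize the split before diving into Fish’s lab write-up, here is a minimal, hypothetical IOS-style sketch of one ASBR in an Option AB setup: per-VRF sub-interfaces carry the customer (data plane) traffic, while a single eBGP VPNv4 session between the ASBRs carries the control plane. All names, addresses, and AS numbers are invented, and the Option AB-specific keyword (shown here as inter-as-hybrid) is an assumption that varies by platform and software release, so treat this as a sketch rather than a tested configuration.

```
! Hypothetical ASBR-side sketch of Inter-AS Option AB (all values invented)
vrf definition CUST-A
 rd 65000:10
 address-family ipv4
  route-target export 65000:10
  route-target import 65000:10
 exit-address-family
!
! Data plane: one sub-interface per VRF towards the neighbouring AS
interface GigabitEthernet0/0/1.10
 encapsulation dot1Q 10
 vrf forwarding CUST-A
 ip address 192.0.2.1 255.255.255.252
!
router bgp 65000
 ! Control plane: a single eBGP VPNv4 session between the two ASBRs
 neighbor 203.0.113.2 remote-as 65001
 address-family vpnv4
  neighbor 203.0.113.2 activate
  ! Assumed Option AB keyword; verify exact syntax on your platform
  neighbor 203.0.113.2 inter-as-hybrid
 exit-address-family
```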
The Internet Engineering Task Force (IETF) is the premier Internet standards body, developing open standards through open, consensus-based processes to make the Internet work better. It gathers a large, international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. Core Internet technologies such as DNS, routing, and traffic encryption use protocols standardized at the IETF.
The IETF holds three meetings a year, which are livestreamed and can be followed individually or together with others sharing similar interests at a common venue. The next IETF meeting will be held from 25 to 29 March 2019 in Prague. The usual audience for an IETF meeting is network engineers, system engineers, developers, and university students or lecturers in information technology fields.
The Internet Society Africa Regional Bureau is running an initiative, IETF Remote Hubs, to encourage remote participation in IETF meetings and promote the work of the IETF. Remote Hubs raise awareness of the IETF and allow those who cannot travel to a meeting to take part remotely. The meetings are streamed in English only.
Join one of the following IETF Remote Hubs in your area, raise your awareness about the IETF and engage in the various topics of Continue reading
“When you deal with Kubernetes in the core of a network, dealing with quarterly updates is one...
Will the new QUIC protocol cause the Internet to collapse? Today's Heavy Networking episode tackles this question with guest Christian Huitema. QUIC is an emerging transport protocol that promises advances over TCP and the ability to innovate quickly, but could--possibly--set off an arms race as developers try to game congestion algorithms to their own benefit.
The post Heavy Networking 436: Will QUIC Collapse The Internet? appeared first on Packet Pushers.
Large organizations are married to the VMware suite of products. We can quibble about numbers for adoption of Hyper-V and KVM, but VMware dominates the enterprise virtualization market, just as Kubernetes is the unquestioned champion of containers.
Virtual Machines (VMs) are a mature technology, created and refined before large-scale adoption of public cloud services. Cloud-native workloads are often designed for containers, and containerized workloads are designed to fail: you can tear one down on one cloud and reinstantiate it on another. Near-instant reinstantiation is the defense against downtime.
VMs take a different approach. A VM is meant to keep running for long periods of time, surviving migrations and outages. Failure is to be avoided as much as possible. This presents a problem as more organizations pursue a multi-cloud IT strategy.
The key technology for highly available VMs is vMotion: the ability to move a VM from one node in a cluster to another with no downtime. However, as data centers themselves become increasingly virtualized, using cloud computing services such as Microsoft Azure, Google Compute Engine, and Amazon EC2, there’s a growing requirement to be able to move VMs between cloud infrastructures. This is not a supported feature of vMotion.
Routed Continue reading
With this blog, I try to inspire and mentor. One person I have a lot of respect for is Joe Onisick. I had the pleasure of interviewing Joe. Joe has really transformed himself and everything about him lately, and I thought it would be nice to give you readers some more insight into his journey. Here is Joe’s story:
Q: Hi Joe, welcome to the blog! Please give the readers a short introduction of yourself.
A: I’m a technology executive who’s been in the field for 23 years, with the exception of a five-year break to serve as a US Marine. I started in network/email administration and have spent most of my career in the data center space, working on all aspects of delivering data center resources, up to IaaS and private cloud.
Q: Many people probably know you best from your time at Cisco, working for the Insieme BU, responsible for coming up with ACI. What was your time at Cisco like? How were you as a person at that time?
A: I joined a startup called Insieme Networks that was in the early stages of developing what became Cisco ACI and Nexus 9000. When the product was ready to launch, Continue reading
On the heels of its $6.9 billion Mellanox purchase, the chip company used the GPU Technology...
VMware’s latest cloud push includes more VMware Cloud on AWS regions, expanded multi-cloud...
Enterprises are primarily using AI in the cloud to increase revenue and free up employees’ time...
Terry Slattery and Rob Widmar join Donald and me to talk about the history of one of the most ubiquitous elements of network engineering, the Cisco CLI.
Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/
TDC CEO Allison Kirby told local media that the operator “is not blind” to the widely held...