I published a blog post describing how complex the underlay supporting VMware NSX still has to be (because someone keeps pretending a network is just a thick yellow cable), and the tweet announcing it admittedly looked like clickbait.
[Blog] Do We Need Complex Data Center Switches for VMware NSX Underlay
Martin Casado quickly replied NO (probably before reading the whole article), starting a whole barrage of overlay-focused neteng-versus-devs fun.
What’s the next logical automation step after you cleaned up device configurations and started using configuration templates? It obviously depends on your pain points; for Anne Baretta it was a network inventory database stored in SQL tables (and thus readily accessible from his other projects).
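Here's a minimal sketch of why that approach pays off (hypothetical table layout, using SQLite purely for illustration): once the inventory sits in SQL tables, any other tool or script can query the same source of truth.

```python
import sqlite3

# Hypothetical schema: the point is only that the inventory lives in SQL
# tables that any other project can query.
db = sqlite3.connect("inventory.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS devices (
        hostname TEXT PRIMARY KEY,
        mgmt_ip  TEXT NOT NULL,
        platform TEXT,
        site     TEXT
    )""")
db.execute("INSERT OR REPLACE INTO devices VALUES (?, ?, ?, ?)",
           ("core-sw-01", "10.0.0.1", "ios-xe", "HQ"))
db.commit()

# Another script reusing the same source of truth
for hostname, mgmt_ip in db.execute(
        "SELECT hostname, mgmt_ip FROM devices WHERE site = ?", ("HQ",)):
    print(hostname, mgmt_ip)
```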
Dinesh Dutt, a pragmatic IP routing guru, the mastermind behind great concepts like simplified BGP configuration, and one of the best ipSpace.net authors, finally decided to start blogging. His first article: describing the impact of having 256 100GE ports in a single ASIC (Tomahawk 4). Hope you’ll enjoy his musings as much as I did ;)
After my response to the “BGP is a hot mess” topic, Corey Quinn graciously invited me to discuss BGP issues on his podcast. It took us a long while to set it up, but we eventually got there… and the results were published last week. Hope you’ll enjoy our chat.
Every now and then I find an IT professional claiming we shouldn’t worry about split-brain scenarios because we have redundant links.
I might understand that sentiment coming from software developers, but I’ve also encountered it when discussing stretched clusters or even SDN controllers deployed across multiple data centers.
I finally found a great analogy you might find useful. A reader of my blog pointed me to the awesome Why Must Systems Be Operated blog post explaining the same problem from the storage perspective, so next time you might want to use this one: “so you’re saying you don’t need backups because you have RAID disks”. If someone agrees with that, don’t walk away… RUN!
Got this question from one of ipSpace.net subscribers:
Do we really need those intelligent datacenter switches for underlay now that we have NSX in our datacenter? Now that we have taken a lot of the intelligence out of our underlying network, what must the underlying network really provide?
Reading the marketing white papers the answer would be IP connectivity… but keep in mind that building your infrastructure based on information from vendor white papers usually gives you the results your gullibility deserves.
January 2020 was one of the busiest months we’ve ever had:
You can get immediate access to all these webinars with Standard or Expert ipSpace.net subscription.
During a recent workshop I made a comment along the lines “be careful with feature X from vendor Y because it took vendor Z two years to fix all the bugs in a very similar feature”, and someone immediately asked “are you saying it doesn’t work?”
My answer: “I never said that, I just drew inferences from other people’s struggles.”
Networking operating systems are probably some of the most complex pieces of software out there. Distributed systems are hard. Real-time distributed systems are even harder. Real-time distributed systems running on top of eventually-consistent distributed databases are extra fun.
After introducing the fallacies of distributed computing in the How Networks Really Work webinar, I focused on the first one: the network is (not) reliable.
While that might be understood by most networking professionals (and ignored by many developers), here’s an interesting shocker: even TCP is not always reliable (see also: Joel Spolsky’s take on Leaky Abstractions).
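Here’s a minimal sketch of that leaky abstraction (plain Python sockets on localhost, port picked at runtime): a send() that returns successfully proves only that the kernel buffered the data, not that the application on the other end ever saw it.

```python
import socket

# Local "server" that accepts a connection and immediately goes away
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # any free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()
conn.close()                           # the peer is gone...

sent = client.send(b"important transaction")
print(f"send() happily reported {sent} bytes")   # ...yet this still succeeds
client.close()
server.close()
```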
A journey of a thousand miles begins with a single step, they say… but what should that first step be if you want to start a network automation journey (and have no idea how to do it)?
Anne Baretta sent me a detailed description of his journey, which (as is often the case) started with the standardized configuration templates.
Aldrin wrote a well-thought-out comment to my EVPN Dilemma blog post explaining why he thinks it makes sense to use Juniper’s IBGP (EVPN) over EBGP (underlay) design. The only problem I have is that I forcefully disagree with many of his assumptions.
He started with an in-depth explanation of why EBGP over directly-connected interfaces makes little sense:
If you’re an ipSpace.net subscriber, you might have noticed how busy the last month has been (more about that later). February won’t be much better:
Finally, I’ll run a day-long workshop in Zurich on March 10th describing containers and Docker.
Unless you’re working for a cloud-only startup, you’ll always have to connect applications running in a public cloud with existing systems or databases running in a more traditional environment, or connect your users to public cloud workloads.
Public cloud providers love stable and robust solutions, and they took the same approach when implementing their legacy connectivity solutions: you could use routed Ethernet connections or IPsec VPN, and run BGP across them, turning the problem into a well-understood routing problem.
In January 2020 Doug Heckaman documented his experience with VeloCloud SD-WAN. He tried to be positive, but for whatever reason this particular bit caught my interest:
Edge Gateways have a limited number of tunnels they can support […]
WTF? Wasn’t x86-based software packet forwarding supposed to bring infinite resources and nirvana? How badly written must your solution be to have a limited number of IPsec tunnels on a decent x86 CPU?
Enterprise environments usually implement “mission-critical” applications by pushing high-availability requirements down the stack until they hit networking… and then blame the networking team when the whole house of cards collapses.
Most public cloud providers are not willing to play the same stupid blame-shifting game - they live or die by their reputation, and maintaining a stable service is their highest priority. They will do their best to implement a robust and resilient infrastructure, but will not do anything that could impact its stability or scalability… including the snake oil the virtualization and networking vendors love to sell to their gullible customers. When you deploy your application workloads into a public cloud, you become responsible for the resiliency of your own application, and there’s no magic button that could allow you to push the problems down the stack.
Some network devices return structured data in either text or XML format (but cannot spell JSON). Ansible prefers getting JSON-formatted data and has a number of filters to process text printouts… but what could you do if you want to work with XML documents within Ansible? I described a few solutions in Transforming XML Data in Ansible.
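To give you a taste of the end result, here’s a minimal sketch (using the third-party xmltodict library and a made-up XML printout) of turning XML into the JSON-friendly data structures Ansible likes to work with; within a playbook you’d reach for one of the XML-handling filters described in the blog post instead.

```python
import json
import xmltodict   # third-party library: pip install xmltodict

xml_reply = """
<rpc-reply>
  <interface>
    <name>GigabitEthernet0/1</name>
    <oper-status>up</oper-status>
  </interface>
</rpc-reply>
"""

data = xmltodict.parse(xml_reply)       # nested dict mirroring the XML tree
print(json.dumps(data, indent=2))       # JSON-formatted data Ansible is happy with
print(data["rpc-reply"]["interface"]["oper-status"])   # -> "up"
```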