Author Archives: Ivan Pepelnjak
A while ago Cisco added dynamic FCoE support to Nexus 5000 switches. It sounded interesting and I wanted to talk about it in my Data Center Fabrics update session, but I couldn’t find any documentation at that time.
In the meantime, the Configuring Dynamic FCoE Using FabricPath configuration guide appeared on Cisco’s web site and J Metz wrote a lengthy blog post explaining how it all works, triggering a severe attack of déjà vu.
When we were discussing my autumn travel plans, my lovely wife asked me “What are you going to talk about in Bern?” She has a technical background, but I didn’t feel like going into the intricacies of SDN, SDDC and NetOps, so I told her the essence of my keynote speech:
After the initial onslaught of SDN washing, four distinct approaches to SDN have started to emerge, from centralized control plane architectures to smart reuse of existing protocols.
As always, each approach has its benefits and drawbacks, and there’s no universally best solution. You just got four more (somewhat immature) tools in your toolbox. And now for the details.
I spent last Tuesday in Bern attending the SIGS DC Day Event, and came back home extremely pleasantly surprised. The conference was nice and cozy, giving everyone plenty of opportunities to chat about data center technical challenges (thanks for all the wonderful conversations we had – you know who you are!).
Having the opportunity to meet fellow networking engineers and compare notes is great, but it’s even better to combine that with new knowledge, and that’s where the event really excelled.
Seamus Gilchrist sent me a fantastic list of MPLS- and MPLS-TE-related questions. Instead of starting an email exchange we agreed on something that should benefit a wider community: a lengthy whiteboard session discussing the basics of MPLS, MPLS-TE, load balancing and QoS in MPLS networks…
The first part of our conversation is already online: The Essence of MPLS.
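If you want the zero-slide summary: a label switch router never looks at the IP header. It takes the top label, looks it up in its label forwarding table, swaps (or pops) it, and sends the packet on its way. Here’s a minimal Python sketch of that idea; the label values and interface names are made up, and in a real LSR the LFIB would be built by LDP or RSVP-TE:

```python
# Minimal sketch of label switching: forwarding is an exact-match lookup
# on the top label, not a longest-match lookup on the IP destination.
LFIB = {
    100: ("swap", 200, "Gi0/1"),   # swap label 100 -> 200, send out Gi0/1
    101: ("pop",  None, "Gi0/2"),  # penultimate-hop pop: forward the IP packet
}

def forward(label_stack, payload):
    action, out_label, out_if = LFIB[label_stack[0]]
    if action == "swap":
        return out_if, [out_label] + label_stack[1:], payload
    return out_if, label_stack[1:], payload   # pop the top label

print(forward([100], "IP packet"))   # ('Gi0/1', [200], 'IP packet')
```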
A while ago Rick Parker told me about his amazing project: he started a meetup group that will build a reference private/hybrid cloud heavily relying on virtualized network services, and publish all documentation related to their effort, from high-level architecture to device and software configurations, and wiring plans.
In Episode 8 of Software Gone Wild Rick told us more about his project, and we simply couldn’t avoid a long list of topics including:
A few days ago Garrett Wollman published his exasperating experience running IPv6 on large L2 subnets with Juniper EX4200 switches, concluding that “… much in IPv6 design and implementation has been botched by protocol designers and vendors …” (some of us would forcefully agree), making IPv6 “… simply unsafe to run on a production network …”
The resulting debate on Hacker News is quite interesting (and Andrew Yourtchenko is trying hard to keep it close to facts) and definitely worth reading… but is ND/MLD really as broken as some people claim it is?
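A bit of background for those who haven’t been bitten yet: every IPv6 host joins a solicited-node multicast group for each of its addresses (ff02::1:ff00:0/104 plus the low-order 24 bits of the address, per RFC 4291), and neighbor solicitations are sent to those groups. A switch doing MLD snooping on a large L2 subnet therefore has to track a multicast group per host address, which is one of the reasons things get ugly at scale. A quick Python illustration of the mapping (the sample address is hypothetical):

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Solicited-node multicast group (RFC 4291): ff02::1:ff00:0/104
    plus the low-order 24 bits of the unicast address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

print(solicited_node("2001:db8::1:800:200e:8c6c"))   # ff02::1:ff0e:8c6c
```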
VMware announced several vMotion enhancements in vSphere 6, ranging from “finally” to “interesting”.
vMotion across virtual switches. Finally. The tricks you had to use previously were absolutely bizarre.
Some OpenFlow-focused startups are desperately trying to tell you how redundant their architecture is. Unfortunately all the whitepapers (and the prancing unicorns) cannot change a simple fact: an SDN controller (OpenFlow-based or otherwise) is in some aspects a single failure domain.
One of my readers sent me an interesting challenge: they’re deploying a new DMVPN WAN, and as they cannot expect all locations to have native (non-NAT) IPv4 access, they plan to build the new DMVPN over IPv6. He was wondering whether it would work.
Apart from “you’re definitely going in the right direction” all I could tell him was “looking at the documentation I couldn’t see why it wouldn’t work.” Has anyone deployed DMVPN over IPv6 in a production network? Any hiccups? Please share your experience in the comments. Thank you!
TL;DR: I'll be in Bern on September 9th. If you'd like to drop by and discuss network design or automation challenges, read on…
The latest release of Cisco Nexus 1000V for vSphere can handle twice as many vSphere hosts as the previous one (250 instead of 128). Cisco probably did a lot of code polishing to improve Nexus 1000V scalability, but I’m positive most of the improvement comes from interesting architectural changes.
The pilot episode of Software Gone Wild podcast featuring Snabb Switch created plenty of additional queries (and thousands of downloads) – it was obviously time for another deep dive episode discussing the intricate innards of this interesting virtual switch.
During the deep dive Luke Gorrie, the mastermind behind the Snabb Switch, answered a long list of questions, including:
If you’re a regular reader of my blog, you know that I spent a lot of time during the last three years debunking SDN myths, explaining the limitations of OpenFlow and pointing out other technologies one could use to program the network.
During the summer of 2014 I organized my SDN- and OpenFlow-related blog posts into a digital book. I want to make this information as useful and as widely distributed as possible – for a limited time you can download the PDF free of charge.
A while ago I wrote about the idea of treating network infrastructure (and all other infrastructure) as code, and using the same processes application developers are using to write, test and deploy code to design and implement networks.
That approach clearly works well if you can virtualize (and clone ad infinitum) everything. We can virtualize appliances or even routers, but installed equipment and high-speed physical infrastructure remain somewhat resistant to that idea. We need a different paradigm, and the best analogy I could come up with is a database.
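To make the database analogy concrete: you don’t rebuild a production database from scratch whenever the schema changes; you write a migration that transforms the existing state into the desired one. Applied to installed network gear, that means computing the difference between desired and actual device state and applying only the delta. A minimal Python sketch, with hypothetical device names and settings:

```python
# Compute a "migration": the minimal set of changes taking the actual
# device state to the desired state (instead of redeploying from scratch).
desired = {"core-sw1": {"mtu": 9214, "stp": "mst"},
           "core-sw2": {"mtu": 9214, "stp": "mst"}}

actual  = {"core-sw1": {"mtu": 1500, "stp": "mst"},
           "core-sw2": {"mtu": 9214, "stp": "rapid-pvst"}}

def migration(desired, actual):
    changes = []
    for device, settings in desired.items():
        current = actual.get(device, {})
        for key, value in settings.items():
            if current.get(key) != value:
                changes.append((device, key, current.get(key), value))
    return changes

for device, key, old, new in migration(desired, actual):
    print(f"{device}: change {key} from {old} to {new}")
```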
A reader sent me this question:
My company will have 10GE dark fiber across our DCs with possibly OTV as the DCI. The VM team has also expressed interest in DC-to-DC vMotion (<4ms). Based on your blogs it looks like overall you don't recommend long-distance vMotion across DCI. Will the "Data Center trilogy" package be the right fit to help me better understand why?
Unfortunately, long-distance vMotion seems to be a persistent craze that peaks with a predictable period of approximately 12 months, and while it seems nothing can inoculate your peers against it, having technical arguments might help.
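If you need a quick number for the next meeting: the sub-4-ms latency requirement alone caps the distance between the data centers. Using the usual rule of thumb of roughly 200 km of fiber per millisecond (light in fiber propagates at about two thirds of c):

```python
# Back-of-envelope distance budget for a 4 ms round-trip latency limit,
# assuming ~200 km per millisecond of one-way fiber propagation.
rtt_budget_ms = 4.0
km_per_ms = 200.0

max_fiber_km = (rtt_budget_ms / 2) * km_per_ms
print(f"Max one-way fiber path: {max_fiber_km:.0f} km")   # 400 km
```

And that’s with zero queuing, serialization and equipment latency, over a fiber path that runs as the crow flies; real-life limits are considerably lower.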
My good friend Tiziano complained about the fact that BGP considers a next hop reachable if there’s an entry in the IP routing table, even though the router cannot even ping the next hop.
That behavior is one of the fundamental aspects of IP networks: networks built with IP routing protocols rely on fate sharing between control and data planes instead of path liveliness checks.
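Reduced to a few lines of Python, the behavior Tiziano ran into looks like this (the prefix and next hop are made up):

```python
import ipaddress

# BGP declares a next hop usable as soon as the routing table can resolve
# it -- no ping, no BFD, no data-plane check.
routing_table = {ipaddress.ip_network("10.0.0.0/24"): "Gi0/1"}

def next_hop_valid(next_hop: str) -> bool:
    nh = ipaddress.ip_address(next_hop)
    # the mere existence of a covering route is enough
    return any(nh in prefix for prefix in routing_table)

print(next_hop_valid("10.0.0.1"))   # True -- even if 10.0.0.1 is dead
```

Detecting a dead next hop requires an explicit liveness mechanism (BFD, next-hop tracking), not a routing table lookup.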
I first met Elisa Jasinska when she had one of the coolest job titles I ever saw: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
In our short chat she described some of the tools she’s working on, including an adaptation of pmacct to environments with numerous BGP exit points (more details in her NANOG presentation).
Building a private cloud infrastructure tends to be a cumbersome process: even if you do it right, you often have to deal with four to six different components: orchestration system, hypervisors, servers, storage arrays, networking infrastructure, and network services appliances.
A few days ago I had an interesting interview with Christoph Jaggi discussing the challenges, changes in mindsets and processes, and other “minor details” one must undertake to gain something from the SDDC concepts. The German version of the interview is published on Inside-IT.ch; you’ll find the English version below.