Author Archives: Ivan Pepelnjak
A while ago I described why some storage vendors require end-to-end layer-2 connectivity for iSCSI replication.
TL&DR version: they were too lazy to implement iSCSI checksums and rely on Ethernet checksums instead, because TCP/IP checksums are not good enough.
It turns out even Ethernet checksums fail every now and then.
2015-12-06: I misunderstood the main technical argument in Evan’s post. The real problem is that switches recalculate the CRC, so the Ethernet CRC is no longer an end-to-end protection mechanism.
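Here’s a quick way to see the difference yourself – a minimal Python sketch (the packet contents are made up) showing that the 16-bit TCP/IP checksum can’t even detect two 16-bit words swapped in transit, while a CRC catches it immediately:

import struct
import zlib

def internet_checksum(data: bytes) -> int:
    # RFC 1071: one's-complement sum of 16-bit words
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                       # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

original = b"\xde\xad\xbe\xef\xca\xfe"
swapped  = b"\xbe\xef\xde\xad\xca\xfe"       # first two words reordered

assert internet_checksum(original) == internet_checksum(swapped)  # undetected
assert zlib.crc32(original) != zlib.crc32(swapped)                # detected

Addition is commutative, so any reordering of 16-bit words leaves the checksum unchanged – one of the reasons the storage vendors wanted to lean on the Ethernet CRC in the first place.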
I love stumbling upon new networking-focused blogs. Many of my old friends switched to the dark side (vendors) and stopped blogging, others simply gave up, and it seems there aren’t that many engineers who would like to start this experiment.
One of the obvious first questions is always “what should I write about” and my reply is always “it doesn’t really matter – make sure it’s useful.”
I love listening to the Datanauts podcast (Ethan and Chris are fantastic hosts), starting from the very first episode (hyper-converged infrastructure) in which Chris made a very valid comment along the lines of “with the hyper-converged infrastructure it’s possible to get so many things done without knowing too much about any individual thing…” and I immediately thought “… and what happens when it fails?”
Do you want to know more about Cumulus Linux after learning what data center architectures it supports, what base technologies it uses, and how you can use it to simplify network configurations? It’s time to explore the Cumulus Linux architecture (part 5 of Dinesh Dutt’s presentation in the Data Center Fabrics webinar).
One of the engineers listening to my DMVPN webinars sent me a follow-up question (yes, I always try to reply to them) asking how to implement direct Internet access from the spoke sites (aka local exit) in combination with the split default routing you have to use in DMVPN Phase 2 or Phase 3 networks.
It’s really simple: either you have a design requirement that requires split default routing, or you don’t.
One of my readers read the Ars Technica article on ads communicating with other devices via ultrasound and wondered whether something similar could be done for IP.
Not surprisingly, someone already did it. A quick Google search found this tutorial explaining how to run an IP stack over GNU Radio (at speeds last experienced with dial-up modems 30 years ago).
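If you’re wondering what “IP over sound” means in practice, here’s a toy Python sketch of the modulation half of it: binary FSK turning bytes into audio samples, the same basic trick dial-up modems used. All parameters (frequencies, bit rate, sample rate) are arbitrary picks, not the tutorial’s values:

import math

SAMPLE_RATE = 48_000              # audio samples per second
BIT_RATE = 300                    # bits per second -- modem-era speed
F_MARK, F_SPACE = 1_200, 2_200    # tone frequencies (Hz) for 1 and 0

def fsk_modulate(payload: bytes) -> list:
    samples, phase = [], 0.0
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    for byte in payload:
        for bit in range(8):      # LSB-first, an arbitrary choice
            freq = F_MARK if (byte >> bit) & 1 else F_SPACE
            for _ in range(samples_per_bit):   # phase-continuous tone
                phase += 2 * math.pi * freq / SAMPLE_RATE
                samples.append(math.sin(phase))
    return samples

audio = fsk_modulate(b"ping")     # feed this to a sound card or WAV writer

Strap an IP stack on top of a modem like this and you get exactly the dial-up-era throughput mentioned above.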
I was asked to present a data-center-related talk last week and decided to focus on one of my favorite topics: because most people don’t have more than a few hundred servers (a rack or two) in their data center, they don’t need more than two switches.
Not surprisingly, an equipment reseller sitting in the room was not amused.
The video and the slide deck are already online, but there’s a minor challenge: the whole event was in Slovenian ;) However, I plan to record the same topic in English once my SDN travels stop.
Imagine you’d design your network by documenting the desired traffic flows across the network under all failure conditions, and only then do a low-level design, create configurations, and deploy the network… while being able to use the desired traffic flows as a testing tool to verify that the network still behaves as expected, both in a test lab and in the live network.
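The idea is easier to grasp with a concrete toy example. Here’s a minimal Python sketch (using networkx, with a made-up leaf-and-spine topology and flow list) that declares the flows that must survive and then fails every link in turn to verify them:

import networkx as nx

topology = nx.Graph([
    ("leaf1", "spine1"), ("leaf1", "spine2"),
    ("leaf2", "spine1"), ("leaf2", "spine2"),
])

# flows that must survive any single link failure
desired_flows = [("leaf1", "leaf2")]

for link in list(topology.edges):
    test = topology.copy()
    test.remove_edge(*link)                    # simulate the failure
    for src, dst in desired_flows:
        assert nx.has_path(test, src, dst), f"{src}->{dst} dies with {link} down"

Run the same assertions against the operational state of the live network and you have a regression test for your design.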
You might remember all the fuss about the various encapsulations used in overlay virtual networking… just because one wouldn’t be good enough (according to Andrew Lerner, “we provide users with choice” actually means “we can’t decide which product to offer you”).
I was really excited when Juniper announced Junos Fusion. I hoped for QFabric Done Right, but after watching the NFD10 video describing the architecture, I was disappointed: they reinvented Fabric Extenders.
The blog post was slightly updated on November 14th 2015 based on feedback received from Juniper engineers.
Andrew Lerner, my favorite Gartner analyst, recently published a hilarious blog post describing what vendors mean when they say “our product is software-defined” or “we’ll make it work”. Enjoy!
Need more vendorspeak? Try eight levels of vendor acceptance (carefully documented during a particularly stressful on-site test in Poland).
A newbie exploring the mythical lands of SDN might decide to start at the ONF definition of SDN, which currently (November 2015) starts with a battle cry:
The physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices.
The rest of that same page is what I’d call the marketing definition of SDN: directly programmable, agile, centrally managed, programmatically configured, open standards based and vendor-neutral.
One of the typical questions I get in my SDN workshops is “how do you run control-plane protocols like LACP or OSPF in OpenFlow networks?”
I wrote a blog post describing the process two years ago and we discussed the details of this challenge in the OpenFlow Deep Dive webinar. That part of the webinar is now public: you’ll find the OpenFlow Use Cases: Control-Plane Protocols video on the ipSpace.net Free Content web site.
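The gist of the approach, as a hedged Python sketch built on the open-source Ryu controller framework (the LACP state machine itself is left out): LACP frames carry the Slow Protocols ethertype, get punted to the controller as packet-in messages, and the controller answers with packet-out.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ethernet

class LacpPunt(app_manager.RyuApp):
    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        if eth is None or eth.ethertype != 0x8809:   # Slow Protocols (LACP)
            return
        # ... run the LACP state machine here, then send the reply
        # back through the ingress port with an OFPPacketOut message
        self.logger.info("LACP frame punted from switch %s", msg.datapath.id)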
In an amazing turn of events, at least one IETF working group recognized we have serious problems with IPv6 multihoming. According to the email Fred Baker sent to a number of relevant IETF working groups:
PI multihoming demonstrably works, but PA multihoming when the upstreams implement BCP 38 filtering requires the deployment of some form of egress routing - source/destination routing in which the traffic using a stated PA source prefix and directed to a remote destination is routed to the provider that allocated the prefix. The IETF currently has no such recommendation, or consensus that it should have.
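To make the egress routing Fred describes tangible: on a Linux box it boils down to policy routing on the source prefix. Here’s a sketch using documentation prefixes and hypothetical next hops and interfaces:

import subprocess

# source prefix -> (routing table, next hop, uplink interface)
pa_prefixes = {
    "2001:db8:a::/48": ("100", "fe80::1", "eth0"),   # allocated by ISP A
    "2001:db8:b::/48": ("200", "fe80::2", "eth1"),   # allocated by ISP B
}

for prefix, (table, gateway, ifname) in pa_prefixes.items():
    # traffic sourced from the PA prefix is looked up in its own table...
    subprocess.run(["ip", "-6", "rule", "add", "from", prefix,
                    "lookup", table], check=True)
    # ...which defaults to the ISP that allocated the prefix
    subprocess.run(["ip", "-6", "route", "add", "default", "via", gateway,
                    "dev", ifname, "table", table], check=True)

That way packets always exit through the provider whose BCP 38 ingress filter will accept their source address.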
Here are a few really old blog posts just in case you don’t know what I’m talking about (and make sure you read the comments as well):
Last year I claimed that you don’t need more than two switches in your data center (I’ll run a presentation on the same topic in a few days), but focused exclusively on the networking side of the equation.
Iwan Rahabok recently published a great blog post describing the compute and storage parts of it. His conclusion: 1000 VMs per rack is perfectly realistic.
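The arithmetic behind that claim is simple enough (my numbers, not Iwan’s – adjust for your own hardware and consolidation ratio):

servers_per_rack = 40      # 1RU dual-socket servers
vms_per_server = 25        # a conservative consolidation ratio
print(servers_per_rack * vms_per_server)   # -> 1000 VMs per rack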
One of the reasons I started creating ipSpace.net webinars was to help networking engineers grasp the basics of adjacent technologies like virtualization and storage. Based on feedback from an attendee of my Introduction to Virtual Networking webinar, it works:
I am completely on the Network side of the house and understand what I need to build for Storage/Data replication, but I really never thoroughly understood why. This allowed me to have a coherent discussion with my counterparts in DB and Storage and some of the pitfalls that can occur if we try to cowboy the network design.
Recommendation: if you have a similar problem, start with Introduction to Virtual Networking and continue with the Data Center 3.0 webinar.
I got this question from one of my readers (and based on these comments he’s not the only one facing this challenge):
I was wondering if you can do a blog post on Cisco's new ASA 5585-X clustering. My company recently purchased a few of these with the intent to run their cross data center active/active firewalls but found out we cannot do this without OTV or a layer 2 DCI.
A while ago I expressed my opinion about these ideas, but it seems some people still don’t get it. However, a picture is worth a thousand words, so maybe this will work:
Content providers have been using centralized traffic flow optimization together with MPLS-TE for at least 15 years (some of them starting immediately after Cisco launched the early MPLS-TE implementation in the 12.0(5)T release), but it was always hard to push the results into the network devices.
PCEP and BGP-LS changed all that – they give you a standard mechanism to extract the network topology and install end-to-end paths across the network, as Julian Lucek of Juniper Networks explained in Episode 43 of Software Gone Wild.
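Conceptually the PCE side is a graph problem: take the topology learned via BGP-LS, run a constrained SPF over it, and hand back an explicit path. A toy Python illustration (made-up nodes and TE metrics; the PCEP signaling is left out):

import networkx as nx

topo = nx.Graph()
topo.add_edge("PE1", "P1", te_metric=10)
topo.add_edge("PE1", "P2", te_metric=10)
topo.add_edge("P1", "PE2", te_metric=30)
topo.add_edge("P2", "PE2", te_metric=10)

path = nx.shortest_path(topo, "PE1", "PE2", weight="te_metric")
print(path)    # ['PE1', 'P2', 'PE2'] -- the explicit path to signal via PCEP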
Time for another fill-in-the-blanks survey: how many vendors support NETCONF and/or a REST API in their data center switches, routers, firewalls and load balancers?
Please help me complete the tables by writing a comment – and do keep in mind that it only counts if it’s documented in a public configuration guide on the vendor’s web site.
Also, I’m not aware of any vendor using standard NETMOD YANG models. If someone does, please let me know.
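If you want to test a vendor’s NETCONF claim yourself, a minimal probe with the ncclient Python library (hypothetical device address and credentials) pulls the advertised capabilities – including YANG models, if there are any – and the running configuration:

from ncclient import manager

with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    for capability in m.server_capabilities:   # YANG models show up here
        print(capability)
    print(m.get_config(source="running"))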
I had fun participating in a discussion focused on whether it makes sense to deploy OTV+LISP in a new data center deployment. Someone quickly pointed out the elephant in the room:
How many LISP VM mobility installs has anyone on this list been involved with or heard of being successfully deployed? How many VM mobility installs in general, where the VMs go at least 1,000 miles? I'm curious as to what the success rate for that stuff is.
I think we got one semi-qualifying response, so I made it even simpler ;)