Author Archives: Ivan Pepelnjak
One of the readers of the Considerations for Host-Based Firewalls blog post wrote this interesting comment:
Perhaps a paradigm shift is due for firewalls in general? I’m thinking quickly here, but wondering if we perhaps just had a protocol by which a host could request upstream firewall(s) to open inbound access on their behalf dynamically; the hosts themselves would then automatically inform the security device what ports they need/want opened upstream.
Well, we have at least two protocols that could fit the bill: Universal Plug and Play and Port Control Protocol (RFC 6887).
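For illustration, here’s a minimal sketch of a PCP MAP request (field layout per RFC 6887) asking a hypothetical PCP server on the first-hop router to forward TCP port 8080 to the requesting host. The addresses are placeholders, and a real client would also listen for the response and refresh the mapping before the lifetime expires.

```python
import os
import socket
import struct
import ipaddress

PCP_PORT = 5351      # PCP servers listen on UDP port 5351 (RFC 6887)
PCP_VERSION = 2
OPCODE_MAP = 1       # MAP opcode; R bit = 0 means "request"
PROTO_TCP = 6

def build_map_request(client_ip: str, internal_port: int,
                      suggested_external_port: int = 0,
                      lifetime: int = 3600) -> bytes:
    """Build a PCP MAP request (RFC 6887 sections 7.1 and 11.1)."""
    addr = ipaddress.ip_address(client_ip)
    if addr.version == 4:                      # IPv4 goes in as ::ffff:a.b.c.d
        addr = ipaddress.ip_address(f"::ffff:{client_ip}")

    header = struct.pack("!BBHI16s",
                         PCP_VERSION,
                         OPCODE_MAP,           # R=0 (request) + opcode
                         0,                    # reserved
                         lifetime,             # requested lifetime in seconds
                         addr.packed)          # PCP client IP address

    map_opcode = struct.pack("!12sB3sHH16s",
                             os.urandom(12),   # mapping nonce
                             PROTO_TCP,
                             b"\x00" * 3,      # reserved
                             internal_port,
                             suggested_external_port,
                             bytes(16))        # suggested external IP: no preference
    return header + map_opcode

if __name__ == "__main__":
    # Hypothetical addresses -- replace with your host and first-hop router.
    request = build_map_request("192.0.2.10", internal_port=8080)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(request, ("192.0.2.1", PCP_PORT))
```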
In early September I explained the difference between exposed and published Docker container ports.
Now let’s peek behind the curtain and explore how Docker uses iptables to publish container ports, and why it runs a userland proxy process for every published port.
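As a rough illustration of what that per-port userland proxy does, here’s a minimal sketch of a TCP relay in Python: it accepts connections on a published host port and copies bytes to and from a container address (both addresses below are hypothetical). The real docker-proxy also handles UDP and a few corner cases that the iptables DNAT rules can’t cover, such as connections from localhost.

```python
import socket
import threading

# Hypothetical addresses: the published host port and the container's IP:port.
LISTEN_ADDR = ("0.0.0.0", 8080)
CONTAINER_ADDR = ("172.17.0.2", 80)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until either side closes the connection."""
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    """Relay one client connection to the container, one thread per direction."""
    upstream = socket.create_connection(CONTAINER_ADDR)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main() -> None:
    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            client, _ = server.accept()
            handle(client)

if __name__ == "__main__":
    main()
```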
For even more details, explore the Docker Networking Deep Dive webinar.
A friend of mine involved in multiple Cisco ACI installations sent me this comment on their tenant connectivity model:
I’m a bit allergic to ACI. The abstraction is misaligned with familiar configurations, in particular contracts being independent of and overriding routing, tenants, etc. You can really make a mess with that, and I’ve seen some! One needs to impose some structure, naming conventions… and most people don’t seem to get that done.
As I noted in the NSX-or-ACI webinar, it’s interesting how NSX decided to stay with the familiar VLAN/routing/filtering paradigm (more details), whereas the designers of Cisco ACI decided to go down a totally different path.
Junhui Liu added this comment to my Where Do We Need Smart NICs? blog post:
CPUs are not designed for packet forwarding. One example is retaining packet order: it is impossible for a multicore CPU to retain the order in which packets were received once they’ve been processed in parallel by multiple cores. Another example is scheduling. Yes, a CPU can do scheduling, but at a very high cost in CPU cycles.
Years ago I was naive enough to participate in writing an IETF document. Three years later we finally managed to turn it into an RFC, and I decided that was enough for one lifetime.
But wait, it gets worse… as Geoff Huston argues in his article, the lengthy review process doesn’t necessarily result in better (or more precise) documents.
It seems we’ve managed to turn the IETF into yet another standards body like IEEE, ISO, or ITU-T.
Earlier this year, Pete Lumbis returned as an ipSpace.net webinar guest speaker with a great presentation describing data center switching ASICs from the perspective of networking engineers. After a brief intro, he started with ASIC Basics… a topic which generated a 25-minute Q&A session.
Several engineers formerly working for a large virtualization vendor were pretty upset with me when I claimed that the virtualization consultants promote “disaster recovery using stretched VLANs” designs instead of alternatives that would implement proper separation of failure domains.
Guess what… it’s even worse than I thought.
Here’s a sequence of comments I received after reposting one of my “disaster recovery doesn’t need stretched VLANs” blog posts on LinkedIn sometime in late 2019:
We did a number of Software Gone Wild podcasts trying to figure out whether smart NICs address a real need or whether they’re just another vendor attempt to explore all potential markets. As expected, we got opposing views, ranging from Luke Gorrie claiming a NIC should be as simple as possible to Silvano Gai explaining how dedicated hardware performs the same operations at lower cost, with lower power consumption, and at much higher speeds.
In theory, there’s no doubt that Silvano is right. Just look at how expensive some router line cards are, and try to figure out how much it would cost to get in software the 25.6 Tbps of forwarding performance we’ll get from a single ASIC (Tomahawk 4), assuming ~10 Gbps per CPU core. High-speed core packet forwarding has to be done in dedicated hardware.
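A quick back-of-the-envelope calculation (a sketch using the ~10 Gbps-per-core figure from the paragraph above; real per-core throughput varies wildly with packet size and forwarding features) makes the gap obvious:

```python
# Back-of-the-envelope: CPU cores needed to match a single 25.6 Tbps ASIC,
# assuming ~10 Gbps of software forwarding per core (a rough figure).
asic_throughput_gbps = 25_600   # Tomahawk 4: 25.6 Tbps
per_core_gbps = 10

cores_needed = asic_throughput_gbps / per_core_gbps
print(f"~{cores_needed:.0f} CPU cores")   # prints: ~2560 CPU cores
```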
When planning to move your workloads to a public cloud you might want to consider the minor detail of public IPv4 connectivity (I know of at least one public cloud venture that couldn’t get their business off the ground because they couldn’t get enough public IPv4 addresses).
Here’s a question along these lines that one of the attendees of our public cloud networking course sent me:
You might remember my occasional rants about lack of engineering in networking. A long while ago David Barroso nicely summarized the situation in a tweet responding to my BGP and Car Safety blog post:
If we were in a proper engineering discipline we’d be discussing how to regulate and add safeties to an important tech that is unsafe and hard to operate. Instead, we blog about how to do crazy shit to it or how it’s a hot mess. Let’s be honest, if BGP was a car it’d be one pulled by horses.
Justin Pietsch published another must-read article, this time dealing with operational complexity of load balancers and IP multicast. Here are just a few choice quotes to get you started:
You might find what he learned useful the next time you’re facing a unicorn-colored slide deck from your favorite software-defined or intent-based vendor ;))
In December 2019 I finally turned my “focus on business challenges first” presentation into a short webinar session (part of the Business Aspects of Networking Technologies webinar), starting with defining the problem before searching for a solution, including three simple questions:
This is a guest blog post by Matthias Luft, Principal Platform Security Engineer @ Salesforce, and a regular ipSpace.net guest speaker.
Having spent my career in various IT security roles, I’ve always bounced thoughts on the overlap between networking and security (and, more recently, cloud/containers) around with Ivan. One of the hot challenges on that boundary that regularly comes up in network/security discussions is the topic of this blog post: microsegmentation and host-based firewalls (HBFs).
Nadeem Lughmani created an excellent solution for the securing your cloud deployment hands-on exercise in our public cloud online course. His Terraform-based solution includes:
Last week I published an overview of how complex (networking-wise) Docker Swarm services can get. This time let’s focus on something that should have been way simpler: running container-based services on a single Linux host.
In the first part of this article I’m focusing on the basics, including exposed and published ports. The behind-the-scenes details are coming in a week or so; in the meantime you can enjoy most of them in the Docker Networking Deep Dive webinar.
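To illustrate the difference, here’s a minimal sketch using the Docker SDK for Python (the nginx image and the port numbers are arbitrary examples):

```python
import docker

client = docker.from_env()

# Published port: host port 8080 is forwarded to port 80 in the container,
# so the service is reachable from outside the host.
published = client.containers.run("nginx", detach=True,
                                   ports={"80/tcp": 8080})

# Exposed-only: ports listed with EXPOSE in the Dockerfile are just metadata;
# nothing on the host forwards traffic to them unless you publish them
# (for example with publish_all_ports=True).
exposed_only = client.containers.run("nginx", detach=True)

published.reload()
print(published.attrs["NetworkSettings"]["Ports"])
```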
In June 2020 I published the first part of the Redundant Server Connectivity in Layer-3-Only Fabrics article, describing the target design and application-layer requirements.
During the summer I added the details of multi-subnet server and client connectivity and a few conclusions.
The last Fallacy of Distributed Computing I addressed in the introductory part of the How Networks Really Work webinar was The Network Is Homogeneous. No, it’s not, and it never was… for more details, watch this video.
One of the most annoying parts of every training content development project was the ubiquitous question that pops up somewhere near the end of the process: “and now we’d need a few review questions”. I’m positive anyone ever involved in a similar project can feel the pain that question causes…
Writing good review questions requires a particularly devious state of mind, sometimes combined with “I would really like to get the answer to this one” (obviously you’d mark such questions as “needs further research”, and if you’re Donald Knuth the question would be “prove that P != NP”).
Remember the claim that networking is becoming obsolete and that everyone else will simply bypass the networking teams (source)?
Good news for you – there are many fast growing overlay solutions that are adopted by apps and security teams and bypass the networking teams altogether.
That sounds awesome in a VC pitch deck. Let’s see how well that concept works out in reality using Docker Swarm as an example (Kubernetes is probably even worse).
Greg Ferro is back with some great technical content, this time explaining why hardware-based packet capture might return unexpected results.