In the previous blog posts in this series, we explored whether we need addresses on point-to-point links (TL&DR: no), whether it’s better to have interface or node addresses (TL&DR: it depends), and why we got unnumbered IPv4 interfaces. Now let’s see how IP routing works over unnumbered interfaces.
A cursory look at an IP routing table (or at CCNA-level materials) tells you that it contains prefixes and next hops, and that the next hops are IP addresses. How should that work over unnumbered interfaces, and what should we use for the next-hop IP address in that case?
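A minimal Python sketch of that idea (illustrative prefixes and names, not taken from the post): a route’s next hop can be either a neighbor IP address or just an outgoing interface, which is all a router needs on an unnumbered point-to-point link (compare ip route add 192.0.2.0/24 dev eth1 on Linux).

```python
# A sketch (illustrative prefixes and names, not from the post) of a routing
# table in which a next hop is either a neighbor IP address or just an
# outgoing interface, as on an unnumbered point-to-point link.
from dataclasses import dataclass
from ipaddress import IPv4Address, IPv4Network, ip_address, ip_network
from typing import Optional


@dataclass
class Route:
    prefix: IPv4Network
    next_hop: Optional[IPv4Address] = None  # neighbor address, if there is one
    interface: Optional[str] = None         # outgoing interface is enough on p2p


routing_table = [
    # Classic entry: next-hop IP address, resolved to a MAC address via ARP
    Route(ip_network("10.1.0.0/16"), next_hop=ip_address("192.168.1.2")),
    # Unnumbered point-to-point link: no next-hop address, just an interface
    Route(ip_network("10.2.0.0/16"), interface="Serial0"),
]


def lookup(dst: IPv4Address) -> Optional[Route]:
    """Longest-prefix match over the sketch routing table."""
    matches = [r for r in routing_table if dst in r.prefix]
    return max(matches, key=lambda r: r.prefix.prefixlen, default=None)


route = lookup(ip_address("10.2.1.1"))
print(route.next_hop or f"forward directly out of {route.interface}")
```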
When Cloudflare started, sophisticated online security was beyond the reach of all but the largest organizations. If your pockets were deep enough, you could buy the necessary services — and the support that was required to operate them — to keep your online operations secure, fast, and reliable. For everyone else? You were out of luck.
We wanted to change that: to help build a better Internet. To build a set of services that weren’t just technically sophisticated, but easy to use. Accessible. Affordable. Part of this meant that we were always looking to build and equip our customers with all the tools they needed in order to do this for themselves.
Of course, a lot has changed since we started. The Internet has only increased in importance, fast becoming the most important channel for many businesses. Cybersecurity threats have only become more prevalent — and more sophisticated. And the products that Cloudflare offers to keep you safe on the Internet have attracted some of the largest and most recognizable organizations in the world.
Ask some of these larger organizations about cybersecurity, and they’ll tell you a few things: first, they love our products. But, second, that when something happens Continue reading
On today's Full Stack Journey podcast, host Scott Lowe shares some personal changes in his life, including leaving VMware for a startup called Kong, selling a house and moving, and buying and using an M1-based MacBook Pro. He shares his reflections on career changes, his decision-making process, and more.
The post Full Stack Journey 054: Changes Big And Small appeared first on Packet Pushers.
There are many resources for network automation with Ansible, but most of them cover only the first steps or limit themselves to a narrow scope, and they offer little guidance on how to grow beyond that. Real network environments can be large, versatile, heterogeneous, and filled with exceptions. Unlike Puppet and SaltStack, Ansible has few published real-world deployment examples, which leads many teams to build brittle and incomplete automation solutions.
We have released under an open-source license our attempt to tackle this problem:
Here is a quick demo to configure a new peering:
This work is the collective effort of Continue reading
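For a flavour of what such automation typically renders, here is a small Python/Jinja2 sketch that expands a BGP peering stanza from a handful of variables; the template and the peer parameters are hypothetical and are not taken from the released playbooks.

```python
# Illustrative only: render a BGP peering stanza from a few variables with
# Jinja2 (the template engine Ansible uses). The peer data and the template
# are hypothetical, not taken from the released playbooks.
from jinja2 import Template

PEERING_TEMPLATE = Template("""\
router bgp {{ local_as }}
 neighbor {{ peer.ip }} remote-as {{ peer.asn }}
 neighbor {{ peer.ip }} description {{ peer.description }}
""")

new_peer = {
    "ip": "192.0.2.10",
    "asn": 64512,
    "description": "IXP peering with ExampleNet",
}

print(PEERING_TEMPLATE.render(local_as=64500, peer=new_peer))
```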
Do you know someone who has made an outstanding and sustained contribution in service to the Internet community? Nominate them.
The post Nominations Open! Jonathan B. Postel Service Award 2021 appeared first on Internet Society.
Every now and then I stumble upon an article or a comment explaining how Network Function Virtualization (NFV) introduces new data center fabric buffering requirements. Here’s a recent example:
For Telco/carrier Cloud environments, where NFVs (which are much slower than hardware SGW) get used a lot, latency is higher with a lot of jitter due to the nature of software and the varying link speeds, so DC-level near-zero buffer is not applicable.
It seems to me we’re dealing with another myth. Starting with the basics:
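For rough context, a back-of-the-envelope sketch (illustrative numbers, not an excerpt from the post): the classic rule of thumb ties buffer size to the bandwidth-delay product, so round-trip time and link speed dominate the result, and intra-data-center RTTs are orders of magnitude smaller than WAN RTTs.

```python
# Back-of-the-envelope sketch: the classic rule of thumb sizes a buffer at
# roughly one bandwidth-delay product (RTT x link speed). The numbers below
# are illustrative, not taken from the post.

def bdp_bytes(link_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product in bytes."""
    return link_bps * rtt_seconds / 8

LINK = 100e9  # 100 Gbps fabric port

for label, rtt in [("intra-DC RTT ~50 us", 50e-6), ("WAN RTT ~10 ms", 10e-3)]:
    print(f"{label}: BDP ~ {bdp_bytes(LINK, rtt) / 1e6:.2f} MB")
```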
In one of his recent posts, Ivan raises a question: “I can’t grasp why Cumulus releases a Vagrant box, but not a Docker container”. Coincidentally, only a few weeks before that I had managed to create a Cumulus Linux container image. Since then, I’ve done a lot of testing and discovered the limitations of the pure containerised approach and how to overcome them while still retaining the container user experience. This post documents my journey from the early days of running Cumulus on Docker to the integration with containerlab and, finally, running Cumulus in microVMs backed by AWS’s Firecracker and Weaveworks’ Ignite.
One of the main reasons for running containerised infrastructure is the famous Docker UX. Containers had existed for a long time, but they only became mainstream when Docker released its container engine. The simplicity of the typical Docker workflow (build, ship, run) made containers accessible to a large number of not-so-technical users and was the key to Docker’s popularity.
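As a sketch of that run step, here is how a network OS image could be launched with the Python Docker SDK; the image name is hypothetical, and privileged mode is assumed because a network OS needs to manipulate interfaces.

```python
# Minimal sketch of the docker "run" step using the Python Docker SDK
# (docker-py). The image name is hypothetical; a network OS container
# typically needs privileged mode to manage interfaces.
import docker

client = docker.from_env()

container = client.containers.run(
    "example/cumulus-vx:latest",   # hypothetical image name
    detach=True,
    privileged=True,
    name="leaf01",
)

print(container.name, container.status)
```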
Virtualised infrastructure, including network operating systems, has mainly been distributed in a VM form factor, retaining much of the look and feel of real hardware for the software processes running on top. However, it Continue reading