Author Archives: Ivan Pepelnjak

Mythbusting: NFV Data Center Fabric Buffering Requirements

Every now and then I stumble upon an article or a comment explaining how Network Function Virtualization (NFV) introduces new data center fabric buffering requirements. Here’s a recent example:

For Telco/carrier Cloud environments, where NFVs (which are much slower than hardware SGW) get used a lot, latency is higher with a lot of jitter due to the nature of software and the varying link speeds, so DC-level near-zero buffer is not applicable.

It seems to me we’re dealing with another myth. Starting with the basics:
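
To put very rough numbers on the buffering-versus-jitter claim, here's a back-of-envelope Python sketch (my illustration, not part of the original post; the link speed and jitter figures are made-up assumptions): the queue on a fabric port can grow by at most link rate × delay variation while absorbing a jitter-induced burst.

    # Back-of-envelope: buffer needed to absorb jitter on a fabric link.
    # All numbers are assumptions for illustration, not measurements.
    link_rate_bps = 100e9  # assumed 100GE fabric link
    jitter_sec = 50e-6     # assumed 50 microseconds of delay variation

    # Worst-case queue buildup while absorbing the jitter-induced burst:
    buffer_bytes = link_rate_bps * jitter_sec / 8
    print(f"~{buffer_bytes / 1024:.0f} KiB of buffer per port")  # ~610 KiB

Even with jitter measured in tens of microseconds, that works out to well under a megabyte per 100GE port, which doesn't obviously call for deep-buffer fabric switches.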

Feedback: Kubernetes Networking Deep Dive

Here’s what one of the engineers watching Stuart Charlton’s Kubernetes webinar wrote about it:

“Kubernetes Networking Deep Dive” is a must-see webinar. Once done, take a break and then watch it again, let it sink in, and then sign up for a free account with Azure or GCP and practice all that was learned during the webinar.

At the end of this exercise … one will begin to understand why the networking domain seems to be lagging behind … This webinar will help one pick up the pace!

Video: We Still Need Networking in Public Clouds

Whenever someone starts mansplaining that we need no networking when we move the workloads into a public cloud, please walk away – he has just proved how clueless he is.

He might be a tiny bit correct when talking about software-as-a-service (after all, it’s just someone else’s web site), but when it comes to complex infrastructure virtual networks, there’s plenty of networking involved, from packet filters and subnets to NAT, load balancers, firewalls, BGP and IPsec.

For more details, watch the We Still Need Networking in Public Clouds video (part of Introduction to Cloud Computing webinar).

You need a free ipSpace.net subscription to watch the video.

Automation: Dealing with Vendor-Specific Configuration Keywords

One of the students in our Building Network Automation Solutions online course asked an interesting question:

I’m building an IPsec multi-vendor automation solution and am now facing the challenge of vendor-specific parameter names. For example, to select the AES-128 algorithm, Juniper uses aes-128-cbc, Arista aes128, and Checkpoint AES-128.

I guess I need a kind of Rosetta stone to convert the IKE/IPsec parameters from a standard parameter to a vendor-specific one. Should I do that directly in the Jinja2 template, or in the Ansible playbook calling the template?

Both options are awkward. It would be best to have a lookup table mapping parameter values from the data model into vendor-specific keywords, for example:
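
Here's one way such a table could look, as a minimal Python sketch (not the example from the original post; the platform keys and lookup function are hypothetical, while the three vendor keywords come straight from the question above). In an Ansible setup, the same table would more naturally live in a YAML variables file and be resolved with a dictionary lookup inside the Jinja2 template.

    # Hypothetical "Rosetta stone": map standard data-model values into
    # vendor-specific configuration keywords. Platform names and structure
    # are illustrative; the three keywords come from the question above.
    ENCRYPTION_KEYWORDS = {
        "junos":      {"aes-128": "aes-128-cbc"},
        "eos":        {"aes-128": "aes128"},
        "checkpoint": {"aes-128": "AES-128"},
    }

    def vendor_keyword(platform: str, value: str) -> str:
        """Translate a standard parameter value into the platform keyword."""
        try:
            return ENCRYPTION_KEYWORDS[platform][value]
        except KeyError:
            raise ValueError(f"no {value!r} mapping for platform {platform!r}")

    print(vendor_keyword("junos", "aes-128"))  # prints: aes-128-cbc

Keeping the mapping in a data structure (instead of burying if-then-else logic in templates or playbooks) means adding a new platform or algorithm is a one-line data change.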

Back to Basics: Unnumbered IPv4 Interfaces

In the previous blog post in this series, we explored some of the reasons IP uses per-interface (and not per-node) IP addresses. That model worked well when routers had few interfaces and mostly routed between a few LAN segments (often large subnets of a Class A network assigned to an academic institution) and a few WAN uplinks. In those days, the WAN networks were often implemented with non-IP technologies like Frame Relay or ATM (with an occasional pinch of X.25).

The first sign of trouble in paradise probably occurred when someone wanted to use a dial-up modem to connect to a LAN segment. What subnet (and IP address) do you assign to the dial-up connection, and how do you tell the other end what to use? Also, what do you do when you want to have a bank of modems and dozens of people dialing in?

Packet Bursts in Data Center Fabrics

When I wrote about the (non)impact of switching latency, I was (also) thinking about packet bursts jamming core data center fabric links when I mentioned the elephants in the room… but when I started writing about them, I realized they might be yet another red herring (together with the supposed need for large buffers in data center switches).

Here’s how it looks from my ignorant perspective when considering a simple leaf-and-spine network like the one in the following diagram. Please feel free to set me straight; I honestly can’t figure out where I went astray.
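
For a sense of scale, here's my quick sanity check (assumed numbers, not taken from the post): even when several ingress ports burst toward the same egress port at line rate, the resulting queue stays fairly modest.

    # Queue growth during a synchronized burst toward one egress port.
    # All numbers are made-up assumptions for illustration.
    in_rate_bps = 2 * 100e9     # two 100GE ports bursting at line rate
    out_rate_bps = 100e9        # single 100GE egress port draining
    burst_sec = 20e-6           # assumed burst duration: 20 microseconds

    # The queue grows at (arrival rate - drain rate) for the burst duration:
    queue_bytes = (in_rate_bps - out_rate_bps) * burst_sec / 8
    print(f"queue peaks at ~{queue_bytes / 1024:.0f} KiB "
          f"(~{queue_bytes / 1500:.0f} 1500-byte packets)")  # ~244 KiB

A few hundred kilobytes of shared buffer per egress port absorbs a burst like this without drops, which is well within what shallow-buffer data center switches provide.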

netsim-tools release 0.6.2

Last week we pushed out netsim-tools release 0.6.2. It’s a maintenance release, so it’s mostly bug fixes, apart from awesome contributions by Leo Kirchner, who:

  • Made vSRX 3.0 work on AMD CPUs (warning: totally unsupported);
  • Figured out how to use vagrant mutate to convert the VirtualBox version of the Cisco Nexus 9300v Vagrant box for use with libvirt.

Other bug fixes include:

  • Numerous fixes in Ansible installation playbook
  • LLDP on all vSRX interfaces as part of initial configuration
  • Changes in FRR configuration process to use bash or vtysh as needed
  • connect.sh executing inline commands with docker exec

Worth Reading: Rethinking Internet Backbone Architectures

Johan Gustawsson wrote a lengthy blog post describing Telia’s approach to next-generation Internet backbone architecture… and it’s so refreshing to see someone bring to life what some of us have been preaching for ages:

  • Simplify the network;
  • Stop cramming ever-more-complex services into the network;
  • Bloated major-vendor NPUs implementing every magic feature ever envisioned are overpriced; platforms like Broadcom Jericho2 are good enough for most use cases;
  • Return from large chassis-based stupidities to network-centric high availability.

I don’t know enough about optics to have an opinion on what they did there, but it looks as good as the routing part. It would be great to hear your opinion on the topic – write a comment.

Worth Exploring: Magic Carpet

Looks like John changed the name of the project and all the URLs. Will update the links once he’s done with the migration.

John Capobianco recently released his Magic Carpet: a tool that helps you gather information from network devices without the usual Ansible bloat and glacial speed.

Believing in “no job is finished until the paperwork is done”, he wrote extensive documentation and recorded a collection of videos describing the tool’s functionality – definitely worth reading, watching, and exploring.
