It’s an established fact on the internet that we have run out of IPv4 addresses, and we are st…
Sander Steffann sent me an intriguing question a long while ago:
I was wondering if there are any downsides to setting “system mtu jumbo 9198” by default on every switch? I mean, if all connected devices have MTU 1500 they won’t notice that the switch could support longer frames, right?
That’s absolutely correct, and unless the end hosts get into UDP fights, things will always work out (aka TCP MSS saves the day)… but there must be a reason switching vendors don’t use maximum frame sizes larger than 1514 bytes by default (Cumulus Linux seems to be an exception, and according to Sébastien Keller, Arista’s default maximum frame size is between 9214 and 10178 bytes, depending on the platform).
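As a back-of-the-envelope illustration of the “TCP MSS saves the day” part: each host derives the MSS it advertises from its own interface MTU, so two hosts with a 1500-byte MTU never generate frames larger than the classic 1514 bytes, regardless of what the switch in between supports. A minimal sketch, assuming plain IPv4/TCP with no header options:

```python
# Why hosts with a 1500-byte MTU never notice a jumbo-capable switch:
# each side advertises MSS = MTU - IP header - TCP header in its SYN,
# and segments in each direction are capped by the peer's advertised MSS.
IP_HEADER = 20   # bytes, IPv4 without options
TCP_HEADER = 20  # bytes, TCP without options
ETH_HEADER = 14  # bytes, hence the classic 1514-byte maximum frame

def advertised_mss(interface_mtu: int) -> int:
    return interface_mtu - IP_HEADER - TCP_HEADER

mss = min(advertised_mss(1500), advertised_mss(1500))  # -> 1460
frame = ETH_HEADER + IP_HEADER + TCP_HEADER + mss      # -> 1514
print(mss, frame)
```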
I pass access tokens, authentication keys, and other secrets to Python scripts via environment variables rather than encoding these values into the scripts themselves. If I was a real boy, I’d use a solution like HashiCorp Vault or another secrets-management tool (there are a bunch of them), but I haven’t yet found the motivation to learn such a tool.
I’m not sure I’d want to build and maintain such a tool if I did find the motivation. I’m sort of lazy sometimes is what I’m saying. So for now, environment variables it is.
PyCharm can pass environment variables from the IDE to a script, whether that script runs locally or in a remote SSH deployment you’ve configured for your project.
To set the environment variables, select Edit Configurations from the Run menu.
Or in the project bar above the code window, click the dropdown with your script name, and select Edit Configurations.
Either way brings up the following configuration window for the scripts in your project. In the Environment variables: field, click the icon.
That will bring up the following window you can use to configure the environment variables.
Fantastic. But how do we assign the environment variables’ values to variables within our script?
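Reading the variable inside the script is the easy part; here’s a minimal sketch, where the variable name API_TOKEN is purely illustrative:

```python
import os
import sys

# API_TOKEN is just an example name; use whatever key you set in
# the PyCharm run configuration.
token = os.environ.get("API_TOKEN")
if token is None:
    # Fail fast with a useful message instead of a KeyError deep in the script.
    sys.exit("API_TOKEN is not set - check your run configuration")

print(f"Token loaded ({len(token)} characters)")  # never print the secret itself
```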
When vendors build something new—or when you decide to go a different direction in your network—you have to figure out how to integrate these new things. Integration of this type often includes cultural, as well as technical, changes. William Collins joins Tom Ammon and Russ White to discuss his experience in integrating new technologies on Hedge 118.
The release of VMware NSX-T 3.2 and VMware Container Networking with Antrea 1.3.1-1.2.3 delivers on VMware’s vision of heterogeneous Antrea clusters, running anywhere, integrating with NSX-T for centralized container policy management and visibility.
NSX-T becomes the single pane of glass for policy management when connected to Antrea clusters. The Antrea clusters could be running on the VMware Tanzu platform, Red Hat OpenShift, or any upstream Kubernetes cluster. Inventory management, tagging, dynamic grouping, and troubleshooting can be extended to Antrea clusters, and both native Kubernetes network policies and Antrea network policies can be centrally managed by NSX-T.
Antrea to NSX-T Interworking Architecture
Antrea NSX Adapter is a new component introduced into the standard Antrea cluster to make the integration possible. It communicates with the K8s API and the Antrea Controller and connects to the NSX-T APIs. When an NSX-T admin defines a new policy via the NSX APIs or UI, the policy is replicated to all applicable clusters. These policies are received by the adapter, which in turn creates the appropriate CRDs through the K8s APIs. The Antrea Controller, which watches these policies, runs the relevant computation and sends the results to the Antrea Agents on the worker nodes.
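Conceptually, the adapter’s CRD-creation step looks something like the sketch below. It uses the (real) Kubernetes Python client, but the policy body is entirely illustrative; the actual adapter logic and field values are VMware’s and not shown here:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

# Illustrative Antrea ClusterNetworkPolicy standing in for a policy
# replicated from NSX-T; the spec below is a made-up minimal example.
policy = {
    "apiVersion": "crd.antrea.io/v1alpha1",
    "kind": "ClusterNetworkPolicy",
    "metadata": {"name": "nsx-replicated-deny-example"},
    "spec": {
        "priority": 5,
        "appliedTo": [{"podSelector": {"matchLabels": {"app": "web"}}}],
        "ingress": [{"action": "Drop", "from": [{"ipBlock": {"cidr": "10.0.0.0/8"}}]}],
    },
}

api.create_cluster_custom_object(
    group="crd.antrea.io", version="v1alpha1",
    plural="clusternetworkpolicies", body=policy,
)
```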
Today's Day Two Cloud is a sponsored episode with StrongDM, which helps engineers and IT professionals get access to databases, servers, Kubernetes clusters, switches, Web apps, and more from a desktop or laptop. We dive into StrongDM's proxy model, integrations with directories and ID stores, audit features, and more.
The post Day Two Cloud 134: Simplifying Infrastructure Access With StrongDM (Sponsored) appeared first on Packet Pushers.
I’m teaching a three-hour webinar on privacy over at Safari Books on Friday. From the description there—
Privacy is important to every IT professional, including network engineers—but there is very little training oriented towards anyone other than privacy professionals. This training aims to provide a high-level overview of privacy and how privacy impacts network engineers. Information technology professionals are often perceived as “experts” on “all things IT,” and hence are bound to face questions about the importance of privacy, and how individual users can protect their privacy in more public settings.
Please join me for this—it’s a very important topic largely ignored in the infrastructure space.
Got this question from one of my readers:
When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere, KVM, or OpenStack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?
Short answer: you don’t.
Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
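In other words, for host maintenance you treat the worker node as cattle: cordon and drain it, let Kubernetes reschedule the pods elsewhere, and bring the node back (or rebuild it) afterwards instead of live-migrating the VM. A minimal sketch using the (real) Kubernetes Python client, with an obviously illustrative node name:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NODE = "worker-3"  # illustrative node name

# Cordon: mark the node unschedulable (what "kubectl cordon" does);
# follow up with "kubectl drain" to evict the remaining pods.
v1.patch_node(NODE, {"spec": {"unschedulable": True}})

# Show what is still running there before taking the VM down.
pods = v1.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={NODE}")
for pod in pods.items:
    print(pod.metadata.namespace, pod.metadata.name)
```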
There are two “mainly used” string types in Rust: the str slice, which is mostly seen as a borrowed &str slice, and the errm… String. Wait… wut?

String Considerations

The data in a borrowed &str slice CANNOT be modified. The data in a String CAN be modified. A &str has a…
For Loop

A for expression extracts values from an iterator until the iterator is empty. for loops in Rust use a similar syntax to Python, with the in keyword (reconstructed from the garbled excerpt, which only preserved the output i: blah j: blah):

```rust
fn main() {
    for i in ["blah"] {
        for j in ["blah"] {
            println!("i: {} j: {}", i, j); // => i: blah j: blah
        }
    }
}
```

For Loop Considerations

For loops can iterate over anything that implements the…
In this post, we look at BGP on Junos OS and a typical BGP configuration for the underlay of a 3-stage Clos fabric. We also introduce BGP unnumbered, which is a great way of building the underlay without needing to assign IP addresses to the fabric links.