Cisco 1812 as Home Router
This lab delves into configuring a Cisco 1812 router, a legacy device with 100 Mbps […]
The post Cisco 1812 as Home Router first appeared on Brezular's Blog.
Segment Routing allows the network operator to deploy Traffic Engineering even with the most basic routers that support the bare minimum of features.
Traffic engineering is a set of techniques to influence the path a particular …
Nick Buraglio and Brian E. Carpenter published a free, open-source IPv6 textbook.
The book seems to be in an early (ever-evolving) stage, but it’s well worth exploring if you’re new to the IPv6 world, and you might consider contributing if you’re a seasoned old-timer.
It would also be nice to have a few online labs to go with it ;)
Hi all, welcome back to the Packetswitch blog. In today's post, we'll explore how to use NAPALM for managing device configurations. We'll focus on Arista EOS as our example. We'll cover the methods available in NAPALM and how to push, commit and revert configurations on Arista devices.
We'll start by explaining what NAPALM is and why you might want to use it. Then we'll move on to a few examples and take a look at what happens behind the scenes. This approach will give you a clear understanding of NAPALM's role in network configuration management and how it works with Arista EOS devices.
NAPALM stands for Network Automation and Programmability Abstraction Layer with Multivendor support. It's a Python library that helps network engineers manage and automate different network devices using a common set of functions. NAPALM solves the problem of dealing with multiple vendor-specific interfaces by providing a unified way to interact with network devices from various manufacturers. This means you can use the same code to manage devices from Cisco, Juniper, Arista, and others, saving time and reducing the complexity of network automation tasks.
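Just to give you a taste before we dig into the details, here's a minimal sketch of the kind of workflow we'll be walking through, using NAPALM's eos driver to stage, diff, and commit a change. The hostname, credentials, and the interface description below are placeholder values, not a real device from this post.

```python
# Minimal sketch: stage, diff, and commit a config change on Arista EOS with NAPALM.
# Hostname, credentials, and the config snippet are placeholders.
from napalm import get_network_driver

driver = get_network_driver("eos")
device = driver(hostname="192.0.2.10", username="admin", password="admin")

device.open()

# Stage a candidate configuration (merged with the running config)
device.load_merge_candidate(
    config="interface Ethernet1\n   description Uplink to core\n"
)

# Show the diff between the running and candidate configs
diff = device.compare_config()
print(diff)

if diff:
    device.commit_config()   # apply the candidate
    # device.rollback()      # ...or revert to the previous config if needed
else:
    device.discard_config()  # nothing to change

device.close()
```

Behind the scenes, the eos driver talks to the switch over Arista's eAPI and uses configuration sessions to build the candidate, which is what makes the diff, commit and rollback behaviour possible.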
After having written a user space AX.25 stack in C++, I got bitten by the Rust bug. So this is the third time I’ve written an AX.25 stack, and I’ve become exceedingly efficient at it.
Here it is:
The reason for a user space stack remains from last time, but this time:
I’ve added an almost excessive amount of comments to the code, to cross-reference with the specs. The specs, by the way, have a few bugs.
I’m not an expert in Rust, but it allows for so much more confidence in your code than any other language I’ve tried.
I think I know enough Rust to know what I don’t fully know. Sure, I’ve successfully added lifetime annotations, created macros, and built async code, but I’m not fluent in those yet.
Interestingly, Continue reading
In 2020, Google introduced Core Web Vitals metrics to measure some aspects of real-world user experience on the web. This blog has consistently achieved good scores for two of these metrics: Largest Contentful Paint and Interaction to Next Paint. However, optimizing the third metric, Cumulative Layout Shift, which measures unexpected layout changes, has been more challenging. Let’s face it: optimizing for this metric is not really useful for a site like this one. But getting a better score is always a good distraction. 💯
To prevent the “flash of invisible text” when using web fonts, developers should set the font-display property to swap in @font-face rules. This method allows browsers to initially render text using a fallback font, then replace it with the web font after loading. While this improves the LCP score, it causes content reflow and layout shifts if the fallback and web fonts are not metrically compatible. These shifts negatively affect the CLS score. CSS provides properties to address this issue by overriding font metrics when using fallback fonts: size-adjust, ascent-override, descent-override, and line-gap-override.
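As an illustration (not taken from this site's actual stylesheet), here is what a metric-adjusted fallback might look like. The font names and the override percentages are made-up values; in practice you would compute them from the real fonts' metrics.

```css
/* The web font itself; font-display: swap lets a fallback render first */
@font-face {
  font-family: "Merriweather";
  src: url("/fonts/merriweather.woff2") format("woff2");
  font-display: swap;
}

/* A metric-adjusted local fallback; the percentages are illustrative only
   and would normally be derived from the web font's real metrics */
@font-face {
  font-family: "Merriweather Fallback";
  src: local("Georgia");
  size-adjust: 106%;
  ascent-override: 93%;
  descent-override: 25%;
  line-gap-override: 0%;
}

body {
  font-family: "Merriweather", "Merriweather Fallback", serif;
}
```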
Two comprehensive articles explain each property and their computation methods in detail: Creating Perfect Font Fallbacks in CSS and Improved Continue reading
We have been watching the big original equipment manufacturers like a hawk to see how they are generating revenues and income from GPU-accelerated system sales. …
Dell’s AI Server Business Now Bigger Than VMware Used To Be was written by Timothy Prickett Morgan at The Next Platform.
I stumbled across this tool. I have always been a fan of vrnetlab (https://github.com/vrnetlab/vrnetlab), which also runs on Docker containers, but I could never get it to bridge properly with the local network, so reachability to the internet was something I never got working there.
A container lab is a virtualized environment that utilises containers to create and manage network testing labs. It offers a flexible and efficient way to simulate complex network topologies, test new features, and perform various network experiments.
One striking feature I really liked about containerlab is that the topology is represented in straightforward YAML, which most network engineers are familiar with nowadays, and the representation is easy to edit. A minimal example follows below.
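To give a feel for that YAML representation, here is a minimal, hypothetical containerlab topology with two Arista cEOS nodes and a single link between them. The node names, image tag, and filename are placeholders.

```yaml
# ceos-lab.clab.yml - a minimal two-node cEOS topology (placeholder image tag)
name: ceos-lab
topology:
  nodes:
    ceos1:
      kind: ceos
      image: ceos:4.30.0M
    ceos2:
      kind: ceos
      image: ceos:4.30.0M
  links:
    - endpoints: ["ceos1:eth1", "ceos2:eth1"]
```

Spinning it up is then a single command: containerlab deploy -t ceos-lab.clab.yml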
Other advantages
Host mappings after spinning up the lab
Slide explaining the capture process – Courtesy Petr Ankudinov (https://arista-netdevops-community.github.io/building-containerlab-with-ceos/#1)
https://containerlab.dev/quickstart/ – Quick-start guide that shows how to install containerlab and get up and running.
https://github.com/topics/clab-topo – Topologies contributed by community
https://github.com/arista-netdevops-community/building-containerlab-with-ceos/tree/main?tab=readme-ov-file – Amazing Repo
https://arista-netdevops-community.github.io/building-containerlab-with-ceos/ -> This presentation includes an EVPN topology and also explains how to spin up a quick EVPN lab with cEOS Continue reading
Hi all, welcome back to our Network CI/CD blog series. In this part, we’ll discuss what exactly GitLab is and the role it plays in the whole CI/CD process. We’ll explore how to use GitLab as a Git repository, how to install GitLab runners, and how to write a GitLab CI/CD pipeline, among other topics. So let’s get to it.
Before we proceed, let’s go over some prerequisites. This part of the series assumes you have some familiarity with Git, Ansible, and basic Docker concepts. I’m not an expert in any of these, but I have a basic understanding of what each tool does and how to configure and use them. Even if you’re not very familiar, you can still follow along as we go step by step.
Git is a version control system that allows you to track changes to your code, collaborate with others, and manage different versions of your projects. It's a fundamental tool for network automation, whether you're working with code or configuration files.
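To give you a rough idea of what a pipeline file looks like before we dig in, here is a minimal, hypothetical .gitlab-ci.yml sketch. The stage names, container image, and playbook path are placeholders, not the pipeline we'll build later in this series.

```yaml
# .gitlab-ci.yml - a minimal two-stage pipeline sketch (placeholder values)
stages:
  - validate
  - deploy

validate_configs:
  stage: validate
  image: python:3.11
  script:
    - pip install ansible
    - ansible-playbook --syntax-check playbooks/deploy_configs.yml

deploy_configs:
  stage: deploy
  image: python:3.11
  script:
    - pip install ansible
    - ansible-playbook playbooks/deploy_configs.yml
  when: manual   # require a human to trigger the deployment
```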
The Image Builder project is a set of tools aimed at automating the creation of Kubernetes disk images—such as VM templates or Amazon Machine Images (AMIs). (Interesting side note: Image Builder is the evolution of a much older Heptio project where I was a minor contributor.) I recently had a need to build a custom AMI with some extra container images preloaded, and in this post I’ll share with you how to configure Image Builder to preload additional container images.
Image Builder isn’t a single binary; it’s a framework built on top of other tools such as Packer and Ansible. Although in this post I’m discussing Image Builder in the context of building an AMI, it’s not limited to use with AWS. You can use Image Builder for a pretty wide collection of platforms (check the Image Builder web site for more details).
To have Image Builder preload additional images into your disk image, there are three changes needed. All three of these changes belong in the images/capi/packer/config/additional_components.json file:

1. Set load_additional_components to true. (The default value is false.)
2. Set additional_registry_images to true. (This also defaults to false.)
3. Set additional_registry_images_list to a comma-delimited list of fully-qualified image Continue reading

Urs Baumann loves hands-on teaching and created tons of lab exercises to support his Infrastructure-as-Code automation course.
During the summer, he published some of them in a collection of GitHub repositories and made them work in GitHub Codespaces. An amazing idea well worth exploring!
When Starlink first went into service we heard a lot of stories about how its Internet service was slow and unreliable. We’re a few years into Starlink launching satellites–how is Starlink holding up? Is service improving? Geoff Huston joins Tom, Eyvonne, and Russ to look into Starlink’s performance today.
PARTNER CONTENT Given the size and complexity of modern semiconductor designs, functional verification has become a dominant phase in the development cycle. …
Reduce Manual Effort, Achieve Better Coverage With AI And Formal Techniques was written by Timothy Prickett Morgan at The Next Platform.
After discovering that some EVPN implementations support multiple transit VNI values in a single VRF, I had to check whether I could implement a common services L3VPN with EVPN.
TL&DR: It works (on Arista cEOS)1.
Here are the relevant parts of a netlab lab topology I used in my test (you can find the complete lab topology in netlab-examples GitHub repository):
Welcome to the most important earnings call in history, with the weight of the aggregate stock markets of the entire world hanging on what Nvidia says and doesn’t say. …
Nvidia Says “Blackwell” GPU Issues Are Fixed, Ramp Starts In Fiscal Q4 was written by Timothy Prickett Morgan at The Next Platform.
After Wall Street closed the markets for the day and Nvidia reported its financial results for the second quarter of fiscal 2025, we had the opportunity to chat with Colette Kress, chief financial officer of the accelerated computing giant. …
Interview: Post-Earnings Insight With Nvidia CFO Colette Kress was written by Timothy Prickett Morgan at The Next Platform.