When we think of automation—and more broadly tooling—we tend to think of automating the configuration, monitoring, and (possibly) the troubleshooting of a network. On the other hand, a friend once observed that when interviewing coders, the first thing he asked about was the tools they had developed and used to make themselves more efficient. This “self-tooling” process turns out to be important not just for being more efficient at work, but for using time more effectively in general. Join Nick Russo, Eyvonne Sharp, Tom Ammon, and Russ White as we discuss self-tooling.
Not content with having dug the Northbound Networks Zodiac FX out of a pile of overlooked technology in my office, I thought that the poor thing desperately needed to have a case to sit in. When I originally received the switch, I did not have a 3D printer and had no idea what it would take to make a case; now though, I do have a 3D printer … and no idea what it would take to make a case. Sounds like a plan to me!
The most important tool I bought to go with my 3D printer (a Creality CR-6 SE) was a set of digital calipers. I discovered early on how important it was to ensure that if I was going to screw up, I should at least be able to screw up accurately.

These calipers are made by REXBETI, and if you’ve never heard of that company, that’s OK; before I purchased these, I hadn’t either. The calipers claim to be accurate to 0.01mm, and I don’t have any way to validate that claim, so let’s just assume they are. I do know they beat using a ruler. A few minutes of careful …
[Image: bandwidth map of the global Internet]
Last year I wrote an article describing data model optimization: going from a simple “this is what we need to configure on individual devices” data model to a highly polished, high-level model of network nodes and links. Not surprisingly, as Jeremy Schulman was quick to point out, the latter came with Jinja2 templates you wouldn’t want to debug. Ever. You can’t run away from complexity… but you can manage it.
Many successful network automation solutions (example: Cisco NSO) solve the “we’d love to work with high-level data models but hate complex templates” challenge with data transformation: operators work with an abstracted data model describing services, nodes, and links, while the device configuration templates use low-level data derived from that abstracted model through a series of business logic rules or lookups (aka network design).
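To make the idea concrete, here’s a minimal sketch of that transformation in Python. The model structure, interface names, and the trivial template are my own illustrative assumptions, not Cisco NSO’s actual API:

```python
import ipaddress
from jinja2 import Template

# Hypothetical high-level model: just links between nodes, no device detail.
model = {"links": [{"a": "r1", "b": "r2", "prefix": "10.0.0.0/30"}]}

def expand(links):
    """The 'network design' step: derive dull per-device interface
    data from the abstracted links model."""
    per_device = {}
    for idx, link in enumerate(links):
        net = ipaddress.ip_network(link["prefix"])
        for node, addr in zip((link["a"], link["b"]), net.hosts()):
            per_device.setdefault(node, []).append(
                {"name": f"GigabitEthernet0/{idx}",
                 "addr": f"{addr}/{net.prefixlen}"}
            )
    return per_device

# Because the hard thinking happened in expand(), the template stays trivial.
INTERFACE_TEMPLATE = Template(
    "{% for i in interfaces %}"
    "interface {{ i.name }}\n ip address {{ i.addr }}\n"
    "{% endfor %}"
)

for device, interfaces in expand(model["links"]).items():
    print(f"! {device}")
    print(INTERFACE_TEMPLATE.render(interfaces=interfaces))
```

The complexity hasn’t disappeared; it has moved out of the template and into the expand() function, where it can be unit-tested like any other code.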
The Indian government’s recent Internet shutdown during farmer protests impacted over 50 million residents. It is a stark warning of the danger of tampering with the foundations that make the Internet work for everyone.
Internet shutdowns are a dangerous tactic increasingly used by governments to quell situations of unrest. In this instance, it occurred during protests in the capital, Delhi, where farmers are asking for a repeal of three state-proposed farm laws. But while the initial Internet shutdown targeted Delhi and lasted around 29 hours, it soon extended to districts in the neighboring state of Haryana from 26 January to 1 February to “prevent disturbance to peace and public order”.
The consequence of shutting down parts of the Internet to prevent citizen access is profound: it undermines the global Internet infrastructure, which is based on collaboration and trust, and has severe individual and economic consequences that can extend far beyond a nation’s borders.
The Internet is an incredibly successful and powerful tool, a fact that has become all too clear during the COVID-19 pandemic. It is a key technology for supporting education, economic activity, and even access to healthcare for those under stay-at-home orders. …
The following post is aimed at photographers and other digital hoarders: those of us who want to keep various digital assets not just for a few years, but for a lifetime, or even multiple lifetimes (passed down, etc.).
There are three levels of data protection: data resiliency, data backup, and data archive.
Data resiliency is when you have multiple disks in some sort of redundant configuration. Typically this is some type of RAID array, though there are other technologies now that operate similarly to RAID (such as ZFS, Storage Spaces, etc.). This will protect you from a drive failure. It will not, however, protect you from accidental file deletion, theft, flood or other natural disaster, and the like. The drives have the same file system on them, and thus have a lot of “shared fate”: if something happens to one, it can happen to the other.
To put it simply: data resiliency protects your data in some scenarios (drive failure) but not in others (flood, theft).
RAID is not backup.
One of the maxims we have in the IT industry in which I’ve worked for the past …
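To put rough numbers on the “RAID is not backup” maxim, here is a toy back-of-the-envelope model; all the rates below are made-up assumptions for illustration, not vendor statistics:

```python
# Annual probability of losing the data, under naive independence assumptions.
p_drive = 0.03         # assumed yearly failure rate of a single drive
p_mirror = p_drive**2  # a 2-disk mirror loses data only if both drives fail
p_site = 0.001         # assumed yearly rate of shared-fate events (flood, theft)

print(f"single drive:  {p_drive + p_site:.4f}")   # ~0.0310
print(f"RAID-1 mirror: {p_mirror + p_site:.4f}")  # ~0.0019, dominated by p_site
# Mirroring shrank the drive-failure term enormously but did nothing to the
# shared-fate term: past one disk of redundancy, an offsite backup reduces
# your risk far more than additional drives do.
```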
The network was definitely up, and had been up. There was nothing in the logs indicating link flaps, spanning-tree convergence events, or routing process adjacency changes. The packets had been, were presently, and presumably would forever be flowing. Flowing like a river. I was pondering this inaccurate version of reality because of an annoying ticket that wouldn’t go away...
The post Preempting Gray Failures With AI/ML appeared first on Packet Pushers.
Let’s say I host my Infrastructure as Code provisioning stuff locally. It works. It’s nearby. I feel in control. Are there good reasons I should move that stuff to the cloud? Here to help us sort through the pros and cons of that question is Calvin Hendryx-Parker. Calvin is the co-founder and CTO of Six Feet Up, a Python web application development company.
The post Day Two Cloud 085: Hosting Your Infrastructure Code In The Cloud appeared first on Packet Pushers.
One of my readers sent me this interesting question:
Assume we are running a very large OSPF area with a few thousand nodes. If we follow the chain reaction of OSPF LSA flooding while the network is converging at the same time, how would all the routers come to know that they now have the same view of the area’s link states and that there are no further updates or convergence events coming?
I have bad news: the design requirements for link state protocols effectively prevent that idea from ever working well.
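To see why, consider a toy flooding model (a hypothetical topology and names of my own; real OSPF adds sequence numbers, acknowledgments, and timers, but still no global “done” signal):

```python
from collections import deque

# Toy LSA flooding over an adjacency graph. Each router keeps its own LSDB
# and refloods anything it hasn't seen before.
adjacency = {
    "R1": ["R2", "R3"], "R2": ["R1", "R4"], "R3": ["R1", "R4"],
    "R4": ["R2", "R3", "R5"], "R5": ["R4"],
}
lsdb = {router: set() for router in adjacency}

def flood(origin, lsa):
    queue = deque([(origin, lsa)])
    while queue:
        router, entry = queue.popleft()
        if entry in lsdb[router]:
            continue  # duplicate: flooding stops spreading here
        lsdb[router].add(entry)
        queue.extend((neighbor, entry) for neighbor in adjacency[router])

flood("R1", "LSA:R1#1")
flood("R5", "LSA:R5#1")

# Only an omniscient outside observer can verify convergence like this;
# an individual router sees nothing but its neighbors going quiet.
print(all(db == lsdb["R1"] for db in lsdb.values()))  # True
```

Each router can only observe local quiescence, meaning no new LSAs arriving for a while; nothing in the protocol ever tells it that every other router’s database now matches its own.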