Last year I wrote an article describing data model optimization, going from a simple “this is what we need to configure individual devices” data model to a highly polished high-level network nodes-and-links model. Not surprisingly, as Jeremy Schulman was quick to point out, the latter one had Jinja2 templates you wouldn’t want to debug. Ever. You can’t run away from complexity… but you can manage it.
Many successful network automation solutions (example: Cisco NSO) solve the “we’d love to work with high-level data models but hate complex templates” challenge with data transformation: operators work with an abstracted data model describing services, nodes and links, and the device configuration templates use low-level data derived from the abstracted data models through a series of business logic rules or lookups (aka network design).
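For illustration, here’s a minimal Python/Jinja2 sketch of that transformation step. Everything in it (the link model, the attribute names, the interface naming) is invented for the example; real systems like Cisco NSO use far richer service models:

```python
import ipaddress
from jinja2 import Template

# High-level model: operators only describe links between nodes.
network = {
    "links": [
        {"prefix": "10.0.0.0/30", "endpoints": ["r1", "r2"]},
    ]
}

def to_device_data(model):
    """Business logic: derive per-device interface data from the link model."""
    devices = {}
    for index, link in enumerate(model["links"]):
        net = ipaddress.ip_network(link["prefix"])
        for node, addr in zip(link["endpoints"], net.hosts()):
            devices.setdefault(node, []).append({
                "name": f"GigabitEthernet0/{index}",
                "ipv4": str(addr),
                "mask": str(net.netmask),
            })
    return devices

# The template stays trivial because the hard work happened above.
template = Template("""\
{% for intf in interfaces %}
interface {{ intf.name }}
 ip address {{ intf.ipv4 }} {{ intf.mask }}
{% endfor %}""")

for device, interfaces in sorted(to_device_data(network).items()):
    print(f"! {device}")
    print(template.render(interfaces=interfaces))
```

Because all the lookups and address math live in the transformation function, the template never needs conditional logic, which is exactly what keeps it debuggable.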
The Indian government’s recent Internet shutdown during farmer protests impacted over 50 million residents. It is a stark warning of the danger of tampering with the foundations that make the Internet work for everyone.
Internet shutdowns are a dangerous tactic increasingly used by the state to quell situations of unrest. In this instance, it occurred during protests in the capital, Delhi, where farmers are asking for a repeal of three state-proposed farm laws. But while the initial Internet shutdown was targeted in Delhi and lasted around 29 hours, it soon extended to districts in the neighboring state of Haryana from 26 January to 1 February to “prevent disturbance to peace and public order”.
The consequence of shutting down parts of the Internet to prevent citizen access is profound: it undermines the global Internet infrastructure, which is based on collaboration and trust, and has severe individual and economic consequences that can extend far beyond a nation’s borders.
The Internet is an incredibly successful and powerful tool, a fact that has become all too clear during the COVID-19 pandemic. It is a key technology for supporting education, economic activity, and even access to healthcare for those under stay-at-home orders. Continue reading
The following post is aimed at photographers and other digital hoarders: those of us who want to keep various digital assets not just for a few years, but for a lifetime, and even multiple lifetimes (passed down, etc.).
There are three levels of data protection: Data resiliency, data backup, and data archive.
Data resiliency is when you have multiple disks in some sort of redundant configuration. Typically this is some type of RAID array, though there are other technologies now that operate similarly to RAID (such as ZFS, Storage Spaces, etc.). This will protect you from a drive failure. It will not, however, protect you from accidental file deletion, theft, flood/natural disaster, etc. The drives share the same file system, and thus have a lot of “shared fate”: if something happens to one, it can happen to the other.
To put it simply: data resiliency protects your data in some scenarios (drive failure), but not in others (flood, theft).
RAID is not backup.
One of the maxims we have in the IT industry in which I’ve worked for the past Continue reading
The network was definitely up, and had been up. There was nothing in the logs indicating link flaps, spanning-tree convergence events, or routing process adjacency changes. The packets had been, were presently, and presumably would forever be flowing. Flowing like a river. I was pondering this inaccurate version of reality because of an annoying ticket that wouldn’t go away...
The post Preempting Gray Failures With AI/ML appeared first on Packet Pushers.
Let’s say I host my Infrastructure as Code provisioning stuff locally. It works. It’s nearby. I feel in control. Are there good reasons I should move that stuff to the cloud? Here to help us sort the pros and cons of that question is Calvin Hendryx-Parker. Calvin is the co-founder and CTO of Six Feet Up, a Python web application development company.
The post Day Two Cloud 085: Hosting Your Infrastructure Code In The Cloud appeared first on Packet Pushers.
One of my readers sent me this interesting question:
Assume we are running a very large OSPF area with a few thousand nodes. If we follow the chain reaction of OSPF LSA flooding while the network is converging, how would all routers come to know that they all now have the same view of the area link states and there are no further updates or convergence events?
I have bad news: the design requirements for link state protocols effectively prevent that idea from ever working well.
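To see why, consider a toy flooding simulation (a deliberately simplified sketch: the chain topology, one-hop-per-tick timing, and LSA naming are all assumptions, not real OSPF behavior):

```python
# Five routers in a chain; each holds a set of LSA identifiers (its LSDB).
routers = [set() for _ in range(5)]
routers[0].add("lsa-1")  # router 0 originates a new LSA at tick 0

for tick in range(1, 6):
    # Flooding: every router forwards its LSAs to its neighbors, one hop per tick.
    snapshot = [set(db) for db in routers]
    for i, db in enumerate(snapshot):
        for j in (i - 1, i + 1):
            if 0 <= j < len(routers):
                routers[j] |= db
    views = [",".join(sorted(db)) or "-" for db in routers]
    print(f"tick {tick}: {views}")

# Router 0 sees no new updates after tick 0, yet the network is not
# converged until tick 4. "My LSDB is quiet" never implies "everyone's
# LSDB matches mine" -- the protocol has no global "done" signal.
```

Each router can only observe its own link-state database; nothing in the flooding mechanism tells it whether an update is still propagating somewhere else in the area.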
Recent versions of firmware (after v0.80) running on the Northbound Networks Zodiac FX can be updated directly from the web interface, or using XMODEM via the serial console. But what if, say, you had sat on your Zodiac FX for a while, are running firmware earlier than v0.81, and have a sudden, unexpected desire to upgrade the firmware? Say you are, for example, me?
The process turned out to be less straightforward than I had hoped, so I am documenting the successful steps I followed in case it’s of use to somebody else.
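For the serial-console path, a minimal Python sketch of an XMODEM transfer using the pyserial and xmodem packages (pip install pyserial xmodem) might look like the following. The port name, baud rate, and firmware filename are assumptions; check the Zodiac FX documentation for the values your board expects, and put the switch into firmware-update mode first:

```python
import serial
from xmodem import XMODEM

# Port name and baud rate are assumptions -- adjust for your system.
port = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1)

def getc(size, timeout=1):
    # xmodem expects None (not b"") on a read timeout.
    data = port.read(size)
    return data or None

def putc(data, timeout=1):
    return port.write(data)

modem = XMODEM(getc, putc)
# Hypothetical filename -- use the firmware image you downloaded.
with open("ZodiacFX_v081.bin", "rb") as firmware:
    ok = modem.send(firmware)
print("transfer succeeded" if ok else "transfer failed")
port.close()
```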
Back in 2015 I backed a Kickstarter project for this awesome-sounding four-port FastEthernet SDN switch with OpenFlow support. It sounded so cool that I even ordered a two-pack, as I thought it would be more fun to have two OpenFlow switches to mess around with. The project was funded successfully, but embarrassingly, when the beautifully-made boards arrived in early 2016, for some reason I never quite got around to playing with them. I think it was in part because it was just a printed circuit board without a case and, without easy access to 3D printing, I was turned Continue reading
In today’s sponsored show with Juniper Networks, we dive into Juniper’s Paragon product portfolio, which measures service quality for critical applications. The portfolio allows service providers and enterprises to get deeper visibility into, and automated control over, their networks. Our guests from Juniper walking us through the portfolio are Peter Weinberger and Jonas Krogell.
The post Heavy Networking 562: Juniper’s Paragon Automation Portfolio Prioritizes Service Experience (Sponsored) appeared first on Packet Pushers.