Seven Reasons Why Network Automation Is Important

Organizations today constantly seek greater agility and speed in their IT operations. They’re looking to seize market advantage by innovating with new technology and quickly responding to shifting market trends. Meanwhile, IT teams seek higher levels of simplicity and automation – and more efficient allocation of limited resources – in order to support these larger business goals.

Why Businesses Need Network Automation

A major roadblock many organizations face in the drive for efficiency is that their enterprise network is far more difficult to manage than ever before. Distributed workloads and distributed IT resources have led to extremely complex configurations and poor visibility across the environment. To make matters worse, much of the management work on these networks has traditionally been performed manually, via command-line entry. That’s proved to be tedious, costly, unnecessarily rigid, and prone to error. 

Industry reports find that as much as 40-80% of network failures are the result of human error.

Network outages are of course a large pain point in enterprise networking, but there are certainly others. Complex, hard-to-manage networks are hindering business innovation, making critical security improvements more difficult, and driving up costs. This set of drawbacks has naturally led to a search for better Continue reading

Adding integration tests to Ansible Content Collections

In the previous installment of our "let us create the best Ansible Content Collection ever" saga, we covered the DigitalOcean-related content migration process. What we ended up with was a fully functioning Ansible Content Collection that unfortunately had no tests. But not for long; we will be adding an integration test for the droplet module.

 

We do not need tests, right?

If we were able to write perfect code all of the time, there would be no need for tests. But unfortunately, this is not how things work in real life. Any modestly useful software has deadlines attached, which usually means that developers need to strike a compromise between polish and delivery speed.

For us, the Ansible Content Collections authors, having a semi-decent Collection of integration tests has two main benefits:

  1. We know that the tested code paths function as expected and produce desired results.
  2. We can catch the breaking changes in the upstream product that we are trying to automate.

The second point is especially crucial in the Ansible world, where one team of developers is usually responsible for the upstream product and a separate group maintains the Ansible content.
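To make that concrete, a minimal integration test for the droplet module could look something like the sketch below (the module name, parameters, and return structure are assumptions based on the community DigitalOcean content; check the collection's actual documentation):

```yaml
# tests/integration/targets/droplet/tasks/main.yml (hypothetical path)
- name: Create a test droplet
  digital_ocean_droplet:            # assumed module name; may differ in the collection
    state: present
    name: integration-test-droplet
    size: s-1vcpu-1gb
    region: fra1
    image: ubuntu-20-04-x64
    oauth_token: "{{ do_api_token }}"
  register: result

- name: Assert the droplet was created as expected
  assert:
    that:
      - result is changed
      - result.data.droplet.name == "integration-test-droplet"
```

Running such a task against a real DigitalOcean account is exactly what catches the second class of failures: upstream API changes that unit tests with mocked responses would never notice.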

With the "why integration tests" behind us, we can Continue reading

BGP Navel Gazing on Software Gone Wild

This podcast introduction was written by Nick Buraglio, the host of today’s podcast.

As we all know, BGP runs the networked world. It is a protocol that has existed and operated in the vast expanse of the internet in one form or another since the early 1990s, and despite the fact that it has been extended, enhanced, twisted, and warped into performing a myriad of tasks that one would never have imagined in the silver era of internetworking, it has remained largely unchanged in its operational core.

The world as we know it would never exist without BGP, and because it is such a widely deployed protocol with such a solid track record of "just working", efforts to modernize the security model surrounding it have been extraordinarily slow.

Accelerated Databases In The Fast Lane

Hardware accelerated databases are not new things. More than twenty years ago, Netezza was founded and created a hybrid hardware architecture that ran PostgreSQL on a big, honking NUMA server running Linux and accelerated certain functions with adjunct accelerators that were themselves hybrid CPU-FPGA server blades that also stored the data.

Accelerated Databases In The Fast Lane was written by Timothy Prickett Morgan at The Next Platform.

Amateur packet radio walkthrough

An earlier version of this post that did data over D-Star was misleading. This is the new version.

This blog post aims to describe the steps for setting up packet radio on modern hardware with Linux. There’s lots of ham radio documentation out there about various setups, but it’s usually at least 20 years old, and you’ll find recommendations to use software that hasn’t been updated in just as long.

Specifically, here I’ll set up a Kenwood TH-D74 and an Icom IC-9700 to talk to each other over D-Star and AX.25. But for the latter you can use cheap Baofengs just as well.

Note that 9600bps AX.25 can only be generated by a compatible radio. 1200bps can be sent to a non-supporting radio as audio, but 9600bps cannot. So both D-Star and AX.25 here will give only 1200bps. But with hundreds of watts you can get really far with that, at least.

I’ll assume that you already know how to set up APRS (and therefore KISS) on a D74. If not, get comfortable with that first by reading the manual.
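On the Linux side, once the radio presents a KISS serial port, the AX.25 interface is typically brought up with the ax25-tools utilities. A rough sketch (the callsign, device path, and port name below are placeholders; your TNC's serial device will differ):

```
# /etc/ax25/axports -- one line per AX.25 port:
# portname  callsign  serial-speed  paclen  window  description
radio       N0CALL-1  9600          255     2       "144.390 MHz 1200bps"

# Attach the KISS serial device to the kernel AX.25 stack
sudo kissattach /dev/ttyUSB0 radio

# Watch decoded AX.25 frames to verify the link is alive
sudo axlisten -a
```

Note that the speed in axports is the computer-to-radio serial speed, not the over-the-air rate; from here, axcall can open an AX.25 connection to the other station.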

DMR doesn’t seem to have a data mode, and System Fusion radios don’t give the user access Continue reading

Ripple20 TCP/IP flaws can be patched but still threaten IoT devices

A set of serious network security vulnerabilities collectively known as Ripple20 roiled the IoT landscape when they came to light last week, and the problems they pose for IoT-equipped businesses could be both dangerous and difficult to solve.

Ripple20 was originally discovered by Israel-based security company JSOF in September 2019. It affects a lightweight, proprietary TCP/IP library created by Treck, a small company in Ohio, which has issued a patch for the vulnerabilities. Several of those vulnerabilities would allow for remote code execution, enabling data theft, malicious takeovers and more, said the security vendor.

That, however, isn’t the end of the problem. The TCP/IP library that contains the vulnerabilities has been used in a huge range of connected devices, from medical devices to industrial control systems to printers, and actually delivering and applying the patch is a vast undertaking. JSOF said that “hundreds of millions” of devices could be affected. Many devices don’t have the capacity to receive remote patches, and Terry Dunlap, co-founder of security vendor ReFirm Labs, said there are numerous hurdles to getting patches onto older equipment in particular.

The Hedge Episode 41: Centralized Architectures with Jari Arkko

Consolidation is a well-recognized trend in the Internet ecosystem—but what does this centralization mean in terms of distributed systems, such as the DNS? Jari Arkko joins this episode of the Hedge, along with Alvaro Retana, to discuss the import and impact of centralization on the Internet through his draft, draft-arkko-arch-infrastructure-centralisation.

download

Day Two Cloud 054: Real Life VMware Cloud On AWS

We discuss the reality of running VMware Cloud (VMC) on AWS with Adam Fisher, Cloud & DevOps Engineer at RoundTower. Adam's been deploying VMC on AWS in the real world for customers since the product's early days, and has plenty of insights. VMC on AWS presents a VMware software defined data center (SDDC) hosted on bare metal in AWS data centers. If you're trying to vacate your own data centers or colos, but aren't going to refactor your applications to do it, VMC on AWS presents a compelling technical solution.

The post Day Two Cloud 054: Real Life VMware Cloud On AWS appeared first on Packet Pushers.

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Before I proceed, I feel like it is important to provide the disclaimer that I’m new to Go (and therefore still learning). There are probably better ways of doing what I’m doing here, and so I welcome all constructive feedback on how I can improve.

With that disclaimer out of the way, allow me to first provide a small bit of context around this code. When I’m using Pulumi to manage infrastructure on AWS, I like to try to keep things as region-independent as possible. Therefore, I try to avoid hard-coding things like the number of AZs or the AZ names, and prefer to gather that information dynamically—which is what this code does.
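Under those constraints, the lookup can be sketched with the pulumi-aws Go SDK roughly as follows (the SDK major-version import paths and the exported output name are my assumptions, not necessarily the code from the post):

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v2/go/aws"
	"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Query the AZs of whatever region the AWS provider is configured
		// for, keeping the code region-independent.
		state := "available"
		azs, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{
			State: &state, // only AZs currently accepting new resources
		})
		if err != nil {
			return err
		}
		// Export the names so other code (or `pulumi stack output`) can use
		// them instead of a hard-coded AZ list or count.
		ctx.Export("azNames", pulumi.ToStringArray(azs.Names))
		return nil
	})
}
```

After `pulumi up`, `pulumi stack output azNames` would show the region's AZ names, which can then feed subnet creation without assuming how many AZs exist.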

Here’s the Continue reading

Top 5 Questions from “How to become a Docker Power User” session at DockerCon 2020

This is a guest post from Brian Christner. Brian has been a Docker Captain since 2016, hosts The Byte podcast, and is Co-Founder & Site Reliability Engineer at 56K.Cloud. At 56K.Cloud, he helps companies adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.

It was a fantastic experience hosting my first ever virtual conference session. The commute to my home office was great, and I even picked up a coffee on the way before my session started. No more waiting in lines, queueing for food, or sitting on the conference floor somewhere in a corner to check emails. 

The “DockerCon 2020 that’s a wrap” blog post highlighted that my session, “How to Become a Docker Power User using VS Code,” was one of the most popular sessions from DockerCon. Docker asked if I could write a recap and summarize some of the top questions that appeared in the chat. Absolutely.

Honestly, I liked the presenter/audience interaction more than at an in-person conference. Typically, a presenter broadcasts their content to a room full of participants, and if you are lucky and Continue reading

BiB094 – HPE Discover Greenlake and Ezmeral

HPE GreenLake: a common cloud platform – a pivot to being an “edge-to-cloud platform-as-a-service company” spanning cloud services, software, and customer experiences. GreenLake in numbers: 4B in contract value, 1,000 customers, 50 countries, a 90% retention rate, and 700 partners selling GreenLake – a next-generation partner ecosystem that is self-served and pay-per-use. HPE Ezmeral: The HPE Ezmeral... Read more »

The post BiB094 – HPE Discover Greenlake and Ezmeral appeared first on Packet Pushers.

Eighty for Africa: Kenya and Nigeria’s IXP Success

Ten years ago the peering community came up with a vision: We wanted 80 percent of Internet traffic to be localized by 2020. I must admit, over the last decade there were times I wondered if it was possible.

But Kenya and Nigeria have just proven that it is – all thanks to Internet exchange points (IXPs). A new report, Anchoring the African Internet Ecosystem: Lessons from Kenya and Nigeria’s Internet Exchange Points Growth, is a case study on how they did it.

What Changed in Kenya and Nigeria

In just eight years, a dedicated community helped Kenya and Nigeria boost the share of Internet traffic that is exchanged locally from 30% to 70%.

That happened because of a vibrant community of people united around a common cause: bringing faster, cheaper, and better Internet to their neighbours. They did this by focusing on their local Internet ecosystem, which depends on the IXP.

Building an IXP takes humans and tech. We often say it takes 80% human engineering and 20% network engineering. It certainly is no easy task. Building a strong local Internet community facilitates this collaboration and results in neutral, even, and good local governance Continue reading

Adapting Network Design to Support Automation

This blog post was initially sent to the subscribers of my SDN and Network Automation mailing list. Subscribe here.

Adam left a thoughtful comment addressing numerous interesting aspects of network design in the era of booming automation hype on my How Should Network Architects Deal with Network Automation blog post. He started with:

A question I keep tasking myself with addressing but never finding the best answer, is how appropriate is it to reform a network environment into a flattened design such as spine-and-leaf, if that reform is with the sole intent and purpose to enable automation?

A few basic facts first:

HPE Builds Out GreenLake Utility, Creates Ezmeral Software

Hewlett Packard Enterprise in January created its Transformation Office with an eye toward accelerating its move to become a platform provider – complete with hardware, software, services and other components – with a reach from the datacenter out through the cloud and to the fast-growing edge computing environment.

HPE Builds Out GreenLake Utility, Creates Ezmeral Software was written by Jeffrey Burt at The Next Platform.