13 Years Later, the Bad Bugs of DNS Linger on

It’s 2023, and we are still copying code without fully debugging it. Did we not learn from the Great DNS Vulnerability of 2008? Internet godfather Paul Vixie spoke about the cost of open source dependencies in a talk at Open Source Summit Europe in Dublin. In 2008, Dan Kaminsky discovered a fundamental design flaw in DNS code that allowed for arbitrary cache poisoning and affected nearly every DNS server on the planet. A patch was released in July 2008, followed by the permanent solution, Domain Name System Security Extensions (DNSSEC), in 2010. The Domain Name System is the basic name-based global addressing system for the Internet, so vulnerabilities in DNS could spell major trouble for pretty much everyone on the Internet. Vixie and Kaminsky “set [their] hair on fire” to build the security fix that, “13 years later, is not widely enough deployed to solve this problem,” Vixie said. All of this software is open source and inspectable, but DNS bugs are still being brought to Vixie’s attention in the present day. “This is never going to stop if we don’t start writing down the lessons people should know before they write software,” Vixie said.

How Did This Happen?

It’s our fault: “the call is coming from inside the house.” Before internet commercialization and the dawn of the home computer, the publishers of the Berkeley Software Distribution (BSD) of UNIX decided to support the then-new DNS protocol. “Spinning up a new release, making mag tapes, and putting them all in shipping containers was a lot of work,” so they published DNS as a patch, posted it to Usenet newsgroups, and made it available to anyone who wanted it via an FTP server and mailing list. By the time Vixie began working on DNS at Berkeley, DNS was for all intents and purposes abandonware: all the original creators had since moved on.
Since there was no concept of importing code and declaring dependencies, embedded systems vendors copied the original code and changed the API names to suit their local engineering needs… sound familiar? And then Linux came along. The internet E-X-P-L-O-D-E-D. You get an AOL account. And you get an AOL account… Distros had to build their first C library and copied some version of the old Berkeley code, whether they knew what it was or not. Each was a copy of a copy that some other distro was using, a local version forever divorced from the upstream. DSL modems are an early example of this. Now the Internet of Things is everywhere, and “all of this DNS code in all of the billions of devices are running on some fork of a fork of a fork of code that Berkeley published in 1986.”

Why does any of this matter? The original DNS bugs were written and shipped by Vixie. He went on to fix them in the 90s, but some still appear today. “For embedded systems today to still have that problem, any of those problems, means that whatever I did to fix it wasn’t enough. I didn’t have a way of telling people.”

Where Do We Go from Here?

“Sure would have been nice if we already had an internet when we were building one,” Vixie said. But try as we might, we can’t go backward, only forward. Vixie made it very clear: “if you can’t afford to do these things [below] then free software is too expensive for you.”

Here is some of Vixie’s advice for software producers:

- Do the best you can with the tools you have, but “try to anticipate what you’re going to have.”
- Assume all software has bugs, “not just because it always has, but because that’s the safe position to take.”
- Machine-readable updates are necessary because “you can’t rely on a human to monitor a mailing list.”
- Version numbers are must-haves for your downstream. “The people who are depending on you need to know something more than what you thought worked on Tuesday.” It doesn’t matter what the scheme is, as long as it uniquely identifies the bug level of the software.
- Cite code sources in README files and in source code comments. It will help anyone using your code and chasing bugs.
- Automate monitoring of your upstreams, review all changes, and integrate patches. “This isn’t optional.”
- Let your downstream know about changes automatically, “otherwise these bugs are going to do what the DNS bugs are doing.”

And here is his advice for software consumers:

- Your software’s dependencies are your dependencies. “As a consumer when you import something, remember that you’re also importing everything it depends on… So when you check your dependencies, you’d have to do it recursively; you have to go all the way up.”
- Uncontracted dependencies can make free software incredibly expensive, but they are an acceptable operating risk because “we need the software that everybody else is writing.”
- Orphaned dependencies require local maintenance, and therein lies the risk, because that is a much higher cost than monitoring the developments coming out of other teams. “It’s either expensive because you hire enough people and build enough automation, or it’s expensive because you don’t.”
- Automate dependency upgrades (mostly), because sometimes “the license will change from one you could live with to one that you can’t, or at some point someone decides they’d like to get paid” [insert adventure here].
- Specify acceptable version numbers. If versions 5+ have the fix your software needs, say so, to make sure you don’t accidentally get an older one.
- Monitor your supply chain and ticket every release. Have an engineer review every update to determine whether it’s “set my hair on fire, work over the weekend” priority or “we’ll just get to it when we get to it” priority.
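The “specify acceptable version numbers” advice maps directly onto the version-range syntax most package managers already support. As a minimal sketch (the package name and version bounds are hypothetical), a pip-style constraint such as `somepkg>=5,<6` can be modeled with a tiny checker for dotted version strings:

```python
def parse(version):
    """Turn a dotted version string like '5.1.0' into a comparable tuple (5, 1, 0)."""
    return tuple(int(part) for part in version.split("."))

def acceptable(version, minimum="5.0.0", below="6.0.0"):
    """True if `version` falls in [minimum, below) -- i.e. has the fix,
    without silently jumping to a new major version."""
    return parse(minimum) <= parse(version) < parse(below)

print(acceptable("5.2.1"))  # True: has the fix
print(acceptable("4.9.8"))  # False: predates the fix
```

Real resolvers handle pre-release tags and wildcards, but the principle is the same: the lower bound encodes “versions 5+ have the fix,” so an older, buggy release can never be pulled in by accident.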
He closed with, “we are all in this together but I think we could get it organized better than we have.” That is certainly one way to put it. It takes a certain humility and grace to have been on the tiny team that prevented a potential DNS collapse, to have led the field for over a generation, and still to have your early-career bugs (solved 30 years ago) brought to your attention at regular intervals because adopters aren’t inspecting the source code.

The post 13 Years Later, the Bad Bugs of DNS Linger on appeared first on The New Stack.

Enable Extensions on Azure Arc Connected Machines with Ansible Automation Platform


Last year, I blogged about how to use Red Hat Ansible Automation Platform to migrate Azure Arc-enabled servers from Azure Log Analytics Agents (MMA/OMS) to Azure Monitor Agent (AMA).  Azure Arc supports a number of other extensions that can add additional value to your Arc-enabled infrastructure.  Since my previous article, all of these extensions have been added to the azure.infrastructure_config_demos collection that contains a role for managing Arc-enabled server VM extensions with Ansible.

Each extension offers unique capabilities to your Arc-enabled fleet, such as logging, vulnerability scanning, key vault cert sync, update management, and more.  Enabling these extensions is simple for small numbers of machines. When you need to scale out the work of enabling and configuring these extensions across hundreds or thousands of devices, then Ansible Automation Platform can help!

This article covers how to use Ansible Automation Platform to enable VM extensions supported in the azure.infrastructure_config_demos collection.  Within the collection, there are a number of playbooks and roles; the following are pertinent to this post.

File or Folder: Description

playbook_enable_arc_extension.yml: Playbook that will be used as a job template to enable Azure Arc extensions.

playbook_disable_arc_extension.yml: Playbook that will be used Continue reading
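At scale these playbooks would typically run as job templates inside Ansible Automation Platform, but the same playbooks can also be launched from the ansible-playbook CLI. Here is a rough sketch of assembling such an invocation; the extra-var names below are illustrative assumptions, not the collection’s documented variables:

```python
def build_playbook_cmd(playbook, extra_vars):
    """Assemble an ansible-playbook command line as a list of arguments.
    Each extra var is passed with its own -e flag."""
    cmd = ["ansible-playbook", playbook]
    for key, value in extra_vars.items():
        cmd += ["-e", f"{key}={value}"]
    return cmd

# Hypothetical vars: which hosts to target and the desired extension state.
cmd = build_playbook_cmd(
    "playbook_enable_arc_extension.yml",
    {"arc_hosts": "arc_servers", "state": "present"},
)
print(" ".join(cmd))
```

Wrapping the invocation like this makes it easy to template the same command across hundreds of machines, which is exactly the scaling problem the article describes.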

Twenty-five open-source network emulators and simulators you can use in 2023

I surveyed the current state of the art in open-source network emulation and simulation. I also reviewed the development and support status of all the network emulators and network simulators previously featured in my blog.

Of all the network emulators and network simulators I mentioned in my blog over the years, I found that eighteen of them are still active projects. I also found seven new projects that you can try. See below for a brief update about each tool.

Active projects

Below is a list of the tools previously featured in my blog that are, in my opinion, still actively supported.

Cloonix

Cloonix version 28 was released in January 2023. Cloonix stitches together Linux networking tools to make it easy to emulate complex networks by linking virtual machines and containers. Cloonix has both a command-line interface and a graphical user interface.

The Cloonix website has a new address, clownix.net, and the Cloonix project now hosts its code on GitHub. Cloonix has adopted a new release numbering scheme since I reviewed it in 2017, so it is now at “v28”.

CloudSim

CloudSim is still maintained. It is a network simulator that enables modeling, simulation, and experimentation of emerging Cloud computing Continue reading

Twilio announces fresh round of layoffs, impacting 17% of its workforce

Cloud communications firm Twilio has announced that it plans to reduce its global workforce by about 17%, within months of laying off over 800 employees. In addition to the layoffs, the company is also undergoing an internal restructuring to create two business units, Twilio Communications and Twilio Data & Applications, according to a company blog post. As of September 30, 2022, Twilio had 8,992 employees, of which 816 were laid off in the fourth quarter of 2022. The company is now expected to lay off an additional 1,400 employees in the new round of layoffs. This fresh round of layoffs is meant to help the company “spend less, streamline, and become more efficient,” Jeff Lawson, chief executive officer and co-founder of Twilio, said in the blog post. “To do that, we’re forming two business units: Twilio Communications and Twilio Data & Applications. And today, I’m unfortunately bearing the news that we’re parting ways with approximately 17% of our team.” To read this article in full, please click here

MUST READ: Machine Learning for Network and Cloud Engineers

Javier Antich, the author of the fantastic AI/ML in Networking webinar, spent years writing the Machine Learning for Network and Cloud Engineers book that is now available in paperback and Kindle format.

I’ve seen a final draft of the book and it’s definitely worth reading. You should also invest some time into testing the scenarios Javier created. Here’s what I wrote in the foreword:


Artificial Intelligence (AI) has been around for decades. It was one of the exciting emerging (and overhyped) topics when I attended university in the late 1980s. Like today, the hype failed to deliver, resulting in a long, long AI winter.

Akamai targets cloud computing’s middle ground with Connected Cloud

CDN (content delivery network) giant Akamai Technologies today announced that it will discount cloud egress pricing, add ISO, SOC 2 and HIPAA compliance, and build out enterprise-scale cloud computing sites and distributed points of presence in over 50 cities as part of a new initiative — dubbed Connected Cloud — aimed at filling a niche between the hyperscalers and edge computing. The idea is to fulfill what the company sees as unmet demand. In Akamai's view, modern applications are increasingly being broken into a range of different microservices. In many cases, those microservices need to be distributed across a geographically wide area, creating different computing needs than those addressed by most cloud vendors. To read this article in full, please click here

A look at Internet traffic trends during Super Bowl LVII

The Super Bowl has been happening since the end of the 1966 season, the same year that the ARPANET project, which gave birth to the Internet, was initiated. Around 20 years ago, 50% of the US population were Internet users, and that number is now around 92%. So, it's no surprise that interest in an event like Super Bowl LVII resulted in a noticeable dip in Internet traffic in the United States at the time of the game's kickoff, dropping to around 5% lower than the previous Sunday. During the game, Rihanna's halftime show also caused a significant drop in Internet traffic across most states, with Pennsylvania and New York feeling the biggest impact, but messaging and video platforms saw a surge of traffic right after her show ended.

In this blog post, we will dive into who the biggest winners were among Super Bowl advertisers, as well as examine how traffic to food delivery services, social media, and sports and betting websites changed during the game. In addition, we look at traffic trends seen at city and state levels during the game, as well as email threat volume across related categories in the weeks ahead of the game.

Cloudflare Continue reading

Cisco observability: What you need to know

Observability may be the latest buzzword in an industry loaded with them, but Cisco will tell you the primary goal of the technology is to help enterprises get a handle on effectively managing distributed resources in ways that have not been possible in the past. The idea of employing observability tools and applications is a hot one. Gartner says that by 2024, 30% of enterprises implementing distributed system architectures will have adopted observability techniques to improve digital-business service performance, up from less than 10% in 2020. “Today’s operational teams have tools for network monitoring, application monitoring, infrastructure monitoring, call monitoring, and more, but they rarely intermingle to provide a cohesive view of what’s going on across the enterprise,” according to Carlos Pereira, Cisco Fellow and chief architect in its Strategy, Incubation & Applications group. To read this article in full, please click here

Cloudflare mitigates record-breaking 71 million request-per-second DDoS attack

This was a weekend of record-breaking DDoS attacks. Over the weekend, Cloudflare detected and mitigated dozens of hyper-volumetric DDoS attacks. The majority of attacks peaked in the ballpark of 50-70 million requests per second (rps) with the largest exceeding 71 million rps. This is the largest reported HTTP DDoS attack on record, more than 54% higher than the previous reported record of 46M rps in June 2022.
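The size of the jump over the previous record is easy to sanity-check from the figures quoted:

```python
previous_record_rps = 46_000_000  # June 2022 record
new_record_rps = 71_000_000       # this weekend's peak

# Relative increase: (new - old) / old
increase = (new_record_rps - previous_record_rps) / previous_record_rps
print(f"{increase:.0%}")  # roughly 54% higher
```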

The attacks were HTTP/2-based and targeted websites protected by Cloudflare. They originated from over 30,000 IP addresses. Some of the attacked websites included a popular gaming provider, cryptocurrency companies, hosting providers, and cloud computing platforms. The attacks originated from numerous cloud providers, and we have been working with them to crack down on the botnet.

Record-breaking attack: DDoS attack exceeding 71 million requests per second

Over the past year, we’ve seen more attacks originate from cloud computing providers. For this reason, we will be providing service providers that own their own autonomous system with a free Botnet Threat Feed. The feed will provide service providers with threat intelligence about their own IP space: attacks originating from within their autonomous system. Service providers that operate their own IP space can now sign up to the Continue reading

Tech Bytes: Event-Driven Automation With Nokia’s SR Linux Event Handler Framework (Sponsored)

Today on the Tech Bytes podcast we talk about Event Handler, a new automation feature in Nokia’s SR Linux network OS that lets you automatically run scripts to fix problems when an event occurs. Nokia is our sponsor, and our guest is Roman Dodin, Product Line Manager at Nokia.
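SR Linux event handler scripts are small Python functions that receive monitored state as a JSON string and return a JSON document describing actions to take. As a rough sketch of that general shape (the path strings and the "record-failure" action key below are illustrative assumptions, not Nokia’s documented schema):

```python
import json

def event_handler_main(in_json_str):
    """Toy handler: if any monitored interface reports oper-state 'down',
    emit an action recording which path failed. Key names are illustrative."""
    state = json.loads(in_json_str)
    actions = []
    for path in state.get("paths", []):
        if path.get("value") == "down":
            actions.append({"record-failure": path.get("path")})
    return json.dumps({"actions": actions})

# Simulate the kind of input such a handler might receive.
sample = json.dumps({"paths": [
    {"path": "interface ethernet-1/1 oper-state", "value": "down"},
]})
print(event_handler_main(sample))
```

The appeal of this model, as discussed in the episode, is that remediation logic runs on the box itself, triggered by state changes rather than by an external polling loop.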

The post Tech Bytes: Event-Driven Automation With Nokia’s SR Linux Event Handler Framework (Sponsored) appeared first on Packet Pushers.

Network-as-a-service lets a shoe retailer take steps toward Zero Trust

Nigel Williams-Lucas, director of Information Technology at Maryland-based footwear retailer DTLR, faced a challenge that most IT execs will recognize: the business was pushing hard on digital transformation, and the IT infrastructure was struggling to keep pace. Store managers were seeking better data analytics and business intelligence from backend systems like inventory and sales. The business wanted IT systems to support customers ordering online and picking up at a physical store within two hours. The network needed to securely support real-time, bandwidth-intensive IP security cameras. And Williams-Lucas wanted to roll out beaconing technology, in which the network gathers information about customer in-store activity via Bluetooth or Wi-Fi, and can send discount offers to a customer’s phone based on where they are in the store and what they appear to be interested in. To read this article in full, please click here
