Author Archives: Jessica Wachtel
It’s 2023, and we are still copying code without fully debugging. Did we not learn from the Great DNS Vulnerability of 2008? Fear not: internet godfather Paul Vixie spoke about the cost of open source dependencies in an Open Source Summit Europe talk in Dublin.

In 2008, Dan Kaminsky discovered a fundamental design flaw in DNS code that allowed for arbitrary cache poisoning and affected nearly every DNS server on the planet. A patch was released in July 2008, followed by the permanent solution, Domain Name System Security Extensions (DNSSEC), in 2010. The Domain Name System is the basic name-based global addressing system for the internet, so vulnerabilities in DNS spell major trouble for pretty much everyone online. Vixie and Kaminsky “set [their] hair on fire” to build the security vulnerability solution that, “13 years later, is not widely enough deployed to solve this problem,” Vixie said. All of this software is open source and inspectable, yet DNS bugs are still being brought to Vixie’s attention to this day. “This is never going to stop if we don’t start writing down the lessons people should know before they write software,” Vixie said.

How Did This Happen?

It’s our fault: “the call is coming from inside the house.” Before internet commercialization and the dawn of the home computer, the publishers of the Berkeley Software Distribution (BSD) of UNIX decided to support the then-new DNS protocol. “Spinning up a new release, making mag tapes, and putting them all in shipping containers was a lot of work,” so they published DNS as a patch and posted it to Usenet newsgroups, making it available to anyone who wanted it via an FTP server and mailing list. By the time Vixie began working on DNS at Berkeley, DNS was for all intents and purposes abandonware: all the original creators had since moved on.
Since there was no concept of importing code and making dependencies, embedded systems vendors copied the original code and changed the API names to suit their local engineering needs… sound familiar? Then Linux came along, and the internet E-X-P-L-O-D-E-D. You get an AOL account. And you get an AOL account… Distros had to build their first C library, and each copied some version of the old Berkeley code, whether they knew what it was or not. Often it was a copy of a copy that some other distro was using, and they made a local version forever divorced from the upstream. DSL modems are an early example of this. Now the Internet of Things is everywhere, and “all of this DNS code in all of the billions of devices are running on some fork of a fork of a fork of code that Berkeley published in 1986.”

Why does any of this matter? The original DNS bugs were written and shipped by Vixie. He went on to fix them in the 90s, but some still appear today. “For embedded systems today to still have that problem, any of those problems, means that whatever I did to fix it wasn’t enough. I didn’t have a way of telling people.”

Where Do We Go from Here?

“Sure would have been nice if we already had an internet when we were building one,” Vixie said. But we can’t go backward; we can only go forward. Vixie made it very clear: “if you can’t afford to do these things [below] then free software is too expensive for you.”

Here is some of Vixie’s advice for software producers:

- Do the best you can with the tools you have, but “try to anticipate what you’re going to have.”
- Assume all software has bugs, “not just because it always has, but because that’s the safe position to take.”
- Machine-readable updates are necessary because “you can’t rely on a human to monitor a mailing list.”
- Version numbers are must-haves for your downstream. “The people who are depending on you need to know something more than what you thought worked on Tuesday.” It doesn’t matter what the scheme is, as long as it uniquely identifies the bug level of the software.
- Cite code sources in README files and in source code comments. It will help anyone using your code and chasing bugs.
- Automate monitoring of your upstreams, review all changes, and integrate patches. “This isn’t optional.”
- Let your downstream know about changes automatically, “otherwise these bugs are going to do what the DNS bugs are doing.”

Here is the advice for software consumers:

- Your software’s dependencies are your dependencies. “As a consumer when you import something, remember that you’re also importing everything it depends on… So when you check your dependencies, you’d have to do it recursively; you have to go all the way up.”
- Uncontracted dependencies can make free software incredibly expensive, but they are an acceptable operating risk because “we need the software that everybody else is writing.”
- Orphaned dependencies require local maintenance, and therein lies the risk, because that is a much higher cost than monitoring the developments coming out of other teams. “It’s either expensive because you hire enough people and build enough automation or it’s expensive because you don’t.”
- Automate dependency upgrades (mostly), because sometimes “the license will change from one you could live with to one that you can’t or at some point someone decides they’d like to get paid” [insert adventure here].
- Specify acceptable version numbers. If versions 5+ contain the fix your software needs, say so, to make sure you don’t accidentally get an older one.
- Monitor your supply chain and ticket every release. Have an engineer review every update to determine whether it’s “set my hair on fire, work over the weekend” or “we’ll just get to it when we get to it” priority.
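Vixie’s point about checking dependencies recursively can be sketched in a few lines. The following is a minimal illustration (not anything Vixie showed), using Python’s standard `importlib.metadata` to walk an installed package’s dependency tree — every package you import drags in everything it depends on, all the way up:

```python
# Sketch: recursively enumerate a package's transitive dependencies.
# "When you check your dependencies, you have to do it recursively."
import re
from importlib.metadata import requires, PackageNotFoundError

def transitive_deps(package, seen=None):
    """Return the set of all transitive dependency names of `package`."""
    seen = set() if seen is None else seen
    try:
        reqs = requires(package) or []   # requirement strings, e.g. "urllib3<3,>=1.21.1"
    except PackageNotFoundError:
        return seen                      # not installed locally; can't inspect further
    for req in reqs:
        # Keep only the bare distribution name (drop version specs, extras, markers).
        name = re.split(r"[ ;\[<>=!~(]", req, maxsplit=1)[0]
        if name and name not in seen:
            seen.add(name)
            transitive_deps(name, seen)  # recurse: your deps' deps are your deps too
    return seen

print(sorted(transitive_deps("pip")))
```

This only inspects what is installed in the current environment; a real supply-chain audit would resolve the full tree from the package index and flag orphaned or license-changed entries.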
He closed with “we are all in this together but I think we could get it organized better than we have.” And that sure is one way to say it. There is a certain humility and grace required of someone who was on the tiny team that prevented a potential DNS collapse, has been a leader in the field for over a generation, and still has their early-career bugs (bugs they fixed 30 years ago) brought to their attention at regular intervals because adopters aren’t inspecting the source code.

The post 13 Years Later, the Bad Bugs of DNS Linger on appeared first on The New Stack.
Tackling the challenge of providing fast, smooth, jitter-free gameplay with super-low end-to-end latency is how the social media giant described its cloud gaming work in a blog post Thursday. This low-latency gaming platform could also serve as the base for Meta’s pending Metaverse, they asserted. Facebook launched its cloud gaming platform in 2020, giving users quick access to native Android and Windows mobile games across all browsers. Along with a high volume of consumer access came a high volume of developer and engineering challenges.

Network, Hosting, and Cluster Management

The first step Meta took in providing low end-to-end latency was a physical one: reducing the distance between the cloud gaming infrastructure and the players themselves. For this, Meta used edge computing, deploying at edge sites close to large populations of players. The goal of edge computing is to “have a unified hosting environment to make sure we can run as many games as possible as smoothly as possible,” Meta engineer Xiaoxing Zhu wrote. The more edge computing sites, the lower the user latency.
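A back-of-the-envelope calculation (my numbers, not Meta’s) shows why shrinking the physical distance matters: even before any processing, signal propagation in fiber puts a hard floor on round-trip latency.

```python
# Rough sketch: minimum round-trip propagation delay over optical fiber.
# Light travels roughly 200 km per millisecond in fiber (about 2/3 of c).
SPEED_IN_FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip propagation delay, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# A distant data center vs. a nearby edge site:
print(round_trip_ms(4000))  # 4000 km away -> 40.0 ms before any processing
print(round_trip_ms(100))   # 100 km edge site -> 1.0 ms
```

Real end-to-end latency adds encoding, routing, and rendering on top of this floor, which is why moving compute to the edge is the first lever to pull.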