At IETF 118 in November 2023 I attended the meeting of the Measurement and Analysis of Protocols Research Group, and here are my impressions from that meeting.
The IETF met in Prague in the first week of November 2023, and, as usual, there was a flurry of activity in the DNS-related Working Groups. Here's a roundup of those DNS topics I found to be of interest at that meeting.
There is a continual stream of routing anomalies seen in today's Internet. Some are the result of operational mishaps, some are malicious and deliberate, but all of them have some impact. The latest routing mishap in Australia affected some 10 million customers when all of their services, including telephony, IP, mobile and fixed services, stopped. How can we enforce a set of requirements for service operators to do a better job? Where's the Routing Police to chase down these incidents and find out where poor operational practices are compromising the stability of the public Internet?
If we are going to update RFC 3901, "DNS IPv6 Transport Guidelines," and offer a revised set of guidelines that are more positive guidelines about the use of IPv6 in the DNS, then what should such updated guidelines say?
At APNIC Labs we publish a number of measurements of the deployment of various technologies that are being adopted on the Internet. Here we will look at how we measure the adoption of the signing of Route Origination Attestations (ROAs) as part of the framework for securing inter-domain routing on the Internet using the digital credential framework provided by the Resource Public Key Infrastructure (RPKI).
At APNIC Labs we publish a number of measurements of the deployment of various technologies that are being adopted on the Internet. Here we will look at how we measure the adoption of DNSSEC validation.
Distributed routing protocols rely on each active router processing routing updates in an identical manner. Given that there are so many implementations of the BGP routing protocol, the role of a clear standard specification is critical. This extends to the handling of error conditions. What happens when some implementations handle errors in a different manner to all the others?
Trust is such a difficult concept in any context, and certainly computer networks are no exception. How can you be assured that your network infrastructure is running on authentic platforms, both hardware and software, and that its operation has not been compromised in any way?
In 2005 the UN-sponsored World Summit on the Information
Society (WSIS) eventually agreed on a compromise approach that deferred
any determination on the matter of the governance of the Internet
and instead decided to convene a series of meetings on the underlying policy principles relating to Internet Governance. Hence, we saw the inauguration of a series of Internet
Governance Forum (IGF) meetings. These forums were intended to be
non-decisional forums for all stakeholders to debate the issues. Eighteen years
later this is still going on. After so long is there anything left to talk about?
OARC held a 2-day meeting in September in Danang, Vietnam, with a
set of presentations on various DNS topics. Here are some observations
that I picked up from the presentations made at that meeting.
One of the big changes within the Internet over the last decade or so has
been the shift to replicated services. Service replication allows each individual
service point to be positioned closer to clusters of users. The question
now becomes who selects the "best" service point to use in response to
each user's service request, and how. It seems that in many cases the answer
is the DNS, and not the BGP routing protocol.
So far, the progress of the silicon technology at the heart of this revolution has been truly prodigious. The processes of assembling silicon wafers and the superimposition of tracks and gates have been the subject of continual refinement, and some 75 years after the invention of the transistor we are now able to cram almost a trillion of them onto a silicon wafer not much bigger than a fingernail. Have we reached the end of this silicon road, or is there more to come?
It's challenging to measure the uptake of DNSSEC in the DNS. There are just so many aspects of the DNS that are occluded from view! How many DNS names are there in the
DNS? How many of these are signed? How many queries are processed by DNS infrastructure? How many queries use DNSSEC validation? We present a new measurement here which is
a query-weighted view of the DNS, looking at the volume of queries for DNS names that are DNSSEC-signed as a proportion of the total query load.
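The arithmetic behind such a query-weighted view can be sketched in a few lines. This is a minimal illustration, not the measurement system itself: the query log, the names, and the set of signed zones below are all hypothetical, and in practice determining whether a name is signed would involve checking for DS and RRSIG records in the relevant zones.

```python
# A minimal sketch of a query-weighted DNSSEC signing metric.
# All names and counts are illustrative, not real measurement data.

from collections import Counter

# Hypothetical query log: queried name -> observed query count.
query_counts = Counter({
    "signed.example.": 700,
    "unsigned.example.": 250,
    "also-unsigned.example.": 50,
})

# Hypothetical set of DNSSEC-signed names. A real measurement would
# derive this from the DNS itself (e.g. presence of DS/RRSIG records).
signed_names = {"signed.example."}

total_queries = sum(query_counts.values())
signed_queries = sum(count for name, count in query_counts.items()
                     if name in signed_names)

# Query-weighted signing rate: the fraction of the total query load
# that is directed at DNSSEC-signed names.
signed_fraction = signed_queries / total_queries
print(f"{signed_fraction:.1%}")  # 700 of 1,000 queries -> 70.0%
```

The point of weighting by queries rather than by name counts is that a small number of heavily queried signed zones can represent a far larger share of the query load than their share of the namespace would suggest.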
The IEPG meets for a couple of hours before each IETF meeting. It's a somewhat eclectic collection of presentations, with some vague common thread of relevance to Internet operations. Here's a summary of my impressions from these IEPG session presentations for IETF 117.
After the flurry of work in various aspects of DNS privacy, the IETF’s agenda for DNS has shifted towards more maintenance and update. This does not mean that the volume of work has abated in any way, but it has shifted from the more focussed stance of previous meetings to a broader diversity of topics in operating DNS infrastructure.
There seem to be two dominant themes in the enumeration of potential perils that face the Internet these days, and oddly enough they seem to me to be opposite in nature.
The DNS is a strange and at times surprising environment. One could take a simple perspective and claim that the aim of the DNS is to translate DNS names into IP addresses. And you wouldn’t be wrong, but it's also so much more. Most of the time when we analyse the behaviour of the DNS we look at the way in which names are resolved by the DNS infrastructure, but there is also another view of the DNS. What do we see when we look at DNS queries for names that do not exist in the DNS?
Some 50 years ago, at the Palo Alto Research Centre of that renowned photocopier company Xerox, a revolutionary approach to local digital networks was born. On the 22nd of May 1973 Bob Metcalfe authored a memo that described "X-Wire", a 3Mbps common bus office network system developed at Xerox's Palo Alto Research Center (PARC). There are very few networking technologies from the early 1970s that have proved to be so resilient (TCP/IP is the only other major networking technology from that era that I can recall), so it’s worth looking at Ethernet a little closer in order to see why it has enjoyed such an unusual longevity.