Far from being a vibrant market with an array of competitive offerings, the business of providing so-called "last mile" Internet access has, in many markets, been reduced to a small number of access providers operating in a manner that resembles a cosy cartel, strenuously resisting the stricter disciplines of true competition. Mobile access continues to command a distinct price premium, and the choices for wireline broadband access are all too often limited to just one or two providers. Is there another option? If we look up into the sky, are there potential services that could alter this situation?
The rise of the Internet has heralded rapid changes in our society. The opportunities presented by a capable and ubiquitous communications system and a global transportation network have taken some corporations from multinational status to that of truly global mega-corporations. There are a handful of large-scale winners in this space and many losers. But this is not the first time we’ve witnessed a period of rapid technological and social change.
Few parts of the Domain Name System are filled with such levels of mythology as its root server system. Here I'd like to try to explain what it is all about and ask whether the system we have is still adequate, or whether it's time to think about some further changes.
Time for another annual roundup from the world of IP addresses. Let’s see what has changed in the past 12 months in addressing the Internet, and look at how IP address allocation information can inform us of the changing nature of the network itself.
Once more it's time to report on the experience with the inter-domain routing system over the past year, looking in some detail at metrics from the routing system that show the essential shape and behaviour of the underlying interconnection fabric of the Internet.
The inexorable progress of time clocked past the New Year, and at 23:59:60 UTC on the 31st December 2016 the leap second claimed another victim. This time Cloudflare described how the leap second caused some DNS failures in its infrastructure. What's going on here? It should not have been a surprise, yet we still see failing systems.
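The underlying failure mode is worth a moment: code that measures an interval by subtracting two wall-clock readings implicitly assumes the result can never be negative, and a leap second (or an NTP step) can break that assumption. Here is a minimal Python sketch of the distinction, not Cloudflare's actual code, which was written in Go:

    import time

    start_wall = time.time()        # wall clock: can step backwards
    start_mono = time.monotonic()   # monotonic clock: never goes backwards

    # ... some work happens here ...

    # Across a leap second this difference can come out negative,
    # and anything downstream that expects a non-negative value
    # (a sleep, a random-range bound, a rate calculation) may fail.
    wall_elapsed = time.time() - start_wall

    # The monotonic clock is the right tool for measuring intervals.
    mono_elapsed = time.monotonic() - start_mono

    # Defensive fallback when only wall-clock deltas are available:
    safe_elapsed = max(0.0, wall_elapsed)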
For many years we’ve seen domain name certificates priced as a luxury add-on, costing many times more than the original name registration fees. Let’s Encrypt has broken that model, and basic security is now freely available to anyone. But the CA model itself is not all that robust, and there are still some critical vulnerabilities that can be exploited by a well-resourced attacker. Adding DANE TLSA records to a DNSSEC-signed zone, and equipping user applications, such as browsers, with an additional DNS lookup to fetch and validate the TLSA record, is a small step, but a significant improvement to the overall security picture.
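To make that additional lookup concrete, here is a rough sketch, using the dnspython library, of a DANE-EE style check (usage 3, selector 0, SHA-256 matching). The host name is a placeholder, and a real client would also insist that the TLSA answer be DNSSEC-validated and would honour all the usage/selector/matching-type combinations rather than assuming this one:

    import ssl, hashlib
    import dns.resolver

    host, port = "www.example.net", 443   # placeholder service

    # DANE places the record at _<port>._<proto>.<host>
    answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

    # Fetch the server's certificate and compute its SHA-256 digest.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    digest = hashlib.sha256(der).digest()

    for tlsa in answers:
        # usage 3 (DANE-EE), selector 0 (full cert), matching type 1 (SHA-256)
        if (tlsa.usage, tlsa.selector, tlsa.mtype) == (3, 0, 1):
            if tlsa.cert == digest:
                print("certificate matches the TLSA record")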
Thanks to the moon, the earth's rate of rotation is slowing down. To compensate, we periodically adjust Coordinated Universal Time (UTC). On Saturday 31st December 2016, the last minute of 2016 will be extended to 61 seconds, creating the timestamp 23:59:60. Previous leap seconds have not gone completely smoothly, and there is no particular reason to think that much will have changed for this leap second.
In November I wrote about some simple tests that I had undertaken on the DNS root nameservers. The tests looked at the way the various servers responded when asked to provide a UDP DNS response that was larger than 1,280 octets. I awarded each of the name servers up to five stars depending on how they managed to serve such large responses in IPv4 and IPv6. I'd like to return to this topic by looking at one further aspect of DNS server behaviour, namely the way in which servers handle large UDP responses over IPv6.
The process of rolling the DNS root zone’s Key Signing Key (KSK) has now started. During this process there will be a period when the root zone servers’ response to a DNS query for the DNSKEY resource record of the root zone will grow from the current 864 octets to 1,425 octets. Does this present a problem? Let’s look at the DNS Root Server system and score it on how well it can cope with large responses. It seems that awarding stars is the current Internet way, so let’s see how many stars we’ll give the Root Server System for its handling of large responses.
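As an illustration of the kind of measurement involved here, the following dnspython sketch asks a root server for the root zone's DNSKEY RRset and reports the size of the UDP response. The 4,096-octet EDNS buffer size is simply a probe setting I've assumed for the exercise, not a recommendation:

    import dns.flags
    import dns.message
    import dns.query

    # Query the root zone's DNSKEY RRset with DNSSEC records included.
    query = dns.message.make_query(".", "DNSKEY",
                                   use_edns=0, payload=4096,
                                   want_dnssec=True)

    # 198.41.0.4 is a.root-servers.net.
    response = dns.query.udp(query, "198.41.0.4", timeout=5)

    truncated = bool(response.flags & dns.flags.TC)
    print(f"response size: {len(response.to_wire())} octets, "
          f"truncated: {truncated}")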
IPv4 addresses are not the only Internet number resource that has effectively run out in recent times. Another pool of Internet numbers under similar consumption pressure is the set of numbers intended to uniquely identify each network in the Internet’s inter-domain routing space: Autonomous System numbers.
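For context, the original AS number field is a 16-bit quantity, giving just 65,536 values; the 32-bit extension (RFC 6793) enlarges the pool, and RFC 5396 defines the "asdot" notation sometimes used to write the larger values. A trivial sketch of that mapping:

    def asdot(asn: int) -> str:
        # RFC 5396 "asdot" notation: 16-bit ASNs are unchanged,
        # 32-bit ASNs are written as <high 16 bits>.<low 16 bits>.
        if asn < 65536:
            return str(asn)
        return f"{asn >> 16}.{asn & 0xFFFF}"

    print(asdot(64512))    # "64512" (a 16-bit ASN)
    print(asdot(131072))   # "2.0"   (a 32-bit ASN)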
I was struck at a recent NANOG meeting by just how few presentations looked at the ISP space and the issues relating to ISP operations, and how many looked at the data centre environment. If the topics we talk to each other about are any guide, then this is an environment dominated today by data centre design and the operation of content distribution networks, while the ISP function, and particularly the transit ISP function, is waning. It’s no longer a case of getting users to content, but getting content to users. Does this mean that the role of transit for the Internet’s users is now over?
The attacks in October 2016 on the DNS infrastructure operated by Dyn have generated a lot of comment in recent days. Indeed, it’s not often that the DNS itself is prominent in mainstream news commentary, and in some ways this prominence is for all the wrong reasons! I’d like to speculate a little on what this attack means for the DNS and what we could do to mitigate such attacks in the future.
October 2016 marks a milestone in the story of the Internet. At the start of the month the United States Government let its residual oversight arrangements with ICANN (the Internet Corporation for Assigned Names and Numbers) over the operation of the Internet Assigned Numbers Authority (IANA) lapse. No single government now has a unique relationship with the governance of the protocol elements of the Internet, and it is now in the hands of a community of interested parties in a so-called Multi-Stakeholder framework. This is a unique step for the Internet and not without its attendant risks. How did we get here?
DNS OARC is the place to share research, experiences and data primarily concerned with the operation of the DNS in the Internet. Here are some highlights for me from the most recent meeting, held in October 2016 in Dallas.
We often think of the Internet as the web, or, these days, as just a set of apps. When we look at the transition to IPv6 we point to how these apps are accessible over IPv6 and mark this as progress. But the Internet is more than just these services: there is a whole substructure of supporting infrastructure, and if we are contemplating an IPv6 Internet then everything needs to change. So here I want to look at perhaps the most critical of these hidden infrastructure elements, the Domain Name System. How are we going with using IPv6 in the DNS?
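One simple way to approach this question is to ask, for a given zone, whether its name servers have AAAA records at all, i.e. whether an IPv6-only client could resolve the zone. A small dnspython sketch, with the zone name as a placeholder:

    import dns.resolver

    zone = "example.com"   # placeholder zone
    for ns in dns.resolver.resolve(zone, "NS"):
        name = str(ns.target)
        try:
            addrs = [str(a) for a in dns.resolver.resolve(name, "AAAA")]
            print(f"{name}: reachable over IPv6 at {', '.join(addrs)}")
        except dns.resolver.NoAnswer:
            print(f"{name}: no AAAA record -- IPv4 only")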
The 'traditional' cryptographic algorithm used to generate digital signatures in secure DNS (DNSSEC) has been RSA. But maybe it's time to look at a "denser" algorithm that can offer comparable cryptographic strength using much smaller digital keys. Are we ready to use ECDSA in DNSSEC?
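The "denser" point is easy to see by comparing key sizes. Here's a small sketch using the pyca/cryptography library to compare an RSA-2048 public key with an ECDSA P-256 public key, the pairing most often discussed for DNSSEC (a P-256 key is conventionally rated at a cryptographic strength at least that of RSA-2048):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, ec

    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ec_key = ec.generate_private_key(ec.SECP256R1())

    def der_len(key):
        # Size of the DER-encoded public key.
        return len(key.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo))

    print(f"RSA-2048 public key:    {der_len(rsa_key)} octets (DER)")
    print(f"ECDSA P-256 public key: {der_len(ec_key)} octets (DER)")

The smaller keys flow through to smaller DNSKEY records and smaller signatures, which matters when DNS responses must squeeze into a UDP packet.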