How are new technologies adopted in the Internet? What drives adoption? What impedes adoption? These were the questions posed at a panel session at the recent EuroDiG workshop in June.
For many years I have been a keenly interested participant in the meetings organised by the DNS Operations and Research Community, or DNS OARC. This time around, the most recent meeting headed into the online space. Here are my impressions of the material presented at the online DNS OARC 32a meeting.
Over the past couple of decades, we've constructed two quite distinct online environments. There is the enterprise network, commonly encountered at physical workplaces, and there is the consumer network, deployed across residential domains. The result is that many observed characteristics of the network have patterns that reflect the differences between these work and home environments. But what happened when the at-work workforce was sent home to work? What can the DNS tell us about the Lockdown?
A "New IP" framework was proposed to an ITU Study Group last year. This framework envisages a resurgence of a network-centric view of communications architectures, where application behaviours are moderated by network-managed control mechanisms. It's not the first time that we've seen proposals to rethink the basic architecture of the Internet's technology, and it certainly won't be the last. But is it really going to influence the evolution of the Internet? What can we observe about emerging technologies that will play a critical role in the coming years? Here's my personal selection of recent technical innovations that I would add to the set of emerging technologies likely to exercise a massive influence over the coming ten years.
I've been asked a number of times: "Why are we using a distributed trust framework where each of the RIRs publishes a trust anchor that claims the entire Internet number space?" I suspect that the question will arise again in the future, so it may be useful to record the design considerations here, in the hope that they will help those who stumble upon the same question.
I'm constantly impressed by the rather complex intricacies that are associated with running your own web server these days. A recent source of these complexities has been the PKI, the security infrastructure used to maintain secure connections over the network, and I'd like to recount my experience here, in case any others encounter the same seemingly inexplicable behaviours in their secure web service configurations.
We need a secure and trustable infrastructure. We need to be able to provide assurance that the service we are contacting is genuine, that the transaction is secured from eavesdroppers and that we leave no useful traces behind us. Why has our public key certificate system failed the Internet so badly?
Public key cryptography is the mainstay of Internet security. It relies on all of us being able to keep our private key a secret. And if it all goes wrong, well we can always get our public key certificate revoked and start again with a new key pair. But what if revocation doesn't work?
One year ago, I looked at the state of adoption of DNSSEC validation in DNS resolvers and the answer was not unreservedly optimistic. Instead of the "up and to the right" curves that show a momentum of adoption, there was a pronounced slowing down of the momentum of DNSSEC adoption. The current picture of DNSSEC adoption is certainly far more heartening, and I would like to update this earlier article on DNSSEC with more recent data.
There is something quite compelling about engineering a piece of state-of-the-art technology that is intended to be dropped off a boat and then operate flawlessly for the next twenty-five years or more in the silent depths of the world's oceans! It brings together advanced physics, marine technology and engineering to create some truly amazing pieces of networking infrastructure.
Time for another annual roundup from the world of IP addresses. Let's see what has changed in the past 12 months in addressing the Internet and look at how IP address allocation information can inform us of the changing nature of the network itself.
This second part of the report on BGP in 2019 looks at the profile of BGP updates across the year to assess whether the stability of the routing system, as measured by the level of update activity, is changing.
It has become a tradition each January for me to report on the behaviour of the inter-domain routing system over the past year, looking in some detail at some metrics from the routing system that can show the essential shape and behaviour of the underlying interconnection fabric of the Internet.
The topic of buffer sizing was the subject of a workshop at Stanford University in early December 2019. The workshop drew together academics, researchers, vendors and operators to look at this topic from their perspectives. The following are my notes from this workshop.
The 106th meeting of the IETF was in Singapore in November 2019. As usual for the IETF, there were many Working Group meetings, and this report is definitely not an attempt to cover all of these meetings or even anything close to that. Here I’ve been highly selective and picked out just the items that I found interesting from the sessions I attended.
DNS OARC held its 31st meeting in Austin, Texas from 31 October to 1 November. Here are some of my highlights from two full days of DNS presentations at this workshop.
The 77th NANOG meeting was held in Austin, Texas at the end of October and they invited Farsight’s Paul Vixie to deliver a keynote presentation. These are my thoughts in response to his presentation, and they are my interpretation of Paul’s talk and more than a few of my opinions thrown in for good measure!
In this article I'd like to look at one particular aspect of the Internet's inter-domain routing framework, namely the role of the Autonomous System (AS) Path in the operation of BGP, and in particular the use of AS Prepending.
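To illustrate the basic mechanism at play here, the following is a minimal sketch of how AS Path length influences BGP route selection and how prepending can be used to deflect inbound traffic. The AS numbers and the `best_path` helper are illustrative inventions (using private-use ASNs), and real BGP best-path selection considers local preference, MED and several other tie-breakers before AS Path length ever matters; this sketch deliberately reduces the comparison to path length alone.

```python
# Illustrative sketch only: BGP path selection reduced to AS Path length.
# ASNs 64500-64520 are from the private-use range and are hypothetical.

def best_path(routes):
    """Pick the route with the shortest AS path: a simplified stand-in
    for BGP best-path selection, ignoring local-pref, MED and the other
    tie-breakers a real BGP speaker applies first."""
    return min(routes, key=lambda as_path: len(as_path))

# Two candidate paths to the same prefix, expressed as lists of ASNs,
# with the origin AS (64510) at the end of each path.
via_a = [64500, 64510]            # two AS hops
via_b = [64501, 64520, 64510]     # three AS hops

print(best_path([via_a, via_b]))  # the shorter path, via_a, wins

# The origin AS can prepend its own ASN to the announcement it passes
# to neighbour A, artificially lengthening that path so that remote
# networks prefer the path through neighbour B instead.
via_a_prepended = [64500, 64510, 64510, 64510]
print(best_path([via_a_prepended, via_b]))  # now via_b wins
```

The point of the sketch is that prepending does not change reachability at all; it only pads the AS Path attribute, relying on every remote network's length comparison to steer traffic away from the prepended path.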
This is a report on a four-year-long experiment in advertising a 'dark' prefix on the Internet and examining the profile of unsolicited traffic that is sent to a traffic collector.
Moving the DNS from the access ISP to the browser may not necessarily enhance open competition in the DNS world. In today's Internet, just two browsers, Chrome and Safari, dominate the browser world with an estimated 80% share of all users. If the DNS becomes a browser-specific setting, then what would that mean for the DNS resolver market? And why should we care? It would be useful to understand what is going on in the DNS today, before there has been any major shift to adopt DoH or DoT by high-use applications such as browsers. Can we measure the level of DNS centrality in the Internet today?