What’s the difference between .local and .here? Or between .onion and .apple? All four of these labels are capable of being represented in the Internet’s Domain Name System as generic Top-Level Domains (gTLDs), but only two of them are in fact delegated names, while the other two cannot be delegated. It seems that the Internet no longer has a single coherent name space, but has developed a number of silent and unsignalled fracture lines, and instead of being administered by a single administrative body there are a number of folk who appear to want to have a hand on the tiller! How have we managed to get ourselves into this somewhat uncomfortable position?
The RIPE 71 meeting took place in Bucharest, Romania in November. Here are my impressions from a number of the sessions I attended that I thought were of interest. It was a relatively packed meeting held over 5 days, so this is by no means all that was presented through the week.
Every so often I hear the claim that some service or other does not support IPv6 not because of some technical issue, or some cost or business issue, but simply because the service operator is of the view that IPv6 offers an inferior level of service compared to IPv4, and that by offering the service over IPv6 they would be exposing their clients to inferior performance. But is this really the case? Is IPv6 an inferior cousin of IPv4 in terms of service performance? In this article I'll report on the results of a large-scale measurement of IPv4 and IPv6 performance, looking at the relativities of IPv6 and IPv4 performance.
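To make that kind of comparison concrete, here is a minimal sketch of the underlying idea in Python. It is emphatically not the article's measurement rig, which operates at a far larger scale; the dual-stack hostname is a placeholder assumption, and a real comparison needs many samples across many endpoints.

    # A minimal sketch: time a single TCP handshake to a dual-stack host
    # over IPv4 and then IPv6. The hostname is a placeholder; one sample
    # says little, which is why large-scale measurement matters.
    import socket
    import time

    HOST = "www.example.com"   # assumed dual-stack host
    PORT = 443

    def connect_ms(family):
        """Time one TCP connect() over the given address family."""
        addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
        start = time.monotonic()
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(5)
            s.connect(addr)
        return (time.monotonic() - start) * 1000

    for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
        try:
            print(f"{label}: {connect_ms(family):.1f} ms")
        except OSError as err:
            print(f"{label}: failed ({err})")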
One of the early refinements in the Internet protocol model was the splitting of the original Internet protocol from a single monolithic protocol specification into the Internet Protocol (IP) and a pair of transport protocols. The Internet Protocol layer is intended to be used by the internal switches within the network to forward the packet to its intended destination, while the Transport Protocol layer is intended to be used by the source and destination systems. In this article I’d like to look at what we’ve been doing since then with these transport protocols.
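As a small aside, not drawn from the article itself, the sketch below shows that split as it appears to an application: the same IP layer sits beneath both sockets, and the program merely chooses which transport semantics it wants.

    # A minimal sketch of the IP/transport split: one IP layer underneath,
    # with the application choosing transport semantics via the socket API.
    import socket

    # A reliable byte stream over IP: TCP (IP protocol number 6)
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)

    # Unreliable, unordered datagrams over IP: UDP (IP protocol number 17)
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)

    # The network's switches forward both as IP packets; only the end
    # systems interpret the transport protocol carried inside.
    print("TCP proto:", tcp.proto, "UDP proto:", udp.proto)

    tcp.close()
    udp.close()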
NANOG 65 was once again your typical NANOG meeting: a set of operators, vendors, researchers and others, meeting over 3 days, this time in Montreal in October. Here are my impressions of the meeting.
The DNS Operations, Analysis and Research Centre holds a 2-day workshop twice a year. These are my impressions of the Fall 2015 workshop, held at the start of October in Montreal.
I’m sure we’ve all heard about "the Open Internet." The expression builds upon a rich pedigree of the term "open" in various contexts. We seem to have developed the connotation that "open" is some positive attribute, and when we use the expression "Open Internet" it seems that we are lauding it in some way. But in what way? So let’s ask the question: what does the "Open Internet" mean?
A little over five years ago the root zone of the DNS was signed with DNSSEC for the first time. At the time the Root Zone operators promised to execute a change of key in five years’ time. It’s now that time, and we are contemplating a roll of the root key of the DNS. The problem is that we believe there are a number of resolvers that are not going to follow the implicit signalling of a new key value. So for some users, for some domain names, things will go dark when this key is rolled. Is there any way to predict in advance how big a problem this will be?
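For a concrete view of what is at stake, here is a minimal sketch using the dnspython library (my choice of tooling, not anything the article prescribes) that fetches the root zone's current DNSKEY RRset; the record with a flags value of 257 is the Key Signing Key that a roll would replace.

    # A minimal sketch: fetch the root zone's DNSKEY RRset and flag the
    # Key Signing Key (flags == 257), the key a root key roll replaces.
    # Requires the dnspython package (pip install dnspython).
    import dns.resolver

    for rr in dns.resolver.resolve(".", "DNSKEY"):
        role = "KSK" if rr.flags == 257 else "ZSK"
        print(role, "algorithm", rr.algorithm)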
Today’s Internet is undoubtedly the mobile Internet. Sales of all other forms of personal computers are in decline and the market focus is now squarely on tablets, “smart” phones and wearable peripherals. You might think that such significant volumes and major revenue streams would underpin a highly competitive and diverse industry base, but you’d be wrong. In 2014, 84% of all of the new mobile smart devices were using Google’s Android platform, and a further 12% were using Apple’s iOS system. This consolidation of the platform supply into just two channels is a major change. Further changes are happening. In a world as seemingly prodigious as the mobile Internet it’s scarcity that is driving many of these changes, but in this particular case it’s not the scarcity of IPv4 addresses. It’s access to usable radio spectrum.
I recall, from some years back when we were debating in Australia some national Internet censorship proposal du jour, the quip that if the Internet represented a new Global Village then Australia was trying very hard to position itself as the Global Village Idiot. And the current situation with Australia’s new Data Retention laws may well support a case for reviving that sentiment.
It has been said often enough that it’s easy to make predictions; the tough part is getting them right! And in trying to predict the manner in which APNIC will exhaust its remaining supply of IPv4 addresses, I’m pretty sure that I did not get it right in my most recent article on this topic. So I’ll try to correct that with a more detailed look at the situation.
In 2010 the Asia Pacific Regional Address Policy community adopted a policy that permitted address holders in the region to transfer address registration records, enabling an aftermarket in IPv4 addresses to operate with the support of the APNIC registry function. While APNIC was still able to allocate addresses to meet demand there was very little in the way of activity in this market, but once APNIC was down to its last /8 of addresses, in April 2011, the level of transfer activity picked up. In this article I’d like to take a more detailed look at APNIC’s transfer log and see what it can tell us about the level of activity in the address market in the Asia Pacific region.
It has been over 4 years since APNIC, the Regional Internet Registry for the Asia Pacific region, handed out its last “general use” allocation of IPv4 addresses. Since April 2011 APNIC has been restricted to handing out addresses from a “last chance” address pool, and has limited the number of addresses allocated to each applicant. In this article I’d like to review where APNIC is up to with its remaining pools of IPv4 addresses.
A few weeks ago I wrote about Apple's IPv6 announcements at the Apple Developers Conference. While I thought that in IPv6 terms Apple gets it, the story was not complete and there were a number of aspects of Apple's systems that were not quite there with IPv6. So I gave them a 7/10 for their IPv6 efforts. Time to reassess that score in the light of a few recent posts from Apple.
Most of the time, mostly everywhere, most of the Internet appears to work just fine. Indeed, it seems to work well enough that when it goes wrong in a significant way it becomes fodder for headlines in the industry press. But there are some valuable lessons to be learned from such route leaks about approaches to routing security.
In the coming weeks another Regional Internet Registry will reach into its inventory of available IPv4 addresses to hand out, and will find that there is nothing left. This is by no means a surprise: the depletion of IPv4 addresses in the Internet could be seen as one of the longest slow-motion train wrecks in history. As of mid June 2015 ARIN has 2.2 million addresses left in its available pool, and at the current allocation rate it will take around 30 days to run through this remaining pool. What does this mean for IPv6?
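That 30-day figure is simple run-rate arithmetic; spelled out, 2.2 million addresses consumed in roughly 30 days implies an allocation rate in the order of 70,000 addresses per day.

    # The run-rate arithmetic behind the estimate above, spelled out.
    remaining = 2_200_000        # ARIN's available pool, mid June 2015
    days_left = 30               # the estimate quoted above
    print(f"{remaining / days_left:,.0f} addresses/day")  # ~73,333/day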
It’s Apple’s Developers Conference time again, and amongst the various announcements that week, in the “Platforms State of the Union” presentation, was the mention of some recent IPv6 developments by Apple. As far as supporting IPv6 is concerned Apple still appear to get it. But do they really get all of it?
The Transmission Control Protocol (TCP) is a core protocol of the Internet networking protocol suite. It transforms the underlying unreliable datagram delivery service provided by the IP protocol into a reliable data stream, a transformation that was undoubtedly the single greatest transformative moment in the evolution of computer networks. TCP is now some 40 years old, but that doesn’t mean that it has been frozen over all these years.
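By way of illustration (a minimal sketch, not anything from the article), the following runs a tiny echo over the loopback: the application reads and writes a byte stream, while the kernel’s TCP implementation takes care of segmentation, acknowledgement and retransmission over IP’s unreliable datagram service.

    # A minimal sketch of TCP's stream abstraction: an echo over loopback.
    # The application sees only a reliable, ordered byte stream; the
    # kernel's TCP handles segmentation, ACKs and retransmission.
    import socket
    import threading

    def echo_once(listener):
        conn, _ = listener.accept()
        with conn:
            conn.sendall(conn.recv(1024))

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))     # bind to any free port
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=echo_once, args=(listener,)).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", port))
        client.sendall(b"hello")        # delivered reliably and in order
        print(client.recv(1024))        # b'hello'

    listener.close()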
The turning of the DNS from a distributed database query tool into a malicious weapon in the cyber warfare arena has had profound impacts on thinking about the DNS. I remember hearing the rallying cry some years back: “Let’s all work together to find all these open resolvers and shut them down!” These days I don’t hear that any more. It seems that, like spam in email, we’ve quietly given up on eradication, and are now focusing on how to preserve service in a toxic world. I suppose that this is yet another clear case of markets in action: there is no money in eradication, but there is money in meeting a customer’s requirement to allow their service to work under any circumstances. We’ve changed our self-perception from being the public DNS police to private mercenaries who work diligently to protect the interests of our paying customers. We are being paid to care about the victim, not to catch the attacker or even to prevent the attack.