It has been over 4 years since APNIC, the Regional Internet Registry for the Asia Pacific region, handed out its last “general use” allocation of IPv4 addresses. Since April 2011 APNIC has been restricted to handing out addresses from a “last chance” address pool, and has limited the number of addresses allocated to each applicant. In this article I’d like to review where APNIC is up to with its remaining pools of IPv4 addresses.
A few weeks ago I wrote about Apple's IPv6 announcements at the Apple Developers Conference. While I thought that in IPv6 terms Apple gets it, the story was not complete and there were a number of aspects of Apple's systems that were not quite there with IPv6. So I gave them a 7/10 for their IPv6 efforts. Time to reassess that score in the light of a few recent posts from Apple.
Most of the time, mostly everywhere, most of the Internet appears to work just fine. Indeed, it seems to work well enough that when it goes wrong in a significant way it becomes fodder for headlines in the industry press. But there are some valuable lessons to be learned from these route leaks about approaches to routing security.
In the coming weeks another Regional Internet Registry will reach into its inventory of available IPv4 addresses to hand out and it will find that there is nothing left. This is by no means a surprise, and the depletion of IPv4 addresses in the Internet could be seen as one of the longest slow-motion train wrecks in history. As of mid June 2015 ARIN has 2.2 million addresses left in its available pool, and at the current allocation rate it will take around 30 days to run through this remaining pool. What does this mean for IPv6?
It’s Apple’s Developers Conference time again, and in amongst the various announcements this week, in the “Platforms Status of the Union” presentation, was a mention of some recent IPv6 developments by Apple. As far as supporting IPv6 is concerned, Apple still appear to get it. But do they really get all of it?
The Transmission Control Protocol (TCP) is a core protocol of the
Internet protocol suite. It transforms the underlying unreliable
datagram delivery service provided by IP into a reliable data
stream. That transformation was undoubtedly the single greatest
step in the evolution of computer networks. TCP is now some 40
years old, but
that doesn’t mean that it has been frozen over all these years.
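As a small illustration of that reliable stream abstraction, here is a minimal sketch in Python (the host name and port are placeholders, not anything from the article): the application simply writes and reads bytes, while the retransmission, ordering and flow control that make the connection behave like a clean pipe all happen inside TCP.

    # A minimal sketch of TCP's reliable byte-stream abstraction: the
    # application just writes and reads bytes, while loss recovery,
    # ordering and flow control are handled by TCP underneath.
    # "example.net" and port 80 are illustrative placeholders.
    import socket

    with socket.create_connection(("example.net", 80), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.net\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):     # read until the peer closes
            reply += chunk

    # The status line arrives intact and in order, however IP handled
    # the underlying datagrams.
    print(reply.split(b"\r\n", 1)[0].decode(errors="replace"))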
The turning of the DNS from a distributed database query tool
into a malicious weapon in the cyber warfare arena has had
profound impacts on the thinking about the DNS. I remember
hearing the rallying cry some years back: “Let's all work together
to find all these open resolvers and shut them down!” These days
I don't hear that any more. It seems that, like SPAM in email,
we’ve quietly given up on eradication, and are now focusing on
how to preserve service in a toxic world. I suppose that this is
yet another clear case of markets in action – there is no money
in eradication, but there is money in meeting a customer’s
requirement to allow their service to work under any
circumstances. We’ve changed our self-perception from being the
public DNS police to private mercenaries who work diligently to
protect the interests of our paying customers. We are being paid
to care about the victim, not to catch the attacker or even to
prevent the attack.
In those circles where Internet prognostications abound and policy
makers flock to hear grand visions of the future, we often hear
about the boundless future represented by “The Internet of Things”.
In the vision of the Internet of Things we are going to expand the
Internet beyond people and press on with connecting up our world
using billions of these chattering devices in every aspect of our
lives. What do we know about the “things” that are already
connected to the Internet? Some of them are not very good. In fact
some of them are just plain stupid. And this stupidity is toxic, in
that their sometimes inadequate models of operation and security can
affect others in potentially malicious ways.
It has been observed that the most profound technologies are those
that disappear. They weave themselves into the fabric of everyday
life until they are indistinguishable from it, and are notable only
by their absence. So how should we regard the Internet? Is it like
large-scale electric power generation: a technological feat that is
quickly taken for granted and largely ignored? Are we increasingly
seeing the Internet in terms of the applications and services that
sit upon it and just ignoring how the underlying systems are
constructed? To what extent is the mobile Internet driving this
change in perception of the Internet as a technology we simply
assume is always available, anytime and anywhere? What is happening
in the mobile world?
On February 26 of this year the Federal Communications Commission
of the United States will vote on a proposed new ruling on the
issue of "Network Neutrality" in the United States, bringing into
force a new round of measures that are intended to prevent certain
access providers from deliberately differentiating service
responses on the carriage services that they provide.
Time for another annual roundup from the world of IP addresses. What
happened in 2014 and what is likely to happen in 2015? This is an update
to the reports prepared at the same time in previous years, so let's see
what has changed in the past 12 months in addressing the Internet, and
look at how IP address allocation information can inform us of the
changing nature of the network itself.
The Border Gateway Protocol, or BGP, has been holding the Internet
together for more than two decades, and nothing seems to be falling
off the edge so far. As far as we can tell everyone can still see
everyone else, assuming that they want to be seen, and the
distributed routing system appears to be working smoothly. All
appears to be working within reasonable parameters, and there is no
imminent danger of some routing catastrophe, as far as we can tell.
For a protocol designed some 25 years ago, when the Internet of
that time contained some 10,000 constituent networks, it's done well
to scale fifty-fold, to carry in excess of half a million routed
elements by the end of 2014.
The theme of a workshop, held at the start of December 2014 in Hong
Kong, was the consideration of further scaling of the root server
system, and the 1½ day workshop was scoped to consider approaches
that go beyond the default activity of adding further anycast
instances to the existing 13 root server anycast constellations.
This was a workshop operating on at least
three levels. Firstly there was the overt agenda of working through
a number of proposed approaches that could improve the services
provided by the DNS root service. The second was an unspoken agenda
concerned with protecting the DNS from potential national measures
that would “fragment” the DNS name space into a number of spaces,
which included, but was by no means limited to, the DNS blocking
activities that occur at national levels. The third level, and an
even less acknowledged agenda, was that there are various groups who
want to claim a seat at the Root Server table.
The Internet's Domain Name System is a modern day miracle. It may
not represent the largest database that has ever been built, but
nevertheless it's truly massive. The DNS is consulted every time we
head to a web page, every time we send an email message, or in fact
every time we initiate almost any transaction on the Internet. We
assume a lot about the DNS. For example, content distribution
networks are observed to use the location of a user's DNS resolver
as a proxy for the location of the user itself. How robust is
this assumption of co-locality of users and their resolvers? Are
users always located "close" to their resolvers? More generally,
what is the relationship between the end user, and the DNS
resolvers that they use? Are they in fact closely related? Or is
there widespread use of distant resolvers?
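As a rough, resolver-level illustration of that question, the sketch below uses the dnspython library together with the whoami.akamai.net diagnostic name (an externally operated service whose continued behaviour is an assumption here, not something described in the article) to compare the resolver we think we are using with the resolver address that the authoritative side actually sees; a mismatch hints at forwarding chains or distant public resolvers in the path.

    # A minimal sketch: compare the locally configured resolver with the
    # address that an authoritative server actually sees for our queries.
    # Assumes the dnspython package and the third-party diagnostic name
    # whoami.akamai.net, which returns the querying resolver's address.
    import dns.resolver

    resolver = dns.resolver.Resolver()       # uses the system's resolver config
    configured = resolver.nameservers[0]     # the resolver we think we use

    answer = resolver.resolve("whoami.akamai.net", "A")
    seen_upstream = answer[0].to_text()      # resolver address seen upstream

    print(f"configured resolver:    {configured}")
    print(f"resolver seen upstream: {seen_upstream}")
    if configured != seen_upstream:
        print("queries are forwarded, or a resolver farm is in use")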
It's been more than a year since Edward Snowden released material
concerning the activities of US agencies in the area of
cyber-intelligence gathering. A year later, and with allegations
of various forms of cyber spying flying about, it's probably
useful to ask some more questions. What is a reasonable
expectation about privacy and the Internet? Should we now consider
various forms of digital stalking to be "normal"? To what extent
can we see information relating to individuals'
activities online being passed to others?
Yes, that's a cryptic topic, even for an article that addresses
matters of the use of cryptographic algorithms, so congratulations
for getting even this far! This is a report of an experiment
conducted in September and October 2014 by the authors to measure
the extent to which deployed DNSSEC-validating resolvers fully
support the use of the Elliptic Curve Digital Signature Algorithm
(ECDSA) with curve P-256.
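To give a sense of what one probe in such a measurement might look like, here is a hedged sketch using the dnspython library: it sends a query with the DNSSEC-OK bit set to a resolver under test and checks for the Authenticated Data (AD) flag, the usual sign that the resolver validated the answer. The query name is a placeholder standing in for a zone known to be signed with ECDSA P-256 (DNSSEC algorithm 13), and the resolver address is also just an example; neither comes from the experiment itself.

    # A rough sketch of a single-resolver probe: does this resolver set
    # the AD (Authenticated Data) flag for a zone signed with ECDSA P-256?
    # "ecdsa-signed.example." is a placeholder; a real test would query a
    # zone known to be signed with algorithm 13 (ECDSA P-256, SHA-256).
    import dns.flags
    import dns.message
    import dns.query

    RESOLVER = "8.8.8.8"                 # resolver under test (example only)
    QNAME = "ecdsa-signed.example."      # placeholder ECDSA-signed name

    query = dns.message.make_query(QNAME, "A", want_dnssec=True)
    response = dns.query.udp(query, RESOLVER, timeout=5)

    if response.flags & dns.flags.AD:
        print("resolver validated the ECDSA-signed response (AD set)")
    else:
        print("no AD flag: resolver did not validate, or lacks ECDSA support")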
It has been a very busy period in the domain of computer security.
What with "shellshock", "heartbleed" and NTP monlink adding to the
background of open DNS resolvers, port 445 viral nasties, SYN attacks
and other forms of vulnerability exploits, it's getting very hard to
see the forest for the trees. We are spending large amounts of
resources in reacting to various vulnerabilities and attempting to
mitigate individual network attacks, but are we making overall
progress? What activities would constitute "progress" anyway?