On February 26 of this year the Federal Communications Commission
of the United States will vote on a proposed new ruling on the
issue of "Network Neutrality", bringing into force a new round of
measures that are intended to prevent certain access providers
from deliberately differentiating service responses on the
carriage services that they provide.
Time for another annual roundup from the world of IP addresses. What
happened in 2014 and what is likely to happen in 2015? This is an update
to the reports prepared at the same time in previous years, so let's see
what has changed in the past 12 months in addressing the Internet, and
look at how IP address allocation information can inform us of the
changing nature of the network itself.
The Border Gateway Protocol, or BGP, has been holding the Internet
together for more than two decades, and nothing seems to be falling
off the edge so far. As far as we can tell everyone can still see
everyone else, assuming that they want to be seen, and the
distributed routing system appears to be working smoothly. All
appears to be working within reasonable parameters, and there is no
imminent danger of a routing catastrophe. For a protocol designed
some 25 years ago, when the Internet of that time contained some
10,000 constituent networks, it has done well to scale fifty-fold,
carrying in excess of half a million routed elements by the end of
2014.
The theme of a workshop held at the start of December 2014 in Hong
Kong was the further scaling of the root server system. The 1½-day
workshop was scoped to consider approaches beyond the default
activity of adding further anycast instances to the existing 13
root server anycast constellations. This was a workshop operating
on at least three levels. First, there was the overt agenda of
working through a number of proposed approaches that could improve
the services provided by the DNS root service. The second was an
unspoken agenda concerned with protecting the DNS from potential
national measures that would “fragment” the DNS name space into a
number of spaces, which include, but are by no means limited to,
the DNS blocking activities that occur at national levels. The
third level, an even less acknowledged agenda, was that there are
various groups who want to claim a seat at the Root Server table.
The Internet's Domain Name System is a modern day miracle. It may
not represent the largest database that has ever been built, but
nevertheless it's truly massive. The DNS is consulted every time we
head to a web page, every time we send an email message, or in fact
every time we initiate almost any transaction on the Internet. We
assume a lot about the DNS. For example, content distribution
networks are observed to use the location of a user's DNS resolver
as a proxy for the location of the user. How robust is this
assumption of co-locality of users and their resolvers? Are users
always located "close" to their resolvers? More generally,
what is the relationship between the end user, and the DNS
resolvers that they use? Are they in fact closely related? Or is
there widespread use of distant resolvers?
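To make that question concrete, the sketch below (illustrative only, and not part of the article's measurement; the query name and resolver addresses are placeholders) sends the same query to two different public resolvers and compares the answers. A CDN-hosted name queried through resolvers in different locations will often return different addresses, which is exactly the behaviour that relies on the resolver standing in for the user.

    import dns.message
    import dns.query
    import dns.rdatatype

    # Placeholder name and resolver addresses, purely for illustration.
    qname = "www.example.com."
    resolvers = {"resolver-A": "8.8.8.8", "resolver-B": "9.9.9.9"}

    for label, ip in resolvers.items():
        query = dns.message.make_query(qname, dns.rdatatype.A)
        response = dns.query.udp(query, ip, timeout=3)
        # Collect the A records in the answer section for comparison.
        addrs = sorted(
            rdata.address
            for rrset in response.answer
            if rrset.rdtype == dns.rdatatype.A
            for rdata in rrset
        )
        print(f"{label} ({ip}): {addrs}")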
It's been more than a year since Edward Snowden released material
concerning the activities of US agencies in the area of
cyber-intelligence gathering. A year later, and with allegations
of various forms of cyber spying flying about, it's probably
useful to ask some more questions. What is a reasonable
expectation about privacy and the Internet? Should we now consider
various forms of digital stalking to be "normal"? To what extent
can we see information relating to individuals'
activities online being passed to others?
Yes, that's a cryptic topic, even for an article that addresses
matters of the use of cryptographic algorithms, so congratulations
for getting even this far! This is a report of an experiment
conducted in September and October 2014 by the authors to measure
the extent to which deployed DNSSEC-validating resolvers fully
support the use of the Elliptic Curve Digital Signature Algorithm
(ECDSA) with curve P-256.
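To give a feel for what such a measurement looks for, here is a minimal sketch (not the authors' measurement rig; the query name and resolver address are placeholders) using Python and dnspython. A resolver that fully supports ECDSA will validate an answer from a zone signed with algorithm 13 (ECDSA P-256) and set the AD bit in its response.

    import dns.flags
    import dns.message
    import dns.query

    # Placeholder: a name assumed to be signed with ECDSA P-256 (algorithm 13),
    # queried through a resolver assumed to perform DNSSEC validation.
    qname = "ecdsa-signed.example.com."
    resolver_ip = "8.8.8.8"

    query = dns.message.make_query(qname, "A", want_dnssec=True)
    response = dns.query.udp(query, resolver_ip, timeout=3)

    if response.flags & dns.flags.AD:
        print("AD bit set: the resolver validated the ECDSA-signed answer")
    else:
        print("No AD bit: the resolver did not, or could not, validate algorithm 13")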
It has been a very busy period in the domain of computer security.
What with "shellshock", "heartbleed" and NTP monlink adding to the
background of open DNS resolvers, port 445 viral nasties, SYN attacks
and other forms of vulnerability exploits, it's getting very hard to
see the forest for the trees. We are spending large amounts of
resources in reacting to various vulnerabilities and attempting to
mitigate individual network attacks, but are we making overall
progress? What activities would constitute "progress" anyway?
At the NANOG meeting in Baltimore this week I listened to a
presentation by Patrick Gilmore on “The Open Internet Debate: Section
706 vs Title II”. It’s true that this is a title that would normally
induce a comatose reaction from any audience, but don’t let the title
put you off. Behind this is an impassioned debate about the nature of
the retail Internet for the United States, and, I suspect, a debate
about the Internet itself and the nature of the industry that provides
it.
There is a careful policy path to be followed that encourages
continued investment and innovation in national
telecommunications-related infrastructure and services, while at the
same time avoiding the formation of market distortions and
inefficiencies. What helps in this regulatory process is clear
information about the state of the industry itself. One of those
pieces of information concerns the market scope of the retail
Internet Service Provider sector. To put it another way, how “big” is
a particular network? How many customers does it serve? Is its
market share increasing or falling?
The 12th August 2014 was widely reported as a day when the Internet
collapsed. Despite the sensational media reports the following day,
the condition was not fatal, and perhaps it could be more
reasonably reported that some parts of the Internet were having a
bad hair day. What was happening was that the Internet’s growth had
just exceeded the default configuration limits of certain models of
network switching equipment. In this article I'll look at how the
growth of the routing table and the scaling in the size of
transmission circuits impact on the internal components of network
routing equipment.
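As a back-of-envelope illustration of the underlying issue (every figure below is a placeholder, not a number taken from this article), the question is simply when a steadily growing routing table runs into a fixed forwarding-hardware slot limit:

    # Back-of-envelope sketch; all figures are illustrative only.
    table_size_now = 500_000      # routes carried today
    growth_per_year = 50_000      # net new routes per year
    hardware_limit = 512_000      # default forwarding-table slots on some platforms

    years_remaining = (hardware_limit - table_size_now) / growth_per_year
    print(f"At this growth rate the default limit is reached in "
          f"about {years_remaining:.1f} years")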
If you’re playing in the DNS game, and you haven’t done so already,
then you really should be considering turning on security in your
part of the DNS by enabling DNSSEC. There are various forms of
insidious attack that start with perverting the DNS, and end with the
misdirection of an unsuspecting user. DNSSEC certainly allows a DNS
resolver to tell the difference between an authentic response and
misdirection. But there's no such thing as a free lunch, and the
decision to turn on DNSSEC is not without some additional cost in
terms of traffic load and resolution time. In this article, I'll take
our observations from running a large scale DNSSEC adoption
measurement experiment and apply them to the question: What’s the
incremental cost when turning on DNSSEC?
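One way to get an intuitive feel for that incremental cost (a sketch under assumed conditions, not the experiment described here; the zone name and resolver address are placeholders) is to issue the same query with and without DNSSEC records requested and compare response size and elapsed time:

    import time
    import dns.message
    import dns.query

    resolver_ip = "8.8.8.8"     # placeholder validating resolver
    qname = "example.com."      # placeholder DNSSEC-signed zone

    for with_dnssec in (False, True):
        query = dns.message.make_query(qname, "A", want_dnssec=with_dnssec)
        start = time.time()
        response = dns.query.udp(query, resolver_ip, timeout=3)
        elapsed_ms = (time.time() - start) * 1000.0
        # Large signed responses may be truncated over UDP and need a TCP retry;
        # that retry is itself part of the incremental cost being measured.
        print(f"DNSSEC requested: {with_dnssec}  "
              f"size: {len(response.to_wire())} bytes  time: {elapsed_ms:.1f} ms")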
There is an emerging picture that while networks, and network
operators, make convenient targets for various forms of national
security surveillance efforts, the reality of today’s IP
networks is far more complex, and Internet networks are
increasingly ignorant about what their customers do. The result
is that it's now quite common for Internet networks not to have
the information that these security agencies are after. Not only
can moderately well-informed users hide their activities from
their local network, but increasingly this has been taken out of
the hands of users, as the applications we have on our
smartphones, tablets and other devices are increasingly making
use of the network in ways that are completely opaque to the
network provider.
August 2014 is proving yet again to be an amusing month in the
Australian political scene, and in this case the source of the
amusement was watching a number of Australian politicians
fumble around the topic of digital surveillance and proposed
legislation relating to data retention measures.
At APNIC Labs we’ve been working on developing a new approach to
navigating through some of our data sets that describe aspects of
IPv6 deployment, the use of DNSSEC and some measurements
relating to the current state of BGP.
I hear the virtues of the “open Internet” being extolled so much
these days that I can’t help but wonder what exactly we are
referring to. So let’s ask the question. What is an “open”
Internet?
The recent NANOG 61 meeting was a pretty typical NANOG meeting,
with a plenary stream, some interest group sessions, and an ARIN
Public Policy session. The meeting attracted some 898 registered
attendees, which was the biggest NANOG to date. No doubt the 70
registrations from Microsoft helped in this number, as the
location for NANOG 61 was in Bellevue, Washington State, but
even so the interest in NANOG continues to grow, and there was a
strong European contingent, as well as some Japanese and a
couple of Australians. The meeting continues to have a rich set
of corridor conversations in addition to the meeting schedule.
These corridor conversations are traditionally focused on
peering, but these days there are a number of address brokers,
content networks, vendors and niche industry service providers
added to the mix. Here are my impressions of some of the
presentations at NANOG 61.
It's been an interesting couple of months in the ongoing tensions
between Internet carriage and content service providers,
particularly in the United States. The previous confident
assertion was that the network neutrality regulatory measures in
that country had capably addressed these tensions. While the
demands of the content industry continue to escalate as the
Internet rapidly expands into video content streaming models, we
are seeing a certain level of reluctance from the carriage
providers to continually accommodate these expanding demands
within their networks through ongoing upgrades of their own
capacity without any impost on the content provider. The veneer
of network neutrality is cracking under the pressure, and the
arrangements that attempted to isolate content from carriage
appear to be failing. What's going on in this extended saga about
the tensions between carriage and content?