At APNIC Labs we’ve been developing a new approach to
navigating through some of our data sets that describe aspects of
IPv6 deployment, the use of DNSSEC and some measurements
relating to the current state of BGP.
I hear the virtues of the “open Internet” being extolled so much
these days that I can’t help but wonder what exactly we are
referring to. So let’s ask the question: what is an “open”
Internet?
The recent NANOG 61 meeting was a pretty typical NANOG meeting,
with a plenary stream, some interest group sessions, and an ARIN
Public Policy session. The meeting attracted some 898 registered
attendees, making it the biggest NANOG to date. No doubt the 70
registrations from Microsoft helped boost this number, as NANOG 61
was held in Bellevue, Washington State, but
even so the interest in NANOG continues to grow, and there was a
strong European contingent, as well as some Japanese and a
couple of Australians. The meeting continues to have a rich set
of corridor conversations in addition to the meeting schedule.
These corridor conversations are traditionally focused on
peering, but these days there are a number of address brokers,
content networks, vendors and niche industry service providers
added to the mix. Here are my impressions of some of the
presentations at NANOG 61.
It's been an interesting couple of months in the ongoing tensions
between Internet carriage and content service providers,
particularly in the United States. The previous confident
assertion was that the network neutrality regulatory measures in
that country had capably addressed these tensions. While the
demands of the content industry continue to escalate as the
Internet rapidly expands into video content streaming models, we
are seeing a certain level of reluctance from the carriage
providers to continually accommodate these expanding demands
within their networks through ongoing upgrades of their own
capacity without any impost on the content provider. The veneer
of network neutrality is cracking under the pressure, and the
arrangements that attempted to isolate content from carriage
appear to be failing. What's going on in this extended saga about
the tensions between carriage and content?
I’ve often heard that security is hard. And good security is very
hard. Despite the best of intentions, and the investment of
considerable care and attention in the design of a secure system,
sometimes it takes the critical gaze of experience to sharpen the
focus and understand what’s working and what’s not. We saw this with
the evolution of the security framework in the DNS, where it took
multiple iterations over 10 or more years to come up with a DNSSEC
framework that was able to gather a critical mass of acceptance. So
before we hear cries that the deployed volume of RPKI technology
means that it’s too late to change anything, let’s take a deep breath
and see what we've learned so far from this initial experience, and
see if we can figure out what's working and what's not, and what we
may want to reconsider.
There was a story that was distributed around the newswire
services at the start of February this year, reporting that we
had just encountered the “biggest DDoS attack ever”, in this case an
NTP-based attack. What’s going on? Why are these supposedly
innocuous, and conventionally all but invisible, services suddenly
turning into venomous daemons? How have the DNS and NTP been
turned against us in such a manner? And why have these attacks
managed to overwhelm our conventional cyber defences?
These days we have become used to a world that operates on a
consistent time standard, and to our computers operating at
sub-second accuracy. But how do they do so? In this
article I will look at how a consistent time standard is spread
across the Internet, and examine the operation of the Network Time
Protocol (NTP).
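
As a taste of how simple the basic exchange can be, here is a minimal
sketch of an SNTP-style client query in Python. It is illustrative only:
the server name below is a placeholder, and a real NTP client would also
account for round-trip delay and steer the local clock gradually rather
than trusting a single reply.

    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"    # placeholder: any reachable NTP server
    NTP_UNIX_OFFSET = 2208988800   # seconds between 1900-01-01 and 1970-01-01

    def sntp_query(server=NTP_SERVER, timeout=2.0):
        # 48-byte client request: LI=0, Version=3, Mode=3 (client)
        request = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(request, (server, 123))
            reply, _ = s.recvfrom(48)
        # The Transmit Timestamp's seconds field sits at bytes 40-43
        ntp_seconds = struct.unpack("!I", reply[40:44])[0]
        return ntp_seconds - NTP_UNIX_OFFSET

    print("server time:", time.ctime(sntp_query()))
    print("local time: ", time.ctime(time.time()))

Even this toy exchange shows the core idea: the client asks a server for
its notion of the current time, and the rest of NTP is about doing that
accurately and robustly.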
When looking at the Internet's Inter-domain routing space, the
number of routed entries in the routing table is not the only
metric of the scale of the routing space – it’s also what the
routing protocol, BGP, does with this information that matters.
As the routing table increases in size, do we see a corresponding
increase in the number of updates generated by BGP as it attempts
to find a converged state? What can we see when we look at the
profile of dynamic updates within BGP, and can we make some
projections here about the likely future for BGP?
Time for another annual roundup from the world of IP addresses.
What happened in 2013 and what is likely to happen in 2014? This
is an update to the reports prepared at the same time in
previous years, so let’s see what has changed in the past 12
months in addressing the Internet, and look at how IP address
allocation information can inform us of the changing nature of
the network itself.
If the motivation behind the effort to secure BGP was to
allow any BGP speaker to distinguish between routing updates
that contained “genuine” routing information and routing updates
that contained contrived or false information, then these two
reports point out that we’ve fallen short of that target. What’s
gone wrong? Why are certain forms of routing Man-In-The-Middle
attacks all but undetectable to the RPKI-enabled BGPSEC
framework?
The Organisation for Economic Co-operation and Development, the OECD,
is a widely referenced and respected source of objective economic data
and comparative studies of national economies and economic performance.
The organisation has a very impressive track record of high-quality
research and a justified reputation for excellence in its publications,
even with its evident preference for advocating economic reform through
open markets and their associated competitive rigors. OECD activities
in the past have proved to be instrumental in facilitating change in
governmental approaches to common issues that have broad economic and
social dimensions. So how does IPv6 fit into this picture of OECD
activities?
Much has been said about how Google uses the services they
provide, including their mail service, office productivity
tools, file storage and the like, as a means of gathering
an accurate profile of each individual user.
The company has made a very successful business out of measuring
users, and selling those metrics to advertisers. But can we
measure Google as they undertake this activity? How many users
avail themselves of their services? Perhaps that's a little
ambitious at this stage, so maybe a slightly smaller scale would be
better: let's just look at one Google service. Can we measure
how many folk use Google's Public DNS Service?
This is an informal description of the evolution of a particular
area of network forensic activity, namely that of traceback. This
activity typically involves taking data recorded at one end of a
network transaction and using various logs and registration
records to identify the other party to the transaction. Here
we’ll look at the impact that IPv4 address exhaustion and IPv6
transition has had on this activity, and also note, as we explore
this space, the changing role of IP addresses within the IP
protocol architecture.
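
To make the first step of traceback concrete, here is a small
illustrative sketch in Python: take an address found in a server log,
ask for its reverse DNS entry, and then query a registry whois server
for the registration record. The address and the choice of whois server
are placeholders, and in practice the appropriate registry depends on
which RIR holds the address block.

    import socket

    LOGGED_ADDRESS = "192.0.2.45"     # placeholder address from a log entry
    WHOIS_SERVER = "whois.apnic.net"  # placeholder: use the RIR that holds the block

    def reverse_dns(address):
        # Look up the PTR record for the address, if one exists
        try:
            return socket.gethostbyaddr(address)[0]
        except socket.herror:
            return None

    def whois(address, server=WHOIS_SERVER):
        # A plain port-43 whois query: send the query, then read until close
        with socket.create_connection((server, 43), timeout=5) as s:
            s.sendall((address + "\r\n").encode())
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print("PTR record:", reverse_dns(LOGGED_ADDRESS))
    print(whois(LOGGED_ADDRESS))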
It was never obvious at the outset of this grand Internet
experiment that the one aspect of the network’s infrastructure
that would truly prove to be the most fascinating, intriguing,
painful, lucrative and just plain confusing, would be the
Internet’s Domain Name System. After all, it all seemed so simple
to start with.
I often think there are only two types of stories about the
Internet. One is a continuing story of prodigious technology that
continues to shrink in physical size and at the same time continues
to dazzle and amaze us. We've managed to get the cost and form
factor of computers down to that of an ordinary wrist watch, or
even into a pair of glasses, and embed rich functionality into
almost everything. The other is a darker evolving story of the
associated vulnerabilities of this technology, where we've seen
"hacking" turn into organised crime and from there into a scale of
sophistication that is sometimes termed "cyber warfare". And in
this same darker theme one could add the current set of stories
about various forms of state sponsored surveillance and espionage
on the net. In this article I'd like to wander into this darker
side of the Internet and briefly look at some of the current issues
in this area of cybercrime, based on some conferences and workshops
I've attended recently.
In the emerging IP address broker world it seems that one of the
most widely cited address transactions was that of a US
bankruptcy proceeding in 2011, in which Microsoft successfully
tendered $7.5M to purchase a block of 666,624 addresses from the
liquidators of Nortel, equivalent to a price of $11.25
per address. Was that a "fair" price for IP addresses then, and
is it a "fair" price now?
One IP address is much the same as another - right? There's
hardly a difference between 192.0.2.45 and 192.0.2.46 is there?
They are just encoded integer values, and aside from
numerological considerations, one address value is as good or bad
as any other - right? So IP addresses are much the same as each
other, and an after-market in IP addresses should be like many
other markets in undistinguished commodity goods. Right? Wrong!
One of the most prominent denial of service attacks in recent
months occurred in March 2013, launched against Spamhaus and
Cloudflare. With a peak volume of attack traffic of some
120 Gbps, it was a very significant attack.
How did the attackers generate such massive volumes of attack
traffic? The answer lies in the Domain Name System (DNS). The
attackers asked about domain names, and the DNS system answered.
Something we all do all of the time on the Internet. So how can a
conventional activity of translating a domain name into an IP
address be turned into a massive attack?
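
The essence of the problem is the size asymmetry between a small UDP
query and a potentially much larger response, combined with the ability
to forge the query's source address. A harmless way to see the asymmetry
is to measure the amplification factor of a query sent to a resolver you
operate yourself. The sketch below uses the dnspython library; the
resolver address and query name are placeholders.

    import dns.message
    import dns.query

    RESOLVER = "192.0.2.1"    # placeholder: a recursive resolver you operate
    QNAME = "example.com"     # placeholder query name

    # An EDNS0 query with the DNSSEC OK bit set often draws a larger response
    query = dns.message.make_query(QNAME, "ANY", use_edns=0,
                                   want_dnssec=True, payload=4096)
    response = dns.query.udp(query, RESOLVER, timeout=3.0)

    query_size = len(query.to_wire())
    response_size = len(response.to_wire())
    print(f"query:    {query_size} bytes")
    print(f"response: {response_size} bytes")
    print(f"amplification factor: {response_size / query_size:.1f}x")

A response that is many times larger than the query, directed at a
forged source address, is what turns an open resolver into an amplifier.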