The turning of the DNS from a distributed database query tool
into a malicious weapon in the cyber warfare arena has had
profound impacts on the thinking about the DNS. I remember
hearing the rallying cry some years back: “Let’s all work together
to find all these open resolvers and shut them down!” These days
I don't hear that any more. It seems that, like spam in email,
we’ve quietly given up on eradication, and are now focusing on
how to preserve service in a toxic world. I suppose that this is
yet another clear case of markets in action – there is no money
in eradication, but there is money in meeting a customer’s
requirement to allow their service to work under any
circumstances. We’ve changed our self-perception from being the
public DNS police to private mercenaries who work diligently to
protect the interests of our paying customers. We are being paid
to care about the victim, not to catch the attacker or even to
prevent the attack.
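To make the notion of an “open resolver” concrete, here's a minimal
sketch of the kind of probe those scanning efforts performed: send a
recursion-desired query to a candidate server and see whether it
answers. This assumes the Python dnspython package, and the target
address shown is a documentation placeholder, not a real resolver.

    import dns.flags
    import dns.message
    import dns.query

    def is_open_resolver(server_ip, probe_name="www.example.com."):
        # make_query sets the RD (recursion desired) flag by default
        query = dns.message.make_query(probe_name, "A")
        try:
            response = dns.query.udp(query, server_ip, timeout=3)
        except Exception:
            return False   # no answer: closed, filtered, or dead
        # An open resolver signals RA (recursion available) and answers
        return bool(response.flags & dns.flags.RA) and len(response.answer) > 0

    print(is_open_resolver("192.0.2.1"))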
In those circles where Internet prognostications abound and policy
makers flock to hear grand visions of the future, we often hear
about the boundless promise of “The Internet of Things”.
In the vision of the Internet of Things we are going to expand the
Internet beyond people and press on with connecting up our world,
embedding billions of these chattering devices into every aspect of
our lives. What do we know about the “things” that are already
connected to the Internet? Some of them are not very good. In fact
some of them are just plain stupid. And this stupidity is toxic, in
that their sometimes inadequate models of operation and security can
affect others in potentially malicious ways.
It has been observed that the most profound technologies are those
that disappear. They weave themselves into the fabric of everyday
life until they are indistinguishable from it, and are notable only
by their absence. So how should we regard the Internet? Is it like
large-scale electrical power generation: a technological feat that is
quickly taken for granted and largely ignored? Are we increasingly
seeing the Internet in terms of the applications and services that
sit upon it and just ignoring how the underlying systems are
constructed? To what extent is the mobile Internet driving this
change in perception of the Internet as a technology we simply
assume is always available, anytime and anywhere? What is happening
in the mobile world?
On February 26 of this year the Federal Communications Commission
of the United States will vote on a proposed new ruling on the
issue of "Network Neutrality", bringing into force a new round of
measures intended to prevent certain access providers from
deliberately differentiating service responses on the carriage
services that they provide.
Time for another annual roundup from the world of IP addresses. What
happened in 2014 and what is likely to happen in 2015? This is an update
to the reports prepared at the same time in previous years, so let's see
what has changed in the past 12 months in addressing the Internet, and
look at how IP address allocation information can inform us of the
changing nature of the network itself.
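As a hint of how such an analysis is assembled, here's a minimal
sketch in Python of tallying IPv4 allocations per year from one of
the RIRs' published “delegated” statistics files, whose records take
the form registry|cc|type|start|value|date|status. The filename is
illustrative; each RIR publishes its own file.

    from collections import Counter

    def ipv4_addresses_per_year(path):
        totals = Counter()
        with open(path) as f:
            for line in f:
                fields = line.strip().split("|")
                # skip comments, headers and summary lines
                if line.startswith("#") or len(fields) < 7:
                    continue
                if fields[2] != "ipv4" or fields[6] not in ("allocated", "assigned"):
                    continue
                year = fields[5][:4]            # date field is YYYYMMDD
                totals[year] += int(fields[4])  # value = number of addresses
        return totals

    for year, count in sorted(ipv4_addresses_per_year("delegated-apnic-latest").items()):
        print(year, count)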
The Border Gateway Protocol, or BGP, has been holding the Internet
together for more than two decades, and nothing seems to be falling
off the edge so far. As far as we can tell everyone can still see
everyone else, assuming that they want to be seen, and the
distributed routing system appears to be working smoothly. Everything
appears to be operating within reasonable parameters, with no
imminent danger of a routing catastrophe. For a protocol designed
some 25 years ago, when the Internet of the day contained around
10,000 constituent networks, it's done well to scale fifty-fold, to
carry in excess of half a million routed elements by the end of
2014.
A workshop held at the start of December 2014 in Hong Kong took as
its theme the further scaling of the root server system. The 1½-day
workshop was scoped to consider approaches beyond the default
activity of adding further anycast instances to the existing 13 root
server anycast constellations. This was a workshop operating on at
least three levels. Firstly, there was the overt agenda of working
through a number of proposed approaches that could improve the
services provided by the DNS root service. The second was an unspoken
agenda concerned with protecting the DNS from potential national
measures that would “fragment” the DNS name space into a number of
spaces, which include, but are by no means limited to, the DNS
blocking activities that occur at national levels. The third, and
even less acknowledged, agenda was that there are various groups who
want to claim a seat at the root server table.
The Internet's Domain Name System is a modern day miracle. It may
not represent the largest database that has ever been built, but
nevertheless it's truly massive. The DNS is consulted every time we
head to a web page, every time we send an email message, or in fact
every time we initiate almost any transaction on the Internet. We
assume a lot about the DNS. For example, content distribution
networks are observed to treat the location of a user's DNS
resolver as a proxy for the location of the user. How robust is
this assumption of co-locality of users and their resolvers? Are
users always located "close" to their resolvers? More generally,
what is the relationship between the end user, and the DNS
resolvers that they use? Are they in fact closely related? Or is
there widespread use of distant resolvers?
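One way such questions can be approached, sketched here in outline
rather than as the actual measurement code, is to hand each client a
unique, single-use DNS name, so that the query arriving at the
authoritative server reveals which resolver asked on that client's
behalf. The zone name and log formats below are hypothetical.

    import uuid

    MEASUREMENT_ZONE = "example-measurement.net"   # hypothetical zone we control

    def unique_name():
        # A single-use DNS name that only one client will ever resolve
        return f"{uuid.uuid4().hex}.{MEASUREMENT_ZONE}"

    def pair_clients_with_resolvers(web_log, dns_log):
        # web_log: token -> client IP (from the web server's access log)
        # dns_log: token -> resolver IP (from the authoritative DNS log)
        # The join maps each client to the resolver that asked for it
        return {web_log[t]: dns_log[t] for t in web_log if t in dns_log}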
It's been more than a year since Edward Snowden released material
concerning the activities of US agencies in the area of
cyber-intelligence gathering. A year later, and with allegations
of various forms of cyber spying flying about, it's probably
useful to ask some more questions. What is a reasonable
expectation about privacy and the Internet? Should we now consider
various forms of digital stalking to be "normal"? To what extent
can we see information relating to individuals'
activities online being passed to others?
Yes, that's a cryptic topic, even for an article that addresses
matters of the use of cryptographic algorithms, so congratulations
for getting even this far! This is a report of an experiment
conducted in September and October 2014 by the authors to measure
the extent to which deployed DNSSEC-validating resolvers fully
support the use of the Elliptic Curve Digital Signature Algorithm
(ECDSA) with curve P-256.
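The style of test involved can be sketched as follows, assuming the
dnspython package and a hypothetical test zone signed with ECDSA
P-256: ask a resolver for a name in that zone and look for the AD
(authenticated data) flag in the response, which is set only when
the resolver has validated the signatures.

    import dns.flags
    import dns.message
    import dns.query

    def validates_ecdsa(resolver_ip, test_name="ecdsa-p256.example.net."):
        # want_dnssec sets the EDNS DO bit, requesting DNSSEC records
        q = dns.message.make_query(test_name, "A", want_dnssec=True)
        r = dns.query.udp(q, resolver_ip, timeout=5)
        # AD set: the resolver validated the ECDSA signature chain.
        # A resolver that treats ECDSA as an unknown algorithm will
        # answer without AD, handling the zone as if it were unsigned.
        return bool(r.flags & dns.flags.AD)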
It has been a very busy period in the domain of computer security.
What with "shellshock", "heartbleed" and NTP monlink adding to the
background of open DNS resolvers, port 445 viral nasties, SYN attacks
and other forms of vulnerability exploits, it's getting very hard to
see the forest for the trees. We are spending large amounts of
resources in reacting to various vulnerabilities and attempting to
mitigate individual network attacks, but are we making overall
progress? What activities would constitute "progress" anyway?
At the NANOG meeting in Baltimore this week I listened to a
presentation by Patrick Gilmore on “The Open Internet Debate: Section
706 vs Title II”. It’s true that this is a title that would normally
induce a comatose reaction from any audience, but don’t let the title
put you off. Behind this is an impassioned debate about the nature of
the retail Internet for the United States, and, I suspect, a debate
about the Internet itself and the nature of the industry that provides
it.
There is a careful policy path to be followed that encourages
continued investment and innovation in national
telecommunications-related infrastructure and services, while at the
same time avoiding the formation of market distortions and
inefficiencies. What helps in this regulatory process is clear
information about the state of the industry itself. One of those
pieces of information concerns the market scope of the retail
Internet Service Provider sector. To put it another way, how “big” is
a particular network? How many customers does it serve? Is its
market share increasing or falling?
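One rough way to approach the question, sketched below with toy
inputs, is to map a large sample of client addresses to their origin
networks using a prefix-to-AS table taken from a BGP routing table
dump, and then count samples per AS as a proxy for relative market
size. The prefixes and AS numbers shown are illustrative only.

    import ipaddress
    from collections import Counter

    PREFIX_TO_AS = {                                   # toy table
        ipaddress.ip_network("192.0.2.0/24"): 64496,
        ipaddress.ip_network("198.51.100.0/24"): 64497,
    }

    def origin_as(addr):
        # Longest-prefix match over the (toy) table
        ip = ipaddress.ip_address(addr)
        best = None
        for net, asn in PREFIX_TO_AS.items():
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, asn)
        return best[1] if best else None

    def market_share(sample_ips):
        return Counter(asn for asn in map(origin_as, sample_ips) if asn)

    print(market_share(["192.0.2.7", "192.0.2.9", "198.51.100.3"]))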
The 12th of August 2014 was widely reported as a day when the Internet
collapsed. Despite the sensational media reports the following day,
the condition was not fatal, and perhaps it could be more
reasonably reported that some parts of the Internet were having a
bad hair day. What was happening was that the Internet’s growth had
just exceeded the default configuration limits of certain models of
network switching equipment. In this article I'll look at how the
growth of the routing table and the scaling in the size of
transmission circuits impact the internal components of network
routing equipment.
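The arithmetic behind that day is simple enough to sketch: the
affected equipment shipped with a default limit of 512k IPv4
forwarding entries, and the global routing table simply grew past
it. The growth figure below is an assumed illustration, not a
measured value.

    TCAM_LIMIT = 512 * 1024      # default IPv4 FIB slots on the affected gear
    table_size = 500_000         # routes at some earlier observation date
    growth_per_day = 150         # assumed average net new routes per day

    days_left = (TCAM_LIMIT - table_size) / growth_per_day
    print(f"limit of {TCAM_LIMIT} entries reached in ~{days_left:.0f} days")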
If you’re playing in the DNS game, and you haven’t done so already,
then you really should be considering turning on security in your
part of the DNS by enabling DNSSEC. There are various forms of
insidious attack that start with perverting the DNS, and end with the
misdirection of an unsuspecting user. DNSSEC certainly allows a DNS
resolver to tell the difference between an authentic response and a
misdirection. But there's no such thing as a free lunch, and the
decision to turn on DNSSEC is not without some additional cost in
terms of traffic load and resolution time. In this article, I'll take
our observations from running a large scale DNSSEC adoption
measurement experiment and apply them to the question: What’s the
incremental cost when turning on DNSSEC?
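A minimal sketch of the comparison at the heart of that question,
assuming dnspython and a hypothetical signed zone: resolve the same
name with and without the DNSSEC OK (DO) bit set, and compare the
response size and elapsed time.

    import time
    import dns.message
    import dns.query

    def measure(resolver_ip, name, dnssec):
        q = dns.message.make_query(name, "A", want_dnssec=dnssec)
        start = time.monotonic()
        r = dns.query.udp(q, resolver_ip, timeout=5)
        elapsed = time.monotonic() - start
        return len(r.to_wire()), elapsed

    plain_bytes, plain_time = measure("192.0.2.53", "signed.example.net.", False)
    signed_bytes, signed_time = measure("192.0.2.53", "signed.example.net.", True)
    print(f"bytes: {plain_bytes} -> {signed_bytes}, "
          f"time: {plain_time:.3f}s -> {signed_time:.3f}s")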
There is an emerging picture that while networks, and network
operators, make convenient targets for various forms of national
security surveillance efforts, the reality of today's IP networks
is far more complex, and network operators are increasingly
ignorant of what their customers do. The result is that it's now
quite common for Internet networks not to have the information
that these security agencies are after. Not only
can moderately well-informed users hide their activities from
their local network, but increasingly this has been taken out of
the hands of users, as the applications we have on our
smartphones, tablets and other devices are increasingly making
use of the network in ways that are completely opaque to the
network provider.
August 2014 is proving yet again to be an amusing month in the
Australian political scene, and in this case the source of the
amusement was watching a number of Australian politicians
fumble around the topic of digital surveillance and proposed
legislation relating to data retention measures.