In the original framework of the IP architecture, hosts had network interfaces, and network interfaces had single IP addresses. These days, many operating systems allow additional addresses to be configured on a network interface by enumerating each extra address. But can we bind a network interface to an entire subnet of IP addresses without having to enumerate each and every individual address?
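One way to do this, sketched below, is Linux's IP_FREEBIND socket option, which lets an application bind to an address that is not configured on any interface. For the host to actually receive such traffic, the covering prefix must also be routed to it locally (the kernel's "AnyIP" local route, e.g. `ip route add local 192.0.2.0/24 dev lo`). This is a minimal sketch, not a complete recipe; the address and port are illustrative only.

```python
# Minimal sketch: bind a listener to an address that is not configured
# on any interface, using Linux's IP_FREEBIND option. Assumes a Linux
# host; 192.0.2.53 and port 8053 are illustrative placeholders.

import socket

IP_FREEBIND = 15  # from <linux/in.h>; not exposed by every Python build

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IP, IP_FREEBIND, 1)
s.bind(("192.0.2.53", 8053))  # succeeds even though the address is unconfigured
s.listen(5)
```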
Every so often I hear the claim that some service or other has deliberately chosen not to support IPv6, not because of some technical issue, or some cost or business issue, but simply because the service operator is of the view that IPv6 offers an inferior level of service as compared to IPv4, and that by offering the service over IPv6 they would be exposing their clients to inferior performance. But is this really the case?
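The claim is easy enough to test for any given service: resolve its name over both protocol families and time a TCP connection to each. A rough sketch using only the standard library; the host name and port below are placeholders.

```python
# Compare TCP connection times to the same service over IPv4 and IPv6.
# A rough probe, not a rigorous measurement: one connection per family.

import socket
import time

def connect_time(host, port, family):
    # Take the first address getaddrinfo offers for this family.
    addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
    s = socket.socket(family, socket.SOCK_STREAM)
    s.settimeout(5.0)
    start = time.monotonic()
    s.connect(addr)
    elapsed = time.monotonic() - start
    s.close()
    return elapsed

for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        print(f"{label}: {connect_time('www.example.com', 443, family) * 1000:.1f} ms")
    except OSError as err:
        print(f"{label}: failed ({err})")
```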
The IETF meetings are relatively packed events lasting over a week, and it's just not possible to attend every session. From the various sessions I attended, here are a few personal impressions of the meeting that I would like to share with you.
There are a number of ways to view the relationship between hosts and the network in the Internet. One view is that this is an example of two sets of cooperating entities that share a common goal: hosts and the network both want content to be delivered. Another view is that hosts and networks have conflicting objectives. This was apparent in a couple of sessions at the recent IETF 96 Meeting.
The Earth Orientation Centre is the bureau that looks after Coordinated Universal Time, and every six months it releases a bulletin announcing its intentions for the next Universal Time correction window. This month it announced a leap second scheduled for midnight UTC, 31 December 2016.
The astonishing rise and rise of the fortunes of Google has been one of the major features of both social and business life of the early 21st century. In the same way that Microsoft transformed the computer into a mainstream consumer product, Google has had a similar transformative effect upon its environment.
It seems that it's the season to consider "openness" in Internet Governance circles. The OECD has recently stated that “the level of Internet openness will also affect the digital economy’s potential.” And according to the Global Commission on Internet Governance (GCIG) “One Internet” report, “an open and accessible Internet should generate several trillions of dollars a year in economic benefits. A fragmented Internet, on the other hand, would weigh on investment, trade and GDP, as well as on the right to free expression and access to knowledge.” It seems that the stakes are high when we consider Internet openness. How well are we doing?
The DNS is normally a relatively open protocol that smears its data far and wide. Little wonder that the DNS is used in many ways, not just as a mundane name resolution protocol, but as a data channel for surveillance and as a common means of implementing various forms of content access control. But all this is poised to change.
The design of IPv6 represented a relatively conservative evolutionary step for the Internet protocol. Mostly, it's just IPv4 with significantly larger address fields. Mostly, but not completely, as there were some changes. IPv6 changed the boot process to use auto-configuration and multicast to perform functions that were performed by ARP and DHCP in IPv4. IPv6 added a 20-bit Flow Identifier to the packet header. IPv6 replaced IP header options with an optional chain of extension headers. IPv6 also changed the behaviour of packet fragmentation, which is what we will look at here.
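To make the last of these changes concrete, here is the on-the-wire layout of the IPv6 Fragment extension header (RFC 2460, now RFC 8200), packed with Python's standard library. A sketch only; the field values in the usage example are illustrative.

```python
# The 8-byte IPv6 Fragment extension header: next header (1 byte),
# reserved (1 byte), fragment offset + flags (2 bytes), identification
# (4 bytes). The offset is carried in units of 8 octets in the upper
# 13 bits of the 16-bit field; the low-order bit is the M (more
# fragments) flag.

import struct

def ipv6_fragment_header(next_header, offset_bytes, more_fragments, ident):
    assert offset_bytes % 8 == 0, "fragment offsets must fall on 8-octet boundaries"
    offset_and_flags = ((offset_bytes // 8) << 3) | (1 if more_fragments else 0)
    return struct.pack("!BBHI", next_header, 0, offset_and_flags, ident)

# e.g. the second fragment of a TCP (protocol 6) packet, starting at
# byte 1448 of the original payload, with more fragments to follow:
hdr = ipv6_fragment_header(6, 1448, True, 0xDEADBEEF)
```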
At the recent IETF meeting the topic of making IPv6 an Internet Standard came up. What is perhaps a little surprising is that it is not an Internet Standard already. Equally surprising is that, strictly speaking, it is probably not quite ready to be one. And I think that's a good thing!
It has often been claimed that IPv6 and the Internet of Things are strongly aligned, to the extent that claims are made that they are mutually dependent: each needs the other. However, the evidence we have so far, with small self-managed device deployments, does not provide a compelling justification for this case. The question here is: does the Internet of Things require IPv6 as an essential precondition, or are we going to continue to deploy an ever-expanding population of micro devices within today’s framework of ever-increasing address sharing on IPv4?
Is it time to declare IPv4 an "Historic" Protocol Specification and move on with IPv6? Or is this so premature that the proposal is just an April Fools' Day prank played out a few days too late?
For a supposedly simple query-response protocol that maps names to IP addresses, there is a huge amount going on under the hood with the DNS. DNS OARC held a two-day workshop in Buenos Aires prior to IETF 95. Here are my impressions of this meeting.
In the world of public key cryptography, it is often observed that no private key can be kept an absolute secret forever. At some point keys need to be refreshed, and the root key of the DNS is no exception. It's time for this key to change.
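When a key rolls, resolvers and operators refer to each DNSKEY by its key tag, a 16-bit checksum computed over the key's wire-format RDATA, defined in RFC 4034 Appendix B. A minimal sketch of that computation; `rdata` here is assumed to be the raw DNSKEY RDATA (flags, protocol, algorithm, then the public key).

```python
# Key tag computation from RFC 4034 Appendix B: a ones-complement-style
# 16-bit sum over the DNSKEY RDATA, used to identify keys during a roll.

def key_tag(rdata: bytes) -> int:
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte if i & 1 else byte << 8
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF
```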
It seems that some things just never die, and this includes DNS queries. In a five-month experiment encompassing the detailed analysis of some 44 billion DNS queries, we find that one quarter of these queries are zombies: queries that have no current user awaiting the response, and are instead echoes of previous queries. What is causing these zombies? Are we seeing deranged DNS resolvers that maniacally re-query the same questions and never accept the answer? Or is this something slightly more sinister: are we seeing evidence of widespread DNS stalking and shadowing? Let's find out.
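A sketch of how such zombies can be identified, assuming, as in our experiment, that every query name is unique and encodes the time it was generated, so that any query arriving well after that time can have no live user behind it. The name format and threshold below are illustrative, not the experiment's exact parameters.

```python
# Classify a DNS query as a zombie: the query name is assumed to be
# unique and to carry its generation time as the first label, e.g.
# "1467331200.u12345.example.com". A query arriving long after that
# time is an echo with no user waiting on the answer.

from datetime import datetime, timezone

ZOMBIE_THRESHOLD_SECS = 3600  # illustrative cut-off

def is_zombie(qname: str, arrival: datetime) -> bool:
    """arrival must be a timezone-aware datetime."""
    generated = datetime.fromtimestamp(
        int(qname.split(".", 1)[0]), tz=timezone.utc
    )
    return (arrival - generated).total_seconds() > ZOMBIE_THRESHOLD_SECS
```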
NANOG continues to be one of the major gatherings of network operators and admins, together with the folk who work to meet the various needs of this community. Here are my reactions to some of the presentations I heard at NANOG 66, held in San Diego in February.
Are we seeing evidence of a fragmented Internet where some places on the Internet cannot reach other places? Are these differences in the perspectives of various routing vantage points signs of underlying fractures of the fabric of connectivity in the Internet?
The Border Gateway Protocol, or BGP, has been holding the Internet together for more than two decades, and nothing seems to be falling off the edge so far. But the past does not necessarily determine the future. How well is BGP coping with the ever-growing Internet?
One of the more difficult design exercises in packet-switched network architectures is the design of packet fragmentation. In this article I’d like to examine IP packet fragmentation in detail, look at the design choices made by IP version 4, and then compare them with the design choices made by IP version 6.
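As a taste of what's involved, here is the arithmetic an IPv4 router performs when it fragments a packet: each fragment's payload must be a multiple of 8 octets (except the last), and offsets are carried on the wire in 8-octet units. A minimal sketch, assuming a 20-byte IPv4 header with no options.

```python
# Plan IPv4 fragments for a payload of a given size through a given MTU.
# Yields (offset_bytes, length_bytes, more_fragments) for each fragment.

IPV4_HEADER_LEN = 20  # assuming no IP options

def fragment_plan(total_payload, mtu):
    # Per-fragment payload, rounded down to a multiple of 8 octets.
    per_frag = (mtu - IPV4_HEADER_LEN) // 8 * 8
    offset = 0
    while offset < total_payload:
        length = min(per_frag, total_payload - offset)
        more = offset + length < total_payload
        yield (offset, length, more)
        offset += length

# e.g. a 4000-byte payload through a 1500-byte MTU link yields
# (0, 1480, True), (1480, 1480, True), (2960, 1040, False):
for frag in fragment_plan(4000, 1500):
    print(frag)
```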