Of the 4.2 billion IPv4 addresses available in the global space, how many are used—or rather, how many are "alive"? Given the increasing usage of IPv6, this might seem an unimportant question. Answering it, however, resolves to another question that is actually more important: how can you determine whether or not an IP address is in use? This question might seem easy to answer: ping every address in the address space. This, however, turns out to be the wrong answer.
Scanning the Internet for Liveness. SIGCOMM Comput. Commun. Rev. 48, 2 (May 2018), 2-9. DOI: https://doi.org/10.1145/3213232.3213234
This answer is wrong because a substantial number of systems do not respond to ICMP requests. According to this paper, in fact, some 16% of the hosts the researchers discovered responded to a TCP SYN but not to ICMP, and another 2% responded to a UDP packet shaped to connect to a service but not to ICMP. There are a number of possible reasons for this situation, including hosts being placed behind devices that block ICMP packets, hosts being configured not to respond to ICMP requests, and servers sitting behind a PAT or CGNAT device.
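To make the distinction concrete, here is a minimal sketch in Python of probing one address for liveness two different ways; the address, port, and timeouts are hypothetical, and the ping invocation assumes a Linux-style ping utility.

```python
import socket
import subprocess

def icmp_alive(host: str, timeout_s: int = 2) -> bool:
    # Shell out to the system ping utility (Linux-style flags assumed);
    # returns True if the host answers a single ICMP echo request.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def tcp_alive(host: str, port: int = 80, timeout_s: int = 2) -> bool:
    # A completed TCP handshake proves the address is alive even when
    # ICMP is filtered along the path or disabled on the host.
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

host = "192.0.2.10"  # hypothetical address from the TEST-NET-1 range
print(f"{host} alive: {icmp_alive(host) or tcp_alive(host)}")
```

A host for which the first probe fails but the second succeeds lands in exactly the 16% the paper describes.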
KrebsOnSecurity recently had a chance to interview members of the REACT Task Force, a team of law enforcement officers and prosecutors based in Santa Clara, Calif., that has been tracking down individuals engaged in unauthorized "SIM swaps" — a complex form of mobile phone fraud that is often used to steal large amounts of cryptocurrencies and other items of value from victims. Snippets from that fascinating conversation are recounted below, and punctuated by accounts from a recent victim who lost more than $100,000 after his mobile phone number was hijacked. @Krebs on Security
PortSmash, as the new attack is being called, exploits a largely overlooked side-channel in Intel’s hyperthreading technology. A proprietary implementation of simultaneous multithreading, hyperthreading reduces the amount of time needed to carry out parallel computing tasks, in which large numbers of calculations or executions are carried out simultaneously. The performance boost is the result of two logical processor cores sharing the hardware of a single physical processor. The added logical cores make it easier to divide large tasks into smaller ones that can be completed more quickly. —Dan Goodin @ARS Technica
I just redid my slides for the network troubleshooting seminar I teach on Safari Books from time to time. This new set of slides should make for a better webinar. The outline now covers—
Segment 1: Foundations
Length: 50 minutes
10 Minute Break
Segment 2: Process
Length: 50 minutes
10 Minute Break
Segment 3: Examples
Length: 50 minutes
10 Minute Final Question and Answer Period
You can register here. Note that the name of the seminar is changing, so the URL might change as well.
The security of the global Default Free Zone (DFZ) has been a topic of much debate and concern for the last twenty years (or more). Two recent papers have brought this issue to the surface once again—it is worth looking at what these two papers add to the mix of what is known, and what solutions might be available. The first of these—
Demchak, Chris, and Yuval Shavitt. 2018. “China’s Maxim – Leave No Access Point Unexploited: The Hidden Story of China Telecom’s BGP Hijacking.” Military Cyber Affairs 3 (1). https://doi.org/10.5038/2378-0789.3.1.1050.
—traces the impact of Chinese “state actor” activity on BGP routing in recent years. Whether these events are actual attacks or mistakes resulting from human error generally cannot be known, but the potential, at least, for serious damage to companies and institutions relying on the DFZ is hard to overestimate. The paper lays out the basic problem, and then works through a number of BGP hijacks in recent years, showing how they misdirected traffic in ways that could have facilitated attacks, whether by mistake or intentionally.
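The mechanics behind many of these hijacks are simple: because routers forward on the longest matching prefix, anyone who announces a more-specific route inside a victim's block attracts the victim's traffic. The sketch below illustrates this with Python's ipaddress module; the prefixes and AS numbers are hypothetical, drawn from documentation ranges.

```python
import ipaddress

# A simplified routing table: prefix -> origin AS. The legitimate origin
# (AS64496, a documentation ASN) announces a /16; a hypothetical hijacker
# (AS64511) announces a more-specific /24 inside that block.
rib = {
    ipaddress.ip_network("203.0.0.0/16"): 64496,    # legitimate origin
    ipaddress.ip_network("203.0.113.0/24"): 64511,  # hijacker's more-specific
}

def best_match(dst: ipaddress.IPv4Address) -> ipaddress.IPv4Network:
    # Longest-prefix match: the most specific covering prefix always wins,
    # no matter who announced it -- this is what makes more-specific
    # hijacks so effective.
    covering = [p for p in rib if dst in p]
    return max(covering, key=lambda p: p.prefixlen)

dst = ipaddress.ip_address("203.0.113.10")
prefix = best_match(dst)
print(f"{dst} -> {prefix} via AS{rib[prefix]}")  # traffic follows the /24
```

The legitimate /16 is still in the table; it simply never gets used for destinations covered by the hijacker's /24.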
When the Internet outgrew its academic and research roots and gained some prominence and momentum in the broader telecommunications environment, it found itself to be in opposition to many of the established practices of the international telecommunications arrangements and even in opposition to the principles that lie behind these arrangements. For many years, governments were being lectured that the Internet was “special”, and to apply the same mechanisms of national telecommunications and trade regulations to the Internet may not wreck the entire Internet, but they would surely isolate the nation that was attempting to apply these unnatural acts. —Geoff Huston @potaroo
Systemd has a remotely exploitable bug in its DHCPv6 client. That means anybody on the local network can send you a packet and take control of your computer. The flaw is a typical buffer-overflow. Several news stories have pointed out that this client was rewritten from scratch, as if that were the moral failing, instead of reusing existing code. That’s not the problem. @Errata Security
I recently sat with Kireeti Kompella and Gavin Cato to talk about current and future changes in network architecture over at SDXcentral.
Enterprise network architectures are being reshaped using tenets popularized by the major cloud properties. How will the emergence of cloud, edge cloud, and multicloud shape networks? We explore this evolution and look at the ways that real-time streaming telemetry, machine learning, and artificial intelligence affect how networks are designed and operated. In this video, you’ll hear from Gavin Cato, SVP, Development Engineering for Networking at Dell EMC; Kireeti Kompella, CTO & SVP, Engineering at Juniper Networks; and Russ White, Infrastructure Architect at LinkedIn.
I will be at the NANOG on the Road in Toronto on the 12th of November, giving a short version of the three-hour “How the Internet Really Works” seminar I give periodically for Pearson. If you’re in the Toronto area, these one-day events are a great place to meet folks in the operator community as well as see some great content.
I spent some time this week moving to a new theme, specifically Beaver Builder. It was a bit more work than I expected because of some serious limitations with the way Beaver Builder works—had I known about these limitations, I probably would have worked with another product, but by the time I discovered them, it was either find a way around the limitations, or spend a lot more time and/or money working through them.
In the process, I completely rebuilt the menu, and cleaned up the categories.
The site should be a good bit faster now. I’m not entirely certain the social sharing bits are working, and I will likely find a few things wrong here and there that need to be fixed over the next few weeks. I just discovered, for instance, that I lost all the work on the papers and topical pages I’d done earlier today, so those need to be redone, which will take a good bit of time.
I am giving my network troubleshooting class over at Safari Books Online on the 6th of December for those who are interested. I consider this a foundational session, covering the time components of an outage, a taxonomy of reactions to outages, the half-split method of searching for the root cause, and how models can help you understand the right questions to ask to narrow a problem down quickly. A lot of this course is based on formal methods of troubleshooting I learned in electronic engineering, adapted for the networking world.
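As a taste of the half-split idea, here is a minimal sketch, assuming an ordered path of hops between a known-good source and a failing destination, and a probe that reports each hop's health; the device names and probe results are hypothetical stand-ins for ping, traceroute, or show-command checks.

```python
# Half-split fault localization: probe the midpoint of the remaining
# search space on each step, halving it until one hop is isolated.
def half_split(path, is_healthy):
    lo, hi = 0, len(path) - 1  # path[lo] known good, path[hi] known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_healthy(path[mid]):
            lo = mid   # fault lies beyond the midpoint
        else:
            hi = mid   # fault lies at or before the midpoint
    return path[hi]    # first unhealthy element: the likely fault domain

path = ["host-a", "sw1", "rtr1", "rtr2", "sw2", "host-b"]
healthy = {"host-a", "sw1", "rtr1"}            # hypothetical probe results
fault = half_split(path, lambda hop: hop in healthy)
print(f"fault localized to: {fault}")          # -> rtr2
```

The payoff is the same as binary search: a six-hop path needs at most three probes rather than five, and the gap widens quickly as paths get longer.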
This is one of three webinars I give at Safari Books on a periodic basis; I hope to be adding a fourth in the near future.
However, when BFD runs in a process on top of a generic kernel, notably when running BGP on the host, it is not unexpected to lose a few BFD packets under adverse conditions: the daemon handling the BFD sessions may not get enough CPU to answer in a timely manner. —Vincent Bernat
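To see why a few lost packets matter, consider the detection-time math from RFC 5880, simplified here: a session is declared down after a detect multiplier's worth of transmit intervals pass with no packet received. The interval and multiplier below are hypothetical but typical values.

```python
# A minimal sketch of BFD failure-detection math (simplified from RFC 5880).
# A session is declared down when no packet arrives for detect_mult
# consecutive transmit intervals, so a daemon starved of CPU for longer
# than the detection time takes the session -- and the route -- down.
def detection_time_ms(tx_interval_ms: int, detect_mult: int) -> int:
    return tx_interval_ms * detect_mult

tx_interval = 300   # hypothetical negotiated interval, in milliseconds
detect_mult = 3     # typical detect multiplier

print(f"detection time: {detection_time_ms(tx_interval, detect_mult)} ms")
print(f"consecutive losses tolerated: {detect_mult - 1}")
```

With these numbers the daemon only has to fall 900 ms behind, or drop three packets in a row, for the session to flap, which is exactly the scheduling hazard Bernat describes.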
Mostafa Ammar, out of Georgia Tech (not my alma mater, but many of my engineering family are alumni there), recently posted an interesting paper titled The Service-Infrastructure Cycle, Ossification, and the Fragmentation of the Internet. I have argued elsewhere that we are seeing the fragmentation of the global Internet into multiple smaller pieces, primarily based on the centralization of content hosting combined with the rational economic decisions of the large-scale hosting services. The paper at hand takes a slightly different path to reach the same conclusion.
The author begins by noting networks are designed to provide a set of services. Each design paradigm not only supports the services it was designed for, but also provides some headroom, which lets users deploy new, unanticipated services. Over time, as newer services are deployed, the requirements on the network change, eventually exhausting the headroom the original design provided.