Can you really trust what a routing protocol tells you about how to reach a given destination? Ivan Pepelnjak joins Nick Russo and Russ White to provide a longer version of the tempting one-word answer: no! Join us as we discuss a wide range of issues including third-party next-hops, BGP communities, and the RPKI.
Steve Bellovin began working on networks as a system administrator, helping to build USENIX, which supports operating system research. His work as a system administrator drew him into security and the cryptographic protection of data, leading him to work on some of the foundational protocols of the Internet.
Insider threats can be accidental or intentional, but the impact of an insider breach remains the same. Organizational negligence regarding data privacy requirements and compliance can cause catastrophic data loss. To implement effective mitigation measures, employees must be aware of their…
A couple of weeks ago Scott Morris, Ethan Banks, and I sat down to talk about a project I’ve been working on for a while—a different way of looking at building and demonstrating your skills as a network engineer.
You can listen over at Packet Pushers, or download the show directly here.
The security of the global routing table is foundational to the security of the overall Internet as an ecosystem—if routing cannot be trusted, then everything that relies on routing is suspect, as well. Mutually Agreed Norms for Routing Security (MANRS) is a project of the Internet Society designed to draw network operators of all kinds into thinking about, and doing something about, the security of the global routing table by using common-sense filtering and observation. Andrei Robachevsky joins Russ White and Tom Ammon to talk about MANRS.
Latency is a big deal for many modern applications, particularly in the realm of machine learning applied to problems like determining whether someone standing at your door is a delivery person or a … robber out to grab all your smart toasters and big screen television. The problem is that networks, particularly in the last mile, don’t deal with latency very well. In fact, most of the network speeds and feeds available anywhere outside urban areas kind of stink. The example given by Bagchi et al. is this—
A fixed video sensor may generate 6Mbps of video 24/7, thus producing nearly 2TB of data per month—an amount unsustainable according to business practices for consumer connections; for example, Comcast’s data cap is at 1TB/month and Verizon Wireless throttles traffic over 26GB/month. For example, with DOCSIS 3.0, a widely deployed cable Internet technology, most U.S.-based cable systems deployed today support a maximum of 81Mbps aggregated over 500 homes—just 0.16Mbps per home.
Bagchi, Saurabh, Muhammad-Bilal Siddiqui, Paul Wood, and Heng Zhang. “Dependability in Edge Computing.” Communications of the ACM 63, no. 1 (December 2019): 58–66. https://doi.org/10.1145/3362068.
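The quoted numbers are easy to sanity-check; here’s a quick back-of-the-envelope calculation (my own arithmetic, not from the paper), assuming a 30-day month:

```python
# Back-of-the-envelope check of the quoted numbers (my own arithmetic,
# not from the paper), assuming a 30-day month.
SECONDS_PER_MONTH = 30 * 24 * 3600

video_bits = 6e6 * SECONDS_PER_MONTH      # 6 Mbps, around the clock
video_tb = video_bits / 8 / 1e12          # bits -> bytes -> terabytes
print(f"fixed video sensor: {video_tb:.2f} TB/month")  # ~1.94, "nearly 2TB"

per_home_mbps = 81 / 500                  # 81 Mbps shared across 500 homes
print(f"DOCSIS 3.0 share: {per_home_mbps:.3f} Mbps/home")  # ~0.162
```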
The authors claim a lot of the problem here is just…
A little late, but still…
As a search engine optimization (SEO) and domain name consultant, one of the questions I get asked most often about domain names is whether or not the domain name or TLD (Top-Level Domain) matters. Will the domain name ending have an effect on SEO or search engine rankings? Are certain domain name endings preferred by the search engines over other domain name extensions? I decided to answer this question based on search engine optimization testing and not just on my…
Consolidation is a well-recognized trend in the Internet ecosystem—but what does this centralization mean in terms of distributed systems, such as the DNS? Jari Arkko joins this episode of the Hedge, along with Alvaro Retana, to discuss the import and impact of centralization on the Internet through his draft, draft-arkko-arch-infrastructure-centralisation.
SUSE started as a consulting company and was one of the first organizations to work on the development and commercialization of Linux. Through the years, Linux has become the base for much of the IT world, including many of the open source network operating systems. Dirk Hohndel joins the History of Networking to discuss the origins of SUSE Linux.
I’s fnny, bt yu cn prbbly rd ths evn thgh evry wrd s mssng t lst ne lttr. This is because every effective language—or rather every communication system—carried enough information to reconstruct the original meaning even when bits are dropped. Over-the-wire protocols, like TCP, are no different—the protocol must carry enough information about the conversation (flow data) and the data being carried (metadata) to understand when something is wrong and error out or ask for a retransmission. These things, however, are a form of data exhaust; much like you can infer the tone, direction, and sometimes even the content of conversation just by watching the expressions, actions, and occasional word spoken by one of the participants, you can sometimes infer a lot about a conversation between two applications by looking at the amount and timing of data crossing the wire.
The paper under review today, Off-Path TCP Exploit, uses cleverly designed streams of packets and observations about the timing of packets in a TCP stream to construct an off-path TCP injection attack on wireless networks. Understanding the attack requires understanding the interaction between the collision avoidance used in wireless systems and TCP’s reaction to packets with a sequence number outside…
The indomitable Greg Ferro joins this episode of the Hedge to talk about the path from automated to autonomic, including why you shouldn’t put everything into “getting automation right,” and why you still need to know the basics even if we reach a completely autonomic world.
This last week I was a guest on the TechSequences podcast with Leslie and Alexa discussing the centralization of the routed infrastructure in the ‘net. When that episode posts, I’ll cross-post it here (but, of course, you should really just subscribe to their podcast, as they always have interesting guests—I’ll have Leslie and Alexa on the Hedge at some point, as well). The topic is related to this post on CircleID about the death of transit, which was a reaction to Geoff Huston’s article on the death of transit some time before.
All that to say… while reading through some research papers this week, I ran into a recent (2018) paper in which Carisimo et al. try out different ways of measuring which autonomous systems belong to the “core” of the ‘net. They went about this by taking a set of ASes “everyone” acknowledges to be “part of the core,” and then trying to find some measurement that successfully describes something all of them have in common.
The result is the k-metric, which measures the connectivity of an AS’s peers. If an AS has peers that are just as well connected as it is, its k-metric is high. Otherwise, the k-metric…
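For intuition, the k-metric is in the spirit of k-core decomposition: a node’s core number is the largest k for which it survives in a subgraph whose members all keep at least k neighbors inside that subgraph. Here’s a minimal sketch of that idea (my own toy topology with hypothetical ASNs; the paper’s actual metric differs in its details), using networkx:

```python
import networkx as nx

# Toy AS-level graph (hypothetical ASNs): a densely meshed "core" triangle
# with a sparsely attached chain hanging off the edge.
G = nx.Graph([
    ("AS1", "AS2"), ("AS1", "AS3"), ("AS2", "AS3"),  # core: each keeps 2 core peers
    ("AS3", "AS4"), ("AS4", "AS5"),                  # stub chain toward the edge
])

# core_number(G)[n] is the largest k such that n survives in the k-core:
# the maximal subgraph in which every member has at least k neighbors.
for asn, k in sorted(nx.core_number(G).items()):
    print(asn, k)  # AS1-AS3 -> 2 (well-connected peers), AS4/AS5 -> 1
```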
The Internet Society exists to support the growth of the global ‘net across the world by working with stakeholders, building local connectivity like IXs and community-based networks, and encouraging the use of open standards. On this episode of the Hedge, Dan York joins us to talk about the Open Standards Everywhere project, which is part of the Internet Society. More information about Open Standards Everywhere can be found—
Ivan Pepelnjak was a founding member of the first IX in Slovenia twenty-five years ago. He joins us on the History of Networking to describe the origins of the Internet in Slovenia, from the first dial-up circuits to the founding of the first IX and local DNS services. Ivan is an independent consultant and trainer; his work can be found at https://ipspace.net.
QUIC is a relatively new data transport protocol developed by Google, and currently in line to become the default transport for the upcoming HTTP standard. Because of this, it behooves every network engineer to understand a little about this protocol, how it operates, and what impact it will have on the network. We did record a History of Networking episode on QUIC, if you want some background.
In a recent Communications of the ACM article, a group of researchers (Kakhki et al.) used a modified implementation of QUIC to measure its performance under different network conditions, directly comparing it to TCP’s performance under the same conditions. Since the current implementations of QUIC use the same congestion control as TCP—Cubic—the only differences in performance should be code tuning in estimating the round-trip timer (RTT) for congestion control, QUIC’s ability to form a session in a single RTT, and QUIC’s ability to carry multiple streams in a single connection. The researchers asked two questions in this paper: how does QUIC interact with TCP flows on the same network, and does QUIC perform better than TCP in all situations, or only some?
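To make the single-RTT session setup concrete, here’s a back-of-the-envelope sketch (my own illustration, not from the paper): a TCP connection plus a TLS 1.3 handshake costs roughly two RTTs before the first byte of application data, while QUIC folds transport and crypto setup into one:

```python
# Rough time-to-first-byte from handshake round trips alone (my own
# simplification: ignores loss, 0-RTT resumption, and server think time).
def ttfb_ms(rtt_ms: float, handshake_rtts: int) -> float:
    return rtt_ms * handshake_rtts

for rtt in (10, 50, 200):              # ms: LAN-ish, regional, last mile
    tcp_tls = ttfb_ms(rtt, 2)          # TCP 3WHS + TLS 1.3 ~ 2 RTTs
    quic = ttfb_ms(rtt, 1)             # QUIC: transport + crypto in 1 RTT
    print(f"RTT {rtt:>3} ms: TCP+TLS ~ {tcp_tls:.0f} ms, QUIC ~ {quic:.0f} ms")
```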
To answer the first question, the authors tried running QUIC…
Memory isolation is a cornerstone security feature in the construction of every modern computer system. By allowing multiple mutually distrusting applications to execute at the same time on the same hardware, it is the basis of securely running multiple processes on the same machine or in the cloud. The operating system is in charge of enforcing this isolation, as well…
Personal branding and marketing are two key topics that surface from time to time, but very few people talk about how to actually do these things. For this episode of the Hedge, Evan Knox from Caffeine Marketing joins us to talk about the importance of personal marketing and branding, and some tips and tricks network engineers can follow to improve their personal brand.