Author Archives: Russ
Should the network be dumb or smart? Network vendors have recently focused on making the network as smart as possible because there is a definite feeling that dumb networks are quickly becoming a commodity—and it’s hard to see where and how steep profit margins can be maintained in a commodifying market. Software vendors, on the other hand, have been encroaching on the network space by “building in” overlay network capabilities, especially in virtualization products. VMware and Docker come immediately to mind; both either run, or are working towards running, on a plain IP fabric, reducing the services provided by the network to a minimum (of course, I’d have a lot more confidence in these overlay systems if they were a lot smarter about routing … but I’ll leave that alone for the moment).
How can this question be answered? One way is to think through what sorts of things need to be done in processing packets, and then think through where it makes the most sense to do those things. Another way is to measure the accuracy or speed at which some of these “packet processing things” can be done, so you can decide in a more …
Blame it on the webinar yesterday… I’m late again.
However, there have been multiple reported attempts to exploit the transfer market as part of malicious operations — such as the hijacking of dormant address space and spamming — which, judging by exchanges between operators in mailing lists …
Certifications are a perennial topic (like weeds, perhaps) in the world of network engineering. While we often ask whether you should get a certification or a degree, or whether you should get a certification at all, we don’t often ask—now that you have the certification, how long should you keep it? Do you keep recertifying “forever,” or is there a limit? Join us as Mike Bolitho, Eyvonne Sharp, Tom Ammon, and Russ White discuss when you should give up on that certification.
Scott Bradner was given his first email address in the 1970s, and his workstation was the gateway for all Internet connectivity at Harvard for some time. Join Donald Sharp and Russ White as Scott recounts the early days of networking at Harvard, including the installation of the first Cisco router, the origins of comparative performance testing and Interop, and the origins of the SHOULD, MUST, and MAY as they are used in IETF standards today.
This, in a nutshell, is what is often wrong with our design thinking in the networking world today. We want things to be efficient, wringing the last little dollar, and the last little bit of bandwidth, out of everything.
This is also, however, a perfect example of the problem of triads and tradeoffs. In the case of the street sweeper, we might think, “well, we could replace those folks sitting around smoking a cigarette and cracking jokes with a robot, making things …
Open source software is everywhere, it seems—and yet it’s nowhere at the same time. Everyone is talking about it, but how many people and organizations are actually using it? Pete Lumbis at NVIDIA joins Tom Ammon and Russ White to discuss the many uses and meanings of open source software in the networking world.
In old presentations on network security (watch this space; I’m working on a new security course for Ignition in the next six months or so), I would use a pair of chocolate chip cookies as an illustration for network security. In the old days, I’d opine, network security was like a cookie that was baked to be crunchy on the outside and gooey on the inside. Nowadays, however, I’d say network security needs to be more like a store-bought cookie—crunchy all the way through. I always used this illustration to make a point about defense-in-depth. You cannot assume the thin crunchy security layer at the edge of your network—generally in the form of stateful packet filters and the like (okay, firewalls, but let’s leave the appliance world behind for a moment)—is what you really need.
There are such things as insider attacks, after all. Further, once someone breaks through the thin crunchy layer at the edge, you really don’t want them being able to move laterally through your network.
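To make the idea of a cookie that’s crunchy all the way through a little more concrete, here is a minimal sketch in Python. The prefixes, ports, and rule tables are all invented for illustration; the point is simply that a per-segment policy layer can catch lateral movement the edge filter never even sees:

```python
from ipaddress import ip_address, ip_network

# Hypothetical two-layer policy: the "crunchy edge" filter plus per-segment
# rules inside the network. Every flow is checked against all the layers it
# crosses, so a host compromised behind the edge still cannot roam freely.

EDGE_ALLOW = [
    # (source, destination, destination port) permitted across the perimeter
    (ip_network("0.0.0.0/0"), ip_network("10.1.0.0/24"), 443),   # Internet -> web tier
]

SEGMENT_ALLOW = [
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.0/24"), 5432),  # web -> database
]

def permitted(rules, src, dst, port):
    return any(src in s and dst in d and port == p for s, d, p in rules)

def check_flow(src, dst, port, crosses_edge):
    src, dst = ip_address(src), ip_address(dst)
    if crosses_edge and not permitted(EDGE_ALLOW, src, dst, port):
        return "dropped at edge"
    if not permitted(SEGMENT_ALLOW, src, dst, port):
        return "dropped by internal segment policy"
    return "permitted"

# A web server talking to the database is fine...
print(check_flow("10.1.0.10", "10.2.0.5", 5432, crosses_edge=False))
# ...but the same (compromised) web server probing another host laterally is not.
print(check_flow("10.1.0.10", "10.3.0.7", 22, crosses_edge=False))
```

The edge rule never fires for either of these flows, because neither one crosses the perimeter; only the internal layer stands between the compromised host and the rest of the network.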
The United States National Institute of Standards and Technology (NIST) has released a draft paper describing Zero Trust Architecture, which addresses many of the same concerns as the cookie that’s crunchy all the way through.
Can you really trust what a routing protocol tells you about how to reach a given destination? Ivan Pepelnjak joins Nick Russo and Russ White to provide a longer version of the tempting one-word answer: no! Join us as we discuss a wide range of issues including third-party next-hops, BGP communities, and the RPKI.
Steve Bellovin began working on networks as a system administrator, helping to build USENET, which supported sharing among the operating system research community. His work as a system administrator drew his interest toward security and the cryptographic protection of data, leading him to work on some of the foundational protocols on the Internet.
Insider threats can be accidental or intentional, but the impact of insider breaches remains the same. Negligence within the organization regarding data privacy requirements and compliance can cause catastrophic data loss. To implement effective mitigation measures, employees must be aware of their …
A couple of weeks ago Scott Morris, Ethan Banks, and I sat down to talk about a project I’ve been working on for a while—a different way of looking at building and showing your skills as a network engineer.
You can listen over at Packet Pushers, or download the show directly here.
The security of the global routing table is foundational to the security of the overall Internet as an ecosystem—if routing cannot be trusted, then everything that relies on routing is suspect, as well. Mutually Agreed Norms for Routing Security (MANRS) is a project of the Internet Society designed to draw network operators of all kinds into thinking about, and doing something about, the security of the global routing table by using common-sense filtering and observation. Andrei Robachevsky joins Russ White and Tom Ammon to talk about MANRS.
Latency is a big deal for many modern applications, particularly in the realm of machine learning applied to problems like determining whether someone standing at your door is a delivery person or a … robber out to grab all your smart toasters and big screen television. The problem is that networks, particularly in the last mile, don’t deal with latency very well. In fact, most of the network speeds and feeds available anywhere outside urban areas kind of stink. The example given by Bagchi et al. is this—
A fixed video sensor may generate 6Mbps of video 24/7, thus producing nearly 2TB of data per month—an amount unsustainable according to business practices for consumer connections; for example, Comcast’s data cap is at 1TB/month and Verizon Wireless throttles traffic over 26GB/month. For example, with DOCSIS 3.0, a widely deployed cable Internet technology, most U.S.-based cable systems deployed today support a maximum of 81Mbps aggregated over 500 homes—just 0.16Mbps per home.
Bagchi, Saurabh, Muhammad-Bilal Siddiqui, Paul Wood, and Heng Zhang. “Dependability in Edge Computing.” Communications of the ACM 63, no. 1 (December 2019): 58–66. https://doi.org/10.1145/3362068.
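Those figures are easy to sanity-check. Below is a quick back-of-the-envelope calculation in Python, using only the numbers quoted above and assuming decimal units (1TB = 10^12 bytes):

```python
# Back-of-the-envelope check of the figures quoted from Bagchi et al.
# (decimal units assumed: 1 TB = 1e12 bytes, 1 Mbps = 1e6 bits/second)

MBPS = 1e6
SECONDS_PER_MONTH = 30 * 24 * 3600           # 2,592,000 seconds

# A fixed video sensor streaming 6 Mbps around the clock:
monthly_bytes = 6 * MBPS * SECONDS_PER_MONTH / 8
print(f"monthly volume : {monthly_bytes / 1e12:.2f} TB")   # ~1.94 TB ("nearly 2TB")
print(f"vs 1 TB cap    : {monthly_bytes / 1e12:.1f}x")      # almost twice Comcast's cap
print(f"vs 26 GB limit : {monthly_bytes / 26e9:.0f}x")      # ~75x Verizon's throttle point

# DOCSIS 3.0 example: 81 Mbps aggregated over 500 homes
print(f"per-home share : {81 / 500:.2f} Mbps")              # ~0.16 Mbps
```

In other words, a single always-on 6Mbps sensor comes close to doubling a typical consumer data cap all by itself, which is exactly the sustainability problem the authors are pointing at.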
The authors claim a lot of the problem here is just …
A little late, but still…
As a search engine optimization (SEO) and domain name consultant, one of the questions I get asked most often about domain names is whether or not the domain name or TLD (Top-Level Domain) matters. Will the domain name ending have an effect on SEO or search engine rankings? Are certain domain name endings preferred by the search engines over other domain name extensions? I decided to answer this question based on search engine optimization testing and not just on my …
Consolidation is a well-recognized trend in the Internet ecosystem—but what does this centralization mean in terms of distributed systems, such as the DNS? Jari Arkko joins this episode of the Hedge, along with Alvaro Retana, to discuss the import and impact of centralization on the Internet through his draft, draft-arkko-arch-infrastructure-centralisation.
SUSE started as a consulting company, and was one of the first organizations to work on the development and commercialization of Linux. Through the years, Linux has become the base for much of the IT world, including many of the open source network operating systems. Dirk Hohndel joins the History of Networking to discuss the origins of SUSE Linux.
I’s fnny, bt yu cn prbbly rd ths evn thgh evry wrd s mssng t lst ne lttr. This is because every effective language—or rather every communication system—carries enough information to reconstruct the original meaning even when bits are dropped. Over-the-wire protocols, like TCP, are no different—the protocol must carry enough information about the conversation (flow data) and the data being carried (metadata) to understand when something is wrong and error out or ask for a retransmission. These things, however, are a form of data exhaust; much like you can infer the tone, direction, and sometimes even the content of a conversation just by watching the expressions, actions, and occasional word spoken by one of the participants, you can sometimes infer a lot about a conversation between two applications by looking at the amount and timing of data crossing the wire.
The paper under review today, Off-Path TCP Exploit, uses cleverly designed streams of packets and observations about the timing of packets in a TCP stream to construct an off-path TCP injection attack on wireless networks. Understanding the attack requires understanding the interaction between the collision avoidance used in wireless systems and TCP’s reaction to packets with a sequence number outside the receive window …
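To give a rough feel for the kind of side channel involved, here is a small, purely conceptual simulation in Python. It is not the procedure from the paper; the constants, window values, and threshold are all invented. The idea it sketches is that on a half-duplex wireless link, a victim’s response to a spoofed in-window segment contends for airtime, so a probe the attacker times comes back a little later than it does for an out-of-window guess:

```python
import random

# Conceptual simulation (not the paper's exact procedure): an off-path attacker
# cannot see the victim's TCP stream, but on a half-duplex wireless link the
# victim's response to a spoofed segment contends for airtime with everything
# else. If a guessed sequence number is in-window, the stack answers (e.g. with
# an ACK), the channel is busier, and an attacker-timed probe returns late.

BASE_RTT_MS = 5.0        # hypothetical probe round-trip time on an idle channel
CONTENTION_MS = 3.0      # hypothetical extra delay when the victim also transmits
RCV_NXT, RCV_WND = 1_000_000, 65_535   # hypothetical receiver state

def probe_rtt(guess: int) -> float:
    """Simulated RTT of a timing probe sent right after a spoofed segment
    carrying sequence number `guess`."""
    in_window = RCV_NXT <= guess < RCV_NXT + RCV_WND
    jitter = random.uniform(-0.5, 0.5)
    return BASE_RTT_MS + (CONTENTION_MS if in_window else 0.0) + jitter

# The attacker infers "in window" whenever the probe comes back noticeably late.
for guess in (900_000, 1_010_000, 2_000_000):
    rtt = probe_rtt(guess)
    verdict = "in window?" if rtt > BASE_RTT_MS + 1.5 else "out of window?"
    print(f"seq guess {guess}: probe RTT {rtt:.1f} ms -> {verdict}")
```

Repeat that kind of measurement enough times and an attacker can narrow down the sequence number space without ever seeing a single byte of the victim’s traffic.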