Author Archives: Russ
When I was in the military we were constantly drilled about the problem of Essential Elements of Friendly Information, or EEFIs. What are EEFIs? If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about the action you are taking can be guessed.
Given enough broad information, an adversary can often guess at details that you really do not want them to know.
What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to craft spear phishing attacks:
Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.
Going back further…
The Resource Public Key Infrastructure (RPKI) is a specialized PKI designed and deployed to improve the security of the Internet BGP routing system. Some of the ‘resources’ that make up the RPKI include IP address prefixes and Autonomous System numbers (ASNs).
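As a rough illustration of how these resources get used: route origin validation compares a BGP announcement against the ROAs published in the RPKI and classifies it as valid, invalid, or not-found (the logic described in RFC 6811). Below is a minimal Python sketch of that decision; the ROA entries, prefixes, and ASNs are hypothetical examples, not real registry data.

```python
# A minimal sketch of RPKI route origin validation (RFC 6811 semantics).
# The ROA data below is hypothetical, not pulled from any real registry.
import ipaddress

# Each ROA authorizes an ASN to originate a prefix, up to a maximum length.
roas = [
    {"prefix": ipaddress.ip_network("192.0.2.0/24"), "max_length": 24, "asn": 64500},
]

def validate(prefix_str, origin_asn):
    """Classify an announced route as 'valid', 'invalid', or 'not-found'."""
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for roa in roas:
        # Does this ROA cover the announced prefix at all?
        if prefix.subnet_of(roa["prefix"]):
            covered = True
            # Matching origin AS and an acceptable prefix length make it valid.
            if roa["asn"] == origin_asn and prefix.prefixlen <= roa["max_length"]:
                return "valid"
    # Covered by some ROA but no match: invalid. Not covered at all: not-found.
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("192.0.2.0/24", 64501))     # invalid (wrong origin AS)
print(validate("198.51.100.0/24", 64500))  # not-found (no covering ROA)
```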
New research into 5G architecture has uncovered a security flaw in its network slicing and virtualized network functions that could be exploited to allow data access and denial of service attacks between different network slices on a mobile operator’s 5G network.
The “Milan” Epyc 7003 processors, the third generation of AMD’s revitalized server CPUs, are now in the field, and we await the entry of the “Ice Lake” Xeon SPs from Intel for the next jousting match in the datacenter to begin.
Those who follow my work know I’ve been focused on building live webinars for the last year or two, but I am still creating pre-recorded material for Pearson. The latest is built from several live webinars which I no longer give; I’ve updated the material and turned them into a seven-hour course called How Networks Really Work. Although I begin here with the “four things,” the focus is on a problem/solution view of routed control planes. From the description:
There are many elements to a networking system, including hosts, virtual hosts, routers, virtual routers, routing protocols, discovery protocols, etc. Each protocol and device (whether virtual or physical) is generally studied as an individual “thing.” It is not common to consider all these parts as components of a system that works together to carry traffic through a network. To show how all these components work together to form a complete system, this video course presents a series of walkthroughs showing the processing involved in various kinds of network events, and how control planes use those events to build the information needed to carry traffic through a network.
Communication is one of those soft skills so often cited as a key to success—but what does effective communication entail? Mike Bushong joins Eyvonne Sharp and Russ White on the Hedge to discuss radical candor and the importance of giving and taking honest feedback in relationships and business.
The Internet was originally designed as a research network, but eventually morphed into a primarily commercial system. While “Internet 2” sounds like it might be a replacement for the Internet, it was really started as a way to interconnect high-speed computing systems for researchers, a need the commercial Internet no longer serves. Dale Finkelsen joins Donald Sharp and Russ White for this episode of the History of Networking to discuss the origins of Internet 2.
I began writing this post just to remind readers this blog does have a number of RSS feeds—but then I thought … well, I probably need to explain why that piece of information is important.
The amount of writing, video, and audio being thrown at the average person today is astounding—so much so that, according to a lot of research, most people in the digital world have resorted to relying on social media as their primary source of news. Why do most people get their news from social media? I’m pretty convinced this is largely a matter of “it saves time.” The resulting feed might not be “perfect,” but it’s “close enough,” and no one wants to spend time seeking out a wide variety of news sources so they will be better informed.
The problem, in this case, is that “close enough” is really a bad idea. We all tend to live in information bubbles of one form or another (although I’m fully convinced it’s much easier to live in a liberal/progressive bubble, being completely insulated from any news that doesn’t support your worldview, than it is to live in a conservative/traditional one). If you think about the role of…
I’ve started publishing in the Public Discourse on topics of technology and culture; the following is the first article they’ve accepted. Note that the content might be considered a little controversial.
The recent “takedown” of Parler by Amazon, Apple, and Google has spurred discussion in technological circles. Do the companies’ actions constitute an abuse of free speech? Or were the tech giants well within their rights, simply enforcing their own “terms of service”? Looking back at the history of free speech in the United States, especially the historical moment during which the right to free speech was codified, can offer guidance.
Please note: I do not necessarily agree with anything contained in the articles linked here, nor do I necessarily support any of the sites I link to. I gather these links because I think they are interesting and present a point of view worth hearing.
As stated earlier, o.com, along with most of the single-character .com labels, was registered by Dr. Postel on December 1, 1993 as a way of reserving these domain names for a since-unrealized potential development path for the Internet’s Domain Name System (DNS).
But it strains credulity to believe random tweets can lead otherwise normal people to drive across the country and stage an insurrection. That places an undue focus on misinformation itself, rather than on the people and institutions sharing it and on the people who choose to access and believe it.
The Atlas, launched in July, contains data on more than 7,000 surveillance programs—including facial recognition, drones, and automated license plate readers—operated by thousands of local police departments and sheriffs’ offices nationwide.
Three years ago, Spectre changed the way we think about security boundaries on the web. It quickly became clear that flaws in modern processors undermined the guarantees that web browsers could make about preventing data leaks between applications.
Until the (more or less) recent explosion of new DNS RFCs (aka the ‘DNS Camel’, loosely referring to the straw that broke the camel’s back), I used to think of the DNS as something similar to chess, with a fairly simple set of rules that developed into complex systems when deployed.
On April 6 at 9 am PDT I’m moderating the second part of a discussion on the evolution of wide area networks. This time we’re going to focus on more of the future rather than the past, relying on our guests, Jeff Tantsura, Brooks Westbrook, and Nick Buraglio to answer questions about putting new WAN technologies to use, and how to choose between private and public wide area options.
When the interests of the end user, the operator, and the vendor come into conflict, who should protocol developers favor? According to RFC 8890, the needs and desires of the end user should win out. From the RFC:
As the Internet increasingly mediates essential functions in societies, it has unavoidably become profoundly political; it has helped people overthrow governments, revolutionize social orders, swing elections, control populations, collect data about individuals, and reveal secrets. It has created wealth for some individuals and companies while destroying that of others. All of this raises the question: For whom do we go through the pain of gathering rough consensus and writing running code?
Mark Nottingham joins Alvaro Retana and Russ White on this episode of the Hedge to discuss why the Internet is for end users.
Why are networks so insecure?
One reason is we don’t take network security seriously. We just don’t think of the network as a serious target of attack. Or we think of security as a problem “over there,” something that exists in the application realm and needs to be solved by application developers. Or we think of the consequences of a network security breach as “well, they can DDoS us, and then we can figure out how to move load around, so if we build with resilience (enough redundancy) we’re already taking care of our security issues.” Or we put our trust in the firewall, which sits there like some magic box solving all our problems.
The problem is that none of this is true. In any system where overall security is important, defense-in-depth is the key to building a secure system. No single part of the system bears the “primary responsibility” for “security.” The network is certainly a part of any defense-in-depth scheme that is going to work.
Which means network protocols need to be secure, at least in some sense, as well. I don’t mean “secure” in the sense of privacy—routes are not (generally) personally identifiable information (there are always…
Decision making, especially in large organizations, fails in many interesting ways. Understanding these failure modes can help us cope with seemingly difficult situations, and learn how to make better decisions. On this episode of the Hedge, Federico Lucifredi, Ethan Banks, and Russ White discuss Federico’s thoughts on developing a taxonomy of indecision. You can find his presentation on this topic here.
While those working in the network engineering world are quite familiar with the expression “it is always something!”, defining this (often exasperated) declaration is a little trickier. The wise folks in the IETF, however, have provided a definition in RFC1925. Rule 7, “it is always something,” is quickly followed with a corollary, rule 7a, which says: “Good, Fast, Cheap: Pick any two (you can’t have all three).”
You can either quickly build a network which works well and is therefore expensive, or take your time and build a network that is cheap and still does not work well, or… Well, you get the idea. There are many other instances of these sorts of three-way tradeoffs in the real world, such as the (in)famous CAP theorem, which states a database can provide only two of three properties: consistency, availability, and partition tolerance. Eventual consistency, and problems ranging from microloops to surprise package deliveries (when you thought you ordered one thing, but another was placed in your cart because of a database inconsistency), are the result. Another form of this three-way tradeoff is the much less famous, but equally real, tradeoff among state, optimization, and surface in network design.
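As a concrete (and deliberately silly) illustration of the eventual consistency point above, here is a toy Python sketch, not modeled on any real database, of two replicas with asynchronous replication producing exactly this kind of surprise:

```python
# A toy sketch of eventual consistency: two replicas, asynchronous
# replication, and a stale read in between. Purely illustrative.

class Replica:
    def __init__(self):
        self.cart = []

primary = Replica()
secondary = Replica()

# A write lands on the primary...
primary.cart.append("book")

# ...but a reader hits the secondary before replication catches up,
# sees an empty cart, and adds the item they think is missing.
print(secondary.cart)  # [] -- a stale read
secondary.cart.append("lamp")

# When the two histories are later merged, the cart holds both items:
# the "surprise package delivery" described above.
merged = primary.cart + secondary.cart
print(merged)  # ['book', 'lamp']
```

The point is only that the window of inconsistency can surface as a user-visible surprise.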
It is possible, however, to build a system Continue reading
Jack of all trades, master of none.
This singular saying—a misquote of Benjamin Franklin (more on this in a moment)—is the defining statement of our time. An alternative form might be: the fox knows many things, but the hedgehog knows one big thing.
The rules for success in the modern marketplace, particularly in the technical world, are simple: start early, focus on a single thing, and practice hard.
But when I look around, I find these rules rarely define actual success. Consider my life. I started out with three different interests, taking up jazz piano lessons when I was twelve and continuing music through high school, college, and for many years after. At the same time, I was learning electronics—just about everyone in my family is in electronic engineering (or computers, when those came along) in one way or another.
I worked on airfield electronics for a few years in the US Air Force (one of the reasons I tend to be calm is I’ve faced death up close and personal multiple times, an experience that tends to center your mind), including RADAR, radio, and instrument landing systems. Besides these two, I was highly interested in art and illustration, getting…
We are on a path that will see information security transformed in the next 5-10 years. There are five trends that will enable us, as an industry, to improve the overall security posture and reduce the attack surface.
In the wake of recent high-profile security incidents, I started wondering: what, generally speaking, should an organization’s security priorities be? That is, given a finite budget — and everyone’s budget is finite — what should you do first?
In this essay, we discuss Articles 22-25. After focusing on civil and political rights in the earlier Articles, the drafters of the UDHR turn to social security, defined as the economic, social, and cultural rights necessary for personal dignity and the free development of one’s personality.
The international pandemic has sent companies scrambling to support lots of new remote workers, which has meant changes in processes, application development, application deployment, connectivity, and even support. Mike Parks joins Eyvonne Sharp and Russ White to discuss these changes on this episode of the Hedge.
While identity is not directly a networking technology, it is closely adjacent to networking and a critical part of the Internet’s architecture. In this episode of the History of Networking, Pamela Dingle joins Donald Sharp and Russ White to discuss the humble beginnings of modern identity systems, including NDS and StreetTalk.
What percentage of business-impacting application outages are caused by networks? According to a recent survey by the Uptime Institute, 29% of the 300 operators surveyed have experienced network-related outages in the last three years—the highest percentage among the causes of IT failures across the period.
A secondary question on the survey attempted to “dig a little deeper” to understand the reasons for network failure; the chart below shows the result.
We can be almost certain the third-party failures, if the providers were queried, would break down along the same lines. Is there a pattern among the reasons for failure?
Configuration change—while this could be somewhat managed through automation, these kinds of failures are more generally the result of complexity. Firmware and software failures? The more complex the software, the more likely it is to have mission-impacting errors of some kind—so again, complexity related. Corrupted policies and routing tables are also complexity related. The only item among the top preventable causes that does not seem, at first, to relate directly to complexity is network overload and/or congestion. Many of these cases, however, might also be complexity related.
The Uptime Institute draws this same lesson, though…
I’ll be joining Jeff Tantsura, Nick Buraglio, and Brooks Westbrook for a roundtable on March 16, 9 am PST (that’s tomorrow if you’re reading this the day it publishes) about the development of wide area networking technologies up until today. This is the first part of a two-part series on changes in the wide area network.