I’m a bit late posting this … but this Thursday (an odd day for me) I’m running How the Internet Really Works, Part 1, over at Safari Books Online. From the page:
This live training will provide an overview of the systems, providers, and standards bodies important to the operation of the global Internet, including the Domain Name System (DNS), the routing and transport systems, standards bodies, and registrars. For DNS, the process of a query will be considered in some detail, including who pays for each server used in the resolution process and the tools engineers can use to interact with DNS. For routing and transport, the role of each kind of provider will be considered, along with how they make money to cover their costs, and how engineers can interact with the global routing table (the Default Free Zone, or DFZ). Finally, registrars and standards bodies will be considered, including their organizational structure, how they generate revenue, and how to find their standards.
You can register for the training at the link above. I’ll be giving part 2 of How the Internet Really Works next month.
Just about everyone uses AS path prepending to shift inbound traffic from one provider to another—but does this really work? First, a short review of prepending, and then a look at some recent research in this area.
What is prepending meant to do?
Looking at this network diagram, the idea is for AS65000 (each router is in its own AS) to steer traffic for 100::/64 through AS65001, rather than AS65002. The most common way to accomplish this is for AS65000 to prepend its own AS number to the AS Path multiple times. Increasing the length of the AS Path will, in theory, cause a route to be less preferred.
In this case, suppose AS65000 prepends its own AS number on the AS Path once before advertising the route towards AS65001, and not towards AS65002. Assuming there is no link between AS65001 and AS65002, what would we expect to happen? We would expect AS65001 to receive one route towards 100::/64 with an AS Path length of 2 and use this route. AS65002 will, likewise, receive one route towards 100::/64 with an AS Path length of 1 and use this route.
AS65003, however, will receive two routes towards 100::/64, one with an AS …
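The expected behavior can be sketched in a few lines of Python. This is a toy comparison on AS Path length only (real BGP best-path selection considers local preference, origin, and other attributes before path length), using the example AS numbers from the diagram:

```python
# Toy illustration of how prepending affects best-path selection.
# A path is a list of AS numbers; shorter AS Path wins.

def best_path(paths):
    """Pick the route with the shortest AS Path (ties broken by order)."""
    return min(paths, key=len)

# Path as heard via AS65001: AS65000 prepended itself once, so it
# appears twice in the path.
via_65001 = [65001, 65000, 65000]
# Path as heard via AS65002: no prepending.
via_65002 = [65002, 65000]

print(best_path([via_65001, via_65002]))  # the unprepended path wins
```

A downstream AS comparing these two routes prefers the shorter, unprepended path through AS65002, which is exactly the traffic shift AS65000 was trying to cause.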
Intentionally poisoning BGP routes in the Default-Free Zone (DFZ) would always be a bad thing, right? Actually, this is a fairly common method to steer traffic flows away from and through specific autonomous systems. How does this work, how common is it, and who does this? Jared Smith joins us on this episode of the Hedge to discuss the technique, and his research into how frequently it is used.
Recent research into the text of RFCs versus the security of the protocols described came to this conclusion—
This should come as no surprise to network engineers—after all, complexity is the enemy of security. Beyond the novel methods the authors use to understand the shape of the world of RFCs (you should really read the paper; it’s well worth the time), this desire to increase security by decreasing the ambiguity of specifications is fascinating. We often think that writing better specifications requires having better requirements, but down this path only lies despair.
Better requirements are the one thing a network engineer can never really hope for.
It’s not just that networks are often used as a sort of “complexity sink,” the place where every hard problem goes to be solved. It’s also the uncertainty of the environment in which the network must operate. What new application will be stuffed on top of the network this week? Will anyone tell the network folks about this new application, or just open a ticket when it doesn’t work right? What about all …
Attacks on virtual private networks, like those this week targeting a trio of known vulnerabilities in Pulse Secure appliances, have intensified in recent months along with the increase in remote and hybrid work environments since the outbreak of COVID-19.
One position I think more people should be aware of is a CISO. What does this actually mean – besides being made redundant when a breach is announced? I have personally worked within a CISO-as-a-Service position, but I wanted to get some more insight from those who are working in the trenches daily in an in-house CISO position.
According to Help Net Security, organizations in the pharmaceutical and biotech sectors witnessed a 50% increase in digital attacks between 2019 and 2020. It appears that at least part of those attacks originated from nation-state actors who specifically sought to steal COVID-19 vaccine research.
QUIC is a middle-aged protocol at this point—it’s several years old, and widely deployed although TCP still dominates the transport layer of the Internet. In this episode of the Hedge, Jana Iyengar joins Alvaro Retana and Russ White to discuss the motivation for developing QUIC, and its ongoing development and deployment.
One of the big movements in the networking world is disaggregation—splitting the control plane and other applications that make the network “go” from the hardware and the network operating system. This is, in fact, one of the movements I’ve been arguing in favor of for many years—and I’m not about to change my perspective on the topic. There are many different arguments in favor of breaking the software from the hardware. The arguments for splitting hardware from software and componentizing software are so strong that much of the 5G transition also involves the open RAN, which is a disaggregated stack for edge radio networks.
If you’ve been following my work for any amount of time, you know what comes next: If you haven’t found the tradeoffs, you haven’t looked hard enough.
This article on hardening Linux (you should go read it, I’ll wait ’til you get back) exposes some of the complexities and tradeoffs involved in disaggregation in the area of security. Some further thoughts on hardening Linux here, as well. Two points.
First, disaggregation has serious advantages, but disaggregation is also hard work. With a commercial implementation you wouldn’t necessarily think about these kinds of supply chain issues. …
The U.S. Supreme Court ruled that Google did not violate copyright law when it included parts of Oracle’s Java programming code in its Android operating system—ending a decade-long, multibillion-dollar legal battle.
Today, however, choice is fast becoming an empty mantra as consumers face the iron law of compatibility. Forced to “opt in,” users submit to a relentless schedule of upgrades and updates among the ever-proliferating gadgets and technologies that bring us so much while governing our lives more and more.
I like my colleagues, but I’ve never met them in person. I found my own doctor; I cook my own food. My manager is 26 — too young for me to expect any parental warmth from him. When people ask me how I feel about my new position, I shrug: It’s a job.
Another day, another horrific Facebook privacy scandal. We know what comes next: Facebook will argue that losing a lot of our data means bad third-party actors are the real problem, and that we should trust Facebook to make more decisions about our data to protect against them.
The average software application depends on more than 500 open source libraries and components, up 77% from 298 dependencies in two years, highlighting the difficulty of tracking the vulnerabilities in every software component, according to a new report from software management firm Synopsys.
Ambient computing is a broad term that describes an environment of smart devices, data, A.I. decisions, and human activity that enables computer actions alongside everyday life, without the need for direct human commands or intervention.
In early March 2021, a hacker group publicly exposed the username and password of an administrative account of a security camera vendor. The credentials enabled them to access 150,000 commercial security systems and, potentially, set up subsequent attacks on other critical equipment.
Security automation for posture assessment has been difficult to achieve even though many standards-based and proprietary solutions have been developed. The primary problem is the complexity of solutions requiring customisation by each enterprise.
There are varying opinions about 5G—is it real? Is it really going to have extremely low latency? Does the disaggregation of software and hardware really matter? Is it really going to provide a lot more bandwidth? Are existing backhaul networks going to be able to handle the additional load? For network engineers in particular, the world of 5G is a foreign country with its own language, expectations, and ways of doing things.
On this episode of the Hedge, Ian Goetz joins Tom Ammon and Russ White to provide a basic overview of 5G, and inject some reality into the discussion.
Back in January, I ran into an interesting article called The many lies about reducing complexity:
Reducing complexity sells. Especially managers in IT are sensitive to it as complexity generally is their biggest headache. Hence, in IT, people are in a perennial fight to make the complexity bearable.
Gerben then discusses two ways we often try to reduce complexity. First, we try to simply reduce the number of applications we’re using. We see this all the time in the networking world—if we could only get to a single pane of glass, or reduce the number of management packages we use, or reduce the number of control planes (generally to one), or reduce the number of transport protocols … but reducing the number of protocols doesn’t necessarily reduce complexity. Instead, we can just end up with one very complex protocol. Would it really be simpler to push DNS and HTTP functionality into BGP so we can use a single protocol to do everything?
Second, we try to reduce complexity by hiding it. While this is sometimes effective, it can also lead to unacceptable tradeoffs in performance (we run into the state, optimization, surfaces triad here). It can also make the system …
In this study, we investigate DoT using 3.2k RIPE Atlas home probes deployed across more than 125 countries in July 2019. We issue DNS measurements using Do53 and DoT, which RIPE Atlas has been supporting since 2018, to a set of 15 public resolver services (five of which support DoT), in addition to the local probe resolvers, shown in the figures below.
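For context on what actually differs on the wire between the two: DoT carries the same DNS messages as Do53, just inside TLS on port 853 (with a two-byte length prefix, as over TCP). A minimal sketch of building a query in the shared wire format, using the standard library only and a fixed query ID as a stand-in:

```python
import struct

# Sketch of the DNS wire format (RFC 1035) shared by Do53 and DoT.
# Do53 sends this message in a cleartext UDP datagram; DoT sends the
# same bytes over TLS on port 853, prefixed with a two-byte length.

def build_query(name: str, qtype: int = 1, query_id: int = 0x1234) -> bytes:
    """Build a DNS query for `name` (qtype 1 = A record, class IN)."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed with its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS

query = build_query("example.com")
print(len(query))  # 12-byte header + 13-byte encoded name + 4 bytes -> 29
```

The measurement question in the study is then essentially: how much latency does wrapping these same bytes in a TLS session add, from vantage points around the world?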
In the early days of congestion control signaling, we originally came up with the rather crude “Source Quench” control message, an ICMP control message sent from an interior gateway to a packet source that directed the source to reduce its packet sending rate.
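As a sketch of what that crude signal looked like on the wire, here is the Source Quench format from RFC 792 (type 4, code 0, a checksum, four unused bytes, then the offending datagram’s IP header plus its first eight data bytes) built with Python’s standard library; the payload below is a zero-filled stand-in:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def source_quench(original_datagram: bytes) -> bytes:
    """Build an ICMP Source Quench: type 4, code 0, 4 unused bytes,
    followed by the offending IP header + first 8 data bytes."""
    # Compute the checksum with the checksum field set to zero...
    draft = struct.pack("!BBHI", 4, 0, 0, 0) + original_datagram
    csum = internet_checksum(draft)
    # ...then emit the message with the real checksum filled in.
    return struct.pack("!BBHI", 4, 0, csum, 0) + original_datagram

# Stand-in for a 20-byte IP header plus 8 bytes of the quenched datagram.
msg = source_quench(bytes(28))
print(msg[0], internet_checksum(msg))  # type 4; a valid checksum sums to 0
```

Source Quench was eventually deprecated (RFC 6633), partly because a signal that says only “slow down” carries no information about how much, and partly because it was trivially spoofable.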
Posture assessment requires periodic comparisons of current controls to expected values. Having posture assessment as part of products when deployed is possible, but it will take some time until the industry can provide these capabilities.
Many networks are designed and operationally driven by the configuration and management of features supporting applications and use cases. For network engineering to catch up to the rest of the operational world, it needs to move rapidly towards data-driven management based on a solid understanding of the underlying protocols and systems. Brooks Westbrook joins Tom Ammon and Russ White to discuss the data-driven lens in this episode of the Hedge.
When I was in the military we were constantly drilled about the problem of Essential Elements of Friendly Information, or EEFIs. What are EEFIs? If an adversary can cast a wide net of surveillance, they can often find multiple clues about what you are planning to do, or who is making which decisions. For instance, if several people married to military members all make plans to be without their spouses for a long period of time, the adversary can be certain a unit is about to be deployed. If the unit of each member can be determined, then the strength, positioning, and other facts about the action you are taking can be guessed.
Given enough broad information, an adversary can often guess at details that you really do not want them to know.
What brings all of this to mind is a recent article in Dark Reading about how attackers take advantage of publicly available information to craft spear phishing attacks—
Most security leaders are acutely aware of the threat phishing scams pose to enterprise security. What garners less attention is the vast amount of publicly available information about organizations and their employees that enables these attacks.
Going back further …
The Resource Public Key Infrastructure (RPKI) is a specialized PKI designed and deployed to improve the security of the Internet BGP routing system. Some of the ‘resources’ that make up the RPKI include IP address prefixes and Autonomous System numbers (ASNs).
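As a rough illustration of how those resources are used, here is a minimal sketch of route-origin validation in the spirit of RFC 6811, against a tiny in-memory ROA table (the prefixes and ASN below are documentation examples, not real ROAs); an actual relying party fetches ROAs from the RPKI repositories and verifies them cryptographically:

```python
import ipaddress

def validate(prefix: str, origin_as: int, roas) -> str:
    """Classify a BGP announcement against a list of ROAs, where each
    ROA is a (prefix, max_length, asn) tuple.

    Returns 'valid', 'invalid', or 'not-found'.
    """
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if roa_asn == origin_as and net.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but wrong origin or too specific -> invalid;
    # not covered by any ROA at all -> not-found.
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]
print(validate("192.0.2.0/24", 64500, roas))    # valid
print(validate("192.0.2.0/25", 64500, roas))    # invalid: too specific
print(validate("198.51.100.0/24", 64500, roas)) # not-found
```

The interesting operational question is always the last case: what a router should do with a “not-found” route, since most of the routing table is still unsigned.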
New research into 5G architecture has uncovered a security flaw in its network slicing and virtualized network functions that could be exploited to allow data access and denial of service attacks between different network slices on a mobile operator’s 5G network.
The “Milan” Epyc 7003 processors, the third generation of AMD’s revitalized server CPUs, are now in the field, and we await the entry of the “Ice Lake” Xeon SPs from Intel for the next jousting match in the datacenter to begin.
Those who follow my work know I’ve been focused on building live webinars for the last year or two, but I am still creating pre-recorded material for Pearson. The latest is built from several live webinars which I no longer give; I’ve updated the material and turned them into a seven-hour course called How Networks Really Work. Although I begin here with the “four things,” the focus is on a problem/solution view of routed control planes. From the description:
There are many elements to a networking system, including hosts, virtual hosts, routers, virtual routers, routing protocols, discovery protocols, etc. Each protocol and device (whether virtual or physical) is generally studied as an individual “thing.” It is not common to consider all these parts as components of a system that works together to carry traffic through a network. To show how all these components work together to form a complete system, this video course presents a series of walkthroughs showing the processing involved in various kinds of network events, and how control planes use those events to build the information needed to carry traffic through a network.
Communication is one of those soft skills so often cited as a key to success—but what does effective communication entail? Mike Bushong joins Eyvonne Sharp and Russ White on the Hedge to discuss radical candor, and the importance of giving and receiving honest feedback in relationships and business.
The Internet was originally designed as a research network, but eventually morphed into a primarily commercial system. While “Internet 2” sounds like it might be a replacement for the Internet, it was really started as a way to interconnect high speed computing systems for researchers—something the Internet no longer really provides. Dale Finkelsen joins Donald Sharp and Russ White for this episode of the History of Networking to discuss the origins of Internet 2.
I began writing this post just to remind readers this blog does have a number of RSS feeds—but then I thought … well, I probably need to explain why that piece of information is important.
The amount of writing, video, and audio being thrown at the average person today is astounding—so much so that, according to a lot of research, most people in the digital world have resorted to relying on social media as their primary source of news. Why do most people get their news from social media? I’m pretty convinced this is largely a matter of “it saves time.” The resulting feed might not be “perfect,” but it’s “close enough,” and no-one wants to spend time seeking out a wide variety of news sources so they will be better informed.
The problem, in this case, is that “close enough” is really a bad idea. We all tend to live in information bubbles of one form or another (although I’m fully convinced it’s much easier to live in a liberal/progressive bubble, being completely insulated from any news that doesn’t support your worldview, than it is to live in a conservative/traditional one). If you think about the role of …
I’ve started publishing in the Public Discourse on topics of technology and culture; the following is the first article they’ve accepted. Note the contents might be classified as a little controversial.
The recent “takedown” of Parler by Amazon, Apple, and Google has spurred discussion in technological circles. Do the companies’ actions constitute an abuse of free speech? Or were the tech giants well within their rights, simply enforcing their own “terms of service”? Looking back at the history of free speech in the United States, especially the historical moment during which the right to free speech was codified, can offer guidance.