Author Archives: Russ
And the ugly truth is that you’ve become addicted to arguing with the “End Is Nigh” sandwich board guy. The guy you used to quietly skirt, you now seek him out and you bring your friends and for some idiotic reason you think that if you just post a little bit more you’re going to get him to see reason. Or put him in his place.
On July 27, two companies — open source project management firm Snyk and development services firm xs:code — announced they have teamed up to provide a browser plug-in that will give developers important metrics by which to gauge the security of open source projects.
In any case, many of us are now living at work. And living at work means you have a new responsibility to your coworkers and clients: how you sound, how you look, and the visual appeal of your workspace are now your problem. You may feel that this isn’t your responsibility, but your home is now your office.
In this episode of the Hedge, Scott Burleigh joins Alvaro Retana and Russ White to discuss the Bundle Protocol, which is designed to support delay tolerant data delivery over intermittently available or “stressed” networks. Examples include interstellar communication, email transmission over networks where access points move around (carrying data with them), etc. You can learn more about delay tolerant networking here, and read the most recent draft specification here.
GRE was the first tunneling protocol ever designed and deployed—and although it has largely been overtaken by VXLAN and other tunnel protocols, it is still in widespread use today. For this episode of the History of Networking, Stan Hanks, the inventor of GRE—and hence the inventor of the concept of tunneling in packet switched networks—joins us to describe how and why GRE tunneling was invented.
I think we can all agree networks have become too complex—and this complexity is a result of the network often becoming the “final dumping ground” of every problem that seems like it might impact more than one system, or everything no one else can figure out how to solve. It’s rather humorous, in fact, to see a lot of server and application folks sitting around saying “this networking stuff is so complex—let’s design something better and simpler in our bespoke overlay…” and then falling into the same complexity traps as they start facing the real problems of policy and scale.
This complexity cannot be “automated away.” It can be smeared over with intent, but we’re going to find—soon enough—that smearing intent on top of complexity just makes for a dirty kitchen and a sub-standard meal.
While this is always “top of mind” in my world, what brings it to mind this particular week is a paper by Jen Rexford et al. (I know Jen isn’t in the lead position in the author list, but still…) called A Clean Slate 4D Approach to Network Control and Management. Of course, I can appreciate the paper in part because I agree with a Continue reading
More than 4.7 million sources in five countries — the US, China, South Korea, Russia, and India — were used to level distributed denial-of-service (DDoS) attacks against victims in the second quarter of 2020, with the portmap protocol most frequently used as an amplification vector to create massive data floods, security and services firm A10 Networks says in its threat report for the second quarter.
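The amplification mechanic the report describes is simple arithmetic: the attacker spoofs the victim’s address in a small request, and the reflector answers with a much larger response. A minimal sketch of that math—using illustrative byte counts, not figures from the A10 report:

```python
# Toy illustration of a reflection/amplification attack: the attacker
# spoofs the victim's IP in a small request, and the reflector (e.g. a
# portmap service) sends a much larger response to the victim.
# The byte sizes below are illustrative assumptions, not measured values.

def amplification_factor(request_bytes: float, response_bytes: float) -> float:
    """Bandwidth amplification factor (BAF) = response size / request size."""
    return response_bytes / request_bytes

def reflected_volume(attacker_gbps: float, baf: float) -> float:
    """Traffic arriving at the victim, given the attacker's upstream rate."""
    return attacker_gbps * baf

# Example: a 68-byte query eliciting a ~486-byte response gives a BAF of ~7x,
# so 1 Gb/s of spoofed queries becomes ~7 Gb/s aimed at the victim.
baf = amplification_factor(68, 486)
print(f"BAF: {baf:.1f}x")
print(f"Victim sees: {reflected_volume(1.0, baf):.1f} Gb/s per 1 Gb/s sent")
```

Multiply that factor across 4.7 million reflecting sources and the resulting flood is easy to picture.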
Thousands of people graduate from colleges and universities each year with cybersecurity or computer science degrees only to find employers are less than thrilled about their hands-on, foundational skills. Here’s a look at a recent survey that identified some of the bigger skills gaps, and some thoughts about how those seeking a career in these fields can better stand out from the crowd.
Three standards for email security that are supposed to verify the source of a message have critical implementation differences that could allow attackers to send emails from one domain and have them verified as sent from a different — more legitimate-seeming — domain, say researchers who will present their findings at the virtual Black Hat conference next month.
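The core of the problem is alignment: SPF and DKIM authenticate one domain, while the user sees the From: header domain. A hand-rolled sketch of that alignment check—the domains and the simplified suffix logic are my own illustration, not the researchers’ code:

```python
# A minimal sketch of DMARC-style "alignment": the domain that actually
# passed SPF or DKIM must match the From: header domain the user sees.
# If a receiver skips or loosens this check, a message can authenticate
# as one domain while displaying another. Domains below are hypothetical.

def domain_of(address: str) -> str:
    return address.rsplit("@", 1)[-1].lower()

def is_aligned(header_from: str, authenticated_domain: str,
               strict: bool = False) -> bool:
    """True if the From: header domain aligns with the domain that
    actually passed SPF or DKIM verification."""
    from_dom = domain_of(header_from)
    auth_dom = authenticated_domain.lower()
    if strict:
        return from_dom == auth_dom
    # Relaxed alignment: organizational domains must match (simplified
    # here to a suffix check; real checks use the public-suffix list).
    return (from_dom == auth_dom
            or from_dom.endswith("." + auth_dom)
            or auth_dom.endswith("." + from_dom))

# Spoofed: SPF passed for attacker.example, but the user sees bank.example.
print(is_aligned("ceo@bank.example", "attacker.example"))  # False
```

The implementation differences the researchers found live in exactly this kind of parsing and comparison code, where two verifiers can disagree about which domain was authenticated.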
While many network engineers think about getting a certification, not many think about going after a degree. Is there value in getting a degree for the network engineer? If so, what is it? What kinds of things do you learn in a degree program for network engineering? Eric Osterweil, a professor at George Mason University, joins Jeremy Filliben and Russ White on this episode of the Hedge to consider degrees for network engineers.
Did you know that video, gaming, and social media today account for almost 80% of the world’s internet traffic? This change in the composition of traffic has been accompanied by a dramatic change in the way content is delivered to the internet user: ISPs and transit providers have diminished in number and importance as the power and profile of a few Content Distribution Networks have increased, with content being pushed ever closer to the edge. But isn’t that just competition at work? What are the long-term consequences of such a change? In the second episode of our three-part series on “Internet Consolidation”, we talk to Russ White, co-host of The History of Networking and The Hedge podcasts and a distinguished infrastructure architect and internet transit and routing expert.
Should the network be dumb or smart? Network vendors have recently focused on making the network as smart as possible because there is a definite feeling that dumb networks are quickly becoming a commodity—and it’s hard to see where and how steep profit margins can be maintained in a commodifying market. Software vendors, on the other hand, have been encroaching on the network space by “building in” overlay network capabilities, especially in virtualization products. VMWare and Docker come immediately to mind; both are either able to, or working towards, running on a plain IP fabric, reducing the number of services provided by the network to a minimum level (of course, I’d have a lot more confidence in these overlay systems if they were a lot smarter about routing … but I’ll leave that alone for the moment).
How can this question be answered? One way is to think through what sorts of things need to be done in processing packets, and then think through where it makes most sense to do those things. Another way is to measure the accuracy or speed at which some of these “packet processing things” can be done so you can decide in a more Continue reading
Blame it on the webinar yesterday… I’m late again.
The remote work environment is different from the office. Different environments call for different communication systems. A communication system that is adapted to the remote work reality can unlock amazing benefits: (even) better productivity, a competitive advantage in the long run, and a much better work-life balance on top of it all.
Krill already lets you manage and publish ROAs seamlessly across multiple Regional Internet Registries (RIRs). Now Krill will also tell you the effect of all the ROAs you’ve created, indicating which announcements seen in BGP are authorised and which are not, along with the reason. This ensures your ROAs accurately reflect your intended routing at all times.
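The check behind that reporting is route origin validation: each BGP announcement is compared against the covering ROAs for prefix, maximum length, and origin AS. A hand-rolled sketch of that logic (per RFC 6811), using made-up ROAs and announcements rather than real routing data:

```python
# A minimal sketch of RPKI route origin validation (RFC 6811 logic).
# The ROAs and announcements below are invented examples.
from ipaddress import ip_network

ROAS = [
    # (authorized prefix, maxLength, authorized origin ASN)
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix_str: str, origin_asn: int) -> str:
    prefix = ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if prefix.subnet_of(roa_prefix):
            covered = True
            if asn == origin_asn and prefix.prefixlen <= max_len:
                return "valid"
    # Covered by at least one ROA but failing every origin/length test.
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("192.0.2.0/24", 64999))     # invalid: wrong origin AS
print(validate("198.51.100.0/25", 64501))  # invalid: exceeds maxLength
print(validate("203.0.113.0/24", 64500))   # not-found: no covering ROA
```

The “reason” Krill reports corresponds to which branch an announcement falls into: wrong origin, too-specific prefix, or no covering ROA at all.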
Though CMMI is not an exact science, it is a way to present a quantifiable level of risk within the different elements of the ISP. CMMI can be a tool to provide the justification for necessary investment in information security.
Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works. In other words, the present and future might not repeat the past, but it will certainly rhyme. The number of times this has happened in the world of networking technology is almost beyond counting—mostly because there are only a few real problems to solve in every area of networks, and there are only a few real solutions to those problems.
Certifications are a perennial topic (like weeds, perhaps) in the world of network engineering. While we often ask whether you should get a certification or a degree, or whether you should get a certification at all, we don’t often ask—now that you have the certification, how long should you keep it? Do you keep recertifying “forever,” or is there a limit? Join us as Mike Bolitho, Eyvonne Sharp, Tom Ammon, and Russ White discuss when you should give up on that certification.
Scott Bradner was given his first email address in the 1970s, and his workstation was the gateway for all Internet connectivity at Harvard for some time. Join Donald Sharp and Russ White as Scott recounts the early days of networking at Harvard, including the installation of the first Cisco router, the origins of comparative performance testing and Interop, and the origins of SHOULD, MUST, and MAY as they are used in IETF standards today.
On a Spring 2019 walk in Beijing I saw two street sweepers at a sunny corner. They were beat-up looking and grizzled but probably younger than me. They’d paused work to smoke and talk. One told a story; the other’s eyes widened and then he laughed so hard he had to bend over, leaning on his broom. I suspect their jobs and pay were lousy and their lives constrained in ways I can’t imagine. But they had time to smoke a cigarette and crack a joke. You know what that’s called? Waste, inefficiency, a suboptimal outcome. Some of the brightest minds in our economy are earnestly engaged in stamping it out. They’re winning, but everyone’s losing. —Tim Bray
This, in a nutshell, is what is often wrong with our design thinking in the networking world today. We want things to be efficient, wringing the last little dollar, and the last little bit of bandwidth, out of everything.
This is also, however, a perfect example of the problem of triads and tradeoffs. In the case of the street sweeper, we might think, “well, we could replace those folks sitting around smoking a cigarette and cracking jokes with a robot, making things Continue reading
The introduction of ATSC 3.0 is the newest upgrade to broadcast television, and the first big upgrade since TV converted to all-digital over a decade ago. ATSC 3.0 is the latest standard released by the Advanced Television Systems Committee, which creates the standards used by over-the-air broadcasters. —Doug Dawson
Adjacency to the current set of capabilities provides a disciplined way to think about where to invest next when working to stave off irrelevance. If distribution of runtimes is blocked, competition falters, and adjacent capabilities can go un-addressed. —Alex Russell
One strong message from the 2018 APNIC Survey was that not all organizations are ready to deploy IPv6, therefore, even while promoting IPv6 deployment, APNIC must continue to support access to IPv4 address space. —Vivek Nigam
How can we find a way to gain 80% of the benefits for 20% of the work? Named after Italian economist Vilfredo Pareto, the “Pareto Principle” asserts that for many events, roughly 80% of the effects come from 20% of the causes. Can we identify a Cybersecurity Pareto Principle? We can if security teams concentrate on these six priorities: —Dan Blum
Open source software is everywhere, it seems—and yet it’s nowhere at the same time. Everyone is talking about it, but how many people and organizations are actually using it? Pete Lumbis at NVIDIA joins Tom Ammon and Russ White to discuss the many uses and meanings of open source software in the networking world.
In old presentations on network security (watch this space; I’m working on a new security course for Ignition in the next six months or so), I would use a pair of chocolate chip cookies as an illustration for network security. In the old days, I’d opine, network security was like a cookie that was baked to be crunchy on the outside and gooey on the inside. Nowadays, however, I’d say network security needs to be more like a store-bought cookie—crunchy all the way through. I always used this illustration to make a point about defense-in-depth. You cannot assume the thin crunchy security layer at the edge of your network—generally in the form of stateful packet filters and the like (okay, firewalls, but let’s leave the appliance world behind for a moment)—is what you really need.
There are such things as insider attacks, after all. Further, once someone breaks through the thin crunchy layer at the edge, you really don’t want them being able to move laterally through your network.
The United States National Institute of Standards and Technology (NIST) has released a draft paper describing Zero Trust Architecture, which addresses many of the same concerns as the cookie that’s crunchy Continue reading
Better late than never …
For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that’s a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that’s all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable. —Bruce Schneier
In this post, we describe the challenges associated with measuring anycast services and propose a tool called the Border Gateway Protocol (BGP) Tuner. By using our open-source tool, operators can see in advance how changes in their BGP policies may impact the traffic load distribution over the anycast sites. This post is a short description of our technical report available here. —Joao M. Ceron
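The idea can be illustrated with a toy catchment model: clients prefer the anycast site with the shortest AS path, so prepending at one site shifts its catchment toward the others. The topology and path lengths below are invented for illustration—the real BGP Tuner works from measured BGP and traffic data:

```python
# A toy model of anycast catchment shifting under AS-path prepending.
# BASE_PATHS holds invented AS-path lengths from each client network to
# each anycast site; the real tool uses measured data.
from collections import Counter

BASE_PATHS = {
    "client-a": {"site-1": 2, "site-2": 4},
    "client-b": {"site-1": 3, "site-2": 3},
    "client-c": {"site-1": 4, "site-2": 2},
}

def catchment(prepends: dict) -> Counter:
    """Count clients landing on each site after adding prepends to that
    site's announcement; ties break alphabetically here, standing in for
    the deterministic but opaque tie-breaking of real BGP."""
    load = Counter()
    for client, paths in BASE_PATHS.items():
        best = min(paths, key=lambda s: (paths[s] + prepends.get(s, 0), s))
        load[best] += 1
    return load

print(catchment({}))             # baseline split across the two sites
print(catchment({"site-1": 2}))  # prepending at site-1 pushes load to site-2
```

Even this crude model shows the appeal of simulating a policy change before announcing it: a two-hop prepend flips most of the catchment, which is exactly the kind of effect an operator would rather discover in a tool than in production.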
Can you really trust what a routing protocol tells you about how to reach a given destination? Ivan Pepelnjak joins Nick Russo and Russ White to provide a longer version of the tempting one-word answer: no! Join us as we discuss a wide range of issues including third-party next-hops, BGP communities, and the RPKI.
Steve Bellovin began working on networks as a system administrator, helping to build USENIX, which supports operating system research. His work as a system administrator drew his interest into security and cryptographic protection of data, leading him into working on some of the foundational protocols on the Internet.
While the pandemic circling the globe has undermined many critical systems and institutions of our society, I believe it also has the potential to strengthen the resolve of the Internet community to embrace the vision Berners-Lee had more than 50 years ago. We have the opportunity to enter the next major phase of the Internet — the era of trust. —Byron Holland
MANRS began as a collaboration among network operators and internet exchange providers, with Verisign formally becoming a participant in its Network Operator Program in 2017. Since then, with the help of Verisign and other MANRS participants, the initiative has grown to also include content delivery networks (CDNs) and cloud providers. —Yong Kim
Insider threats can be accidental or intentional, but the impact of insider breaches remains the same. Organizational negligence regarding data privacy requirements and compliance can cause catastrophic data loss. To implement effective mitigation measures, employees must be aware of their Continue reading