A short while back, the Linux Foundation (Networking), or LFN, published a white paper about the open source networking ecosystem. Rather than review the paper, or try to find a single theme, I decided to just write down “random thoughts” as I read through it. This is the (rather experimental) result.
The paper lists five goals of the project, which can be reduced to three: reducing costs, increasing operators’ control over the network, and increasing security (by increasing code inspection). One interesting bit is the pairing of cost reduction with increasing control. Increasing control over a network generally means treating it less like an opaque box and more like a disaggregated set of components, each of which can be tuned in some way to improve the fit between network services, network performance, and business requirements. The less a network is an opaque box, however, the more time and effort it takes to manage. This only makes sense—tuning a network to perform better requires time and talent, both of which cost money.
The offsetting point here is that disaggregation and using open source can save money—although in my experience it never does. Again, running disaggregated software and hardware requires time and talent, Continue reading
The fundamental technologies for creating digital clones of people — text, audio, and video that sound and look like a specific person — have rapidly advanced and are within striking distance of a future in which digital avatars can sound and act like specific people, Tamaghna Basu, co-founder and chief technology officer of neoEYED, a Continue reading
For those not following the current state of the ITU, a proposal has been put forward to (pretty much) reorganize the standards body around “New IP.” Don’t be confused by the name—it’s exactly what it sounds like, a proposal for an entirely new set of transport protocols to replace the current IPv4/IPv6/TCP/QUIC/routing protocol stack that nearly 100% of the networks in operation today run on. Ignoring, for the moment, the problem of replacing the entire IP infrastructure, what can we learn from this proposal?
What I’d like to focus on is deterministic networking. Way back in the days when I was in the USAF, one of the various projects I worked on was called PCI. The PCI network was a new system designed to unify the personnel processing across the entire USAF, so there were systems (Z100s, 200s, and 250s) to be installed in every location across the base where personnel actions were managed. Since the only wiring we had on base at the time was an old Strowger mainframe, mechanical crossbars at a dozen or so BDFs, and varying sizes of punch-downs at BDFs and IDFs, everything for this system needed to be punched- or wrapped-down as physical circuits.
In this episode of the Hedge, Scott Burleigh joins Alvaro Retana and Russ White to discuss the Bundle Protocol, which is designed to support delay tolerant data delivery over intermittently available or “stressed” networks. Examples include interstellar communication, email transmission over networks where access points move around (carrying data with them), etc. You can learn more about delay tolerant networking here, and read the most recent draft specification here.
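To make the store-and-forward idea behind delay tolerant networking a little more concrete, here is a minimal Python sketch of a node that holds bundles until a contact window opens and only then passes them along. The class and field names are hypothetical, and the real Bundle Protocol adds far more (lifetimes, custody transfer, fragmentation); this is only the general shape of the idea.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Bundle:
    """A self-contained unit of data carried hop by hop (hypothetical stand-in
    for a Bundle Protocol bundle)."""
    source: str
    destination: str
    payload: bytes


class DTNNode:
    """Stores bundles until a contact to the next hop is actually available."""

    def __init__(self, name: str):
        self.name = name
        self.queue = deque()  # holds Bundle objects awaiting a contact window

    def receive(self, bundle: Bundle) -> None:
        # No end-to-end path is assumed; just hold the bundle locally.
        self.queue.append(bundle)

    def forward(self, next_hop: "DTNNode", contact_open: bool) -> None:
        # Only forward while the intermittent link is up.
        while contact_open and self.queue:
            next_hop.receive(self.queue.popleft())


# Usage: a relay that physically moves between two endpoints.
ground = DTNNode("ground-station")
relay = DTNNode("relay")
orbiter = DTNNode("orbiter")

ground.receive(Bundle("ground-station", "orbiter", b"telemetry request"))
ground.forward(relay, contact_open=True)    # first contact window
# ...time passes with no connectivity; the relay carries the data...
relay.forward(orbiter, contact_open=True)   # second contact window
print(len(orbiter.queue))                   # 1 bundle delivered end to end
```

The point of the sketch is simply that delivery does not depend on a contemporaneous end-to-end path; each hop takes responsibility for the data until the next contact appears.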
GRE was the first tunneling protocol ever designed and deployed—and although it has largely been overtaken by VXLAN and other tunnel protocols, it is still in widespread use today. For this episode of the History of Networking, Stan Hanks, the inventor of GRE—and hence the inventor of the concept of tunneling in packet switched networks—joins us to describe how and why GRE tunneling was invented.
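For a concrete picture of what GRE actually adds to a packet, here is a quick sketch using scapy (assuming it is installed). The addresses are documentation-range placeholders, and the layering shown is the general idea of tunneling rather than anything discussed in the episode: an outer delivery header between tunnel endpoints, a GRE header naming the passenger protocol, and the inner packet riding inside.

```python
# Sketch of GRE layering with scapy (pip install scapy).
from scapy.all import IP, GRE, ICMP

outer = IP(src="203.0.113.1", dst="203.0.113.2")        # tunnel endpoints
inner = IP(src="10.0.0.1", dst="10.0.0.2") / ICMP()      # passenger packet
tunneled = outer / GRE() / inner

tunneled.show()   # prints the stacked layers: IP / GRE / IP / ICMP
```

Seen this way, every later tunneling scheme is a variation on the same pattern; what changes is the information carried in the shim header and how endpoints learn about each other.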
I think we can all agree networks have become too complex—and this complexity is a result of the network often becoming the “final dumping ground” of every problem that seems like it might impact more than one system, or everything no-one else can figure out how to solve. It’s rather humorous, in fact, to see a lot of server and application folks sitting around saying “this networking stuff is so complex—let’s design something better and simpler in our bespoke overlay…” and then falling into the same complexity traps as they start facing the real problems of policy and scale.
This complexity cannot be “automated away.” It can be smeared over with intent, but we’re going to find—soon enough—that smearing intent on top of complexity just makes for a dirty kitchen and a sub-standard meal.
While this is always “top of mind” in my world, what brings it to mind this particular week is a paper by Jen Rexford et al. (I know Jen isn’t in the lead position in the author list, but still…) called A Clean Slate 4D Approach to Network Control and Management. Of course, I can appreciate the paper in part because I agree with a Continue reading
While many network engineers think about getting a certification, not many think about going after a degree. Is there value in getting a degree for the network engineer? If so, what is it? What kinds of things do you learn in a degree program for network engineering? Eric Osterweil, a professor at George Mason University, joins Jeremy Filliben and Russ White on this episode of the Hedge to consider degrees for network engineers.
Should the network be dumb or smart? Network vendors have recently focused on making the network as smart as possible because there is a definite feeling that dumb networks are quickly becoming a commodity—and it’s hard to see where and how steep profit margins can be maintained in a commodifying market. Software vendors, on the other hand, have been encroaching on the network space by “building in” overlay network capabilities, especially in virtualization products. VMWare and Docker come immediately to mind; both are either able to, or working towards, running on a plain IP fabric, reducing the number of services provided by the network to a minimum level (of course, I’d have a lot more confidence in these overlay systems if they were a lot smarter about routing … but I’ll leave that alone for the moment).
How can this question be answered? One way is to think through what sorts of things need to be done in processing packets, and then think through where it makes most sense to do those things. Another way is to measure the accuracy or speed at which some of these “packet processing things” can be done so you can decide in a more Continue reading
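As a rough illustration of the measurement approach, here is a toy Python sketch that times one “packet processing thing,” a longest-prefix match. The routing table, addresses, and linear-scan lookup are invented for illustration; any real comparison would be run against the actual host stack, smart NIC, or switch under consideration, not a Python loop.

```python
# Toy measurement of a packet-processing task: longest-prefix match.
import ipaddress
import random
import time

routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "core"),
    (ipaddress.ip_network("10.1.0.0/16"), "pod-1"),
    (ipaddress.ip_network("10.1.2.0/24"), "rack-42"),
]

def lookup(dst):
    # Linear scan, longest prefix wins -- fine for a sketch, not for a FIB.
    best = max((net for net, _ in routes if dst in net),
               key=lambda net: net.prefixlen, default=None)
    return dict(routes)[best] if best else "default"

packets = [ipaddress.ip_address(f"10.1.2.{random.randint(1, 254)}")
           for _ in range(10_000)]

start = time.perf_counter()
for dst in packets:
    lookup(dst)
elapsed = time.perf_counter() - start
print(f"{len(packets) / elapsed:,.0f} lookups/second (toy implementation)")
```

The number the sketch prints is meaningless in itself; the useful exercise is running the same measurement in each candidate location and comparing the results against the actual traffic the network has to carry.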
Blame it on the webinar yesterday… I’m late again.
However, there have been multiple reported attempts to exploit the transfer market as part of malicious operations — such as the hijacking of dormant address space and spamming — which, judging by exchanges between operators in mailing Continue reading
Certifications are a perennial topic (like weeds, perhaps) in the world of network engineering. While we often ask whether you should get a certification or a degree, or whether you should get a certification at all, we don’t often ask—now that you have the certification, how long should you keep it? Do you keep recertifying “forever,” or is there a limit? Join us as Mike Bolitho, Eyvonne Sharp, Tom Ammon, and Russ White discuss when you should give up on that certification.
Scott Bradner was given his first email address in the 1970s, and his workstation was the gateway for all Internet connectivity at Harvard for some time. Join Donald Sharp and Russ White as Scott recounts the early days of networking at Harvard, including the installation of the first Cisco router, the origins of comparative performance testing and Interop, and the origins of the SHOULD, MUST, and MAY as they are used in IETF standards today.
This, in a nutshell, is what is often wrong with our design thinking in the networking world today. We want things to be efficient, wringing the last little dollar, and the last little bit of bandwidth, out of everything.
This is also, however, a perfect example of the problem of triads and tradeoffs. In the case of the street sweeper, we might think, “well, we could replace those folks sitting around smoking a cigarette and cracking jokes with a robot, making things Continue reading
Open source software is everywhere, it seems—and yet it’s nowhere at the same time. Everyone is talking about it, but how many people and organizations are actually using it? Pete Lumbis at NVIDIA joins Tom Ammon and Russ White to discuss the many uses and meanings of open source software in the networking world.
In old presentations on network security (watch this space; I’m working on a new security course for Ignition in the next six months or so), I would use a pair of chocolate chip cookies as an illustration. In the old days, I’d opine, network security was like a cookie that was baked to be crunchy on the outside and gooey on the inside. Nowadays, however, I’d say network security needs to be more like a store-bought cookie—crunchy all the way through. I always used this illustration to make a point about defense-in-depth. You cannot assume the thin crunchy security layer at the edge of your network—generally in the form of stateful packet filters and the like (okay, firewalls, but let’s leave the appliance world behind for a moment)—is all you really need.
There are such things as insider attacks, after all. Further, once someone breaks through the thin crunchy layer at the edge, you really don’t want them being able to move laterally through your network.
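As a sketch of what “crunchy all the way through” might look like in practice, here is a minimal, hypothetical policy check applied to every flow, east-west traffic included, rather than only at the perimeter. The zones, ports, and names are invented for illustration and do not reflect any particular product or the NIST architecture itself.

```python
# Minimal sketch: every flow is checked against policy, whether it crosses
# the perimeter or stays entirely inside the network. Names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    src_zone: str     # e.g. "internet", "web-tier", "db-tier"
    dst_zone: str
    dst_port: int


# Explicit allow-list; anything not listed is dropped, including
# east-west traffic between internal zones.
ALLOWED = {
    ("internet", "web-tier", 443),
    ("web-tier", "db-tier", 5432),
}


def permit(flow: Flow) -> bool:
    return (flow.src_zone, flow.dst_zone, flow.dst_port) in ALLOWED


print(permit(Flow("internet", "web-tier", 443)))   # True: published service
print(permit(Flow("web-tier", "db-tier", 5432)))   # True: expected east-west flow
print(permit(Flow("web-tier", "db-tier", 22)))     # False: lateral movement attempt
```

The design point is simply that the inside of the network gets no free pass; a flow between two internal zones faces the same explicit policy decision as one arriving from the outside.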
The United States National Institute of Standards and Technology (NIST) has released a draft paper describing Zero Trust Architecture, which addresses many of the same concerns as the cookie that’s crunchy Continue reading