Technology Short Take 110

Welcome to Technology Short Take #110! Here’s a look at a few of the articles and posts that have caught my attention over the last few weeks. I hope something I’ve included here is useful for you also!

Networking

  • Via Kirk Byers (who is himself a fantastic resource), I read a couple of articles on network automation that I think readers may find helpful. First up is a treatise from Mircea Ulinic on whether network automation is needed. Next is an older article from Patrick Ogenstad that provides an introduction to ZTP (Zero Touch Provisioning).
  • The folks over at Cilium took a look at a recent CNI benchmark comparison and unpacked it a bit. There’s some good information in their article.
  • I first ran into Forward Networks a few years ago at Fall ONUG in New York. At the time, I was recommending that they explore integration with NSX. Fast-forward to this year, and the company has announced support for NSX and (more recently) for Cisco ACI. The recent announcement of their GraphQL-based Network Query Engine (NQE)—more information is available in this blog post—is also pretty interesting to me; a sketch of what querying such an API might look like follows this list.
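
Since NQE is GraphQL-based, querying it presumably looks something like the following sketch. This is a hypothetical example only: the endpoint URL, schema, and field names are placeholders of mine, not Forward Networks' actual API, so consult their documentation for the real thing.

    import requests  # third-party: pip install requests

    # Hypothetical NQE endpoint and query. The real URL, schema, and field
    # names will differ; see Forward Networks' NQE documentation.
    NQE_URL = "https://fwd.example.com/api/nqe"  # placeholder endpoint

    QUERY = """
    {
      devices {
        name
        platform {
          vendor
          os
        }
      }
    }
    """

    def run_nqe_query(url: str, query: str, token: str) -> dict:
        """POST a GraphQL query and return the decoded JSON response."""
        resp = requests.post(
            url,
            json={"query": query},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        data = run_nqe_query(NQE_URL, QUERY, token="REDACTED")
        for device in data["data"]["devices"]:
            print(device["name"], device["platform"]["vendor"])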

Servers/Hardware

Worth Reading: Should I Write a Book?

Erik Dietrich (of Expert Beginner fame) published another great blog post explaining when and why you should write a book. For the attention-challenged, here’s my CliffsNotes version:

  • Realize you have no idea what you’re doing (see also: the Dunning-Kruger effect).
  • Figure out why you’d want to spend a significant amount of your time on a major project like writing a book.
  • Accept that it will take longer (and cost more) than you expect, even accounting for Hofstadter’s law.

Fixed it for you: protocol repair using lineage graphs

Fixed it for you: protocol repair using lineage graphs, Oldenburg et al., CIDR ’19

This is a cool paper on a number of levels. Firstly, the main result that catches my eye is that it’s possible to build a distributed systems ‘debugger’ that can suggest protocol-level fixes. For example, say you have a system that sometimes sends acks before it really should, resulting in the possibility of state loss; Nemo can uncover this and suggest a change to the protocol that removes the window. Secondly, it uses a programming language called Dedalus, obscure from the perspective of most readers of this blog. Dedalus (a temporal logic programming language based on Datalog) is a great example of how a different programming paradigm can help us to think about a problem differently and generate new insights and possibilities. Now, it would be easy for practitioner readers to immediately dismiss the work as not relevant to them, given they won’t be coding systems in Dedalus anytime soon. The third thing I want to highlight in this introduction, therefore, is the research strategy:

Nemo operates on an idealized model in which distributed executions are centrally simulated, record-level provenance of these …
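
The paper’s running example is easy to sketch outside of Dedalus. Below is a minimal Python illustration (my own construction, not the paper’s model or code) of the ack-ordering bug described above: acknowledging a write before it is durable leaves a crash window in which acknowledged state can be lost, and the protocol-level repair is simply to reorder the two steps.

    import json

    class BuggyReplica:
        """Acks before persisting: a crash between the two steps loses
        state the client believes is safely stored."""

        def __init__(self, log_path: str):
            self.log_path = log_path

        def handle_write(self, key: str, value: str) -> str:
            ack = "ACK"                # 1. acknowledgement sent...
            self._persist(key, value)  # 2. ...before the write is durable
            return ack

        def _persist(self, key: str, value: str) -> None:
            # Append the write to a durable log.
            with open(self.log_path, "a") as f:
                f.write(json.dumps({"key": key, "value": value}) + "\n")

    class RepairedReplica(BuggyReplica):
        """The protocol-level repair: persist first, then ack."""

        def handle_write(self, key: str, value: str) -> str:
            self._persist(key, value)  # 1. make the write durable first
            return "ACK"               # 2. only then acknowledge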

Understanding RSVP EROs

In our last post we covered the basics of getting an RSVP LSP set up. This was a tedious process, at least when compared to what we saw with LDP setting up LSPs. So I think it’s worthwhile to spend some time talking about RSVP and what it offers that merits its consideration as a label distribution protocol. First off – let’s talk about naming. When talking about MPLS, RSVP is typically just called RSVP – but without the MPLS context it might be a little confusing. That’s because RSVP itself initially had nothing to do with MPLS. RSVP was initially a means to reserve resources for flows across a network. The protocol was later extended to support setting up MPLS LSPs. In this use case, it is often referred to as “RSVP-TE” or “MPLS RSVP-TE”. For the sake of my sanity – when I reference RSVP going forward, I’ll be referring to the RSVP that’s used to set up MPLS LSPs.

So now let’s talk about some differences between LDP and RSVP. The first thing that everyone points out is that LDP is tightly bound to the underlying IGP. While this is an accurate statement, it doesn’t mean that RSVP …

Event-Driven Automation: The TL;DR No One Told You About

Event-driven automation is an umbrella term, much like "coffee" (also see here; it turns out I’ve used coffee anecdotes way too much). How many times do you go to a popular chain and just blurt out "coffee"? At 5am, it might be the nonsensical, mysterious noise automagically leaving one’s mouth, but once we decide it’s bean time, we get to the specifics.

There are multiple tools that give you different capabilities. Some are easier to get started with than others, and some are feature-rich, returning high yields of capability for the time you invest.

Friendly dictator advice: try not to get wrapped up in the message bus used or the data encapsulation methodologies. Nerdy fun, but fairly pointless when it comes to convincing any person or organisation to make a fundamental shift.

Event-Driven is about receiving a WebHook and annoying people on Slack

This is a terrible measure and one we needed to have dropped yesterday. In more programming languages than I can remember, I’ve written the infamous "Hello World" and played with such variables, struct instances and objects as the ubiquitous "foo" and the much-revered "bar". Using an automation platform to receive an HTTP POST and update a support …
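
Since this is the pattern everyone demos, here is roughly what it looks like: a minimal sketch assuming Flask and a Slack incoming webhook, where the URL and payload fields are placeholders of my choosing rather than any particular platform’s API.

    import requests                    # pip install requests
    from flask import Flask, request   # pip install flask

    app = Flask(__name__)

    # Placeholder: a Slack "incoming webhook" URL for the channel to annoy.
    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

    @app.route("/events", methods=["POST"])
    def handle_event():
        # Receive the inbound webhook (an HTTP POST with a JSON body)...
        event = request.get_json(force=True)
        text = f"Event received: {event.get('summary', 'no summary provided')}"
        # ...and forward a message to Slack.
        requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
        return {"status": "forwarded"}, 200

    if __name__ == "__main__":
        app.run(port=8080)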

What is a digital twin? [And how it’s changing IoT, AI and more]

Digital twin technology has moved beyond manufacturing and into the merging worlds of the Internet of Things, artificial intelligence and data analytics. As more complex “things” become connected with the ability to produce data, having a digital equivalent gives data scientists and other IT professionals the ability to optimize deployments for peak efficiency and create other what-if scenarios.

So what is a digital twin? A digital twin is a digital representation of a physical object or system. The technology behind digital twins has expanded to include large items such as buildings, factories and even cities, and some have said people and processes can have digital twins, expanding the concept even further. The idea first arose at NASA: full-scale mockups of early space capsules, used on the ground to mirror and diagnose problems in orbit, eventually gave way to fully digital simulations.
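
To make the definition concrete, here is a toy sketch of the idea. The class and the naive model below are purely illustrative inventions of mine, not any vendor’s API: a software object that mirrors a physical device’s telemetry and can run crude what-if scenarios against it.

    from dataclasses import dataclass, field

    @dataclass
    class PumpTwin:
        """A digital twin of a (hypothetical) physical pump."""
        device_id: str
        rpm: float = 0.0
        temperature_c: float = 0.0
        history: list = field(default_factory=list)

        def ingest_telemetry(self, reading: dict) -> None:
            """Mirror the latest state reported by the physical pump."""
            self.rpm = reading["rpm"]
            self.temperature_c = reading["temperature_c"]
            self.history.append(reading)

        def what_if(self, rpm_delta: float) -> float:
            """Naive what-if: projected temperature if RPM changed.
            (A real twin would use a physics or ML model.)"""
            return self.temperature_c + 0.02 * rpm_delta

    twin = PumpTwin(device_id="pump-42")
    twin.ingest_telemetry({"rpm": 1800, "temperature_c": 65.0})
    print(twin.what_if(rpm_delta=200))  # projected temp at +200 RPM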

Software-Defined and Cloud-Native Foundations for 5G Networks

AvidThink (formerly SDxCentral Research) has put together a research brief that explains the infrastructure changes required, and the role that software-defined and cloud-native technologies will play in the 5G world, including supporting network slicing.

DRaaS options grow, but no one size fits all

AutoNation spent years trying to establish a disaster recovery plan that inspired confidence. It went through multiple iterations, including failed attempts at a full on-premises solution and a solution completely in the cloud. The Fort Lauderdale, Fla.-based auto retailer, which operates 300 locations across 16 states, finally found what it needed with a hybrid model featuring disaster recovery as a service.

“Both the on-premises and public cloud disaster recovery models were expensive, not tested often or thoroughly enough, and were true planning and implementation disasters that left us open to risk,” says Adam Rasner, AutoNation’s vice president of IT and operations, who was brought on two years ago in part to revamp the disaster recovery plan.

The public cloud approach sported a hefty price tag: an estimated $3 million if it were needed in the wake of a three-month catastrophic outage. “We were probably a little bit too early in the adoption of disaster recovery in the cloud,” Rasner says, noting that the cloud providers have matured substantially in recent years.

DARPA explores new computer architectures to fix security between systems

Solutions are needed to replace the archaic air-gapping of computers used to isolate and protect sensitive defense information, the U.S. government has decided.

Air-gapping is the common practice of physically isolating data-storing computers from other systems, computers and networks, so that they theoretically can’t be compromised because there is nothing connecting the machines. However, many say air-gapping is no longer practical, as the cloud and internet take hold of massive swaths of data and communications.