You may have heard in the news this week that there was a big issue with Southwest Airlines this holiday season. The issues are myriad and this is going to make for some great case studies for students in the future. However, one thing I wanted to touch on briefly in this whole debacle was the issue of a cascade failure.
The short version is that a weather disruption in the flight schedule became a much bigger problem when the process for rescheduling the flight crews was overwhelmed. It turns out that, even after the big computer system upgrades and all the IT work that has gone into putting together a modern airfare booking system, that one process was still very manual. The air crew rescheduling department was relatively small and couldn’t keep up with the demands placed on it by the disruptions. It got to the point where Southwest had to reduce their number of flights in order to get the system back to normal.
I’m not an expert in airline scheduling, but I have spent a lot of time planning for disaster recovery. One of the things that we focus on more than anything Continue reading
At over thirty years old, HTTP is still the foundation of the web and one of the Internet’s most popular protocols—not just for browsing, watching videos and listening to music, but also for apps, machine-to-machine communication, and even as a basis for building other protocols, forming what some refer to as a “second waist” in the classic Internet hourglass diagram.
What makes HTTP so successful? One answer is that it hits a “sweet spot” for most applications that need an application protocol. “Building Protocols with HTTP” (published in 2022 as a Best Current Practice RFC by the HTTP Working Group) argues that HTTP’s success can be attributed to factors like:
- familiarity by implementers, specifiers, administrators, developers, and users;
- availability of a variety of client, server, and proxy implementations;
- ease of use;
- availability of web browsers;
- reuse of existing mechanisms like authentication and encryption;
- presence of HTTP servers and clients in target deployments; and
- its ability to traverse firewalls.
Another important factor is the community of people using, implementing, and standardising HTTP. We work together to actively maintain and develop the protocol, to ensure that it’s interoperable and meets today’s needs. Continue reading
Containers are a great way to package applications with only the minimal set of libraries required. They guarantee the same deployment experience regardless of where the containers are deployed. Container orchestration software pushes this further by preparing the foundation needed to create containers at scale.
Linux and Windows support containerized applications and can participate in a container orchestration solution. There is an incredible number of guides and how-to articles on Linux containers and container orchestration, but these resources get scarce when it comes to Windows, which can discourage companies from running Windows workloads.
This blog post will examine how to set up a Windows-based Kubernetes environment to run Windows workloads and secure them using Calico Open Source. By the end of this post, you will see how simple it is to apply your current Kubernetes skills and knowledge to manage a hybrid environment.
A container is essentially a lightweight packaging technique. Each container packages an application in an isolated environment that shares its kernel with the underlying host, which binds it to the limits of the host operating system. These days, everyone is familiar with Linux containers, a popular way to run Linux-based binary files in an Continue reading
I just passed the Microsoft AZ-700 exam, Designing and Implementing Microsoft Azure Networking Solutions, which means I am now certified in the two major clouds (AWS and Azure) when it comes to networking. As always after an exam, I write a summary of my experience with it and the resources I used. This is that post.
This exam is for those who want to get certified on the networking component of Azure. Microsoft describes the exam in the following manner:
Candidates for this exam should have subject matter expertise in planning, implementing, and maintaining Azure networking solutions, including hybrid networking, connectivity, routing, security, and private access to Azure services
The breakdown of major topics and their weightings is as follows:
There is a more detailed breakdown available as well. Always go through the exam blueprint before studying for a certification.
My goal when studying for this exam was to build a proficiency working with networking in Azure. That Continue reading
In this last episode of 2022, Tom, Eyvonne, and Russ sit around and talk about some interesting things going on in the world of network engineering. We start with a short discussion about SONiC, which we intend to build at least one full episode about sometime in 2023. We also discuss state and antipatterns, and finally the idea of acquiring another company to build network resilience.
I finally got around to learning Rust. Well, starting to.
It’s amazing.
I’m comparing this to learning Basic, C, C++, Erlang, Fortran, Go, Javascript, Pascal, Perl, PHP, Prolog and Python. I wouldn’t say I know all these languages well, but I do know C, C++, Go, and Python pretty well.
I can critique all these languages, but I’ve not found anything frustrating or stupid in Rust yet.
Rust is like taking C++, but all the tricky parts are now the default, and checked at compile time.
With C++11 we got move semantics, so now we have to carefully delete the (usually) default-generated copy constructor and copy assignment, or, if copies must be allowed, then every single use of the type has to be very careful not to trigger copies unless absolutely necessary.
And you have to consider exactly when RVO kicks in. And even the best of us will sometimes get it wrong, especially with refactors. E.g. who would have thought that adding this optimization would suddenly trigger a copy of the potentially very heavy Obj object, a copy that was not there before?
--- before.cc 2022-12-28 10:32:50.969273274 +0000
+++ after.cc 2022-12-28 10:32:50.969273274 +0000
Continue reading
Network design discussions often involve anecdotal evidence, and the argument for preferring something takes the form “We should do X because at Y place, we did this.” That is fine in itself, since we want to bring in experience to avoid repeating past mistakes. Still, more often than not, it feels like we have memorized the answers and, without reading the question properly, want to write down an answer, rather than learning the problem and solution space, placing it in the context we are currently working in, discussing the various tradeoffs, and picking the best solution for that context. Our best solution for the same problem may change as the context changes. And this problem is everywhere. For example, take a look at this Twitter thread
Maybe one way to improve how we think is to adopt stochastic thinking and add qualifications when making a case without all the facts. The best engineers I have seen apply similar thought processes. As world-class poker player Annie Duke points out in Thinking in Bets, even if you start at 90%, your ego will have a much easier time with Continue reading
Hello my friend,
In one of our past blog posts we highlighted the automation journey for engineers who are willing to develop further outside of their core remit, such as networking, compute, or storage. In today’s blog post we’ll provide some examples of how the different moving pieces come together.
In my blog post The uselessness of bash I made a tool to improve pipes in shell, to assemble a better pipeline.
It solves the problem, but it’s a bit too different, with its own language.
While I was complaining to some people at work that one of the main features of the shell (the pipe operator) is broken, someone joked that it should be replaced by a protobuf-based protocol.
But on second thought it’s not really a joke.
How about instead of this:
$ goodpipe <<EOF
[
["gsutil", "cat", "gs://example/input-unsorted.txt"],
["sort", "-S300M", "-n"],
["gzip", "-9"],
["gsutil", "cp", "-", "gs://example/input-sorted-numerically.txt.gz"]
]
EOF
how about this:
$ wp -o gsutil cat gs://example/input-unsorted.txt \
| wp -io sort -S300M -n \
| wp -io gzip -9 \
| wp -i gsutil cp - gs://example/input-sorted-numerically.txt.gz
It doesn’t use protobufs, but a simpler regular protocol, in order to avoid well-known classes of bugs. Before implementing any protocol, also see formal theory and the science of insecurity.
First I hacked it together in Go, but I think the main implementation I’ll maintain is the one I made while porting it to Rust, as a way to learn Rust. The Continue reading