Kubernetes has revolutionized cloud-native applications, but networking remains a crucial aspect of ensuring scalability, security, and performance. Default networking approaches, such as iptables-based packet filtering, often introduce performance bottlenecks due to inefficient packet processing and complex rule evaluations. This is where Calico eBPF comes into play, offering a powerful alternative that enhances networking efficiency and security at scale.
Networking in Kubernetes is an abstraction. While Kubernetes lays the foundation, your Container Networking Interface (CNI) plugin is in charge of the actual networking. To better understand it, Kubernetes networking is usually divided into two primary components: a control plane and a data plane.
Choosing the right data plane is critical for optimal performance. Factors such as cluster size, throughput, and security requirements should guide this choice; a poor fit can lead to congestion, excessive latency, and resource starvation.
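As a concrete illustration, Calico's eBPF data plane is enabled through its FelixConfiguration resource. The snippet below is a sketch based on Calico's documented bpfEnabled setting; verify resource names and prerequisites against the Calico documentation for your version:

apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  bpfEnabled: true    # switch Felix from the iptables data plane to eBPF

Applying this (for example with calicoctl apply) replaces iptables rule-chain traversal with eBPF programs attached directly to network interfaces.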
Dmytro Shypovalov published another article well worth reading: why should you use an SDN controller for RSVP-TE.
Have fun!
When I first started working with Python classes, some of the most confusing topics were getters, setters, and @property. There are plenty of tutorials on how to use them, but very few actually explain why we need them or what problem they solve. So, I thought I’d write a dedicated post covering what they are and the problems they solve. Let’s get to it.
As always, if you find this post helpful, press the ‘clap’ button. It means a lot to me and helps me know you enjoy this type of content.
Before diving in, let's have a quick look at a Python class. Here’s a simple example of a Person class with two attributes, name and age.
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
I'm going to create an instance of the class called p1, passing in a name and an age. Continue reading
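To preview where this is going, here is a minimal sketch of the same class rewritten with @property (the validation rule and argument values are illustrative, not from the original post). The point is that callers keep the plain p1.age syntax while reads and writes are routed through getter and setter methods:

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age              # this assignment goes through the setter below

    @property
    def age(self):                  # getter: runs on every read of .age
        return self._age

    @age.setter
    def age(self, value):           # setter: runs on every assignment to .age
        if value < 0:               # illustrative validation rule
            raise ValueError("age cannot be negative")
        self._age = value

p1 = Person("Alice", 30)            # illustrative arguments
print(p1.age)                       # 30: plain attribute syntax, getter runs underneath
# p1.age = -5                       # would raise ValueError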
We’ve all been in a situation where we’re listening to a presentation or sitting in a class where someone is sharing knowledge. The presenter or expert finishes a point and pauses to take a breath or to move on to the next point, and that’s when you hear a voice.
“What they meant to say was…”
You can already picture the person doing it. I don’t need to describe the kind of person that does this. We all know who it is and, if you’re like me, it drives you crazy. I know it because I’ve found myself being that person several times and it’s something I’m working hard to fix.
People who want to chime in feel like they have important things to share. Maybe they know something deeper about the subject. Perhaps they’ve worked on a technology and have additional information to add to the discussion. They’re eager to contribute. They mean well. Most of the time.
What about the other times? Maybe it’s someone who thinks they’re smarter than the presenter. I know I’ve had to deal with that plenty of times. It could be an Continue reading
Last Monday, I decided to review and merge the “VXLAN on Cumulus Linux 5.x with NVUE” pull request. I usually run integration tests on the modified code to catch any remaining gremlins, but this time, all the integration tests started failing during the VM creation phase. I was completely weirded out, considering everything worked a week ago.
Fortunately, Vagrant debugging is pretty good, and I was quickly able to pinpoint the issue (full printout):
Steering traffic into MPLS and Segment Routing LSPs is one of the least standardized and most confusing parts of traffic engineering.
Despite some existing interoperability issues, in general, the MPLS and SR control …
Donatas Abraitis asked me to spread the word about the first ever Baltic NOG meeting in the second half of September 2025 (more details)
If you were looking for a nice excuse to visit that part of Europe (it’s been on my wish list for a very long time), this might be a perfect opportunity to do it 😎.
On a tangential topic of fascinating destinations 😉, there’s also ITNOG in Bologna (May 19th-20th, 2025), Autocon in Prague (May 26th-30th, 2025), and SWINOG in Bern (late June 2025).
Civil society organizations have always been at the forefront of humanitarian relief efforts, as well as safeguarding civil and human rights. These organizations play a large role in delivering services during crises, whether that means fighting climate change, supporting communities after natural disasters, or providing health services to marginalized communities.
What do many of these organizations have in common? All too often, it’s cyber attacks from adversaries looking to steal sensitive information or disrupt their operations. Cloudflare has seen this firsthand when providing free cybersecurity services to vulnerable groups through programs like Project Galileo, and found that, in aggregate, organizations protected under the project experience an average of 95 million attacks per day. While cyber attacks are a problem across all industries in the digital age, civil society organizations are disproportionately targeted, often because of their advocacy and because attackers know they typically operate with limited resources. In most cases, these organizations don’t even know they have been attacked until it is too late.
Over the last 10 years of Project Galileo, we’ve had the opportunity to work more closely with leading civil society organizations. This has led to a number of exciting new partnerships, Continue reading
The results of netlab integration tests are stored in YAML files, making it easy to track changes with Git. However, once I added the test timestamp and netlab version to the test results, I could no longer use git diff to figure out which test results changed after a test run: everything changed.
For example, the OSPFv2 test results now changed on every run, even when the actual test outcomes were identical.
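One way around this is to normalize away the volatile fields before comparing. A minimal Python sketch, assuming the results are a YAML mapping and the run-specific fields are top-level keys (the key and file names here are hypothetical):

import yaml                                # PyYAML

VOLATILE = {"timestamp", "version"}        # hypothetical run-specific keys

def normalized(path):
    # Load a results file, drop the volatile keys, and dump it back
    # with sorted keys so the textual form is stable across runs
    with open(path) as f:
        data = yaml.safe_load(f)
    return yaml.safe_dump(
        {k: v for k, v in data.items() if k not in VOLATILE},
        sort_keys=True)

# Compare normalized forms instead of the raw files
print(normalized("results/ospfv2.yaml") == normalized("results/ospfv2-new.yaml"))

The same idea can be wired into git diff itself with a textconv filter, so the diffs show only meaningful changes.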
Sequence-to-sequence (seq2seq) language translation and Generative Pretrained Transformer (GPT) models are subcategories of Natural Language Processing (NLP) that utilize the Transformer architecture. Seq2seq models typically use Long Short-Term Memory (LSTM) networks or encoder-decoder Transformers. In contrast, GPT is an autoregressive language model that uses a decoder-only Transformer mechanism. The purpose of this chapter is to provide an overview of the decoder-only Transformer architecture.
The Transformer consists of stacks of decoder modules. A word embedding vector, the result of word tokenization and embedding, is fed as input to the first decoder module. After processing, the resulting context vector is passed to the next decoder, and so on. After the final decoder, a softmax layer evaluates the output against the complete vocabulary to predict the next word. As an autoregressive model, the predicted word from the softmax layer is converted into a token before being fed back into the decoder stack. This process involves a token-to-word-vector transformation prior to re-entering the decoder.
Each decoder module consists of an attention layer, an Add & Normalization layer, and a feedforward neural network (FFNN). Rather than feeding the embedded word vector (i.e., token embedding plus positional encoding) directly Continue reading
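A minimal PyTorch sketch of one such decoder block may make the structure concrete; the dimensions, GELU activation, and post-layer normalization below are illustrative choices, not taken from any particular GPT variant:

import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    # One decoder module: masked self-attention -> Add & Norm -> FFNN -> Add & Norm
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ffnn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, tokens, d_model)
        t = x.size(1)
        # Causal mask: each position attends only to itself and earlier positions
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + attn_out)           # Add & Norm around attention
        x = self.norm2(x + self.ffnn(x))       # Add & Norm around the FFNN
        return x

x = torch.randn(1, 10, 512)                    # ten embedded tokens, d_model = 512
y = DecoderBlock()(x)                          # context vectors for the next decoder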