Author Archives: Veronika Smolik

Do You Need a Service Mesh? Understanding the Role of CNI vs. Service Mesh

The world of Kubernetes networking can sometimes be confusing. What’s a CNI? A service mesh? Do I need one? Both? And how do they interact in my cluster? The questions can go on and on.

Even for seasoned platform engineers, it can be hard to see where these two components overlap and where their responsibilities end. That confusion can stand in the way of getting the most out of their complementary features.

One way to cut through the confusion is to start by defining what each of them is, then look at their respective capabilities, and finally clarify where they intersect and how they can work together.

This post will clarify:

  • What a CNI is responsible for
  • What a service mesh adds on top
  • When you need one, the other, or both

What a CNI Actually Does

Container Network Interface (CNI) is a standard way to connect and manage networking for containers in Kubernetes. It is a specification, hosted by the CNCF, that Kubernetes uses to configure container network interfaces and maintain connectivity between pods in a dynamic environment where network peers are constantly being created and destroyed.

Those standards are implemented by CNI plugins. A CNI plugin is Continue reading
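To make that a little more concrete, here is a minimal example of the kind of network configuration file a CNI plugin reads from /etc/cni/net.d/ on each node. It uses the reference bridge and host-local plugins rather than any particular vendor's CNI, and every value is illustrative, so treat it as a sketch of the format rather than a config to apply as-is.

```json
{
  "cniVersion": "1.0.0",
  "name": "example-pod-network",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```

When a pod is created, the container runtime hands a configuration like this to the named plugin binary, which wires up the pod's network namespace and reports back the IP address it assigned.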

Ingress NGINX Controller Is Dead — Should You Move to Gateway API?

Now What? Understanding the Impact of the Ingress NGINX Deprecation

Ingress NGINX Controller, the trusty staple of countless platform engineering toolkits, is about to be put out to pasture. The Kubernetes community announced the news recently, and it quickly circulated throughout the cloud-native space. It’s big news for any platform team that currently runs Ingress NGINX because, as of March 26, 2026, there will be no more bug fixes, no more critical vulnerability patches, and no more enhancements as Kubernetes continues to release new versions.

If you’re feeling ambushed, you’re not alone. For many teams, this isn’t just an inconvenient roadmap update; it’s unexpected news that puts long-term traffic management decisions front and center. You know you need to migrate yesterday, but the best path forward can be a confusing labyrinth of platforms and unfamiliar tools. Questions you might ask yourself:

❓Do you find a quick drop-in Ingress replacement?

❓Does moving to Gateway API make sense, and can you commit enough resources to do a full migration?

❓If you decide on Gateway API, what is the best option for a smooth transition?

With Ingress NGINX on the way out, platform teams are standing at a Continue reading
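To make the trade-off a bit more concrete, here is a rough sketch of the same route expressed both ways: first as a classic Ingress of the kind ingress-nginx consumes today, then as a roughly equivalent Gateway API HTTPRoute. All names, hostnames, and the parent Gateway are illustrative assumptions, not a recommended configuration.

```yaml
# Today: a typical Ingress handled by ingress-nginx (illustrative values)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
---
# Roughly equivalent HTTPRoute; assumes a Gateway named "shared-gateway"
# has already been created by the platform team
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 80
```

The routing intent is identical; what changes is that the entry point (the Gateway) and the application’s routes become separate resources that different teams can own.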

How to Turbocharge Your Kubernetes Networking With eBPF

When your Kubernetes cluster handles thousands of workloads, every millisecond counts. And that pressure is no longer the exception; it is the norm. According to a recent CNCF survey, 93% of organizations are using, piloting, or evaluating Kubernetes, revealing just how pervasive it has become.

Kubernetes has grown from a promising orchestration tool into the backbone of modern infrastructure. As adoption climbs, so does pressure to keep performance high, networking efficient, and security airtight.

However, widespread adoption brings a difficult reality. As organizations scale thousands of interconnected workloads, traditional networking and security layers begin to strain. Keeping clusters fast, observable, and protected becomes increasingly challenging.

Innovation at the lowest level of the operating system—the kernel—can provide faster networking, deeper system visibility, and stronger security. But developing programs at this level is complex and risky. Teams running large Kubernetes environments need a way to extend the Linux kernel safely and efficiently, without compromising system stability.

Why eBPF Matters for Kubernetes Networking

Enter eBPF (extended Berkeley Packet Filter), a powerful technology that allows small, verified programs to run safely inside the kernel. It gives Kubernetes platforms a way to move critical logic closer to where packets actually flow, providing sharper visibility Continue reading
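As one concrete example of how this plays out for Kubernetes networking, Calico can run an eBPF data plane in place of its standard Linux one. Assuming a cluster managed by the Tigera operator, the toggle looks roughly like the sketch below; switching data planes has prerequisites that are not shown here (such as configuring how Calico reaches the Kubernetes API server so kube-proxy can be bypassed), so check the Calico documentation for your version before applying anything like this.

```yaml
# Illustrative only: ask the Tigera operator to run Calico's eBPF data plane.
# Prerequisites (e.g. configuring how Calico reaches the API server) are omitted.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: BPF
```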

5 Reasons to Switch to the Calico Ingress Gateway (and How to Migrate Smoothly)

The End of the Ingress NGINX Controller Is Coming: What Comes Next?

The Ingress NGINX Controller is approaching retirement, which has pushed many teams to evaluate their long-term ingress strategy. The familiar Ingress resource has served well, but it comes with clear limits: annotations that differ by vendor, limited extensibility, and few options for separating operator and developer responsibilities.

The Gateway API addresses these challenges with a more expressive, standardized, and portable model for service networking. For organizations migrating off Ingress NGINX, the Calico Ingress Gateway, a production-hardened, 100% upstream distribution of Envoy Gateway, provides the most seamless and secure path forward.

If you’re evaluating your options, here are the five biggest reasons teams are switching now, followed by a step-by-step migration guide to help you make the move with confidence.


Reason 1: The Future Is Gateway API and Ingress Is Being Left Behind

Ingress NGINX is entering retirement. Maintaining it will become increasingly difficult as ecosystem support slows. The Gateway API is the replacement for Ingress and provides:

  • A portable and standardized configuration model
  • Consistent behaviour across vendors
  • Cleaner separation of roles (see the sketch after this list)
  • More expressive routing
  • Support for multiple protocols
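
As a rough sketch of what that separation of roles looks like in practice (all names below are illustrative, and the controllerName depends on which Gateway API implementation you run): a platform team owns the GatewayClass and a shared Gateway, and application teams then attach their own HTTPRoutes to it from their namespaces.

```yaml
# Illustrative only: infrastructure owned by the platform team.
# Application teams attach HTTPRoutes to this Gateway from their own namespaces.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller   # set by your Gateway implementation
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: example-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
```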

Calico implements the Gateway API directly and gives you an Continue reading