SUSE and Tigera: Empowering Secure, Scalable Kubernetes with Calico Enterprise

Modern Workloads Demand Modern Kubernetes Infrastructure

As organizations expand Kubernetes adoption—modernizing legacy applications on VMs and bare metal, running next-generation AI workloads, and deploying intelligence at the edge—the demand for infrastructure that is scalable, flexible, resilient, secure, and performant has never been greater. At the same time, compliance, consistent visibility, and efficient management without overburdening teams remain critical.

Together, Calico Enterprise from Tigera and SUSE Rancher Prime deliver a resilient and scalable platform that combines high-performance networking, robust network security, and operational simplicity in one stack.

Comprehensive Security Without Compromise

Calico Enterprise provides a unified platform for Kubernetes networking, security, and observability:

  • eBPF-powered networking for high performance without sidecar overhead
  • One platform for all Kubernetes traffic: ingress, egress, in-cluster, and multi-cluster
  • Security for every workload type: containers, VMs, and bare metal
  • Seamless scaling with built-in multi-cluster networking and security
  • Zero-trust security with identity-aware policies and workload-based microsegmentation
  • Integrated observability for policy enforcement and troubleshooting
  • Compliance features that simplify audits (PCI-DSS, HIPAA, SOC 2, FedRAMP)

Deployed with Rancher Prime, these capabilities extend directly into every cluster, enabling security-conscious industries such as finance, healthcare, and government to confidently run Kubernetes for any use case—from application modernization to AI and edge Continue reading

How to Connect Nested KubeVirt Clusters with Calico and BGP Peering

Running Kubernetes inside Kubernetes isn’t just a fun experiment anymore – it’s becoming a key pattern for delivering multi-environment platforms at scale. With KubeVirt, a virtualization add-on for Kubernetes that uses QEMU (an open-source machine emulator and virtualizer), you can run full-featured Kubernetes clusters as virtual machines (VMs) inside a parent Kubernetes cluster. This nested architecture makes it possible to unify containerized and virtualized workloads, and opens the door to new platform engineering use cases.

But here’s the challenge: how can you ensure that these nested clusters, and the workloads within, can reach, and be reached by, your physical network and are treated the same way as any other cluster?

That’s where Calico’s Advanced BGP (Border Gateway Protocol) peering with workloads comes into play. By enabling BGP route exchange between the parent cluster and nested KubeVirt VMs, Calico extends dynamic routing directly to virtualized workloads. This allows nested clusters to participate in the broader network topology and advertise their pod and service IPs just like any other node, eliminating the need for tunnels or overlays to achieve true layer-3 connectivity.

In this blog, we’ll walk through the big picture, prerequisites, and step-by-step configuration for setting up BGP Continue reading

LIU001: Growing Pains

Starting any new endeavor is hard. That’s particularly true for a career in tech. And that’s the reason Alexis Bertholf and Kevin Nanns are launching the Life In Uptime podcast. In each episode they’ll sit down with engineers, leaders, and builders in tech to uncover the stories behind their careers to help you see how... Read more »

netlab: Applying Simple Configuration Changes

For years, netlab has had custom configuration templates that can be used to deploy additional configuration onto lab devices. The custom configuration templates can be Jinja2 templates, and you can create different templates (for the same functionality) for different platforms. However, using that functionality when all you need is an extra command or two makes approximately as much sense as using a Kubernetes cluster to deploy a BusyBox container.

netlab release 25.09 solves that problem with the files plugin and the inline config functionality.

Ultra Ethernet: Address Vector (AV)

The Address Vector (AV) is a provider-managed mapping that connects remote fabric addresses to compact integer handles (fi_addr_t) used in communication operations. Unlike a routing table, the AV does not store IP-to-device mappings. Instead, it converts an opaque Fabric Address (FA)—which may contain IP, port, and transport-specific identifiers—into a simple handle that endpoints can use for sending and receiving messages. The application never needs to reference the raw IP addresses directly.

Phase 1: Application – Request & Definition

The application begins by requesting an Address Vector (AV) through the fi_av_open() call. To do this, it first defines the desired AV properties in a fi_av_attr structure:

int fi_av_open(struct fid_domain *domain, struct fi_av_attr *attr,
               struct fid_av **av, void *context);

struct fi_av_attr av_attr = {
    .type        = FI_AV_TABLE,   /* addresses referenced by a 0-based index */
    .count       = 16,            /* expected number of addresses to be inserted */
    .rx_ctx_bits = 0,             /* no bits reserved for scalable receive contexts */
    .ep_per_node = 1,             /* expected endpoints per fabric address (sizing hint) */
    .name        = "my_av",       /* system name, used when the AV is shared */
    .map_addr    = NULL,          /* no requested mapping address for a shared AV */
    .flags       = 0              /* no special behavior requested */
};

Example 4-1: structure Continue reading
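To illustrate how an opaque Fabric Address becomes a compact handle, here is a minimal sketch (not part of the original example) that opens the AV defined above and inserts one peer address. It assumes a domain has already been opened and that peer_addr holds the peer’s raw Fabric Address, obtained out of band; error handling is trimmed.

struct fid_av *av;
fi_addr_t peer;

/* Open the AV using the attributes defined above */
int ret = fi_av_open(domain, &av_attr, &av, NULL);

/* Insert one raw Fabric Address; the provider returns its compact handle in 'peer'.
   For a synchronous AV, the return value is the number of addresses inserted. */
ret = fi_av_insert(av, peer_addr, 1, &peer, 0, NULL);

/* 'peer' can now be used as the destination handle in calls such as fi_send(),
   without the application ever touching the raw IP or port again. */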

TCG059: From Source of Truth to Knowledge Graph – Rethinking Network Data

Network automation has a data problem. Traditional tools may hit limitations when managing complex infrastructure relationships. We explore how OpsMill’s InfraHub uses graph databases and temporal versioning to create what our guest calls “the knowledge graph of infrastructure” – enabling true version control at the database level while maintaining the flexibility to model anything from... Read more »

EVPN Designs: Multi-Pod Fabrics

In the EVPN Designs: Layer-3 Inter-AS Option A blog post, I described the simplest multi-site design, in which the WAN edge routers exchange IP routes in individual VRFs, resulting in two isolated layer-2 fabrics connected with a layer-3 link.

Today, let’s explore a design that will excite the True Believers in end-to-end layer-2 networks: two EVPN fabrics connected with an EBGP session to form a unified, larger EVPN fabric. We’ll use the same “physical” topology as the previous example; the only modification is that the WA-WB link is now part of the underlay IP network.

Get Your Money For Nothing, Chips Definitely Not For Free

What’s the difference between Meta Platforms and OpenAI? The big one – and perhaps the most important one in the long run – is that when Meta Platforms does a deal with neocloud CoreWeave, it actually has a revenue stream from its advertising on various Web properties that it can pump back into AI investments while OpenAI is still burning money much faster than it is making it.

Get Your Money For Nothing, Chips Definitely Not For Free was written by Timothy Prickett Morgan at The Next Platform.

Payload on Workers: a full-fledged CMS, running entirely on Cloudflare’s stack

Tucked behind the administrator login screen of countless websites is one of the Internet’s unsung heroes: the Content Management System (CMS). This seemingly basic piece of software is used to draft and publish blog posts, organize media assets, manage user profiles, and perform countless other tasks across a dizzying array of use cases. One standout in this category is a vibrant open-source project called Payload, which has over 35,000 stars on GitHub and has generated so much community excitement that it was recently acquired by Figma.

Today we’re excited to showcase a new template from the Payload team, which makes it possible to deploy a full-fledged CMS to Cloudflare’s platform in a single click: just click the Deploy to Cloudflare button to generate a fully-configured Payload instance, complete with bindings to Cloudflare D1 and R2. Below we’ll dig into the technical work that enables this, some of the opportunities it unlocks, and how we’re using Payload to help power Cloudflare TV. But first, a look at why hosting a CMS on Workers is such a game changer.

Behind the scenes: Cloudflare TV’s Payload instance

Serverless by design

Most CMSs are designed to be hosted on a conventional server that runs Continue reading

Ultra Ethernet: Completion Queue

Completion Queue Creation (fi_cq_open)

Phase 1: Application – Request & Definition

The purpose of this phase is to define the queue where operation completions will be reported. Completion queues are used to report the completion of operations submitted to endpoints, such as data transfers, RMA accesses, or remote write requests. By preparing a struct fi_cq_attr, the application describes exactly what it needs, so the provider can allocate a CQ that meets its requirements.

Example API Call:

struct fi_cq_attr cq_attr = {
    .size      = 2048,
    .format    = FI_CQ_FORMAT_DATA,
    .wait_obj  = FI_WAIT_FD,
    .flags     = FI_WRITE | FI_REMOTE_WRITE | FI_RMA,
    .data_size = 64
};

struct fid_cq *cq;
int ret = fi_cq_open(domain, &cq_attr, &cq, NULL);

Explanation of fields:

.size = 2048:  The CQ can hold up to 2048 completions. This determines how many completed operations can be buffered before the application consumes them.

.format = FI_CQ_FORMAT_DATA: This setting determines the level of detail included in each completion entry. With FI_CQ_FORMAT_DATA, the CQ entries contain information about the operation, such as the buffer pointer, the length of data, and optional completion data. If the application uses tagged messaging, choosing FI_CQ_FORMAT_TAGGED expands the entries to Continue reading
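To show where these attributes end up being used, here is a minimal sketch (not part of the original example) of an application draining this CQ with fi_cq_read(). It assumes the CQ was opened as above; with FI_CQ_FORMAT_DATA, each completion is returned as a struct fi_cq_data_entry, and process_completion() is a hypothetical application callback.

struct fi_cq_data_entry entries[16];
ssize_t n = fi_cq_read(cq, entries, 16);

if (n > 0) {
    /* Each entry describes one completed operation: op_context identifies the
       original request, len is the number of bytes transferred, buf points at
       the completed buffer, and data carries any remote CQ data. */
    for (ssize_t i = 0; i < n; i++)
        process_completion(&entries[i]);   /* hypothetical application callback */
} else if (n == -FI_EAGAIN) {
    /* No completions yet. Because the CQ was opened with FI_WAIT_FD, the
       application could retrieve the wait file descriptor with
       fi_control(&cq->fid, FI_GETWAIT, &fd) and block in poll() or epoll()
       instead of spinning. */
}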
