Enhancing Kubernetes Network Security with Microsegmentation

Microsegmentation represents a transformative approach to enhancing network security within Kubernetes environments. This technique divides networks into smaller, isolated segments, allowing granular control over traffic flow and significantly bolstering security posture. At its core, microsegmentation leverages Kubernetes network policies to isolate workloads, applications, namespaces, and entire clusters, tailoring security measures to specific organizational needs and compliance requirements.

The Essence of Microsegmentation Strategies

Scalability and Flexibility

The fundamental advantage of microsegmentation through network policies lies in its scalability and flexibility. Kubernetes’ dynamic, label-based selection process facilitates the addition of new segments without compromising existing network infrastructure, enabling organizations to adapt to evolving security landscapes seamlessly. Labeling assets consistently is key to microsegmentation success.

Prevent Lateral Movement of Threats

Workload isolation, a critical component of microsegmentation, emphasizes securing individual microservices within a namespace or tenant by allowing only required and approved communication. This minimizes the attack surface and prevents unauthorized lateral movement.

Namespace and Tenant Isolation

Namespace isolation further enhances security by segregating applications into unique namespaces, ensuring operational independence and reducing the impact of potential security breaches. Similarly, tenant isolation addresses the needs of multitenant environments by securing shared Kubernetes infrastructure, thus protecting tenants from Continue reading
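The workload isolation described above maps directly onto Kubernetes NetworkPolicy objects. Below is a minimal sketch (not taken from the article) that uses the official Kubernetes Python client to allow only approved, label-selected traffic to reach a workload; the namespace, labels, and port are hypothetical placeholders.

```python
# Minimal sketch: label-based workload isolation with a Kubernetes NetworkPolicy.
# Namespace "shop", apps "payments"/"orders", and port 8443 are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or client.config.load_incluster_config() inside a pod

# Select the "payments" pods and only allow ingress from pods labeled
# app=orders in the same namespace, on TCP port 8443.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="payments-allow-orders", namespace="shop"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "orders"})
                    )
                ],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8443)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="shop", body=policy)
```

Because the policy selects pods by label rather than by name or address, new replicas of either workload are covered automatically, which is the scalability property the article highlights.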

With MTIA v2 Chip, Meta Can Do AI Inference, But Not Training

If you control your code base and you have only a handful of applications that run at massive scale – what some have called hyperscale – then you, too, can win the Chip Jackpot like Meta Platforms and a few dozen companies and governments in the world have.

With MTIA v2 Chip, Meta Can Do AI Inference, But Not Training was written by Timothy Prickett Morgan at The Next Platform.

Tetrate Enterprise Gateway for Envoy Graduates

Tetrate, a major contributor to Istio and Envoy, has released Tetrate Enterprise Gateway for Envoy (TEG). This release, built on Envoy Gateway version 1.0, provides businesses with a modern and secure alternative to traditional gateways. TEG extends Envoy Gateway’s features by including cross-cluster service discovery and load balancing, OpenID Connect (OIDC), OAuth2, a Web Application Firewall (WAF), and rate limiting out of the box, along with Federal Information Processing Standard (FIPS) 140-2 compliance. A standout feature of Envoy Gateway, and by extension TEG, is its native support for the newly introduced

Zero Trust for Legacy Apps: Load Balancer Layer Can Be a Solution

When most security and platform teams think about implementing zero trust, they tend to focus on the identity and access management layer and, in Kubernetes, on the service mesh. These are fine approaches, but they can cause challenges for constellations of legacy internal apps designed to run with zero exposure to outside connections. One solution to this problem is to leverage the load balancer as the primary implementation component for zero trust architectures covering legacy apps.

True Story: A Large Bank, Load Balancers and Legacy Code

This is a true story: A large bank has thousands of legacy web apps running on dedicated infrastructure. In the past, it could rely on a “hard perimeter defense” for protection, with very brittle access control in front of the web app tier. That approach no longer works. Zero trust mandates that even internal applications maintain a stronger security posture. And for the legacy apps to remain useful, they must connect with newer apps and partner APIs. This means exposure to the public internet, or broad exposure inside the data center via East-West traffic, something that these legacy apps were never designed for. Still, facing government regulatory pressure to enhance security, the bank Continue reading
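The load-balancer-as-enforcement-point idea can be illustrated with a small identity-aware proxy that sits in front of an unchanged legacy app. The sketch below is an assumption-laden illustration, not the bank's actual design: it uses Flask, PyJWT, and requests, and the backend URL, audience, and key file are hypothetical placeholders.

```python
# Minimal sketch of zero trust enforced at the proxy/load-balancer tier:
# every request must present a valid, audience-scoped JWT before it is
# forwarded to the legacy backend, which itself remains unchanged.
# Requires: pip install flask pyjwt requests
import jwt          # PyJWT, verifies the caller's identity token
import requests     # forwards approved requests to the legacy backend
from flask import Flask, request, Response, abort

LEGACY_BACKEND = "http://legacy-app.internal:8080"   # hypothetical upstream
PUBLIC_KEY = open("idp-public-key.pem").read()       # hypothetical issuer key
app = Flask(__name__)

@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def enforce_then_forward(path):
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        abort(401)                                    # no identity, no access
    try:
        claims = jwt.decode(auth[len("Bearer "):], PUBLIC_KEY,
                            algorithms=["RS256"], audience="legacy-app")
    except jwt.InvalidTokenError:
        abort(403)                                    # invalid or expired token
    # Forward the request, passing the verified identity to the legacy app
    upstream = requests.request(
        request.method, f"{LEGACY_BACKEND}/{path}",
        headers={"X-Verified-Subject": claims.get("sub", "")},
        data=request.get_data(), timeout=5)
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=8443)
```

In production this logic would live in the load balancer itself rather than a separate process, but the control point is the same: identity is verified before any packet reaches the legacy tier.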

Stateful Firewall Cluster High Availability Theater

Dmitry Perets wrote an excellent description of how typical firewall cluster solutions implement control-plane high availability, in particular, the routing protocol Graceful Restart feature (slightly edited):


Most of the HA clustering solutions for stateful firewalls that I know implement a single-brain model, where the entire cluster is seen by the outside network as a single node. The node that is currently primary runs the control plane (hence, I call it single-brain). Sessions and the forwarding plane are synchronized between the nodes.
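To make the single-brain model above concrete, here is a toy sketch (mine, not from Dmitry's description): only the primary node makes control-plane decisions, the session table is replicated to the standby, and a failover promotes the standby while the synced sessions keep forwarding. The names and the dict-based session table are illustrative simplifications.

```python
# Toy model of a single-brain stateful firewall cluster with session sync.
from dataclasses import dataclass, field

@dataclass
class FirewallNode:
    name: str
    is_primary: bool = False
    sessions: dict = field(default_factory=dict)   # flow 5-tuple -> state

    def permit(self, flow, state):
        """Only the primary creates sessions (the control plane runs here)."""
        assert self.is_primary, "standby never makes policy decisions"
        self.sessions[flow] = state

    def sync_to(self, standby):
        """Replicate session state so the standby can forward after failover."""
        standby.sessions = dict(self.sessions)

def failover(primary, standby):
    """Standby becomes the single brain; synced sessions keep forwarding."""
    primary.is_primary, standby.is_primary = False, True
    return standby

primary = FirewallNode("fw1", is_primary=True)
standby = FirewallNode("fw2")
primary.permit(("10.0.0.5", "198.51.100.7", 443), "ESTABLISHED")
primary.sync_to(standby)
new_primary = failover(primary, standby)
print(new_primary.sessions)   # the established flow survives the switchover
```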

1000BASE-T Part 4 – Link Down Detection

In the previous three parts, we learned about all the interesting things that go on in the PHY with scrambling, descrambling, synchronization, auto negotiation, FEC encoding, and so on. This is all essential knowledge that we need to have to understand how the PHY can detect that a link has gone down, or is performing so badly that it doesn’t make sense to keep the link up.

What Does IEEE 802.3 1000BASE-T Say?

The function in 1000BASE-T that is responsible for monitoring the status of the link is called link monitor and is defined in 40.4.2.5. The standard does not say much about what goes on in link monitor, though. Below is an excerpt from the standard:

Link Monitor determines the status of the underlying receive channel and communicates it via the variable link_status. Failure of the underlying receive channel typically causes the PMA’s clients to suspend normal operation.

The Link Monitor function shall comply with the state diagram of Figure 40–17.

The state diagram (redrawn by me) is shown below:

While 1000BASE-T leaves what the PHY monitors in link monitor to the implementer, there are still some interesting variables and timers that you should be Continue reading
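To illustrate the general idea (deliberately simplified, and not the exact Figure 40–17 state machine), the sketch below models a link monitor that declares the link down as soon as the receive channel reports a failure, but waits for the channel to stay good for a stabilization interval before declaring it up. The variable names and timer value are illustrative, not taken from the standard.

```python
# Simplified link monitor: link_status transitions to OK only after the local
# receiver has reported good status for a stabilization interval, and drops to
# FAIL immediately when the receive channel fails. Values are illustrative.
class LinkMonitor:
    STABILIZE_TICKS = 750   # illustrative "wait before declaring link up"

    def __init__(self):
        self.link_status = "FAIL"
        self.ok_ticks = 0

    def tick(self, loc_rcvr_status_ok: bool) -> str:
        if not loc_rcvr_status_ok:
            # Receive channel failed: declare the link down immediately
            self.link_status = "FAIL"
            self.ok_ticks = 0
        elif self.link_status == "FAIL":
            # Channel looks good: require it to stay good before reporting
            # the link as up, so a badly flapping channel stays down
            self.ok_ticks += 1
            if self.ok_ticks >= self.STABILIZE_TICKS:
                self.link_status = "OK"
        return self.link_status
```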

HS069: Regulating AI

In today’s episode Greg and Johna spar over how, when, and why to regulate AI. Does early regulation lead to bad regulation? Does late regulation lead to a situation beyond democratic control? Comparing nascent regulation efforts in the EU, UK, and US, they analyze socio-legal principles like privacy and distributed liability. Most importantly, Johna drives... Read more »

Google Joins The Homegrown Arm Server CPU Club

If you are wondering why Intel chief executive officer Pat Gelsinger has been working so hard to get the company’s foundry business not only back on track but utterly transformed into a merchant foundry that, by 2030 or so, can take away some business from archrival Taiwan Semiconductor Manufacturing Co, the reason is simple.

Google Joins The Homegrown Arm Server CPU Club was written by Timothy Prickett Morgan at The Next Platform.

Prevent Data Exfiltration in Kubernetes: The Critical Role of Egress Access Controls

Data exfiltration and ransomware attacks in cloud-native applications are evolving cyber threats that pose significant risks to organizations, leading to substantial financial losses, reputational damage, and operational disruptions. As Kubernetes adoption grows for running containerized applications, it becomes imperative to address the unique security challenges it presents. This article explores the economic impact of data exfiltration and ransomware attacks, their modus operandi in Kubernetes environments, and effective strategies to secure egress traffic. We will delve into the implementation of DNS policies and network sets, their role in simplifying egress control enforcement, and the importance of monitoring and alerting on suspicious egress activity. By adopting these measures, organizations can strengthen the security posture of their containerized applications running in Kubernetes and mitigate the risks associated with these prevalent cyber threats.
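As a baseline for the egress controls discussed above, the sketch below (an assumption, not the article's exact approach) uses the official Kubernetes Python client to apply an egress-only NetworkPolicy that permits DNS plus a single approved external range. DNS-name-based policies and network sets are Calico-specific extensions not shown here, and the namespace, labels, and CIDR are hypothetical.

```python
# Minimal default-deny egress sketch: selected workloads may only reach
# kube-dns and one approved external CIDR. Namespace, labels, and CIDR
# are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()

egress_policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="restrict-egress", namespace="payments"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
        policy_types=["Egress"],
        egress=[
            # Allow DNS lookups to the cluster DNS in kube-system
            client.V1NetworkPolicyEgressRule(
                to=[client.V1NetworkPolicyPeer(
                    namespace_selector=client.V1LabelSelector(
                        match_labels={"kubernetes.io/metadata.name": "kube-system"}))],
                ports=[client.V1NetworkPolicyPort(protocol="UDP", port=53)],
            ),
            # Allow traffic to a single approved external service range
            client.V1NetworkPolicyEgressRule(
                to=[client.V1NetworkPolicyPeer(
                    ip_block=client.V1IPBlock(cidr="203.0.113.0/24"))],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=443)],
            ),
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="payments", body=egress_policy)
```

Any destination not explicitly listed is dropped, which is what makes bulk exfiltration to an attacker-controlled endpoint visible and blockable.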

Economic impact of data exfiltration and ransomware attacks

Data exfiltration and ransomware attacks have emerged as formidable threats to organizations worldwide, causing substantial financial losses and service outages. According to IBM’s 2023 Cost of a Data Breach report, data exfiltration attacks alone cost businesses an average of $3.86 million per incident, a staggering figure that underscores the severity of this issue. Ransomware attacks, on the other hand, can inflict even more damage, with Continue reading

With Gaudi 3, Intel Can Sell AI Accelerators To The PyTorch Masses

We have said it before, and we will say it again right here: If you can make a matrix math engine that runs the PyTorch framework and the Llama large language model, both of which are open source and both of which come out of Meta Platforms and both of which will be widely adopted by enterprises, then you can sell that matrix math engine.

With Gaudi 3, Intel Can Sell AI Accelerators To The PyTorch Masses was written by Timothy Prickett Morgan at The Next Platform.