Hedge 221: Energy Aware Protocols

A lot of people are spending time thinking about how to make transport and control plane protocols more energy efficient. Is this effort worth it? How much power are we really likely to save, and what downside potential is there in changing protocols to save energy? George Michaelson joins us from Australia to discuss energy awareness in protocols.


Enhancing Kubernetes Network Security with Microsegmentation

Microsegmentation represents a transformative approach to enhancing network security within Kubernetes environments. This technique divides networks into smaller, isolated segments, allowing for granular control over traffic flow and significantly bolstering security posture. At its core, microsegmentation leverages Kubernetes network policies to isolate workloads, applications, namespaces and entire clusters, tailoring security measures to specific organizational needs and compliance requirements.

The Essence of Microsegmentation Strategies

Scalability and Flexibility
The fundamental advantage of microsegmentation through network policies lies in its scalability and flexibility. Kubernetes’ dynamic, label-based selection process facilitates the addition of new segments without compromising existing network infrastructure, enabling organizations to adapt to evolving security landscapes seamlessly. Labeling assets is key to microsegmentation success.

Prevent Lateral Movement of Threats
Workload isolation, a critical component of microsegmentation, emphasizes the importance of securing individual microservices within a namespace or tenant by allowing only required and approved communication. This minimizes the attack surface and prevents unauthorized lateral movement.

Namespace and Tenant Isolation
Namespace isolation further enhances security by segregating applications into unique namespaces, ensuring operational independence and reducing the impact of potential security breaches. Similarly, tenant isolation addresses the needs of multitenant environments by securing shared Kubernetes infrastructure, thus protecting tenants from Continue reading
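
To make the label-based isolation above concrete, here is a minimal sketch in Go, using the Kubernetes API types, of a NetworkPolicy that admits ingress to an "orders" workload only from pods labeled "frontend" on one port. The namespace, labels, port, and policy name are hypothetical, chosen only for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	tcp := corev1.ProtocolTCP
	port := intstr.FromInt(8080) // hypothetical application port

	policy := networkingv1.NetworkPolicy{
		TypeMeta: metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "orders-allow-frontend-only", // hypothetical name
			Namespace: "shop",                       // hypothetical namespace
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the workload to isolate by label.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "orders"},
			},
			// Once an Ingress policy selects the pods, all other inbound traffic is denied.
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				// Only the approved caller, selected by label, may connect.
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "frontend"},
					},
				}},
				Ports: []networkingv1.NetworkPolicyPort{{
					Protocol: &tcp,
					Port:     &port,
				}},
			}},
		},
	}

	// Print the generated object; in practice it would be applied to the cluster.
	out, _ := json.MarshalIndent(policy, "", "  ")
	fmt.Println(string(out))
}
```

Because the segments are defined by label selectors rather than addresses, adding a new approved caller is just a matter of labeling it, which is the scalability property the excerpt highlights.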

With MTIA v2 Chip, Meta Can Do AI Inference, But Not Training

If you control your code base and you have only a handful of applications that run at massive scale – what some have called hyperscale – then you, too, can win the Chip Jackpot like Meta Platforms and a few dozen companies and governments in the world have.

With MTIA v2 Chip, Meta Can Do AI Inference, But Not Training was written by Timothy Prickett Morgan at The Next Platform.

Narrowboat: Lessons learnt

At this stage of the build, internally there is only really the bedroom, plus snags and trims (windows, ceiling edges and doors), left to do. I am currently in Australia having a bit of RnR, so this is a reflective post to show the build at its present state and go through the things I have learnt along the way. I could plan all I liked, but as I haven’t lived on a boat before, there were always going to be a few wrong design decisions.

Tetrate Enterprise Gateway for Envoy Graduates

Tetrate, known for its enterprise distributions of Istio, has announced the graduation of Tetrate Enterprise Gateway for Envoy (TEG), built on Envoy Gateway version 1.0. The release provides businesses with a modern and secure alternative to traditional gateways. TEG extends Envoy Gateway’s features by including cross-cluster service discovery and load balancing, OpenID Connect (OIDC), OAuth2, Web Application Firewall (WAF), and rate limiting out of the box, along with Federal Information Processing Standard (FIPS) 140-2 compliance. A standout feature of Envoy Gateway, and by extension TEG, is its native support for the newly introduced

Zero Trust for Legacy Apps: Load Balancer Layer Can Be a Solution

When most security and platform teams think about implementing zero trust, they tend to focus on the identity and access management layer and, in Kubernetes, on the service mesh. These are fine approaches, but they can cause challenges for constellations of legacy internal apps designed to run with zero exposure to outside connections. One solution to this problem is to leverage the load balancer as the primary implementation component for zero trust architectures covering legacy apps.

True Story: A Large Bank, Load Balancers and Legacy Code

This is a true story: A large bank has thousands of legacy web apps running on dedicated infrastructure. In the past, it could rely on a “hard perimeter defense” for protection with very brittle access control in front of the web app tier. That approach no longer works. Zero trust mandates that even internal applications maintain a stronger security posture. And for the legacy apps to remain useful, they must connect with newer apps and partner APIs. This means exposure to the public internet or broadly inside the data center via East-West traffic — something that these legacy apps were never designed for. Still, facing government regulatory pressure to enhance security, the bank Continue reading
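
As a rough sketch of the idea, assume the "load balancer" in front of the legacy tier is a simple reverse proxy that terminates mutual TLS and verifies the caller's identity before any request reaches the untouched backend. The backend address, CA bundle, and certificate paths below are hypothetical placeholders.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
)

func main() {
	// Legacy app that was never meant to be exposed directly (hypothetical address).
	backend, _ := url.Parse("http://10.0.0.15:8080")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Only callers presenting a certificate signed by the internal CA get through.
	caPEM, err := os.ReadFile("internal-ca.pem") // hypothetical CA bundle
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs:  caPool,
			ClientAuth: tls.RequireAndVerifyClientCert, // identity enforced at the proxy, not the app
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// The verified client identity is available here for per-service authorization.
			cn := r.TLS.PeerCertificates[0].Subject.CommonName
			log.Printf("allowing %s -> legacy backend", cn)
			proxy.ServeHTTP(w, r)
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("lb-cert.pem", "lb-key.pem")) // hypothetical cert/key
}
```

The point is that the legacy application stays exactly as it is; every connection it sees has already been authenticated and authorized one hop upstream.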

Stateful Firewall Cluster High Availability Theater

Dmitry Perets wrote an excellent description of how typical firewall cluster solutions implement control-plane high availability, in particular, the routing protocol Graceful Restart feature (slightly edited):


Most of the HA clustering solutions for stateful firewalls that I know implement a single-brain model, where the entire cluster is seen by the outside network as a single node. The node that is currently primary runs the control plane (hence, I call it single-brain). Sessions and the forwarding plane are synchronized between the nodes.
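
The single-brain model can be reduced to a very small sketch: one node owns the control plane, and session state is replicated to every member so the forwarding plane survives a failover. The structure below is illustrative only, not any vendor's implementation.

```go
package main

import "fmt"

// Session is a minimal stand-in for a firewall flow entry.
type Session struct {
	Src, Dst string
	Port     int
}

// Node is one cluster member. Only the primary runs the control plane;
// every node holds a synchronized copy of the session table.
type Node struct {
	Name     string
	Primary  bool
	Sessions map[string]Session
}

// Cluster models the single-brain design: several nodes appear to the
// outside network as one firewall, with the primary owning the control plane.
type Cluster struct {
	Nodes []*Node
}

// AddSession is invoked when the primary permits a new flow; the entry is
// replicated to every node (state sync).
func (c *Cluster) AddSession(key string, s Session) {
	for _, n := range c.Nodes {
		n.Sessions[key] = s
	}
}

// Failover promotes the next node to primary. Established flows keep
// forwarding because their state was already synced; only the control
// plane restarts, which is what Graceful Restart hides from the peers.
func (c *Cluster) Failover() {
	for i, n := range c.Nodes {
		if n.Primary {
			n.Primary = false
			next := c.Nodes[(i+1)%len(c.Nodes)]
			next.Primary = true
			fmt.Printf("%s takes over the control plane with %d synced sessions\n",
				next.Name, len(next.Sessions))
			return
		}
	}
}

func main() {
	a := &Node{Name: "fw-a", Primary: true, Sessions: map[string]Session{}}
	b := &Node{Name: "fw-b", Sessions: map[string]Session{}}
	cluster := &Cluster{Nodes: []*Node{a, b}}

	cluster.AddSession("10.0.0.1->192.0.2.7:443",
		Session{Src: "10.0.0.1", Dst: "192.0.2.7", Port: 443})
	cluster.Failover() // fw-b keeps forwarding the established flow
}
```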

1000BASE-T Part 4 – Link Down Detection

In the previous three parts, we learned about all the interesting things that go on in the PHY with scrambling, descrambling, synchronization, auto negotiation, FEC encoding, and so on. This is all essential knowledge that we need to have to understand how the PHY can detect that a link has gone down, or is performing so badly that it doesn’t make sense to keep the link up.

What Does IEEE 802.3 1000BASE-T Say?

The function in 1000BASE-T that is responsible for monitoring the status of the link is called Link Monitor and is defined in 40.4.2.5. The standard does not say much about what goes on in Link Monitor, though. Below is an excerpt from the standard:

Link Monitor determines the status of the underlying receive channel and communicates it via the variable link_status. Failure of the underlying receive channel typically causes the PMA’s clients to suspend normal operation.

The Link Monitor function shall comply with the state diagram of Figure 40–17.

The state diagram (redrawn by me) is shown below:

While 1000BASE-T leaves what the PHY monitors in link monitor to the implementer, there are still some interesting variables and timers that you should be Continue reading
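
To make the idea concrete, here is a deliberately simplified link monitor sketch in Go: the link is declared up only after the local receiver status (loc_rcvr_status) has stayed good for a hold-down period, and it is taken down immediately when the receiver reports a failure. The structure and the 750 ms value are illustrative assumptions, not a faithful rendering of the state diagram in Figure 40–17.

```go
package main

import (
	"fmt"
	"time"
)

// LinkStatus mirrors the link_status variable from the excerpt above.
type LinkStatus int

const (
	LinkFail LinkStatus = iota // link_status = FAIL
	LinkOK                     // link_status = OK
)

func (s LinkStatus) String() string {
	if s == LinkOK {
		return "OK"
	}
	return "FAIL"
}

// LinkMonitor is a simplified model: link up only after the receiver has been
// good for a hold-down period, link down immediately on receiver failure.
type LinkMonitor struct {
	status    LinkStatus
	goodSince time.Time     // when loc_rcvr_status last went good (zero if not good)
	stabilize time.Duration // assumed stabilization delay, not an exact 802.3 timer
}

// Update is called whenever the PHY re-evaluates the receive channel.
func (m *LinkMonitor) Update(locRcvrOK bool, now time.Time) LinkStatus {
	switch {
	case !locRcvrOK:
		// Failure of the underlying receive channel: take the link down right away.
		m.status = LinkFail
		m.goodSince = time.Time{}
	case m.status == LinkFail:
		if m.goodSince.IsZero() {
			m.goodSince = now // receiver just came good; start the hold-down
		}
		if now.Sub(m.goodSince) >= m.stabilize {
			m.status = LinkOK // receiver stayed good long enough: declare link up
		}
	}
	return m.status
}

func main() {
	m := &LinkMonitor{status: LinkFail, stabilize: 750 * time.Millisecond}
	t := time.Now()
	fmt.Println(m.Update(true, t))                     // FAIL: hold-down still running
	fmt.Println(m.Update(true, t.Add(time.Second)))    // OK: receiver stayed good
	fmt.Println(m.Update(false, t.Add(2*time.Second))) // FAIL: receiver failed
}
```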

HS069: Regulating AI

In today’s episode Greg and Johna spar over how, when, and why to regulate AI. Does early regulation lead to bad regulation? Does late regulation lead to a situation beyond democratic control? Comparing nascent regulation efforts in the EU, UK, and US, they analyze socio-legal principles like privacy and distributed liability. Most importantly, Johna drives... Read more »