Load balancing is functionality that has been around for the last 30 years, helping businesses get the most from their existing infrastructure. It works by proactively steering traffic away from unhealthy origin servers and, in more advanced solutions, intelligently distributing traffic based on a choice of steering algorithms. This process ensures that errors aren’t served to end users and lets businesses tie their traffic behavior tightly to overall business objectives.
We are no longer in an age where setting up a fixed number of servers in a data center is enough to keep up with the massive growth of users browsing the Internet, and we are well past the time when a one-size-fits-all solution could meet the needs of every business. Today, customers look for load balancers that are easy to use, propagate changes quickly, and, especially now, provide the most feature flexibility. Feature flexibility has become so important because different businesses have different paths to success and, consequently, different challenges! Let’s go through a few common use cases:
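To make the idea of steering algorithms concrete, here is a minimal sketch of weighted round-robin steering that skips unhealthy origins. The origin names and weights are illustrative, not taken from any particular product:

```python
import itertools

# Hypothetical origin pool with integer weights (names are illustrative).
ORIGINS = {"origin-a": 3, "origin-b": 1}

def weighted_round_robin(origins):
    """Return an infinite iterator over origin names, proportional to weight."""
    # Repeat each origin according to its weight, then cycle forever.
    pool = [name for name, weight in origins.items() for _ in range(weight)]
    return itertools.cycle(pool)

def next_healthy(rr, is_healthy, max_tries=16):
    """Steer traffic away from unhealthy origins by skipping them.

    Gives up after max_tries so an all-unhealthy pool cannot loop forever.
    """
    for _ in range(max_tries):
        origin = next(rr)
        if is_healthy(origin):
            return origin
    raise RuntimeError("no healthy origins available")
```

With all origins healthy, picks follow the 3:1 weighting; if a health check fails for `origin-b`, every request lands on `origin-a` instead of surfacing an error to the end user.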
Sometimes the best way to understand something is to take it apart and see how it works. This blog post will help you take the lid off a Kubernetes cluster running the Calico eBPF data plane and see how the forwarding actually happens. The bonus is that, unlike home repairs, you don’t even have to figure out how to put it back together again!
The target audience for this post is users who are already running a cluster with the eBPF data plane, either as a proof-of-concept or in production. Therefore, we will not go through the steps to set up a cluster from scratch. If you would like to learn how to do that, the best starting point is this documentation.
In the best-case, and most likely, scenario you will have no data plane issues in the future, and this knowledge will still help you make informed decisions about the Calico eBPF data plane, your future clusters, and how to get the best from them. Knowledge is power!
If you are unlucky enough to experience future issues, being armed with a good understanding of the underlying technologies will Continue reading
At the Internet Society, we’re committed to building a bigger and stronger Internet. To make sure it remains open, globally connected, secure, and trustworthy, we connect the right people to discuss different aspects of key legislative proposals. It’s a healthy way to create debate, setting up the right conditions for different parties to find common […]
The post Encryption: A Building Block of a Trustworthy Internet appeared first on Internet Society.
In this blog post, I will look at 10 best practices for Kubernetes security policy design. Application modernization is a strategic initiative that changes the way enterprises do business. The journey requires a significant investment in people, processes, and technology in order to achieve the desired business outcomes of accelerating the pace of innovation, optimizing cost, and improving the enterprise’s overall security posture. It is crucial to establish the right foundation from the beginning to avoid the high cost of re-architecture. Developing a standard, scalable security design for your Kubernetes environment establishes the framework for implementing the checks, enforcement, and visibility needed to enable your strategic business objectives.
Building a scalable Kubernetes security policy design requires adopting a fully cloud-native mindset that takes into account how your streamlined CI/CD process should enable security policy provisioning. A sound design satisfies your Day-N enforcement and policy provisioning requirements while also accommodating Day-1 requirements. The following list summarizes the fundamental requirements that a cloud-native security policy design should include:
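As one concrete example of a Day-1 baseline that such a design commonly includes, here is a sketch that builds a default-deny NetworkPolicy manifest as a plain dictionary, ready to serialize and apply through a CI/CD pipeline. The namespace name is hypothetical; the field names follow the standard Kubernetes NetworkPolicy schema:

```python
def default_deny_policy(namespace):
    """Build a Kubernetes NetworkPolicy manifest that denies all ingress
    and egress for every pod in the given namespace (a common baseline
    on top of which specific allow rules are layered)."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            # An empty podSelector matches every pod in the namespace.
            "podSelector": {},
            # Declaring both policy types with no allow rules denies all traffic.
            "policyTypes": ["Ingress", "Egress"],
        },
    }
```

Generating manifests programmatically like this keeps the baseline consistent across namespaces and makes it easy to enforce in a pipeline check.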
On today's IPv6 Buzz, Ed and Scott talk about IPv6 in the management plane with network engineers Nick Buraglio and Chris Cummings, including management challenges, dual-stack vs. IPv6 only, IPv6 prefix space for lab deployments, and more.
The post IPv6 Buzz 080: Working With IPv6 In The Management Plane appeared first on Packet Pushers.
What five issues are top of mind for IT architects? Security, Backup/Recovery, Cloud, Skills Development, and Distributed/Hybrid Work. Listen in to hear why and how we chose these issues. If you have feedback or want us to follow up, head over to our Follow Up page and send us your anonymous (or not) feedback.
The post Heavy Strategy 008: Five Core Issues for IT Architects in 2021 appeared first on Packet Pushers.
We use Kubernetes to run many of the diverse services that help us control Cloudflare’s edge. We have five geographically diverse clusters, with hundreds of nodes in our largest cluster. These clusters are self-managed on bare-metal machines, which gives us a good amount of power and flexibility in the software and integrations with Kubernetes. However, it also means we don’t have a cloud provider to rely on for virtualizing or managing the nodes. This distinction becomes even more prominent when considering all the different reasons that nodes degrade. With self-managed bare-metal machines, the list of reasons that can cause a node to become unhealthy includes:
We have plenty of examples of failures in the aforementioned categories, but one example has been particularly tedious to deal with. It starts with the following log line from the kernel:
unregister_netdevice: waiting for lo to become free. Usage count = 1
The issue is further observed when the number of network interfaces on the node owned by the Container Network Interface (CNI) plugin grows out of proportion to the number of running pods:
$ Continue reading
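One way to surface the drift described above is to compare the count of CNI-owned interfaces against what the running pods should account for. The sketch below is illustrative only: it assumes a one-veth-per-pod model and takes the interface and pod lists as inputs rather than querying the node itself (on a real node you would gather these from the host's interface list and the container runtime):

```python
def detect_interface_leak(cni_interfaces, running_pods, per_pod=1):
    """Return how many CNI-owned interfaces exceed what the running pods
    should need. Assumes each pod accounts for `per_pod` interfaces
    (one veth pair endpoint on the host side is the common case)."""
    expected = len(running_pods) * per_pod
    leaked = len(cni_interfaces) - expected
    # Never report a negative leak; fewer interfaces than pods is a
    # different failure mode.
    return max(leaked, 0)
```

A periodic check like this, exported as a node metric, lets an operator alert on affected nodes and drain them before workloads are impacted.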
What if you could connect a lot of devices to the Internet—without any kind of firewall or other protection—and observe attackers trying to find their way “in?” What might you learn from such an exercise? One thing you might learn is that a lot of attacks seem to originate from within a relatively small group of IP addresses: IP addresses acting badly. Listen in as Leslie Daigle of Thinking Cat and the Techsequences podcast, Tom Ammon, and Russ White discuss just such an experiment and its results.
On today's Day Two Cloud we discuss the notion of open cloud. The premise is about reducing or minimizing costs of migrating from a public cloud. In theory, open cloud lets organizations keep their options open to make changes and reduces lock-in. But is open cloud even feasible? Our guest is Chris Psaltis, co-founder and CEO of Mist.io, a startup building an open-source, multi-cloud management platform.
The post Day Two Cloud 106: Towards A More Open Cloud appeared first on Packet Pushers.