Got this question from a networking engineer attending the Building Next-Generation Data Center online course:
Does anyone have any advice on LACP fast rate? When and why should you use it instead of the normal (slow) LACP rate?
Apart from forming link aggregation groups, you can use LACP to detect link and node failures (more details). However:
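To make the trade-off concrete, here is a minimal sketch using the Linux bonding driver (iproute2 syntax; the interface names and the Linux-specific setup are assumptions, since the excerpt above is vendor-neutral). With LACP fast rate, the partner is asked to send LACPDUs every second instead of every 30 seconds, so a failure the hardware cannot detect is declared after roughly 3 seconds (three missed LACPDUs) instead of 90:

# Create an LACP (802.3ad) bond and ask the partner for 1-second LACPDUs.
# eth0 and eth1 are illustrative member interfaces.
ip link add bond0 type bond mode 802.3ad lacp_rate fast miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up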
In Kubernetes, the Domain Name System (DNS) plays a crucial role in enabling service discovery, allowing pods to locate and communicate with other services within the cluster. This function is essential for managing the dynamic nature of Kubernetes environments and ensuring that applications can operate seamlessly. For organizations migrating their workloads to Kubernetes, it’s also important to establish connectivity with services outside the cluster. To accomplish this, DNS is also used to resolve external service names to their corresponding IP addresses. The DNS functionality in Kubernetes is typically implemented by a set of CoreDNS pods that are exposed as a service called kube-dns. The DNS resolvers of workload pods are automatically configured to forward queries to the kube-dns service.
The output below shows the kube-dns service in a Kubernetes cluster.
kubectl get service kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP
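As a quick sanity check (a hedged sketch; the pod name and busybox image tag are assumptions), you can confirm from inside a workload pod that its resolver points at the kube-dns ClusterIP shown above and that in-cluster service names resolve:

# Run a throwaway pod, inspect its resolver config, and resolve a service name.
kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- sh -c '
  cat /etc/resolv.conf;                          # expect: nameserver 10.0.0.10
  nslookup kubernetes.default.svc.cluster.local  # resolve an in-cluster service
'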
The CoreDNS pods have to rely on external DNS servers to perform name resolution for services outside the cluster. By default, the pods are configured to forward such DNS queries to the DNS servers listed in the underlying host’s /etc/resolv.conf file. The output below displays Continue reading
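A minimal sketch of checking that forwarding behavior, assuming a default CoreDNS deployment where the Corefile contains a forward . /etc/resolv.conf directive (i.e., CoreDNS reuses the host’s upstream resolvers):

# Print the CoreDNS configuration and look for the "forward" plugin stanza.
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'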
IT organisations tend towards one of two strategic approaches: enduring, permanent, and sustained IT, or short-term, consumable, and fungible IT.
The post HS049 Evanescent vs Enduring IT appeared first on Packet Pushers.
Today's Day Two Cloud explores some design themes that emerged from the Cloud Field Day event. These themes include platform engineering, data protection and recovery, and how to deal with the fact that old technology never dies. Guest Michael Levan joins Ned Bellavance and Ethan Banks to discuss these themes and their implications for cloud application builders and operators.
The post Day Two Cloud 198: Modern Cloud Design Themes From CFD 17 appeared first on Packet Pushers.
In January 2021, we gave you a behind-the-scenes look at how we built Waiting Room on Cloudflare’s Durable Objects. Today, we are thrilled to announce the launch of Waiting Room Analytics and tell you more about how we built this feature. Waiting Room Analytics offers insights into end-user experience and provides visualizations of your waiting room traffic. These new metrics enable you to make well-informed configuration decisions, ensuring an optimal end-user experience while protecting your site from overwhelming traffic spikes.
If you’ve ever bought tickets for a popular concert online, you’ll likely have been put in a virtual queue. That’s what Waiting Room provides. It keeps your site up and running in the face of overwhelming traffic surges. Waiting Room sends excess visitors to a customizable virtual waiting room and admits them to your site as spots become available.
While customers have come to rely on the protection Waiting Room provides against traffic surges, they have had no easy way to analyze their waiting room’s performance and its impact on end-user flow. Without feedback on how waiting room traffic relates to waiting room settings, it was hard to make informed configuration decisions.
Up until now, customers could only monitor their waiting room's Continue reading
Distributed systems are complicated. Add networking to the mix, and you get traumatic challenges like the CAP theorem and Byzantine fault tolerance. Most of those challenges are unknown to engineers who have to suffer through the vendor marketing presentations, making it hard to determine whether the latest shiny gizmo works outside of PowerPoint.
I started collecting articles describing distributed-system gotchas years ago, wrote numerous blog posts on the topic in the heydays of the SDN Will Save the World lemming run, and organized them into the Distributed Systems Resources page.
Today, cloud platform engineers face new challenges when running cloud-native applications. Those applications are designed, deployed, maintained, and monitored differently from the traditional monolithic applications they are used to working with.
Cloud-native applications are designed and built to exploit the scale, elasticity, resiliency, and flexibility the cloud provides. They are groups of microservices that run in containers within a Kubernetes cluster and talk to each other. It can quickly become overwhelming for any cloud engineer to understand and visualize such an environment.
Visualizing Kubernetes network traffic and service dependencies is hard because of the dynamic and distributed nature of Kubernetes environments. Pods scale up and down frequently, services are created and deleted, and network connections change constantly, making it difficult to capture an accurate, up-to-date picture of traffic and dependencies.
Additionally, the complexity of Kubernetes networking, involving multiple components such as pods, services, and network policies, further complicates the visualization of network traffic flow and understanding of service dependencies. The use of microservices architecture in Kubernetes, where applications consist of interconnected services, adds to the complexity, particularly as the number of services grows.
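To illustrate one of the moving parts mentioned above (a hypothetical sketch; the namespace, labels, and port are made up for illustration), here is a minimal NetworkPolicy that admits traffic to "backend" pods only from "frontend" pods:

# Restrict ingress to "backend" pods: only "frontend" pods may reach TCP/8080.
# All names in this manifest are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
EOF

Even a handful of such policies, combined with dozens of services, quickly exceeds what anyone can keep in their head, which is exactly why visualization matters.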
Moreover, the scalability Continue reading