Traefik is a leading open source reverse proxy and load balancer. “Traditional reverse proxies were not well-suited for these dynamic environments,” Emile Vauge, Traefik’s creator, previously said in The New Stack. Now Traefik Labs, the project’s parent company, has introduced the first release candidate of Traefik Proxy v3. The new version adds support for WebAssembly (Wasm), OpenTelemetry and the Kubernetes Gateway API.
A Game-Changer for WebAssembly?
The inclusion of WebAssembly support may prove a game-changer. Besides offering high-performance, language-agnostic capabilities for serverless and containerized applications, Traefik’s support gives Wasm a larger potential market.
“This is a major step toward a low-friction extensibility story for Traefik, as it brings broader plugins into its growing ecosystem while providing a great developer experience, with plugins that can be written in different languages and compiled directly into Wasm,” said the Open Worldwide Application Security Project (OWASP). Support for the OpenTelemetry protocol (OTLP), meanwhile, will provide users with improved visibility into their applications.
Since the Prometheus and Jesse Haka, a cloud architect at
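For context, Traefik’s existing Go plugins follow a simple middleware convention: a package exports CreateConfig and New, and Traefik wires the returned handler into the request chain. The sketch below, with a hypothetical package and header names, shows that shape; v3’s Wasm support extends the same plugin idea to code compiled from other languages.

```go
// Package demoheader is a minimal sketch of a Traefik middleware plugin.
// The package, header name and values are hypothetical.
package demoheader

import (
	"context"
	"net/http"
)

// Config holds the plugin's user-facing configuration.
type Config struct {
	HeaderName  string `json:"headerName,omitempty"`
	HeaderValue string `json:"headerValue,omitempty"`
}

// CreateConfig returns the default configuration; Traefik calls this first.
func CreateConfig() *Config {
	return &Config{HeaderName: "X-Demo", HeaderValue: "enabled"}
}

type demoHeader struct {
	next   http.Handler
	config *Config
}

// New builds the middleware; Traefik passes in the next handler in the chain.
func New(ctx context.Context, next http.Handler, config *Config, name string) (http.Handler, error) {
	return &demoHeader{next: next, config: config}, nil
}

// ServeHTTP sets a response header, then hands the request to the next handler.
func (d *demoHeader) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	rw.Header().Set(d.config.HeaderName, d.config.HeaderValue)
	d.next.ServeHTTP(rw, req)
}
```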
CHICAGO — Incoming traffic looking to access your network and platform probably uses the network’s ingress. But the ingress carries with it scaling, availability and security issues.
So suggested Kate Osborn, a software engineer at NGINX, in this episode of TNS Makers, recorded On the Road at KubeCon + CloudNativeCon North America.
“One of the biggest issues is it’s not extensible,” Osborn said. “So it’s a very simple resource. But there’s a bunch of complex routing that people want to do.”
Photo by David Woolley, cc0
Dr. David L. Mills, the visionary behind the Network Time Protocol (NTP) that synchronizes time across billions of devices globally, died at age 85 on Jan. 17, 2024.
As the old Chicago song asks, “Does anybody really know what time it is?” The Network Time Protocol (NTP) was, and is, essential for running the internet, and it is a big part of why the answer is yes. Vint Cerf announced the news of his passing. We don’t think about how hard it is to synchronize time around the world to within milliseconds. But everything, and I mean everything, depends on NTP’s accuracy. It’s not just the internet; it’s financial markets, power grids, GPS, cryptography and far, far more.
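To get a feel for what NTP does at the wire level, here is a minimal SNTP-style query in Go: it sends a 48-byte client packet to a public pool server and decodes the server’s transmit timestamp. This is a sketch only; real NTP clients also measure round-trip delay, combine multiple samples and discipline the local clock.

```go
// Minimal SNTP query against a public NTP pool server.
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "pool.ntp.org:123") // NTP uses UDP port 123
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(5 * time.Second))

	// 48-byte request: LI=0, Version=3, Mode=3 (client) packed into the first byte.
	req := make([]byte, 48)
	req[0] = 0x1B
	if _, err := conn.Write(req); err != nil {
		log.Fatal(err)
	}

	resp := make([]byte, 48)
	if _, err := conn.Read(resp); err != nil {
		log.Fatal(err)
	}

	// Transmit timestamp: seconds since 1900 (bytes 40-43) and fraction (44-47).
	secs := binary.BigEndian.Uint32(resp[40:44])
	frac := binary.BigEndian.Uint32(resp[44:48])
	const ntpToUnix = 2208988800 // seconds between the 1900 and 1970 epochs
	nanos := (uint64(frac) * 1000000000) >> 32
	fmt.Println("server time:", time.Unix(int64(secs)-ntpToUnix, int64(nanos)).UTC())
}
```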
This new vulnerability, Terrapin, breaks the integrity of SSH’s secure channel. Yes, that’s just as bad as it sounds.
Anyone who does anything on the cloud or in programming uses Secure Shell (SSH), so any vulnerability is bad news. Guess what? I’ve got some bad news. Researchers at Ruhr University have found a significant vulnerability in the SSH cryptographic network protocol, which they’ve labeled Terrapin. Tracked as CVE-2023-48795 (general protocol flaw), along with the related CVE-2023-46446 (rogue session attack in AsyncSSH), it poses a serious threat to internet security. Terrapin enables attackers to compromise the integrity of SSH connections, which are widely used for secure access to network services.
The Terrapin attack targets the SSH protocol by manipulating prefix sequence numbers during the handshake process. This manipulation enables attackers to remove messages sent by the client or server at the beginning of the secure channel without detection. The attack can lead to the use of less secure client authentication algorithms and the deactivation of specific countermeasures against keystroke timing attacks in OpenSSH 9.5.
Terrapin is a Man-in-the-Middle
The good news — yes, there is good news — is that while the Terrapin attack
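One practical follow-up is to check whether a server advertises OpenSSH’s “strict KEX” extension (kex-strict-s-v00@openssh.com), which was introduced to harden the handshake against Terrapin-style prefix truncation. The Go sketch below reads a server’s SSH_MSG_KEXINIT and looks for that marker; the target address is a placeholder, and this is a rough probe, not a full vulnerability scanner.

```go
// Check whether an SSH server advertises the strict key exchange extension.
package main

import (
	"bufio"
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"net"
	"strings"
	"time"
)

func main() {
	addr := "example.com:22" // placeholder target
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(10 * time.Second))

	// Identification exchange (RFC 4253): send our banner, read the server's.
	fmt.Fprintf(conn, "SSH-2.0-TerrapinCheck\r\n")
	br := bufio.NewReader(conn)
	for {
		line, err := br.ReadString('\n')
		if err != nil {
			log.Fatal(err)
		}
		if strings.HasPrefix(line, "SSH-") {
			fmt.Println("server banner:", strings.TrimRight(line, "\r\n"))
			break
		}
	}

	// The first binary packet is unencrypted: uint32 length, padding length,
	// payload, padding. The payload should start with SSH_MSG_KEXINIT (20).
	var pktLen uint32
	if err := binary.Read(br, binary.BigEndian, &pktLen); err != nil {
		log.Fatal(err)
	}
	pkt := make([]byte, pktLen)
	if _, err := io.ReadFull(br, pkt); err != nil {
		log.Fatal(err)
	}
	padLen := int(pkt[0])
	payload := pkt[1 : len(pkt)-padLen]
	if payload[0] != 20 {
		log.Fatalf("expected SSH_MSG_KEXINIT, got message type %d", payload[0])
	}

	// Payload layout: type (1 byte), cookie (16 bytes), kex_algorithms name-list.
	listLen := binary.BigEndian.Uint32(payload[17:21])
	kexAlgos := string(payload[21 : 21+listLen])
	if strings.Contains(kexAlgos, "kex-strict-s-v00@openssh.com") {
		fmt.Println("server advertises strict KEX (Terrapin countermeasure)")
	} else {
		fmt.Println("no strict KEX marker seen; check cipher modes and patch level")
	}
}
```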
Imagine you’re developing an application for your internal network that requires a certain network speed to function properly. You could open a web browser and point it to one of the many network speed tests on the market, but I’m sure you know what that does… it tests your connection to the outside world.
What if you’re looking to test the speed of your LAN itself? That’s where OpenSpeedTest comes in.
OpenSpeedTest is a free, open source HTML5 network performance estimation tool that doesn’t require any client-side software or plugin to function. Once deployed, the tool can be accessed from a standard, modern web browser. Even better, OpenSpeedTest can be deployed with Docker. It uses a combination of NGINX and Alpine Linux to keep resource usage on your Docker server very low.
You can run OpenSpeedTest with or without
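If you just want a rough number between two machines you control, a throwaway client/server pair will do: the sketch below streams 100 MB of zeroes over TCP and reports the rate. Run it with -serve on one host, then point the other host at it. It illustrates the idea and is no substitute for OpenSpeedTest.

```go
// Crude LAN throughput estimate: one side streams data, the other times it.
package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"time"
)

const payloadMB = 100

func main() {
	serve := flag.Bool("serve", false, "run as the sending side")
	addr := flag.String("addr", ":9000", "listen or connect address")
	flag.Parse()

	if *serve {
		ln, err := net.Listen("tcp", *addr)
		if err != nil {
			log.Fatal(err)
		}
		for {
			conn, err := ln.Accept()
			if err != nil {
				log.Fatal(err)
			}
			go func(c net.Conn) {
				defer c.Close()
				buf := make([]byte, 1<<20) // 1 MB of zeroes
				for i := 0; i < payloadMB; i++ {
					if _, err := c.Write(buf); err != nil {
						return
					}
				}
			}(conn)
		}
	}

	conn, err := net.Dial("tcp", *addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	start := time.Now()
	n, err := io.Copy(io.Discard, conn)
	if err != nil {
		log.Fatal(err)
	}
	secs := time.Since(start).Seconds()
	fmt.Printf("received %d MB in %.2fs: %.1f Mbit/s\n", n>>20, secs, float64(n*8)/1e6/secs)
}
```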
IPv6, the most recent version of the Internet Protocol, was designed to overcome the address-space limitations of IPv4, which has been overwhelmed by the explosion of the digital ecosystem.
Although major companies like Google, Meta, Microsoft and YouTube are gradually adopting IPv6, the overall adoption of this technologically superior protocol has been slow. As of September, only 22% of websites have made the switch. What is slowing the adoption of IPv6? Let’s take a walk through the possible causes and potential solutions.
Why IPv6?
IPv6 has a 128-bit address format that allows for a vastly larger number of unique IP addresses than its predecessor, IPv4, whose 32-bit format tops out at roughly 4.3 billion addresses. IPv6, by contrast, provides about 340 undecillion (340 trillion trillion trillion) addresses, more than enough to accommodate the projected surge of devices.
In addition to expanding the address space, IPv6 offers these improvements:
Streamlined network management: Unlike IPv4, which requires manual configuration or external servers like DHCP (Dynamic Host Configuration Protocol), IPv6 supports stateless address autoconfiguration (SLAAC).
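A quick way to gauge adoption for any given service is to check whether it publishes AAAA (IPv6) records and whether your own network can actually reach them. A small Go sketch, with the hostname chosen purely as an example:

```go
// Look up a host's IPv6 addresses and test reachability over IPv6 only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	host := "www.google.com" // example of a site that serves over IPv6
	ips, err := net.LookupIP(host)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		if ip.To4() != nil {
			continue // skip IPv4 (A) records
		}
		fmt.Println("AAAA record:", ip)
		// "tcp6" forces IPv6; a failure here usually means no local IPv6 path.
		conn, err := net.DialTimeout("tcp6", net.JoinHostPort(ip.String(), "443"), 3*time.Second)
		if err != nil {
			fmt.Println("  not reachable over IPv6:", err)
			continue
		}
		conn.Close()
		fmt.Println("  reachable over IPv6")
	}
}
```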
In this article, we will embark on an in-depth journey into Kubernetes Gateway API policies and their pivotal role in managing and controlling traffic within Kubernetes clusters.
With an understanding of these policies, how they can be effectively leveraged and the impact they can have on traffic management strategies, you will be equipped with the knowledge and practical insights needed to harness the full potential of the Kubernetes Gateway API for optimized traffic management.
Benefits of Using Kubernetes Gateway API for Traffic Management
Kubernetes Gateway API introduces a paradigm shift in how we manage and control traffic within Kubernetes clusters, offering a range of significant advantages. First and foremost, it simplifies configuration by abstracting away complexities and providing a user-friendly, declarative approach to define routing and traffic policies.
Furthermore, its native integration with Kubernetes ensures a seamless fit, leveraging Kubernetes’ orchestration and scalability capabilities. With the Kubernetes Gateway API, fine-grained control over traffic becomes possible, allowing for precise management with policies applied at various stages, from request routing to response transformations.
As applications scale, the Kubernetes Gateway API scales effortlessly, handling high traffic loads and adapting to changing workloads without manual intervention. It incorporates
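To make the declarative model concrete, the sketch below uses the Kubernetes dynamic client to list whatever HTTPRoute resources are already applied in a cluster. It assumes a kubeconfig at the default path and a cluster with the Gateway API CRDs (gateway.networking.k8s.io/v1) installed.

```go
// List Gateway API HTTPRoutes across all namespaces via the dynamic client.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	httpRoutes := schema.GroupVersionResource{
		Group:    "gateway.networking.k8s.io",
		Version:  "v1",
		Resource: "httproutes",
	}
	list, err := client.Resource(httpRoutes).Namespace(metav1.NamespaceAll).
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range list.Items {
		hostnames, _, _ := unstructured.NestedStringSlice(item.Object, "spec", "hostnames")
		fmt.Printf("%s/%s hostnames=%v\n", item.GetNamespace(), item.GetName(), hostnames)
	}
}
```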
Kubernetes, the stalwart of container orchestration, has ushered in a new era of application deployment and management. But as the Kubernetes ecosystem evolves, networking within these clusters has posed persistent challenges. Enter the Gateway API, a transformative solution poised to redefine Kubernetes networking as we know it.
At its core, the Gateway API represents a paradigm shift in Kubernetes networking. It offers a standardized approach to configuring and managing network routing, traffic shaping, and security policies within Kubernetes clusters. This standardization brings with it a host of compelling advantages.
Firstly, it simplifies the intricate world of networking. By providing a declarative and consistent method to define routing rules, it liberates developers and operators from the complexities of network intricacies. This shift allows them to channel their energies toward refining application logic.
The Gateway API doesn’t stop there; it brings scalability to the forefront. Traditional Kubernetes networking solutions, like Ingress controllers, often falter under the weight of burgeoning workloads. In contrast, the Gateway API is engineered to gracefully handle high loads, promising superior performance for modern, dynamic applications.
NGINX, now a part of F5, is the company behind the popular open source NGINX project. NGINX offers a suite of technologies
eBPF (extended Berkeley Packet Filter) is a powerful technology that operates directly within the Linux kernel, offering robust hooks for extending runtime observability, security and networking capabilities across various deployment environments. While eBPF has gained widespread adoption, organizations are encouraged to leverage tools and layers built on eBPF to effectively harness its functionality; Gartner, for instance, advises that most enterprises lack the expertise to utilize eBPF directly. Cilium offers additional capabilities built on eBPF to help secure network connectivity between runtimes deployed on Docker and Kubernetes, as well as other environments, including bare metal and virtual machines. Isovalent, which created Cilium and donated it to the CNCF, and the project’s contributors are also developing, in parallel, network observability and network security functionality through the Cilium sub-projects Hubble and Tetragon, respectively.
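As a small taste of that observability, the sketch below uses the cilium/ebpf Go library to enumerate the eBPF programs currently loaded in the kernel. It assumes root (or CAP_BPF) on a reasonably recent kernel, and the field names should be checked against the library version you use.

```go
// Enumerate loaded eBPF programs with github.com/cilium/ebpf.
package main

import (
	"errors"
	"fmt"
	"log"
	"os"

	"github.com/cilium/ebpf"
)

func main() {
	var id ebpf.ProgramID
	for {
		next, err := ebpf.ProgramGetNextID(id)
		if err != nil {
			if errors.Is(err, os.ErrNotExist) {
				break // no more programs
			}
			log.Fatal(err)
		}
		id = next

		prog, err := ebpf.NewProgramFromID(id)
		if err != nil {
			continue // the program may have been unloaded in the meantime
		}
		if info, err := prog.Info(); err == nil {
			fmt.Printf("id=%d type=%s name=%s\n", id, info.Type, info.Name)
		}
		prog.Close()
	}
}
```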
This graduation certifies that Cilium — created by
To keep the world connected, telecommunication networks demand performance and programmability to meet customers when and where they are, from streaming the winning goal of the world cup to coordinating responses to the latest natural disaster.
When switchboards were still run by human operators, telco companies were all about custom hardware with “black boxes” from vendors providing the speed the network needed. These black boxes controlled the performance of the network, which also made it dependent on where they were actually deployed.
As telcos moved from traditional phone calls to additional services like messaging and mobile data, the demands on the network pushed the boundaries of what was possible. Network Functions Virtualization (NFV) sought to allow telcos to use “white box” commodity hardware to scale out throughput and increase flexibility.
Technologies like the Data Plane Development Kit (
At some point in either your cloud- or container-development life, you’re going to have to share a folder from the Linux server. You may only have to do this in a dev environment, where you want to be able to share files with other developers on a third-party, cloud-hosted instance of Linux. Or maybe file sharing is part of an app or service you are building.
And because
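If all you need is a quick, read-only way to expose a folder to other developers on a cloud-hosted Linux box, Go’s standard library can do it over plain HTTP. To be clear, this is a stand-in for a proper file-sharing setup such as Samba or NFS, with no authentication or encryption, so keep it to trusted networks; the directory and port are arbitrary.

```go
// Serve the contents of ./shared read-only over HTTP on port 8080.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./shared")))
	log.Println("sharing ./shared on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```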
Although legacy applications and infrastructure may not be a popular topic, their significance to organizations is crucial.
As cloud native technologies are poised to become a dominant part of computing, certain applications and infrastructure must remain on premises, particularly in regulated and other industries.
Amid the buzz surrounding no-code and low-code platforms, technologists must prioritize acquiring the appropriate tools and insights to manage the availability and performance of on-premises environments. Consumer expectations for flawless digital experiences continue to rise, so companies must optimize their on-premises customer-facing applications to keep pace.
For Some, On-Premises Infrastructure Will Remain Essential
Much of the recent digital transformation across multiple industries can be attributed to a substantial shift to the cloud. Cloud native technologies are in high demand due to their ability to expedite release velocity and optimize operations with speed, agility, scale and resilience.
Nevertheless, it’s easy to overlook the fact that many organizations, especially larger enterprises, still run their applications and infrastructure on premises. While this may seem surprising, it’s partially due to the time-consuming process of seamlessly and securely migrating highly intricate legacy applications to the cloud. Often, only a portion of an application may be migrated to the cloud, while major components remain on premises.
Data networks are generally used for file sharing, application operations or internet access, but what about a network strictly for distributing application programming interfaces? After all, an API is pretty esoteric, given that it is not standard data but a set of rules that define how two pieces of software can interact with each other.
Well, that out-of-the-ordinary system now exists, and it’s designed to do a ton of heavy lifting behind the scenes that developers will appreciate.
Bangalore- and San Francisco-based Hasura has introduced Hasura DDN, a new edge network that uses the Graph Query Language (GraphQL) and is designed for transporting real-time, streaming and analytical data. It enables developers to run low-latency, high-performance data APIs at a global scale, with no additional effort and no additional fees, according to the company.
Hasura CEO and co-founder
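For a sense of what consuming such a data API looks like from application code, here is a generic GraphQL-over-HTTP call in Go. The endpoint URL and the query are placeholders, not a real Hasura DDN schema.

```go
// POST a GraphQL query to an HTTP endpoint and print the "data" field.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"query": `query { products(limit: 3) { id name } }`, // placeholder query
	})
	resp, err := http.Post("https://api.example.com/graphql", // placeholder endpoint
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var result map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println(result["data"])
}
```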
Where are application networking features heading, and how might this affect the way we design and approach distributed applications in the future? The revelations might surprise you. Let’s explore the shifting sands of application networking, focusing on the movement of networking concerns with the rise of Istio’s
VANCOUVER — Red Hat has announced OpenStack Platform 17.1. This release is the product of the company’s ongoing commitment to support telecoms as they build their next-generation 5G network infrastructures.
In addition to bridging existing 4G technologies with emerging 5G networks, the platform enables advanced use cases like Red Hat OpenShift, the company’s
WithSecure has unveiled a mission to reduce software energy consumption, backing research on how users trade off energy consumption against performance and developing a test bench for measuring energy use, which it ultimately plans to make open source.
The Finnish cyber security firm has also kicked off discussions on establishing standards for measuring software power consumption with government agencies in Finland and across Europe, after establishing that there is little in the way of guidance currently.
Power Consumption
Power consumption by backend infrastructure is a known problem. Data centers, for example, account for up to 1.3% of worldwide electricity consumption, and user devices consume more energy than networks and data centers combined.
Speaking at Sphere 2023 in Helsinki, the company said that most of its own operations run in the cloud, which gives it good visibility into the resources it is using and their CO2 impact.
Most of the data centers
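For teams that want to start measuring rather than estimating, Linux already exposes cumulative CPU package energy counters through the powercap (RAPL) sysfs interface on many Intel machines. The sketch below samples the counter twice and reports average power; the path and its availability vary by hardware, it usually requires root and bare metal, and it measures the whole package rather than a single process.

```go
// Sample the RAPL package energy counter twice and report average watts.
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
	"time"
)

const energyFile = "/sys/class/powercap/intel-rapl:0/energy_uj" // cumulative microjoules

func readEnergyMicrojoules() (uint64, error) {
	data, err := os.ReadFile(energyFile)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	start, err := readEnergyMicrojoules()
	if err != nil {
		log.Fatal(err) // typically needs root, an Intel CPU and bare metal
	}
	interval := 5 * time.Second
	time.Sleep(interval)
	end, err := readEnergyMicrojoules()
	if err != nil {
		log.Fatal(err)
	}
	// A real tool would handle counter wraparound via max_energy_range_uj.
	watts := float64(end-start) / 1e6 / interval.Seconds()
	fmt.Printf("average package power over %v: %.2f W\n", interval, watts)
}
```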
As communication service providers (CSPs) continue to provide essential services to businesses and individuals, the demand for faster and more reliable network connectivity continues to grow in both scale and complexity. To meet these demands, CSPs must offer a variety of connectivity services that provide high-quality network performance, reliability and scalability.
When it comes to offering network connectivity services, CSPs have many options for providing connectivity at Layer 2 (the data link layer) or Layer 3 (the network, or packet, layer) of the Open Systems Interconnection (OSI) model.
This article will explore some of the advantages and benefits of each type of connectivity, in order for CSPs to determine which one may be better suited for different types of environments or applications.
What Is Layer 2 Connectivity?
At a basic level, Layer 2 connectivity refers to the use of the data link layer of the
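The split is easy to see on any host: each interface has a Layer 2 address (a MAC address, at the data link layer) and usually one or more Layer 3 addresses (IP addresses, at the network layer). A quick Go illustration using only the standard library:

```go
// Print each interface's link-layer (L2) and network-layer (L3) addresses.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, iface := range ifaces {
		fmt.Printf("%-12s L2 (MAC): %s\n", iface.Name, iface.HardwareAddr)
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			fmt.Printf("%-12s L3 (IP):  %s\n", "", addr)
		}
	}
}
```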