Archive

Category Archives for "Networking"

Coalescing Connections to Improve Network Privacy and Performance

Web pages typically have a large number of embedded subresources (e.g., JavaScript, CSS, image files, ads, beacons) that are fetched by a browser during page loads. Requests for these subresources can prompt browsers to perform further DNS lookups, TCP connections, and TLS handshakes, which can significantly affect how long it takes for the user to see the content and interact with the page. Further, each additional request exposes metadata (such as plaintext DNS queries or the unencrypted SNI in the TLS handshake) that has potential privacy implications for the user. With these factors in mind, we carried out a measurement study to understand how Connection Coalescing (aka Connection Reuse) can be leveraged to address these concerns, and to assess its feasibility.

Background

The web has come a long way and initially consisted of very simple protocols. One of them was HTTP/1.0, which required browsers to make a separate connection for every subresource on a page. This design was quickly recognized as a significant performance bottleneck and was extended with HTTP pipelining and persistent connections in the HTTP/1.1 revision, which allowed HTTP requests to reuse the same TCP connection. But, yet again, this was no Continue reading

The Dell/VMware relationship remains strong despite split

VMware is in the process of spinning out from Dell Technologies, but the working relationship remains as strong as ever, with a bunch of announcements from VMworld. All told, the pair made four significant announcements at the show, the first being that VMware Cloud will be sold on systems acquired through Dell's Apex pay-as-you-go program. The new Apex offering gives customers the ability to move workloads across multiple cloud environments and scale resources quickly with predictable pricing and costs. The new offering combines Dell’s hyperconverged infrastructure VxRail with VMware Cloud, VMware Tanzu for building cloud-native applications, and VMware HCX for application migration. Businesses can deploy the offering in their data center, at an edge location, or at a colocation facility with partners like Equinix. To read this article in full, please click here

Why Does DHCPv6 Matter?

In case you missed it, there’s a new season of the Lack of DHCPv6 on Android soap opera on the v6ops mailing list. Before going into the juicy details, I wanted to look at the big picture: why would anyone care about the lack of DHCPv6 on Android?

Please note that I’m not a DHCPv6 fan. DHCPv6 is just a tool, not unlike a sink plunger – nobody loves it (I hope), but when you need it, you’d better have it handy.

The requirements for DHCPv6-based address allocation come primarily from enterprise environments facing legal, compliance, or other layer 8-10 reasons to implement policy (are you allowed to use the network?), control (we want to decide who uses the network), and attribution (if something bad happens, we want to know who did it).

Kubespray 2.17 released with Calico eBPF and WireGuard support

Congratulations to the Kubespray team on the release of 2.17! This release brings support for two of the newer features in Calico: the eBPF data plane and WireGuard encryption.

Let’s dive into configuring Kubespray to enable these new features.

If you’re interested in getting started with Kubespray and Calico, you can refer to Using Calico with Kubespray, which covers some of the settings you might want to use, as well as how to enable Calico in several of the quick start guides.

To configure Calico options when using Kubespray to deploy a cluster, you’ll need to configure some variables. If you’re using the examples in the Kubespray repository, those files are under inventory/…/group_vars/k8s_cluster/, with the Calico options residing in k8s-net-calico.yml.
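As a minimal sketch, assuming the variable names documented for Kubespray 2.17 (verify them against the k8s-net-calico.yml in your own checkout, since names and defaults can change between releases), enabling both features looks roughly like this:

# inventory/…/group_vars/k8s_cluster/k8s-net-calico.yml (sketch; names assumed from Kubespray 2.17)
# Switch Calico from the standard Linux (iptables) data plane to eBPF.
calico_bpf_enabled: true
# Encrypt pod-to-pod traffic between nodes with WireGuard.
calico_wireguard_enabled: true

After changing the variables, run the Kubespray playbook as usual; check the Calico documentation for whether your Calico version supports combining the two features in one cluster.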

eBPF data plane

Calico offers several different data planes, ensuring that end users can choose the technology that’s right for their particular use case. eBPF is a relatively new set of facilities in the Linux kernel that lets developers write code to modify its functionality at runtime in a way that is safe and efficient.

Calico’s eBPF data plane offers increased efficiency, as well as functionality like providing source IP preservation Continue reading

Tech Bytes: Why Fortinet Zero Trust Works For You

Today on the Tech Bytes podcast we’re talking Zero Trust Network Access, or ZTNA, with sponsor Fortinet. As organizations grapple with controlling end-user access to applications and services, particularly when those end users and applications could be anywhere, Fortinet is here to make the case that its platform is the right one for ZTNA. Our guest […]

The post Tech Bytes: Why Fortinet Zero Trust Works For You appeared first on Packet Pushers.

Grafana Cloud


Grafana Cloud is a cloud-hosted version of Grafana, Prometheus, and Loki. The free tier makes it easy to try out the service and has enough capability to satisfy simple use cases. In this article we will explore how metrics based on sFlow streaming telemetry can be pushed into Grafana Cloud.

The diagram shows the elements of the solution. Agents in host and network devices are configured to stream sFlow telemetry to an sFlow-RT real-time analytics engine instance. The Grafana Agent queries sFlow-RT's REST API for metrics and pushes them to Grafana Cloud.
Use Docker to run the pre-built sflow/prometheus image, which packages sFlow-RT with the sflow-rt/prometheus application, and configure sFlow agents to stream data to this instance:
docker run -p 8008:8008 -p 6343:6343/udp --name sflow-rt -d sflow/prometheus
Create a Grafana Cloud account. Click on the Agent button on the home page to get the configuration settings for the Grafana Agent.
Click on the Prometheus button to get the configuration to forward metrics from the Grafana Agent.
Enter a name and click on the Create API key button to generate configuration settings that include a URL, username, and password that will be used in the Grafana Agent configuration.
server:
Continue reading
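The agent configuration above is cut off mid-file. As a rough sketch only (the metrics path is the one exposed by the sflow-rt/prometheus application, the push URL and credentials are placeholders that Grafana Cloud generates for you, and top-level key names have varied across Grafana Agent releases, with newer versions using metrics: where older ones used prometheus:), a minimal agent.yaml along these lines captures the idea:

server:
  http_listen_port: 12345
prometheus:
  wal_directory: /tmp/grafana-agent-wal
  global:
    scrape_interval: 15s
  configs:
    - name: sflow-rt
      scrape_configs:
        # Pull metrics from the sFlow-RT REST API started above.
        - job_name: sflow-rt-metrics
          metrics_path: /prometheus/metrics/ALL/ALL/txt
          static_configs:
            - targets: ['localhost:8008']
      # Push the scraped metrics to Grafana Cloud using the URL,
      # username, and API key generated in the steps above.
      remote_write:
        - url: <push URL from the Prometheus configuration page>
          basic_auth:
            username: <Grafana Cloud username>
            password: <API key>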

Introducing SSL/TLS Recommender

Seven years ago, Cloudflare made HTTPS availability for any Internet property easy and free with Universal SSL. At the time, few websites — other than those that processed sensitive data like passwords and credit card information — were using HTTPS because of how difficult it was to set up.

However, as we all started using the Internet for more and more private purposes (communication with loved ones, financial transactions, shopping, healthcare, etc.), the need for encryption became apparent. Tools like Firesheep demonstrated how easily attackers could snoop on people using public Wi-Fi networks at coffee shops and airports. The Snowden revelations showed the ease with which governments could listen in on unencrypted communications at scale. We have seen attempts by browser vendors to increase HTTPS adoption, such as the recent announcement that Chromium will load websites over HTTPS by default. Encryption has become a vital part of the modern Internet, not just to keep your information safe, but to keep you safe.

When it was launched, Universal SSL doubled the number of sites on the Internet using HTTPS. We are building on that with SSL/TLS Recommender, a tool that guides you to stronger configurations for the backend connection Continue reading

Dynamic Process Isolation: Research by Cloudflare and TU Graz

Last year, I wrote about the Cloudflare Workers security model, including how we fight Spectre attacks. In that post, I explained that there is no known complete defense against Spectre — regardless of whether you're using isolates, processes, containers, or virtual machines to isolate tenants. What we do have, though, is a huge number of tools to increase the cost of a Spectre attack, to the point where it becomes infeasible. Cloudflare Workers has been designed from the very beginning with protection against side channel attacks in mind, and because of this we have been able to incorporate many defenses that other platforms — such as virtual machines and web browsers — cannot. However, the performance and scalability requirements of edge compute make it infeasible to run every Worker in its own private process, so we cannot rely on the usual defenses provided by the operating system kernel and address space separation.

Given our different approach, we cannot simply rely on others to tell us if we are safe. We had to do our own research. To do this we partnered with researchers at Graz University of Technology (TU Graz) to study the impact of Spectre on our environment. The Continue reading

Handshake Encryption: Endgame (an ECH update)

Privacy and security are fundamental to Cloudflare, and we believe in and champion the use of cryptography to help provide these fundamentals for customers, end-users, and the Internet at large. In the past, we helped specify, implement, and ship TLS 1.3, the latest version of the transport security protocol underlying the web, to all of our users. TLS 1.3 vastly improved upon prior versions of the protocol with respect to security, privacy, and performance: simpler cryptographic algorithms, more handshake encryption, and fewer round trips are just a few of the many great features of this protocol.

TLS 1.3 was a tremendous improvement over TLS 1.2, but there is still room for improvement. Sensitive metadata relating to application or user intent is still visible in plaintext on the wire. In particular, all client parameters, including the name of the target server the client is connecting to, are visible in plaintext. For obvious reasons, this is problematic from a privacy perspective: Even if your application traffic to crypto.cloudflare.com is encrypted, the fact you’re visiting crypto.cloudflare.com can be quite revealing.

And so, in collaboration with other participants in the standardization community and members of Continue reading

Privacy Pass v3: the new privacy bits

In November 2017, we released our implementation of a privacy-preserving protocol to let users prove that they are humans without enabling tracking. When you install Privacy Pass’s browser extension, you get tokens when you solve a Cloudflare CAPTCHA, and those tokens can later be redeemed to avoid solving another one... The redeemed token is cryptographically unlinkable to the token originally provided by the server. That is why Privacy Pass is privacy preserving.

In October 2019, Privacy Pass reached another milestone. We released Privacy Pass Extension v2.0, which includes a new service provider (hCaptcha) and provides a way to redeem a token not only against CAPTCHAs on Cloudflare challenge pages but also against hCaptcha CAPTCHAs on any website. When you encounter an hCaptcha CAPTCHA on any website, including ones not behind Cloudflare, you can redeem a token to pass it.

We believe Privacy Pass solves an important problem — balancing privacy and security for bot mitigation — but we think there’s more to be done in terms of both the codebase and the protocol. We improved the codebase by redesigning how the service providers interact with the core extension. At the same time, we made progress on the Continue reading

Very quietly, Oracle ships new Exadata servers

You have to hand it to Larry Ellison, he is persistent. Or maybe he just doesn’t know when to give up. Either way, Oracle has shipped the latest in its Exadata server appliances, delivering some pronounced boosts in performance. Exadata was the old Sun Microsystems hardware Oracle inherited when it bought Sun in 2010. It has since discontinued Sun’s SPARC processor but soldiered on with servers running x86-based processors, all of them Intel despite AMD’s surging acceptance in the enterprise. When Oracle bought Sun in 2010, it made clear it had no interest in low-end, mass-market servers. In that regard, the Oracle Exadata X9M platforms deliver. The new Exadata X9M offerings, designed entirely around Oracle’s database software, include Oracle Exadata Database Machine X9M and Exadata Cloud@Customer X9M, which Oracle says is the only platform that runs Oracle Autonomous Database in customer data centers. To read this article in full, please click here

Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam. This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: You don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data transfer costs are too high. That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency – and edge analytics processing can reduce the amount of data that needs to be transferred to the core. To read this article in full, please click here
