Privacy Pass v3: the new privacy bits


In November 2017, we released our implementation of a privacy-preserving protocol to let users prove that they are humans without enabling tracking. When you install Privacy Pass’s browser extension, you receive tokens when you solve a Cloudflare CAPTCHA, which you can later redeem to avoid solving another one. The redeemed token is cryptographically unlinkable to the token originally provided by the server. That is why Privacy Pass is privacy preserving.
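Privacy Pass actually builds its unlinkability on a verifiable oblivious PRF (VOPRF) over elliptic curves; the toy RSA blind-signature sketch below is not the real protocol, only a minimal illustration of the core idea: the server signs a token it never sees, so issuance and redemption cannot be linked.

```python
# Toy RSA blind-signature sketch of unlinkable token issuance.
# Tiny textbook parameters; never use anything like this in practice.
import math

p, q = 61, 53
n = p * q                           # RSA modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def blind(token, r):
    # Client hides the token with a random factor before sending it
    return (token * pow(r, e, n)) % n

def sign(blinded):
    # Server signs without ever learning the underlying token
    return pow(blinded, d, n)

def unblind(sig, r):
    # Client strips the blinding factor, leaving a valid signature
    return (sig * pow(r, -1, n)) % n

token = 1234
r = 123                             # blinding factor, coprime with n
assert math.gcd(r, n) == 1
s = unblind(sign(blind(token, r)), r)
# The signature verifies, yet the server only ever saw the blinded value
assert pow(s, e, n) == token
```

At redemption time the server can check the signature with its own key but has no way to match `token` back to any particular issuance, which is the property the extension relies on.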

In October 2019, Privacy Pass reached another milestone. We released Privacy Pass Extension v2.0, which adds a new service provider (hCaptcha), letting you redeem tokens not only on Cloudflare challenge pages but also against hCaptcha CAPTCHAs elsewhere. When you encounter an hCaptcha CAPTCHA on any website, including those not behind Cloudflare, you can redeem a token to pass it.

We believe Privacy Pass solves an important problem — balancing privacy and security for bot mitigation — but we think there’s more to be done in terms of both the codebase and the protocol. We improved the codebase by redesigning how the service providers interact with the core extension. At the same time, we made progress on the Continue reading

Very quietly, Oracle ships new Exadata servers

You have to hand it to Larry Ellison: he is persistent. Or maybe he just doesn’t know when to give up. Either way, Oracle has shipped the latest of its Exadata server appliances, with some pronounced boosts in performance. Exadata descends from the Sun Microsystems hardware Oracle inherited when it bought Sun in 2010. It has since discontinued Sun’s SPARC processor but soldiered on with servers running x86 processors, all of them Intel despite AMD’s surging acceptance in the enterprise. When Oracle bought Sun, it made clear it had no interest in low-end, mass-market servers. In that regard, the Oracle Exadata X9M platforms deliver. The new X9M offerings, designed entirely around Oracle’s database software, include Oracle Exadata Database Machine X9M and Exadata Cloud@Customer X9M, which Oracle says is the only platform that runs Oracle Autonomous Database in customer data centers. To read this article in full, please click here


Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam. This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: You don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data transfer costs are too high. That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency – and edge analytics processing can reduce the amount of data that needs to be transferred to the core. To read this article in full, please click here
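The data-reduction pattern described above can be sketched in a few lines: the edge node keeps a window of raw readings, ships only a compact summary to the core, and applies local rules immediately rather than waiting on a cloud round trip. The threshold and field names here are illustrative, not from any particular product.

```python
# Edge-side preprocessing sketch: one summary record replaces a whole
# window of raw sensor readings, cutting WAN transfer volume, while a
# local rule reacts with low latency.
from statistics import mean

def summarize(window):
    # Compact summary sent to the core instead of len(window) raw samples
    return {"count": len(window), "min": min(window),
            "max": max(window), "mean": mean(window)}

raw = [21.1, 21.3, 21.2, 35.0, 21.2, 21.4]   # e.g. temperature samples
summary = summarize(raw)

# Immediately actionable, no trip to the data center required:
alert = summary["max"] > 30.0
```

Six readings collapse into one record here; at real IoT scale the same idea turns petabytes of raw telemetry into a trickle of summaries.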


Edge computing: 5 potential pitfalls

Edge computing is gaining steam as an enterprise IT strategy, with organizations looking to push storage and analytics closer to where data is gathered, as in IoT networks. But it has its challenges.

Tech Spotlight: Edge Computing

  • Proving the value of analytics on the edge (CIO)
  • The cutting edge of healthcare: How edge computing will transform medicine (Computerworld)
  • Securing the edge: 4 trends to watch (CSO)
  • How to choose a cloud IoT platform (InfoWorld)
  • Edge computing: 5 potential pitfalls (Network World)

Its potential upsides are undeniable, including improved latency as well as reduced WAN bandwidth and transmission costs. As a result, enterprises are embracing it. Revenues in the edge-computing market were $4.68 billion in 2020 and are expected to reach $61.14 billion by 2028, according to a May 2021 report by Grand View Research. To read this article in full, please click here


Graceful Restart and Routing Protocol Convergence

I’m always amazed when I encounter networking engineers who want to have a fast-converging network using Non-Stop Forwarding (which implies Graceful Restart). It’s even worse than asking for smooth-running heptagonal wheels.

As we discussed in the Fast Failover series, any decent router uses a variety of mechanisms to detect adjacent device failure:

  • Physical link failure;
  • Routing protocol timeouts;
  • Next-hop liveliness checks (BFD, CFM…)
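The second mechanism in the list, routing protocol timeouts, reduces to a simple rule: declare the neighbor down once no hello has arrived within the dead interval. BFD applies the same logic on millisecond timers. A minimal sketch, with OSPF-like illustrative defaults:

```python
# Hello/dead-timer failure detection sketch. The timer values are
# illustrative OSPF-style defaults, not pulled from any specific config.
HELLO_INTERVAL = 10   # seconds between hellos
DEAD_INTERVAL = 40    # declare neighbor down after this much silence

def neighbor_up(last_hello_at, now):
    # Neighbor stays up only while silence is shorter than the dead interval
    return (now - last_hello_at) < DEAD_INTERVAL

assert neighbor_up(last_hello_at=100, now=130)       # 30 s quiet: still up
assert not neighbor_up(last_hello_at=100, now=145)   # 45 s quiet: down
```

The slowness of these timers compared to link-down events or BFD is exactly why leaning on Graceful Restart, which deliberately suppresses the reaction to a detected failure, works against fast convergence.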


Kustomize Transformer Configurations for Cluster API v1beta1

Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months. I first wrote about it in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make kustomize easier to use with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).

In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabel transformers. In the v1alpha3 post, I mentioned that the changes to the CommonLabel transformer became largely optional; if you are planning on adding additional labels to MachineDeployments, then the change to the CommonLabels transformer is required, but otherwise you could probably get by without it.

For v1beta1, the necessary changes are very similar to v1alpha3, and (for the most part) are Continue reading
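To make the NameReference change concrete, a transformer configuration entry of this shape tells kustomize that renaming a Cluster object must also update fields that refer to it by name. This is an illustrative fragment only; the exact field paths and kinds needed for v1beta1 should be checked against the posts referenced above and the CAPI API types.

```yaml
# Illustrative NameReference transformer entry (not an exhaustive or
# verified list for v1beta1): keeps MachineDeployment references in sync
# when kustomize renames a Cluster.
nameReference:
- kind: Cluster
  group: cluster.x-k8s.io
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
```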

Live Stream: The Journey to Architect

On Thursday the 19th of October at 1PM ET, I’ll be joining Keith Bogart for the INE Live live stream. You can find the details on their web site.

In this session, Keith Bogart will interview prolific author and Network Architect, Russ White Ph.D. One of only a handful of people who have attained CCAr status, Russ White has authored several books such as “Practical BGP”, “The Art of Network Architecture” and “Computer Networking Problems And Solutions”. During this session we’ll find out about his journey to becoming a Network Architect and how his passion for technology can inspire you!

Scaling indexing and search – Algolia New Search Architecture Part 2

What would a totally new search engine architecture look like? Who better than Julien Lemoine, Co-founder & CTO of Algolia, to describe what the future of search will look like. This is the second article in a series. Here's Part 1.

Search engines need to support fast scaling for both Read and Write operations. Rapid scaling is essential in most use cases. For example, adding a vendor in a marketplace generates a spike of indexing operations (Write), and a marketing campaign generates a spike of queries (Read). In most use cases, both Read and Write operations scale but not at the exact same moment. The architecture needs to handle efficiently all these situations as the scaling of Read and Write operations varies over time in most use cases.

Until now, search engines were scaling with Read and Write operations colocated on the same VMs. This scaling method brings drawbacks, such as Write operations unnecessarily hurting Read performance, and a significant amount of duplicated CPU spent on indexing. This article explains those drawbacks and introduces a new way to scale more quickly and efficiently by splitting Read and Write operations.
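The split described above can be sketched in miniature: one writer builds the inverted index, and finished segments are shipped to read-only replicas, so indexing CPU is spent once rather than duplicated on every query-serving VM. The class and method names below are invented for illustration; they do not reflect Algolia's actual implementation.

```python
# Toy Read/Write split for a search index: the Writer pays indexing cost
# once and publishes immutable segments; ReadReplicas only serve queries.
class Writer:
    def __init__(self):
        self.index = {}                      # word -> set of doc ids

    def add(self, doc_id, text):
        # Indexing (Write path) happens only here
        for word in text.split():
            self.index.setdefault(word, set()).add(doc_id)

    def publish(self):
        # Ship a finished segment instead of re-indexing on each replica
        return {w: set(ids) for w, ids in self.index.items()}

class ReadReplica:
    def __init__(self):
        self.index = {}

    def load(self, segment):
        self.index = segment                 # no indexing CPU spent here

    def search(self, word):
        return self.index.get(word, set())

w = Writer()
w.add(1, "red shoes")
w.add(2, "red hat")
replica = ReadReplica()
replica.load(w.publish())
```

Because replicas never index, a write spike (say, a vendor onboarding) and a read spike (a marketing campaign) can be scaled independently by adding writers or replicas as needed.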

1. Anatomy of an index

AWS Networking – Part X: VPC Internet Gateway Service – Part Two

 

Associate SG and Elastic-IP with EC2


In the previous section, we created an Internet Gateway for our VPC and added a static route toward the IGW to the Route Table of subnet 10.10.0.0/24. In this section, we first create a Security Group (SG). The SG allows SSH connections to the EC2 instance and ICMP from it. Then we launch an EC2 instance and attach the previously configured SG to it. As the last step, we allocate an Elastic IP address (EIP) from the AWS IPv4 address pool and associate it with the EC2 instance. When we are done with all the previous steps, we test the connection: first, we open an SSH connection from MyPC to the EC2 instance; then we ping MyPC from the EC2 instance. We also use the AWS Reachability Analyzer to validate the path from the IGW to the EC2 instance. The last section introduces AWS billing related to this chapter.
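The steps above map to a short AWS CLI sequence. This is an illustrative transcript, not the chapter's exact commands: all IDs (`vpc-…`, `sg-…`, `ami-…`, and so on) and the 203.0.113.10 address standing in for MyPC are placeholders you would replace with your own values.

```
# 1. Create the Security Group in our VPC
aws ec2 create-security-group --group-name lab-sg \
    --description "SSH in from MyPC" --vpc-id vpc-PLACEHOLDER

# 2. Allow inbound SSH from MyPC (egress, including ICMP out, is
#    allowed by the SG's default outbound rule)
aws ec2 authorize-security-group-ingress --group-id sg-PLACEHOLDER \
    --protocol tcp --port 22 --cidr 203.0.113.10/32

# 3. Launch the EC2 instance into subnet 10.10.0.0/24 with the SG attached
aws ec2 run-instances --image-id ami-PLACEHOLDER --instance-type t3.micro \
    --subnet-id subnet-PLACEHOLDER --security-group-ids sg-PLACEHOLDER

# 4. Allocate an Elastic IP and associate it with the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-PLACEHOLDER \
    --allocation-id eipalloc-PLACEHOLDER
```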


Figure 3-20: EC2 Instance, Elastic IP, and Security Group.

 

Continue reading

Announcing Cloudflare Research Hub


As highlighted yesterday, research efforts at Cloudflare have been growing over the years, in both number and scope. Cloudflare Research is proud to support computer science research to help build a better Internet, and we want to tell you where you can learn more about our efforts and how to get in touch.

Why are we announcing a website for Cloudflare Research?

Cloudflare is built on a foundation of open standards, which are the result of community consensus and research. Research is integral to Cloudflare’s mission, as is the commitment to contribute back to the research and standards communities by establishing and maintaining a growing number of collaborations.

Throughout the years we have cherished many collaborations and one-on-one relationships, but we have probably been missing a lot of interesting work happening elsewhere. This is our main motivation for this Research hub of information: to help us build further collaborations with industrial and academic research groups, and individuals across the world. We are eager to interface more effectively with the wider research and standards communities: practitioners, researchers and educators. And as for you, dear reader, we encourage you to recognize that you are our audience too: we often hear that Continue reading