Red Hat Ansible Automation Platform 2: Migration strategy considerations

Red Hat Ansible Automation Platform 2 introduces an updated architecture, new tools, and an improved but familiar experience for automation teams. However, there are multiple factors to consider when planning your strategy for migrating your current deployment to Ansible Automation Platform 2.

This document provides guidance for the stakeholders responsible for planning and executing an Ansible Automation Platform migration, along with factors to address in your migration strategy.

This document does not provide a one-size-fits-all approach for migration. Various factors unique to your organization will impact the effort required, the stakeholders involved, and the delivery plan.

What to consider before migrating

We understand that many factors specific to your needs affect your migration assessment and planning. This section highlights the critical factors for determining your migration readiness and which approach will best suit your organization.

Assess your current environment

There will be configurations unique to your environment, and it’s crucial to perform a thorough technical assessment. We recommend including the following:

  • Analyze your current Ansible Automation Platform installation, including current deployment patterns, integrations and any complexities relevant to the migration.

  • Determine changes needed in your environment to meet the Ansible Automation Platform 2 technical requirements.

  • Assess stakeholders’ readiness to plan and execute Continue reading

Dynamic Process Isolation: Research by Cloudflare and TU Graz

Last year, I wrote about the Cloudflare Workers security model, including how we fight Spectre attacks. In that post, I explained that there is no known complete defense against Spectre — regardless of whether you're using isolates, processes, containers, or virtual machines to isolate tenants. What we do have, though, is a huge number of tools to increase the cost of a Spectre attack, to the point where it becomes infeasible. Cloudflare Workers has been designed from the very beginning with protection against side channel attacks in mind, and because of this we have been able to incorporate many defenses that other platforms — such as virtual machines and web browsers — cannot. However, the performance and scalability requirements of edge compute make it infeasible to run every Worker in its own private process, so we cannot rely on the usual defenses provided by the operating system kernel and address space separation.
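
The idea behind dynamic process isolation can be sketched in a few lines: run tenants in a shared process by default, and promote a tenant to its own OS process, where kernel and address-space separation apply again, once a heuristic flags suspicious behavior. The sketch below is a conceptual illustration only, not Cloudflare's implementation; the threshold, counter names, and class are hypothetical stand-ins for whatever signals the real scheduler consumes.

```python
# Conceptual sketch of dynamic process isolation, NOT Cloudflare's code:
# tenants share a process by default, and a tenant whose (hypothetical)
# branch-mispredict rate looks like Spectre probing is respawned in its
# own OS process, where kernel/address-space isolation applies again.
MISPREDICT_THRESHOLD = 0.05  # hypothetical tuning value

class Tenant:
    def __init__(self, name: str):
        self.name = name
        self.isolated = False

def observe(tenant: Tenant, mispredicts: int, branches: int) -> None:
    """Feed hardware-counter samples for one tenant into the heuristic."""
    rate = mispredicts / max(branches, 1)
    if rate > MISPREDICT_THRESHOLD and not tenant.isolated:
        tenant.isolated = True  # real system: migrate into a private process
        print(f"isolating {tenant.name}: mispredict rate {rate:.1%}")

worker = Tenant("suspicious-worker")
observe(worker, mispredicts=800, branches=10_000)  # 8.0% -> isolated
```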

Given our different approach, we cannot simply rely on others to tell us if we are safe. We had to do our own research. To do this we partnered with researchers at Graz University of Technology (TU Graz) to study the impact of Spectre on our environment. The Continue reading

Handshake Encryption: Endgame (an ECH update)

Privacy and security are fundamental to Cloudflare, and we believe in and champion the use of cryptography to help provide these fundamentals for customers, end-users, and the Internet at large. In the past, we helped specify, implement, and ship TLS 1.3, the latest version of the transport security protocol underlying the web, to all of our users. TLS 1.3 vastly improved upon prior versions of the protocol with respect to security, privacy, and performance: simpler cryptographic algorithms, more handshake encryption, and fewer round trips are just a few of the many great features of this protocol.

TLS 1.3 was a tremendous improvement over TLS 1.2, but there is still room for improvement. Sensitive metadata relating to application or user intent is still visible in plaintext on the wire. In particular, all client parameters, including the name of the target server the client is connecting to, are visible in plaintext. For obvious reasons, this is problematic from a privacy perspective: Even if your application traffic to crypto.cloudflare.com is encrypted, the fact you’re visiting crypto.cloudflare.com can be quite revealing.
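
ECH addresses this by encrypting the sensitive inner ClientHello to a key published by the client-facing provider, so the network observer sees only a shared, non-revealing outer name. Below is a toy Python sketch of that idea, not the real TLS wire format: Fernet (from the third-party `cryptography` package) stands in for the HPKE public-key encryption ECH actually uses, and the names are illustrative.

```python
# Toy sketch of the ECH idea, not the real TLS wire format. Fernet stands in
# for HPKE; in real ECH the provider's public key is published via DNS.
from cryptography.fernet import Fernet

provider_key = Fernet.generate_key()  # stand-in for the key published in DNS
hpke_stand_in = Fernet(provider_key)

outer_sni = "cloudflare-ech.com"  # the only name a network observer sees
ech_payload = hpke_stand_in.encrypt(b"crypto.cloudflare.com")  # encrypted inner SNI

client_hello = {"sni": outer_sni, "ech": ech_payload}
print(client_hello["sni"])                         # observer learns a shared name only
print(hpke_stand_in.decrypt(client_hello["ech"]))  # provider recovers the real target
```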

And so, in collaboration with other participants in the standardization community and members of Continue reading

Privacy Pass v3: the new privacy bits

In November 2017, we released our implementation of a privacy-preserving protocol to let users prove that they are humans without enabling tracking. When you install Privacy Pass’s browser extension, you get tokens when you solve a Cloudflare CAPTCHA, which you can later redeem to avoid solving another one... The redeemed token is cryptographically unlinkable to the token originally provided by the server. That is why Privacy Pass is privacy-preserving.
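
The unlinkability comes from blind issuance: the server signs a token it never sees in the clear. Privacy Pass actually uses a VOPRF, but the same property is easy to demonstrate with a toy RSA blind signature; the sketch below uses deliberately tiny, insecure parameters and is an illustration only.

```python
# Toy RSA blind signature demonstrating unlinkability. Privacy Pass actually
# uses a VOPRF; this is an analogue with deliberately tiny, insecure numbers.
import secrets
from math import gcd

p, q = 1009, 1013                   # toy primes; never use sizes like this for real
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # Python 3.8+ modular inverse

def blind(token: int) -> tuple[int, int]:
    """Client: hide the token behind a random blinding factor r."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            return (token * pow(r, e, n)) % n, r

def sign(blinded: int) -> int:
    """Server: signs without ever seeing the underlying token."""
    return pow(blinded, d, n)

def unblind(signed: int, r: int) -> int:
    """Client: strip the blinding factor to obtain a valid signature."""
    return (signed * pow(r, -1, n)) % n

token = secrets.randbelow(n)
blinded, r = blind(token)
sig = unblind(sign(blinded), r)
assert pow(sig, e, n) == token  # verifies, yet the issuer saw only the blinded value
```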

In October 2019, Privacy Pass reached another milestone. We released Privacy Pass Extension v2.0, which includes a new service provider (hCaptcha) and a way to redeem tokens not only against CAPTCHAs on Cloudflare challenge pages but also against hCaptcha CAPTCHAs on any website. When you encounter an hCaptcha CAPTCHA, including on sites not behind Cloudflare, you can redeem a token to pass it.

We believe Privacy Pass solves an important problem — balancing privacy and security for bot mitigation — but we think there’s more to be done in terms of both the codebase and the protocol. We improved the codebase by redesigning how the service providers interact with the core extension. At the same time, we made progress on the Continue reading

Very quietly, Oracle ships new Exadata servers

You have to hand it to Larry Ellison, he is persistent. Or maybe he just doesn’t know when to give up. Either way, Oracle has shipped the latest in its Exadata server appliances, delivering some pronounced boosts in performance. Exadata is the old Sun Microsystems hardware line Oracle inherited when it bought Sun in 2010. It has since discontinued Sun’s SPARC processor but soldiered on with servers running x86 processors, all of them Intel despite AMD’s surging acceptance in the enterprise. When Oracle bought Sun, it made clear it had no interest in low-end, mass-market servers. In that regard, the Oracle Exadata X9M platforms deliver. The new Exadata X9M offerings, designed entirely around Oracle’s database software, include Oracle Exadata Database Machine X9M and Exadata Cloud@Customer X9M, which Oracle says is the only platform that runs Oracle Autonomous Database in customer data centers.

Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam. This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: you don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data-transfer costs are too high. That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency, and edge analytics processing can reduce the amount of data that needs to be transferred to the core.
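
As a concrete illustration of that data reduction, here is a minimal sketch (an assumed design, not any particular product) in which an edge node aggregates raw sensor readings locally and forwards only a compact summary plus out-of-range alerts to the core:

```python
# Minimal sketch of edge-side data reduction: aggregate locally, ship a
# small summary upstream, and forward raw values only for anomalies.
def summarize(readings: list[float], low: float, high: float) -> dict:
    alerts = [r for r in readings if not (low <= r <= high)]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "min": min(readings),
        "max": max(readings),
        "alerts": alerts,  # only anomalies travel upstream in full
    }

# 1,000 raw soil-moisture samples collapse into one small record.
samples = [42.0 + (i % 7) * 0.1 for i in range(1000)]
print(summarize(samples, low=10.0, high=60.0))
```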

Edge computing: 5 potential pitfalls

Edge computing is gaining steam as an enterprise IT strategy, with organizations looking to push storage and analytics closer to where data is gathered, as in IoT networks. But it has its challenges. Its potential upsides are undeniable, including improved latency as well as reduced WAN bandwidth and transmission costs. As a result, enterprises are embracing it. Revenues in the edge-computing market were $4.68 billion in 2020 and are expected to reach $61.14 billion by 2028, according to a May 2021 report by Grand View Research.

Graceful Restart and Routing Protocol Convergence

I’m always amazed when I encounter networking engineers who want to have a fast-converging network using Non-Stop Forwarding (which implies Graceful Restart). It’s even worse than asking for smooth-running heptagonal wheels.

As we discussed in the Fast Failover series, any decent router uses a variety of mechanisms to detect adjacent device failure (a toy sketch of the timeout mechanism follows the list):

  • Physical link failure;
  • Routing protocol timeouts;
  • Next-hop liveliness checks (BFD, CFM…)
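
As promised above, here is a toy hold-timer illustrating the routing-protocol-timeout mechanism. The class and the 3-second value are hypothetical; real protocols (OSPF, BGP...) negotiate their own timers.

```python
# Toy hold-timer: declare a neighbor down if no hello arrives in time.
import time

HOLD_TIME = 3.0  # hypothetical; seconds without hellos before declaring failure

class Neighbor:
    def __init__(self, name: str):
        self.name = name
        self.last_hello = time.monotonic()

    def hello_received(self) -> None:
        self.last_hello = time.monotonic()

    def is_alive(self) -> bool:
        return (time.monotonic() - self.last_hello) < HOLD_TIME

nbr = Neighbor("rtr2")
print(nbr.is_alive())  # True right after the initial hello
time.sleep(3.1)
print(nbr.is_alive())  # False: hold timer expired, time to converge
```

Graceful Restart asks a neighbor to keep forwarding through exactly the event this timer exists to catch, which is why pairing Non-Stop Forwarding with fast convergence is self-contradictory.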

Kustomize Transformer Configurations for Cluster API v1beta1

Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months. I first wrote about it in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make it easier to use with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).

In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabels transformers. In the v1alpha3 post, I mentioned that the changes to the CommonLabels transformer became largely optional; if you are planning on adding additional labels to MachineDeployments, then the change to the CommonLabels transformer is required, but otherwise you could probably get by without it.

For v1beta1, the necessary changes are very similar to v1alpha3, and (for the most part) are Continue reading
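
For reference, a NameReference change of this kind typically means giving kustomize a transformer config file and pointing kustomization.yaml at it via its `configurations:` field. The sketch below (Python is used only to emit the YAML) is hypothetical: the fieldSpecs paths are illustrative and should be confirmed against the CAPI v1beta1 API definitions before use.

```python
# Hypothetical sketch: emit a NameReference transformer config telling
# kustomize that renaming a Cluster should also rename the references to it
# in MachineDeployments. Paths are illustrative, not verified against v1beta1.
from textwrap import dedent

NAMEREFERENCE = dedent("""\
    nameReference:
    - kind: Cluster
      group: cluster.x-k8s.io
      version: v1beta1
      fieldSpecs:
      - path: spec/clusterName
        kind: MachineDeployment
      - path: spec/template/spec/clusterName
        kind: MachineDeployment
    """)

with open("namereference.yaml", "w") as f:
    f.write(NAMEREFERENCE)
# Then reference the file from kustomization.yaml via its `configurations:` field.
```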

Live Stream: The Journey to Architect

On Thursday the 19th of October at 1PM ET, I’ll be joining Keith Bogart for the INE Live live stream. You can find the details on their web site.

In this session, Keith Bogart will interview prolific author and Network Architect, Russ White Ph.D. One of only a handful of people who have attained CCAr status, Russ White has authored several books such as “Practical BGP”, “The Art of Network Architecture” and “Computer Networking Problems And Solutions”. During this session we’ll find out about his journey to becoming a Network Architect and how his passion for technology can inspire you!

Scaling indexing and search – Algolia New Search Architecture Part 2

What would a totally new search engine architecture look like? Who better than Julien Lemoine, Co-founder & CTO of Algolia, to describe what the future of search will look like? This is the second article in a series. Here's Part 1.

Search engines need to support fast scaling for both Read and Write operations. Rapid scaling is essential in most use cases. For example, adding a vendor in a marketplace generates a spike of indexing operations (Write), and a marketing campaign generates a spike of queries (Read). Both Read and Write operations scale in most use cases, but rarely at the same moment, so the architecture needs to handle all of these situations efficiently as the scaling of Read and Write operations varies over time.

Until now, search engines scaled with Read and Write operations colocated on the same VMs. This method brings drawbacks, such as Write operations unnecessarily hurting Read performance and a significant amount of duplicated CPU spent on indexing. This article explains those drawbacks and introduces a new way to scale more quickly and efficiently by splitting Read and Write operations.
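
A minimal sketch of that split (an assumed design, not Algolia's code): the Write path does the CPU-heavy indexing once and publishes an immutable segment, while the Read path only performs lookups over published segments, so each side can scale on its own hardware.

```python
# Minimal sketch of splitting the Write (indexing) path from the Read
# (search) path via immutable published segments.
from collections import defaultdict

def build_segment(docs: dict[int, str]) -> dict[str, list[int]]:
    """Write path: CPU-heavy tokenization/indexing, done once per segment."""
    postings = defaultdict(list)
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            postings[term].append(doc_id)
    return dict(postings)  # published as an immutable artifact

def search(segments: list[dict], query: str) -> set[int]:
    """Read path: cheap lookups over published segments; no indexing work."""
    results = set()
    for seg in segments:
        results.update(seg.get(query.lower(), []))
    return results

segment = build_segment({1: "fast search engine", 2: "scaling search"})
print(search([segment], "search"))  # {1, 2}
```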

1. Anatomy of an index