The Expedient Way To Build An Enterprise Cloud

The promises of the cloud – from agility and scalability to reduced costs and easier access to emerging technologies like artificial intelligence (AI) and advanced analytics – have been out there for more than a decade, and enterprises continue to push more of their operations to hyperscale providers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud.

The Expedient Way To Build An Enterprise Cloud was written by Jeffrey Burt at The Next Platform.

Cloudflare recognized as a ‘Leader’ in The Forrester Wave for DDoS Mitigation Solutions

We’re thrilled to announce that Cloudflare has been named a leader in The Forrester Wave™: DDoS Mitigation Solutions, Q1 2021. You can download a complimentary copy of the report here.

According to the report, written by Forrester Senior Analyst for Security and Risk David Holmes, “Cloudflare protects against DDoS from the edge, and fast… customer references view Cloudflare’s edge network as a compelling way to protect and deliver applications.”

Unmetered and unlimited DDoS protection for all

Cloudflare was founded with the mission to help build a better Internet — one where the impact of DDoS attacks is a thing of the past. Over the last 10 years, we have been unwavering in our efforts to protect our customers’ Internet properties from DDoS attacks of any size or kind. In 2017, we announced unmetered DDoS protection for free — as part of every Cloudflare service and plan including the Free plan — to make sure every organization can stay protected and available.

Thanks to our home-grown automated DDoS protection systems, we’re able to provide unmetered and unlimited DDoS protection for free. Our automated systems constantly analyze traffic samples asynchronously, so as to avoid impacting performance. They scan for Continue reading

Deploying a CNI Automatically with a ClusterResourceSet

Not too long ago I hosted an episode of TGIK8s, where I explored some features of Cluster API. One of the features I explored on the show was ClusterResourceSet, an experimental feature that allows users to automatically install additional components onto workload clusters when the workload clusters are provisioned. In this post, I’ll show how to deploy a CNI plugin automatically using a ClusterResourceSet.

A lot of this post is inspired by a similar post on installing Calico using a ClusterResourceSet. Although that post is for vSphere and this one focuses on AWS, most of the infrastructure differences are abstracted away by Kubernetes and Cluster API.

At a high level, using ClusterResourceSet to install a CNI plugin automatically looks like this:

  1. Make sure experimental features are enabled on your CAPI management cluster.
  2. Create a ConfigMap that contains the information to deploy the CNI plugin.
  3. Create a ClusterResourceSet that references the ConfigMap.
  4. Deploy one or more workload clusters that match the cluster selector specified in the ClusterResourceSet.

The sections below describe each of these steps in more detail.
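
As a rough sketch of what steps 2 and 3 above might look like in practice, the following Python snippet generates the two manifests with PyYAML. The file names, labels, namespace, and the addons.cluster.x-k8s.io/v1alpha3 API version are illustrative assumptions, not details taken from the original post.

```python
# Sketch: build the ConfigMap and ClusterResourceSet manifests for a CNI add-on.
# Assumptions (not from the original post): PyYAML is installed, the CNI manifest
# has been downloaded to calico.yaml, and the experimental v1alpha3 API is in use.
import yaml

# The CNI plugin's full manifest becomes the payload of the ConfigMap.
with open("calico.yaml") as f:
    cni_manifest = f.read()

config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "calico-addon", "namespace": "default"},
    "data": {"calico.yaml": cni_manifest},
}

cluster_resource_set = {
    "apiVersion": "addons.cluster.x-k8s.io/v1alpha3",
    "kind": "ClusterResourceSet",
    "metadata": {"name": "calico-crs", "namespace": "default"},
    "spec": {
        # Workload clusters whose labels match this selector receive the resources below.
        "clusterSelector": {"matchLabels": {"cni": "calico"}},
        "resources": [{"name": "calico-addon", "kind": "ConfigMap"}],
    },
}

# Write both objects to a single multi-document YAML file ready for `kubectl apply -f`.
with open("calico-crs.yaml", "w") as f:
    yaml.safe_dump_all([config_map, cluster_resource_set], f, default_flow_style=False)
```

A workload cluster would then need a matching label (here, cni: calico) in its Cluster metadata for the ClusterResourceSet to apply the ConfigMap’s contents to it.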

Enabling Experimental Features

The preferred way to enable experimental features on your management cluster is to use a setting in the Continue reading

HTTP Protocol, Web Page Waterfalls and Complexity

One aspect of networking is that it requires some understanding of the protocols that use the network. As HTTPS replaces nearly all other protocols, I would suggest that everyone needs a working understanding of how a web page is fetched by a web browser. WebPageTest is a community tool that reads web pages and then displays […]
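
As a very small illustration of the request/response exchange that a page load is built from (not the full waterfall a tool like WebPageTest produces), the Python sketch below fetches one URL with the standard library and prints the status, content type, and elapsed time; the URL is just an example.

```python
# Minimal sketch: time a single HTTP(S) request/response exchange.
# A real page load issues many such requests (HTML, CSS, JavaScript, images, fonts),
# which is what waterfall tools such as WebPageTest visualize.
import time
import urllib.request

url = "https://example.com/"  # illustrative URL

start = time.monotonic()
with urllib.request.urlopen(url) as response:
    status = response.status
    content_type = response.headers.get("Content-Type")
    body = response.read()
elapsed = time.monotonic() - start

print(f"{status} {url}")
print(f"content-type: {content_type}")
print(f"bytes: {len(body)}, elapsed: {elapsed:.3f}s")
```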

How to Write a Great Talk Proposal for DockerCon LIVE 2021

First off, a big thank you to all those who have already submitted a talk proposal for DockerCon LIVE 2021. We’re seeing some really excellent proposals and we look forward to reviewing many more! We opened the CFP on February 8th, and with a few more weeks to go before we close it on March 15th, there’s still lots of time to submit a talk.

If you’re toying with the idea of submitting a talk but you’re still not sure whether or not your topic is interesting enough, or how to approach your topic, or if you just need a little push in the right direction to write up your proposal and click on “Send”, below are a few resources we thought you might find interesting. 

Amanda Sopkin wrote a great article a few years ago that has now become a reference for conference organizers sharing tips on how to get a technical talk accepted for a conference.  We also like Todd Lewis’ 13 tips on how to write an awesome talk proposal for a tech conference. Other interesting articles include: 

How to execute an object file: Part 1

Calling a simple function without linking

When we write software using a high-level compiled programming language, there are usually a number of steps involved in transforming our source code into the final executable binary:

First, our source files are compiled by a compiler translating the high-level programming language into machine code. The output of the compiler is a number of object files. If the project contains multiple source files, we usually get as many object files. The next step is the linker: since the code in different object files may reference each other, the linker is responsible for assembling all these object files into one big program and binding these references together. The output of the linker is usually our target executable, a single file.

However, at this point, our executable might still be incomplete. These days, most executables on Linux are dynamically linked: the executable itself does not have all the code it needs to run a program. Instead it expects to "borrow" part of the code at runtime from shared libraries for some of its functionality:

This process is called runtime linking: when our executable is being started, the operating system will invoke the dynamic Continue reading
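
The excerpt above describes how a dynamically linked executable "borrows" code from shared libraries at run time. As a loose illustration of that idea (not the object-file loading technique the article itself goes on to build), here is a Python sketch that asks the dynamic loader for a shared library and calls a function from it; the libm.so.6 name assumes a typical Linux/glibc system.

```python
# Loose illustration of run-time linking: load a shared library and call into it.
# Assumes a Linux system where glibc's math library is available as libm.so.6.
import ctypes

libm = ctypes.CDLL("libm.so.6")   # resolved and mapped by the dynamic loader

# Declare the C signature of sqrt: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))             # 1.4142135623730951
```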

HPE debuts new Opportunity Engine for fast AI insights

HP Enterprise has announced what it calls the Software Defined Opportunity Engine (SDOE), a cloud-based machine-learning platform that enables partners that sell HPE gear to cut the time to create custom sales proposals from weeks to just 45 seconds. In a blog post announcing the service, HPE Storage senior vice president and general manager Tom Black said SDOE does away with an outdated IT infrastructure-buying process at a time when digital transformation has never been more critical. To read this article in full, please click here

Impact of Azure Subnets on High Availability Designs

Now that you know all about regions and availability zones (AZ) and the ways AWS and Azure implement subnets, let’s get to the crux of the original question Daniel Dib sent me:

As I understand it, subnets in Azure span availability zones. Do you see any drawback to this? You mentioned that it’s difficult to create application swimlanes that way. But does subnet matter if your VMs are in different AZs?

It’s time I explain the concepts of application swimlanes and how they apply to availability zones in public clouds.

Automating Red Hat Virtualization with Red Hat Ansible Automation Platform

Red Hat Virtualization (RHV) is a complete and fully supported enterprise virtualization platform that is built upon a foundation of Red Hat Enterprise Linux (RHEL), oVirt virtualization management projects, and Kernel-based Virtual Machine (KVM) technology in order to virtualize resources, processes and applications.

With RHEL as the compute provider, RHV adds an intuitive web interface with a robust API, including SDKs for Java, Ruby, Python, JavaScript and Go, for management of the virtualization instances and resources that comprise a typical datacenter.

Interacting with an API through a full-fledged SDK may present a barrier to datacenter automation, since it requires knowledge of a programming language before getting started. It also means that collaboration may be stifled by a lack of people proficient in one of the available SDKs. Standardizing on Ansible for automating RHV allows all teams and individuals to create and maintain automation without knowledge of a programming language.

Red Hat Ansible Automation Platform allows for interacting with datacenter services in a cleanly formatted and human readable markup language that offers an on-ramp to automating the datacenter. By leveraging the newly released Ansible Content Collection for RHV, this gentle on-ramp to automation becomes more powerful by Continue reading
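
For context on the SDK route that the excerpt contrasts with Ansible, a minimal sketch using the oVirt/RHV Python SDK (ovirt-engine-sdk4) might look like the following; the engine URL, credentials, and CA file are placeholders, and this is an illustration rather than code from the original post.

```python
# Rough sketch of the SDK approach the excerpt contrasts with Ansible:
# listing VMs through the oVirt/RHV Python SDK (ovirt-engine-sdk4).
# The engine URL, credentials, and CA file below are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://rhv-engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="CHANGE_ME",
    ca_file="ca.pem",
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    print(vm.name, vm.status)

connection.close()
```

The Ansible route expresses the same kind of intent declaratively in YAML playbooks instead, which is the lower barrier to entry the excerpt describes.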

New Year, New Name: Insights Platform Is Now Internet Society Pulse


In order to align better with the Internet Society’s brand strategy, and to further differentiate our platform from other products with similar names, we have decided to rename our measurement platform.

As of 1 March 2021, Insights will be known as Internet Society Pulse.


The new URL is: https://pulse.internetsociety.org/

A redirect will be implemented so that anyone navigating to the previous URL will be automatically taken to Internet Society Pulse.

And, if you follow us on Twitter, you’ll see that our handle has been changed from @isoc_insights to @isoc_pulse.

The platform’s look and feel will not change.

Looking Ahead

We launched Insights in early December 2020 and are extremely proud of the impact that the platform has had in just three short months. We’re looking forward to ramping up the platform further in 2021 and will be adding three new focus areas throughout the year:

In 2021, we’ll also expand our analysis and reporting offerings, increase our engagement with the Internet measurements community, bring on board more data partners and add new features to the platform.

Stay up to date by signing up to our mailing list and Continue reading

Cisco closes $4.5B deal on optical powerhouse Acacia

After some legal wrangling earlier this year, Cisco has closed the $4.5 billion deal for optical maker Acacia Communications, Inc. Cisco coveted Acacia for its high-speed, optical interconnect technologies that let data center operators, webscale companies and service providers offer ever-faster service access to widely distributed resources. It also reinforces Cisco’s commitment to optics as a critical building block for networks of the future. “Acacia offers a complete portfolio of long-distance data-transmission solutions that address the full range of applications in the data-center-interconnect and wide-area network segments for metro, regional, long haul, and subsea links,” wrote Bill Gartner, senior vice president and general manager of Cisco’s Optical Systems and Optics Group in a blog about the acquisition. To read this article in full, please click here