Worth Reading: Discovering Issues with HTTP/2

A while ago I found an interesting analysis of HTTP/2 behavior under adverse network conditions. Not surprisingly:

When there is packet loss on the network, congestion controls at the TCP layer will throttle the HTTP/2 streams that are multiplexed within fewer TCP connections. Additionally, because of TCP retry logic, packet loss affecting a single TCP connection will simultaneously impact several HTTP/2 streams while retries occur. In other words, head-of-line blocking has effectively moved from layer 7 of the network stack down to layer 4.

What exactly did anyone expect? We discovered the same problems running TCP/IP over SSH a long while ago, but too many people insist on ignoring history and learning only from their own experience.
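
The effect is easy to reproduce yourself. Here is a minimal sketch, assuming a Linux host with netem and a curl new enough to support --parallel; the interface name, host and paths are illustrative:

# add 2% random packet loss on the outbound interface (name is illustrative)
tc qdisc add dev eth0 root netem loss 2%

# several objects multiplexed over a single HTTP/2 connection...
curl -Z --http2 -s -o /dev/null -o /dev/null -w '%{time_total}\n' \
    https://example.com/a https://example.com/b

# ...versus the same objects fetched over separate HTTP/1.1 connections
curl -Z --http1.1 -s -o /dev/null -o /dev/null -w '%{time_total}\n' \
    https://example.com/a https://example.com/b

# remove the loss when done
tc qdisc del dev eth0 root netem

With loss applied, the HTTP/1.1 transfers degrade per connection, while in the HTTP/2 run every multiplexed stream stalls together behind the same congestion window and retransmissions.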

Popular is cheaper: curtailing memory costs in interactive analytics engines

Popular is cheaper: curtailing memory costs in interactive analytics engines, Ghosh et al., EuroSys’18

(If you don’t have ACM Digital Library access, the paper can be accessed by following the link above directly from The Morning Paper blog site.)

We’re sticking with the optimisation of data analytics today, but at the other end of the spectrum to the work on smart arrays that we looked at yesterday. Getafix (extra points for the Asterix-inspired name, especially as it works with Yahoo!’s Druid cluster) is aimed at reducing the memory costs of large-scale in-memory data analytics without degrading performance, of course. It does this through an intelligent placement strategy that decides on replication level and data placement for data segments based on the changing popularity of those segments over time. Experiments with workloads from Yahoo!’s production Druid cluster show that Getafix can reduce memory footprint by 1.45-2.15x while maintaining comparable average and tail latencies. Translate that into a public cloud setting, assuming a 100TB hot dataset size (a conservative estimate in the Yahoo! case), and we’re looking at savings on the order of $10M per year.
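
The placement idea is easy to caricature in a few lines of shell, even though Getafix itself solves a proper optimization problem as popularity shifts. A toy sketch, assuming a hypothetical segment_accesses.log containing one segment ID per query served, with invented tier thresholds:

# rank segments by access count, then hand hot segments more replicas
# (the thresholds 1000 and 100 are made up purely for illustration)
sort segment_accesses.log | uniq -c | sort -rn |
    awk '{ r = ($1 > 1000) ? 3 : ($1 > 100) ? 2 : 1; print $2, "replicas=" r }'

Getafix’s contribution is doing this continuously and frugally: tracking popularity as it changes, deciding how many replicas each segment deserves, and packing them onto nodes so that total memory shrinks without hurting tail latency.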

Real-time analytics is projected to Continue reading

DockerCon Guest Speaker: Liberty Mutual

In yesterday’s DockerCon keynote, Eric Drobisewski, Senior Architect at Liberty Mutual Insurance, shared how Docker Enterprise Edition has been a foundational technology for their digital transformation.

If you missed it, the replay of the keynote is available below:

Turning Disruption into Opportunities

Liberty Mutual – the 3rd largest property and casualty insurance provider in the United States – recognizes that the new digital economy is bringing a faster cycle of technology evolution. Disruptive technologies like autonomous vehicles and smart homes are changing the way customers interact and transact. Liberty Mutual sees these as opportunities to bring new services to market and ways to reinvent traditional insurance models, but they needed to become more flexible and agile while managing their technical debt.

Rapid Expansion of Docker EE

As a 106-year-old company, Liberty Mutual recognized that they were not going to become agile overnight. Instead, the company has built a “multi-lane highway” that enables both traditional apps and new microservices apps to modernize at different speeds according to their needs, all based on Docker Enterprise Edition.

“(Docker Enterprise Edition) began to open multiple paths for our teams to modernize traditional applications and move them to the cloud in a Continue reading

Open Source Serverless Frameworks on Docker EE

Since the advent of AWS Lambda in 2014, the Function as a Service (FaaS) programming paradigm has gained a lot of traction in the cloud community. At first, only the large cloud providers offered such services (AWS Lambda, Google Cloud Functions, Azure Functions) with a pay-per-invocation model, but interest has since grown among developers and enterprises in building their own solutions on an open source model.

The maturation of container platforms such as Docker EE has made this process even easier, resulting in a number of competing frameworks in this space. We have identified at least 9 different frameworks*. In this study, we start with the following six: OpenFaaS, nuclio, Gestalt, Riff, Fn and OpenWhisk. You can find an introduction (including slides and videos) to some of these frameworks in this blog post from the last DockerCon Europe.

These frameworks vary a lot in feature set, but can be generalized as having several key elements, shown in the following diagram from the CNCF Serverless Working Group’s serverless architecture whitepaper:

[Diagram: key elements of a serverless framework (CNCF Serverless Working Group)]

  • Event sources – trigger or stream events into one or more function instances
  • Function instances – a single function/microservice, that can be Continue reading
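
To make the event-source/function split concrete, here is roughly the smallest possible interaction with one of these frameworks. This is a hedged sketch against OpenFaaS, assuming a gateway on its default local port; the image, function name and flags follow the classic echo example from the OpenFaaS docs of the time, so adjust for your install:

# deploy a trivial function whose process simply echoes stdin back
faas-cli deploy --image functions/alpine --name echo --fprocess cat

# an HTTP request as the event source: the gateway routes it to a function instance
curl -s -d 'hello, serverless' http://127.0.0.1:8080/function/echo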

Facebook releases its load balancer as open-source code

Google is known to fiercely guard its data center secrets, but not Facebook. The social media giant has released two significant tools it uses internally to operate its massive social network as open-source code. The company has released Katran, the load balancer that keeps the company’s data centers from overloading, as open source under the GNU General Public License v2.0, available from GitHub. In addition to Katran, the company is offering details on its Zero Touch Provisioning tool, which it uses to help engineers automate much of the work required to build its backbone networks.

VRF route leaking: time to get a little more social!

Virtual Routing and Forwarding (VRF) is a ubiquitous concept in networking, first introduced in the late 1990s as the control and data plane mechanism for providing traffic isolation at layer 3 over a shared network infrastructure. VRF for Linux is an excellent blog that describes the technology behind VRFs, especially as it pertains to the Linux kernel. With the introduction of support for route leaking, VRFs get to enjoy their isolation while also having the nous to mix and mingle.

Wait, aren’t VRFs meant to be completely isolated?

You have a valid question there. That was certainly the initial use case for VRFs. Each VRF was intended to represent a customer of a service provider, and isolation was a fundamental tenet. Each VRF had its own routing protocol sessions and its own IPv4 and IPv6 routing tables; route computation and packet forwarding were independent of other VRFs. All communication stayed within the VRF, other than in specific scenarios such as reaching the Internet. Hershey’s wouldn’t want to get too chatty with Lindt, right? No, VRFs weren’t meant to be gregarious.
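
On Linux, the mechanics of a leak are pleasantly small. A minimal sketch, with illustrative interface names, prefixes and table IDs: two VRFs, each bound to its own table, plus one static route that leaks a prefix from blue into red by pointing at an interface enslaved to the other VRF:

# two VRFs, each bound to its own routing table
ip link add red type vrf table 10
ip link add blue type vrf table 20
ip link set dev red up
ip link set dev blue up

# enslave one interface to each VRF (names are illustrative)
ip link set dev eth1 master red
ip link set dev eth2 master blue

# the leak: install a route to blue's prefix in red's table (10), with a
# next-hop interface that lives in the blue VRF
ip route add 192.168.2.0/24 dev eth2 table 10

Traffic in red destined for 192.168.2.0/24 now forwards out an interface owned by blue, while isolation everywhere else remains intact.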

As VRFs moved outside the realm of the service provider and started finding application elsewhere, such as in the Continue reading

Check Out Our New Software Testing Course – Software Testing QA: A Comprehensive Overview

Instructor: Justin Spears

Course Duration: 1hr 45min

About the Course

The modern accessibility of public, private and hybrid cloud environments has given rise to a host of cloud-centric practices. One of the most notable is the idea of QA and testing in the cloud. This course will describe the concepts, methodologies and implementations of testing in a cloud environment. We will go through the full software QA lifecycle, describe where and how each component of that lifecycle can be offloaded into the cloud, and cover methods and mechanisms for doing so effectively.

Red Hat Single Sign-on Integration with Ansible Tower

RH-Ansible-Tower-SSO

As you might know, Red Hat Ansible Tower supports SAML authentication (both N and Z) by default. This document will guide you through the steps for configuring both products so that authentication is delegated to RHSSO/Keycloak (Red Hat Single Sign-On).

Requirements:

  • A running RHSSO/Keycloak instance
  • Ansible Tower
  • Admin rights for both
  • DNS resolution


Hands-On Lab

Unless you have your own certificate already, the first step will be to create one. To do so, execute the following command:

# generate a self-signed certificate (saml.crt) and unencrypted private
# key (saml.key), valid for 365 days, to sign the SAML exchange
openssl req -new -x509 -days 365 -nodes -out saml.crt -keyout saml.key

Now we need to create the Ansible Tower Realm on the RHSSO platform. Go to the "Select Realm" drop-down and click on "Add new realm":

Ansible-Tower-SSO-Screen-16

Once created, go to the "Keys" tab and delete all certificates, keys, etc. that were created by default.

Now that we have a clean realm, let's populate it with the appropriate information. Click on "Add Keystore" in the upper right corner and click on RSA:

Ansible-Tower-SSO-Screen-15

Click on Save and create your Ansible Tower client information. It is recommended to start with the Tower configuration so that you can inject the metadata file and customize a few of the fields.
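
For the metadata injection step, Tower publishes its SAML service provider metadata over HTTPS once SAML is configured; something like the following pulls it down for import when creating the RHSSO client (the hostname is illustrative, the URL path follows Tower's SAML documentation, and -k is only needed because the certificate above is self-signed):

# download Tower's SP metadata for import into the RHSSO client definition
curl -ko tower-saml-metadata.xml https://tower.example.com/sso/metadata/saml/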

Log in as the admin user Continue reading