Show 394: Technology Problems Are Mostly People Problems

You are a problem…maybe the biggest problem of all. No? The crashing router code is the biggest problem? The leaking memory in the switch?

The app needs layer 2 stretched between data centers – what problem could be worse than that?

Today on the show, we're here to argue that, no…it's you. And me. And everyone else you work with.

With us today to defend the idea that technology problems are really people problems is Eyvonne Sharp, network architect and co-founder of The Network Collective.

We talk about how people and processes can contribute more to a problem than the technology does. We also talk about three different organizational culture types (Pathological, Bureaucratic, and Generative) and how to evaluate your own organization, and Eyvonne recommends a few books on team building and culture development.

Show Links:

Eyvonne Sharp on Twitter

The Network Collective

Using the Westrum typology to measure culture – Andy Kelk

Forget about broad-based pay hikes, executives say – Axios

The Undoing Project – Michael Lewis

The Five Dysfunctions of a Team: A Leadership Fable – Patrick M. Lencioni

Team of Teams: New Rules of Engagement for a Complex World – General Stanley McChrystal

The post Show 394: Technology Continue reading

Stuff The Internet Says On Scalability For June 15th, 2018

Hey, it's HighScalability time:

 

Scaling fake ratings. A 5-star, 10,000-phone Chinese click farm. (English Russia)

Do you like this sort of Stuff? Please lend me your support on Patreon. It would mean a great deal to me. And if you know anyone looking for a simple book that uses lots of pictures and lots of examples to explain the cloud, then please recommend my new book: Explain the Cloud Like I'm 10. They'll love you even more.

  • 1.6x: better deep learning cluster scheduling on k8s; 100,000: Large-scale Diverse Driving Video Database; 3rd: reddit popularity in the US; 50%: increase in Neural Information Processing System papers, AI bubble? 420 tons: leafy greens from robot farms; 75%: average unused storage on EBS volumes; 12TB: RAM on new Azure M-series VM; 10%: premium on Google's single-tenant nodes; $7.5B: Microsoft's cost of courting developers; 100th: flip-flop invention anniversary; 1 million: playlist dataset from Spotify; 38GB torrent: Stackoverflow public database; 85%: teens use YouTube; 20%-25%: costs savings using Aurora; 80%: machine learning Ph.D.s work at Google or Facebook; 18: years of Continue reading

EuroDIG 2018 Gathers the Internet Community: What’s New

The 12th edition of the European Dialogue on Internet Governance, or EuroDIG as it is commonly known, took place in Tbilisi, Georgia, on 5-6 June. The Internet Society (ISOC) is an institutional partner of EuroDIG, and the ISOC European Regional Bureau helped shape the agenda and was involved in several sessions.

This year, a few specific aspects caught my attention and created a lot of debate during the sessions and in the corridors.

Reinforcing the multistakeholder model

While European governments have traditionally been strong supporters of the Internet Governance Fora (IGF) and the multistakeholder model, in recent times this support has been compromised to some extent by concerns over national security and other priorities. Several core members of the European Internet community have talked about a “fatigue” with the regional and national IGFs.

This year’s EuroDIG offered some fresh food for thought. Larry Strickling, who leads the Internet Society’s Collaborative Governance project, made several interventions during the EuroDIG. Strickling’s extensive experience driving multistakeholder processes and his practical approach were received with great interest and curiosity. In parallel, high participation from young people injected heaps of new energy and optimism into the event.

Embracing the Internet opportunity Continue reading

Hackathon@AIS: Summary report

The annual Hackathon@AIS, now in its second year, aims to expose engineers from the Africa region to open Internet standards development. This year the event was held 9-10 May 2018 at the Radisson Blu Hotel in Dakar, Senegal, during the Africa Internet Summit (AIS-2018).

The event was attended by more than 75 engineers from 15 countries, including 11 fellows who were supported to attend. Participants came from English- and French-speaking backgrounds, which encouraged collaboration across language lines. The event was organized into three tracks, and participants chose the track they wanted to join. The tracks were as follows:

1. Network Time Protocol Track

Objectives:

  • Make NTP more secure (privacy)
  • Use the Wireshark NTP plugin to read and analyze NTP traffic
  • Make code changes to NTP implementations to bring them into compliance with the draft
  • Read and understand the draft RFC

Facilitators:

  • Loganaden Velvindron (Mauritius)
  • Nitin Mutkawoa (Mauritius)
  • Serge-Parfait Goma (Congo)

Participants were introduced to NTP and asked to test out an IETF draft and implement it in open source NTP clients.

Outcome:

Participants were able to successfully implement the draft and gave presentations demonstrating their work and accomplishments.
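
For readers who want a concrete sense of what “reading NTP traffic” involves, here is a minimal sketch (my own illustration, not part of the hackathon materials) that sends a standard NTPv3 mode-3 client request and decodes the transmit timestamp from the server’s reply. The pool.ntp.org hostname is just a convenient public default and is an assumption, not something used at the event.

```python
# Minimal NTP query sketch - illustrative only; real NTP clients also handle
# poll intervals, clock filtering, and (per the hackathon draft work) security extensions.
import socket
import struct
import time

NTP_PORT = 123
NTP_UNIX_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def query_ntp(server="pool.ntp.org", timeout=5.0):
    # First byte 0x1B = leap indicator 0, version 3, mode 3 (client);
    # the rest of the 48-byte header is zero.
    request = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, NTP_PORT))
        reply, _ = sock.recvfrom(512)
    # The transmit timestamp (seconds field) starts at byte offset 40 of the reply.
    transmit_seconds = struct.unpack("!I", reply[40:44])[0]
    return transmit_seconds - NTP_UNIX_DELTA

if __name__ == "__main__":
    print("server time (UTC):", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(query_ntp())))
```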

2. Network Programmability

Objectives:

  • Introduce participants to Software Defined Networking (SDN)
  • Introduce network Continue reading

Worth Reading: Discovering Issues with HTTP/2

A while ago I found an interesting analysis of HTTP/2 behavior under adverse network conditions. Not surprisingly:

When there is packet loss on the network, congestion controls at the TCP layer will throttle the HTTP/2 streams that are multiplexed within fewer TCP connections. Additionally, because of TCP retry logic, packet loss affecting a single TCP connection will simultaneously impact several HTTP/2 streams while retries occur. In other words, head-of-line blocking has effectively moved from layer 7 of the network stack down to layer 4.

What exactly did anyone expect? We discovered the same problems running TCP/IP over SSH a long while ago, but too many people insist on ignoring history and learning only from their own experience.
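
To make that argument concrete, the toy simulation below (my own sketch, not taken from the linked analysis) compares several streams multiplexed over one lossy TCP connection with the same streams on independent connections. The RTT, loss, and retransmission numbers are arbitrary assumptions; only the qualitative gap matters.

```python
import random

def simulate(num_streams, packets_per_stream, loss_prob, shared, rtt=0.05, retrans=0.2, seed=1):
    """Toy model: each packet takes one RTT; a lost packet adds a fixed retransmission penalty.
    With shared=True (HTTP/2-style multiplexing over one TCP connection) the penalty stalls
    every stream; with shared=False (one TCP connection per stream) only that stream stalls."""
    rng = random.Random(seed)
    finish = [0.0] * num_streams
    for _ in range(packets_per_stream):
        for s in range(num_streams):
            finish[s] += rtt
            if rng.random() < loss_prob:
                if shared:
                    finish = [t + retrans for t in finish]  # head-of-line blocking at layer 4
                else:
                    finish[s] += retrans                    # loss is isolated to one stream
    return max(finish)

if __name__ == "__main__":
    multiplexed = simulate(num_streams=6, packets_per_stream=50, loss_prob=0.02, shared=True)
    independent = simulate(num_streams=6, packets_per_stream=50, loss_prob=0.02, shared=False)
    print(f"six streams over one TCP connection:  {multiplexed:.2f}s to finish the slowest stream")
    print(f"six streams over six TCP connections: {independent:.2f}s to finish the slowest stream")
```

With the shared connection every retransmission stalls all six streams, so the slowest stream finishes noticeably later than when each stream has its own connection, which is the head-of-line-blocking effect the quoted paragraph describes.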

Popular is cheaper: curtailing memory costs in interactive analytics engines

Popular is cheaper: curtailing memory costs in interactive analytics engines Ghosh et al., EuroSys’18

(If you don’t have ACM Digital Library access, the paper can be accessed by following the link above directly from The Morning Paper blog site.)

We’re sticking with the optimisation of data analytics today, but at the other end of the spectrum to the work on smart arrays that we looked at yesterday. Getafix (extra points for the Asterix-inspired name, especially as it works with Yahoo!’s Druid cluster) is aimed at reducing the memory costs for large-scale in-memory data analytics, without degrading performance of course. It does this through an intelligent placement strategy that decides on replication level and data placement for data segments based on the changing popularity of those segments over time. Experiments with workloads from Yahoo!’s production Druid cluster show that Getafix can reduce memory footprint by 1.45-2.15x while maintaining comparable average and tail latencies. If you translate that into a public cloud setting, and assuming a 100TB hot dataset size — a conservative estimate in the Yahoo! case — we’re looking at savings on the order of $10M per year.
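
Getafix’s actual algorithm is more sophisticated than this, but the core intuition (hot segments get more replicas, cold segments get fewer, and replicas land on the least-loaded servers) can be sketched roughly as follows. The segment names, access counts, and proportional-replica rule are illustrative assumptions, not the paper’s method.

```python
# Rough sketch of popularity-driven replication and placement - illustrative only.
from collections import Counter
import heapq

def plan_replication(access_counts, num_nodes, max_replicas=3):
    """Give each segment a replica count proportional to its recent popularity,
    then spread those replicas across the least-loaded nodes."""
    total = sum(access_counts.values()) or 1
    node_load = [(0, n) for n in range(num_nodes)]  # (segments placed, node id) min-heap
    heapq.heapify(node_load)
    placement = {}
    for segment, hits in sorted(access_counts.items(), key=lambda kv: -kv[1]):
        share = hits / total
        replicas = max(1, min(max_replicas, num_nodes, round(share * num_nodes)))
        picked = [heapq.heappop(node_load) for _ in range(replicas)]  # least-loaded nodes first
        placement[segment] = [node for _, node in picked]
        for load, node in picked:
            heapq.heappush(node_load, (load + 1, node))
    return placement

if __name__ == "__main__":
    counts = Counter({"seg-a": 900, "seg-b": 60, "seg-c": 30, "seg-d": 10})
    print(plan_replication(counts, num_nodes=4))
```

The savings come from exactly this kind of skew: a few very popular segments justify extra replicas, while the long tail can live with a single copy.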

Real-time analytics is projected to Continue reading

DockerCon Guest Speaker: Liberty Mutual

In yesterday’s DockerCon keynote, Eric Drobisewski, Senior Architect at Liberty Mutual Insurance, shared how Docker Enterprise Edition has been a foundational technology for their digital transformation.

If you missed it, the replay of the keynote is available below:

Turning Disruption into Opportunities

Liberty Mutual – the 3rd largest property and casualty insurance provider in the United States – recognizes that the new digital economy is bringing a faster cycle of technology evolution. Disruptive technologies like autonomous vehicles and smart homes are changing the way customers interact and transact. Liberty Mutual sees these as opportunities to bring new services to market and ways to reinvent traditional insurance models, but they needed to become more flexible and agile while managing their technical debt.

Rapid Expansion of Docker EE

As a 106-year-old company, Liberty Mutual recognized that they were not going to become agile overnight. Liberty Mutual has instead built a “multi-lane highway” that enables both traditional apps and new microservices apps to modernize at different speeds according to their needs, all based on Docker Enterprise Edition.

“(Docker Enterprise Edition) began to open multiple paths for our teams to modernize traditional applications and move them to the cloud in a Continue reading

Open Source Serverless Frameworks on Docker EE

Since the advent of AWS Lambda in 2014, the Function as a Service (FaaS) programming paradigm has gained a lot of traction in the cloud community. At first, only the large cloud providers offered such services (AWS Lambda, Google Cloud Functions, Azure Functions) with a pay-per-invocation model, but since then interest has grown among developers and enterprises in building their own solutions on an open source model.

The maturation of container platforms such as Docker EE has made this process even easier, resulting in a number of competing frameworks in this space. We have identified at least 9 different frameworks*. In this study, we start with the following six: OpenFaaS, nuclio, Gestalt, Riff, Fn and OpenWhisk. You can find an introduction (including slides and videos) to some of these frameworks in this blog post from the last DockerCon Europe.

These frameworks vary a lot in feature set, but they can be generalized as having several key elements, shown in the following diagram from the CNCF Serverless Working Group’s serverless architecture whitepaper (a minimal handler sketch also follows the list below):


  • Event sources – trigger or stream events into one or more function instances
  • Function instances – a single function/microservice that can be Continue reading
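
As a rough illustration of the “function instance” element, here is what a handler typically looks like in the OpenFaaS classic Python template; the entry-point name and request shape differ between frameworks, so treat the exact signature as an assumption rather than a universal API.

```python
# handler.py - a minimal function instance, OpenFaaS classic-Python-template style (illustrative).
# Other frameworks (nuclio, Fn, OpenWhisk, ...) use different entry points and context objects.
import json

def handle(req):
    """Receive the raw request body forwarded by the event source (gateway/watchdog)
    and return the response body as a string."""
    try:
        payload = json.loads(req) if req else {}
    except ValueError:
        payload = {"raw": req}
    name = payload.get("name", "world") if isinstance(payload, dict) else "world"
    return json.dumps({"message": f"hello, {name}"})
```

The framework, not the function, owns scaling, routing, and the per-invocation accounting; the function only turns an event into a response.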

Facebook releases its load balancer as open-source code

Google is known to fiercely guard its data center secrets, but not Facebook. The social media giant has released two significant tools it uses internally to operate its massive social network as open-source code. The company has released Katran, the load balancer that keeps the company's data centers from overloading, as open source under the GNU General Public License v2.0, available from GitHub. In addition to Katran, the company is offering details on its Zero Touch Provisioning tool, which it uses to help engineers automate much of the work required to build its backbone networks. To read this article in full, please click here
