Stuff The Internet Says On Scalability For September 14th, 2018

Hey, it's HighScalability time:

 

The Cloud Native Interactive Landscape is fecund. You are viewing 581 cards with a total of 1,237,157 stars, market cap of $6.86T and funding of $20.1B. (changelog)

 

Do you like this sort of Stuff? Please lend me your support on Patreon. It would mean a great deal to me. And if you know anyone looking for a simple book that uses lots of pictures and lots of examples to explain the cloud, then please recommend my new book: Explain the Cloud Like I'm 10. They'll love you even more.

 

  • 72: signals detected from a distant galaxy using AI; 12M: reddit posts per month; 10 trillion: test inputs per day generated by Google over several months using OSS-Fuzz and hundreds of servers; 200%: growth in Cloud Native technologies used in production; $13 trillion: potential economic impact of AI by 2030; 1.8 trillion: plastic pieces eaten by a giant garbage pac-man; 100: minimum number of people needed to restart humanity;

  • Quotable Quotes:
    • Joel Hruska: Farmers in California have lost the fight to be allowed to repair their own tractors and equipment thanks to the capitulation Continue reading

Weekend Reads 091418

Security

You install a new app on your phone, and it asks for access to your email accounts. Should you, or shouldn’t you? TL;DR: you shouldn’t. When an app asks for access to your email, its makers are probably reading your email, running analytics across it, and selling that information. Something to think about: how do they train those analytics models? By giving humans the job of reading your email.

When you shut your computer down, the contents of memory are not wiped. This means an attacker can sometimes grab your data while the computer is booting, before any password is entered. Since 2008, computers have included a subsystem that wipes system memory before any operating system is launched, but researchers have found a way around this memory wipe.

You know when your annoying friend talks about the dangers of IoT while you’re bragging about your latest install of that great new electronic door lock that works off your phone? You know the one I’m talking about. Maybe that annoying friend has some things right, and we should really be paying more attention to the problems inherent in large-scale IoT deployments. For instance, what would happen if you could get the electrical grid in Continue reading

From Idea to Action: Beyond the Net Selects 15 Amazing Chapter Projects!

The Beyond the Net Funding Programme is pleased to announce the results of our 2018 Grant Cycle. A total of 49 applications were received, and after a thorough review process, 15 amazing projects were selected.

These projects are at the core of our mission. They will use the Internet to develop Community Networks in underserved areas, empower women through ICT, and raise awareness of Internet policies around the world.

This is the result of months of effort from our Chapter Community: many discussions, numerous clarifications, proposals, updates, and revisions from the Beyond the Net Selection Committee. We are proud of you all.

Please join us in celebrating the following projects!

Developing community networks in the Northern region of Brazil – Brazil Chapter

Supporting and promoting the development of the Internet to enrich people’s lives, the project aims to contribute to the growth and improvement of community network policies and practices in rural areas of Brazil, in order to strengthen those who are marginalized. Instituto Nupef will work to develop a new network in the state of Maranhão, as well as a communications plan for the Babassu coconut breakers’ organizations and movements. Objectives include Continue reading

Cache API for Cloudflare Workers is now in Beta!

In October of last year we announced the launch of Cloudflare Workers. Workers allows you to run JavaScript from 150+ of Cloudflare’s data centers. This means that from the moment a request hits the Cloudflare network, you have full control over its destiny. One of the benefits of using Workers in combination with Cloudflare’s cache is that Workers gives you programmatic, and thus very granular, control over the Cloudflare cache.

You can choose what to cache, how long to cache it for, the source it should be cached from, and you can even modify the cached result after it is retrieved from the cache.


We have seen many of our existing customers use Workers to enhance their usage of the Cloudflare cache, and we have seen many new customers join Cloudflare to take advantage of these unique benefits.

(Re-)Introducing the Cache API

You can always have more control, so today we are announcing support for the Cache API! As some of you may know, Cloudflare Workers are built against the existing Service Worker APIs. One of the reasons we originally chose to model Cloudflare Workers after Service Workers was due to the existing familiarity and audience of Service Continue reading
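
The excerpt is cut off above, but the API being announced follows the Service Worker Cache API shape. Below is a minimal sketch, not taken from the post, of a cache-then-origin Worker using the runtime’s default cache namespace (`caches.default`, `cache.match`, `cache.put`); the handler name and the cast around `caches` are illustrative assumptions:

```typescript
// Sketch only: serve from the Workers cache when possible, otherwise fetch
// from the origin and store a copy. Assumes the Cloudflare Workers runtime;
// the cast exists because the stock DOM typings do not declare caches.default.
addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(handleRequest(event));
});

async function handleRequest(event: FetchEvent): Promise<Response> {
  const cache = (caches as unknown as { default: Cache }).default;
  const request = event.request;

  // Return a previously stored response for this request, if any.
  const cached = await cache.match(request);
  if (cached) {
    return cached;
  }

  // Otherwise go to the origin, cache a clone (a body can only be read once),
  // and return the live response to the client.
  const response = await fetch(request);
  event.waitUntil(cache.put(request, response.clone()));
  return response;
}
```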

Nvidia revs up AI with GPU-powered data-center platform

Nvidia is raising its game in data centers, extending its reach across different types of AI workloads with the Tesla T4 GPU, based on its new Turing architecture and, along with related software, designed for blazing acceleration of applications for images, speech, translation and recommendation systems. The T4 is the essential component in Nvidia's new TensorRT Hyperscale Inference Platform, a small-form-factor accelerator card expected to ship in data-center systems from major server makers in the fourth quarter. The T4 features Turing Tensor Cores, which support different levels of compute precision for different AI applications, as well as the major software frameworks – including TensorFlow, PyTorch, MXNet, Chainer, and Caffe2 – for so-called deep learning, machine learning involving multi-layered neural networks.

WISP Design – An overview of adding IPv6 to your WISP

The challenge of adding IPv6 to your WISP

IPv6 is one of those technologies that can feel pretty overwhelming, but it doesn’t have to be. Many of the same ideas and concepts learned in IPv4 networking still apply.

This guide is meant to give you an overview of an example IPv6 addressing plan for an entire WISP as well as the config needed in MikroTik to deploy IPv6 from a core router all the way to a subscriber device.
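
To make the addressing plan concrete before the benefit list below, here is a minimal sketch, not from the article (which walks through MikroTik RouterOS config), of one way a hierarchical WISP plan can be derived: a hypothetical /32 allocation split into a /48 per tower and a /56 per subscriber. The 2001:db8::/32 documentation prefix and the index layout are assumptions for illustration:

```typescript
// Sketch only: derive per-tower /48s and per-subscriber /56s from a
// hypothetical 2001:db8::/32 allocation. Real plans will differ in prefix
// sizes and numbering conventions.
function towerPrefix(towerIndex: number): string {
  // A /32 contains 65,536 /48s; the tower index becomes the third hextet.
  if (towerIndex < 0 || towerIndex > 0xffff) {
    throw new Error("tower index out of range for a /32");
  }
  return `2001:db8:${towerIndex.toString(16)}::/48`;
}

function subscriberPrefix(towerIndex: number, subscriberIndex: number): string {
  // Each tower /48 contains 256 /56s; the subscriber index becomes the
  // high byte of the fourth hextet.
  if (subscriberIndex < 0 || subscriberIndex > 0xff) {
    throw new Error("subscriber index out of range for 256 /56s per tower");
  }
  const fourthHextet = (subscriberIndex << 8).toString(16);
  return `2001:db8:${towerIndex.toString(16)}:${fourthHextet}::/56`;
}

// Example: tower 3, subscriber 10
console.log(towerPrefix(3));          // 2001:db8:3::/48
console.log(subscriberPrefix(3, 10)); // 2001:db8:3:a00::/56
```

A /56 per subscriber leaves each customer 256 possible /64 LANs, which is why it is a common choice in residential delegation.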

 

Benefits of adding IPv6

  • Public addressing for all subscribers – reduced need for NAT
  • Regulatory compliance – persistent public addressing makes it much easier to stay compliant with requirements like CALEA
  • Reduced complaints from gamers – Xbox and PlayStation both have IPv6 networks and prefer IPv6. This reduces complaints from customers whose gaming consoles have detected an “improper” NAT configuration.
  • Increased security – IPv6, while not impervious to security threats, makes it much harder for attackers to scan IPs due to the sheer size of the address space. If privacy extensions are used with SLAAC, it also becomes much harder to target someone online, as the IP address seen on the Internet changes randomly.
  • Improved real Continue reading

A Matter of Perspective

Have you ever taken the opportunity to think about something from a completely different perspective? Or seen someone experience something you have seen through new eyes? It’s not easy for sure. But it is a very enlightening experience that can help you understand why people sometimes see things entirely differently even when presented with the same information.

Overcast Networking

The first time I saw this in action was with Aviatrix Systems, whom I first saw present at Cisco Live 2018. They did a 1-hour presentation about their solution and gave everyone an overview of what it could do. For the networking people in the room it was pretty straightforward: Aviatrix did a lot of the things that networking should do, just in the cloud instead of in a data center. It’s not that Aviatrix wasn’t impressive. It’s that networking people have a very clear idea of what a networking platform should do.

Fast forward two months to Cloud Field Day 4. Aviatrix presented again, only this time to a group of cloud professionals. The message was a little more refined than in their first presentation. They included some different topics to appeal more to a cloud audience, such Continue reading

An empirical analysis of anonymity in Zcash

An empirical analysis of anonymity in Zcash Kappos et al., USENIX Security’18

As we’ve seen before, in practice Bitcoin offers little in the way of anonymity. Zcash, on the other hand, was carefully designed with privacy in mind. It offers strong theoretical guarantees concerning privacy, so in theory users of Zcash can remain anonymous. In practice, though, it depends on the way those users interact with Zcash. Today’s paper choice, ‘An empirical analysis of anonymity in Zcash’, studies how identifiable transaction participants are in practice, based on the 2,242,847 transactions in the blockchain at the time of the study.

We conclude that while it is possible to use Zcash in a private way, it is also possible to shrink its anonymity set considerably by developing simple heuristics based on identifiable patterns of usage.

The analysis also provides some interesting insights into who is using Zcash and for what. Founders and miners combined account for around 66% of the value drawn from the shielded pool.

The code for the analysis is available online at https://github.com/manganese/zcash-empirical-analysis
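
The conclusion above mentions “simple heuristics based on identifiable patterns of usage”. As an illustration only (this is not the authors’ code, which is linked above), a generic value-matching heuristic of the kind such analyses use might flag a deposit into the shielded pool and a later withdrawal of the same value within a short window as plausibly linked; the field names and time threshold below are hypothetical:

```typescript
// Illustrative sketch of a value-matching linking heuristic; not the paper's
// implementation. Field names, units, and the time window are assumptions.
interface PoolEvent {
  txid: string;
  value: number;              // amount moved, in ZEC
  timestamp: number;          // unix seconds
  direction: 'deposit' | 'withdrawal';
}

function linkByValue(events: PoolEvent[], maxDelaySeconds = 3600): Array<[string, string]> {
  const links: Array<[string, string]> = [];
  const deposits = events.filter(e => e.direction === 'deposit');
  const withdrawals = events.filter(e => e.direction === 'withdrawal');
  for (const d of deposits) {
    for (const w of withdrawals) {
      const delay = w.timestamp - d.timestamp;
      // The same value leaving shortly after it entered is a candidate link.
      if (w.value === d.value && delay > 0 && delay <= maxDelaySeconds) {
        links.push([d.txid, w.txid]);
      }
    }
  }
  return links;
}
```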

Zcash guarantees and the shielded pool

Zcash is based on highly regarded research including a cryptographic proof of the main privacy feature Continue reading

Running the gcloud CLI in a Docker Container

A few times over the last week or two I’ve had a need to use the gcloud command-line tool to access or interact with Google Cloud Platform (GCP). Because working with GCP is something I don’t do very often, I prefer to not install the Google Cloud SDK; instead, I run it in a Docker container. However, there is a trick to doing this, and so to make it easier for others I’m documenting it here.

The gcloud tool stores some authentication data that it needs every time it runs. As a result, when you run it in a Docker container, you must take care to store this authentication data outside the container. Most of the tutorials I’ve seen, like this one, suggest the use of a named Docker container. For future invocations after the first, you would then use the --volumes-from parameter to access this named container.

There’s only one small problem with this approach: what if you’re using another tool that also needs access to these GCP credentials? In my case, I needed to be able to run Packer against GCP as well. If the authentication information is stored inside a named Docker container (and then accessed Continue reading

IDG Contributor Network: On the road to your IoT adventure: planning, deployment and measurement (Part 2)

Welcome to the second installment in my series on how organizations can get started on the road to success with their Internet of Things (IoT) projects. In the first part of my series based on “Building the Internet of Things – a Project Workbook,” I explained how to identify your IoT vision and path to value. The next steps include planning, deployment and measuring your success. Let’s get started.

Benchmark your organization against industry peers

The first step is to determine how your organization stacks up to its industry peers. Benchmarking will help establish metrics you can use to validate your project, secure funding, evaluate your team and promote success after the project is complete. It also helps establish a baseline, allowing you to see where you stand at the beginning of the project, so you can measure how far you’ve come at the end. You can use the benchmarking method of your choice, but I encourage you to evaluate the following areas: