Online meetup recap: Introduction to LinuxKit

At DockerCon 2017 we introduced LinuxKit: A toolkit for building secure, lean and portable Linux subsystems. Here are the key principles and motivations behind the project:

  • Secure defaults without compromising usability
  • Everything is replaceable and customizable
  • Immutable infrastructure applied to building Linux distributions
  • Completely stateless, but persistent storage can be attached
  • Easy tooling, with easy iteration
  • Built with containers, for running containers
  • Designed for building and running clustered applications, including but not limited to container orchestrators such as Docker or Kubernetes
  • Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
  • Designed to be managed by external tooling, such as InfraKit or similar tools
  • Includes a set of longer-term collaborative projects in various stages of development to innovate on kernel and userspace changes, particularly around security

For this Online Meetup, Docker Technical Staff member Rolf Neugebauer gave an introduction to LinuxKit, explained the rationale behind its development and gave a demo on how to get started using it.

LinuxKit

Watch the recording and slides

Below is a list of additional questions asked by attendees at the end of the online meetup:

You said the onboot containers are run sequentially; does it wait for one to finish before it Continue reading
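
For context: in LinuxKit's YAML format, onboot containers run one at a time, each to completion, before the long-running services start. As a rough illustration (not the exact demo configuration), the sketch below assembles such a file in Python; the image tags are placeholders rather than pinned LinuxKit releases.

```python
# Sketch of a minimal LinuxKit-style YAML config, assembled in Python.
# Image tags are illustrative placeholders, not pinned releases.
import yaml  # PyYAML

config = {
    "kernel": {"image": "linuxkit/kernel:4.9.x", "cmdline": "console=ttyS0"},
    "init": ["linuxkit/init:latest", "linuxkit/runc:latest"],
    # onboot: run sequentially, each to completion, before services start
    "onboot": [
        {"name": "dhcpcd",
         "image": "linuxkit/dhcpcd:latest",
         "command": ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]},
    ],
    # services: long-running containers, started once all onboot steps finish
    "services": [
        {"name": "sshd", "image": "linuxkit/sshd:latest"},
    ],
}

with open("linuxkit.yml", "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False, sort_keys=False)
```

The resulting linuxkit.yml is the kind of file the LinuxKit build tooling consumes.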

US defense contractor stored intelligence data on Amazon server without a password

About 28GB of sensitive US intelligence data was discovered on a publicly accessible Amazon Web Services’ S3 storage bucket. The cache, containing over 60,000 files, was linked to defense and intelligence contractor Booz Allen Hamilton, which was working on a project for the US National Geospatial-Intelligence Agency (NGA). NGA provides satellite and drone surveillance imagery for the Department of Defense and the US intelligence community. The unsecured data was discovered by Chris Vickery, who now works as a cyber risk analyst for the security firm UpGuard. According to UpGuard, “information that would ordinarily require a Top Secret-level security clearance from the DoD was accessible to anyone looking in the right place; no hacking was required to gain credentials needed for potentially accessing materials of a high classification level.”
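
Misconfigurations like this are detectable programmatically. Below is a minimal sketch, using boto3, that flags an S3 bucket whose ACL grants access to everyone; the bucket name is a hypothetical placeholder, and running it requires AWS credentials authorized to read the ACL.

```python
# Minimal sketch: flag an S3 bucket whose ACL grants access to all users.
# Requires boto3 and AWS credentials authorized to call GetBucketAcl.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_is_public(bucket_name: str) -> bool:
    """Return True if the bucket ACL contains a grant to the AllUsers group."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        for grant in acl["Grants"]
    )

if __name__ == "__main__":
    # "example-bucket" is a placeholder; substitute a bucket you own.
    print(bucket_is_public("example-bucket"))
```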

Capacity Planner Version 2.0 Released

Modern Wi-Fi networks are complex beasts. For all the fancy new features in products, the technology is only becoming more complex and the demands on the network are only growing. Wi-Fi is the most heavily used method of transporting user data today, eclipsing cellular and LAN traffic volumes according to multiple reports from organizations including Cisco, Ofcom, Mobidia, Ovum, and others. Meanwhile, the technical complexity contained within the IEEE 802.11 standard results in a document over 3,200 pages long! This means deploying a network correctly is no easy task.

One of the most difficult aspects to get right when deploying a Wi-Fi network is understanding capacity requirements. It is not sufficient to rely on rule-of-thumb guidelines based on the number of clients per access point or the number of access points per square foot/meter, since these often result in networks that fail to meet actual end-user demand and perform poorly. More rigor is required, while maintaining simplicity of use, so that most network administrators can be confident of a successful outcome.
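
As a rough illustration of what demand-based planning means (this is not the Capacity Planner tool itself), the sketch below estimates how many access points a space needs from aggregate client demand and an assumed effective per-AP throughput; every number in it is an illustrative assumption.

```python
# Rough, illustrative sketch of demand-based Wi-Fi capacity planning.
# All inputs are assumptions for the example, not vendor guidance.
import math

def required_access_points(num_clients: int,
                           avg_demand_mbps: float,
                           ap_phy_rate_mbps: float,
                           airtime_efficiency: float = 0.5) -> int:
    """Estimate AP count from aggregate demand rather than rules of thumb.

    airtime_efficiency discounts the PHY rate for protocol overhead,
    contention, and mixed client capabilities (0.5 is a common rough cut).
    """
    aggregate_demand = num_clients * avg_demand_mbps           # Mbps offered
    effective_ap_capacity = ap_phy_rate_mbps * airtime_efficiency
    return math.ceil(aggregate_demand / effective_ap_capacity)

# Example: 500 clients each needing ~4 Mbps of concurrent throughput,
# served by APs with an 867 Mbps PHY rate -> 5 APs.
print(required_access_points(500, 4.0, 867.0))
```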

Essential to wireless network performance and capacity planning is understanding the interaction between access point capabilities, network configuration, client device capabilities, and the RF Continue reading

Tempered Networks makes it HIP to connect the unconnectable

IP networks were originally designed to be fairly simple: there’s a source address and a destination address, and the network devices use this information to perform some fancy calculations, and magically, things connect. But as the internet has grown and more endpoints have been connected, networking has become black magic. Since IPv4 cannot give every device its own unique public address, the clever folks at networking companies came up with an assortment of workarounds, such as NAT (network address translation) for non-routable private addresses. And as we’ve added more dynamic environments, such as private and public cloud, defining policy based on addresses or ranges has become unsustainable.
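
Tempered Networks’ approach builds on the Host Identity Protocol (HIP), which anchors trust to a cryptographic host identity instead of a mutable IP address. As a loose illustration of that idea (not Tempered’s implementation, and not real HIP tag derivation), the sketch below keys an allow-list to identity fingerprints rather than addresses.

```python
# Loose illustration of identity-based policy (the HIP idea), not
# Tempered Networks' actual implementation or wire protocol.
import hashlib

def host_identity_tag(public_key_pem: bytes) -> str:
    """Derive a stable identifier from a host's public key.
    Real HIP derives a Host Identity Tag (HIT) differently; this
    hash simply stands in for 'identity, not address'."""
    return hashlib.sha256(public_key_pem).hexdigest()[:32]

# Policy references identities, so it survives DHCP churn, NAT, and
# cloud re-addressing that would break an IP-based allow-list.
ALLOWED_IDENTITIES = {
    host_identity_tag(b"-----BEGIN PUBLIC KEY----- example-key-A ..."),
}

def may_connect(peer_key_pem: bytes) -> bool:
    return host_identity_tag(peer_key_pem) in ALLOWED_IDENTITIES
```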

$10 off TP-Link AC1200 Wi-Fi Range Extender Powerline Edition – Deal Alert

The AC1200 is a Wi-Fi range extender that transmits its signal through your home's electrical wiring via your wall outlets, so walls and floors won't slow it down. Game online and watch HD movies in any room. The powerline adapter is simple to use: it sets up in minutes, plugs into any power outlet, works with all routers, and up to 16 can be added to the same network, making it easy to expand your Wi-Fi across your home. Right now the price on this highly rated Wi-Fi extender drops $10 to $99.99 in your shopping cart when you "clip" a special coupon. See this deal now on Amazon.

Transforming the Internet Society’s web presence

Have you struggled to find information on our current website? Have you found it difficult to know what actions you can take on important issues such as connecting the unconnected and building trust on the Internet?

You are not alone.

In one of the most visible and important changes we are making this year, we are working hard on giving our website a deep refresh. We are building it to be a direct vehicle for action, redesigning it from the ground up to help us achieve our objective of connecting everyone, everywhere to a globally connected, trusted Internet.

It will look different, it will feel different, and it will be more accessible and more closely aligned with this strategic goal.

James Wood

Multi-site Active-Active Solutions with NSX-V and F5 BIG-IP DNS

I’ve written several prior blogs on multi-site solutions with NSX-V covering topics such as fundamentals, design options, multi-site security, and disaster recovery; see the links below to review some of the prior material. In this post, I’ll discuss how VMware NSX-V and F5 BIG-IP DNS (previously known as F5 GTM) can be used together for active/active solutions where an application spans multiple sites and site-local ingress/egress for the application is desired. F5 offers both virtual and physical appliances; in this post I demonstrate using only the virtual (VE) F5 appliances. Big thanks to my friend Kent Munson at F5 Networks for helping with the F5 deployment in my lab and for providing some of the details for this blog post. This is the first of several blog posts to come on this topic.  Continue reading
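
At its core, the GSLB pattern behind an active/active design is that the DNS tier answers each query with the VIP of a healthy site, preferring the site topologically closest to the client. The sketch below is a toy illustration of that selection logic in Python, not F5 BIG-IP DNS configuration; the site names, VIPs, and subnets are invented.

```python
# Toy illustration of GSLB-style site selection for active/active apps.
# Not F5 BIG-IP DNS configuration; sites, VIPs, and subnets are made up.
import ipaddress

SITES = {
    "site-a": {"vip": "198.51.100.10", "serving": ipaddress.ip_network("10.1.0.0/16"), "healthy": True},
    "site-b": {"vip": "203.0.113.10",  "serving": ipaddress.ip_network("10.2.0.0/16"), "healthy": True},
}

def resolve(client_ip: str) -> str:
    """Prefer the healthy site 'closest' to the client (topology match),
    falling back to any healthy site, mimicking site-local ingress."""
    client = ipaddress.ip_address(client_ip)
    healthy = {name: s for name, s in SITES.items() if s["healthy"]}
    for site in healthy.values():
        if client in site["serving"]:
            return site["vip"]
    # No topology match: return any healthy VIP (a real GSLB would
    # apply round-robin, ratio, or other load-balancing methods here)
    return next(iter(healthy.values()))["vip"]

print(resolve("10.2.3.4"))  # -> 203.0.113.10 (site-b answers for its subnet)
```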

CentOS Atomic Host Customization Using cloud-init

Back in early March of this year, I wrote a post on customizing the Docker Engine on CentOS Atomic Host. In that post, I showed how you could use systemd constructs like drop-in units to customize the behavior of the Docker Engine when running on CentOS Atomic Host. In this post, I’m going to build on that information to show how this can be done using cloud-init on a public cloud provider (AWS, in this case).

Although I haven’t really blogged about it, I’d already taken the information in that first post and written some Ansible playbooks to do the same thing (see here for more information). Thus, one could use Ansible to do this when running CentOS Atomic Host on a public cloud provider. However, much like the original post, I wanted to find a very “cloud-native” way of doing this, and cloud-init seemed like a pretty good candidate.

All in all, it was pretty straightforward—with one significant exception. As I was testing this, I ran into an issue where the Docker daemon wouldn’t start after cloud-init had finished. Convinced I’d done something wrong, I kept going over the files, testing and re-testing (I’ve been working on this, off Continue reading
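
To make the pattern concrete, here is a minimal sketch of launching an instance with cloud-init user-data that installs a systemd drop-in unit for the Docker daemon; the AMI ID is a placeholder and the drop-in content is an illustrative stand-in, not the exact files from the original post.

```python
# Minimal sketch: launch CentOS Atomic Host on AWS with cloud-init user-data
# that writes a systemd drop-in for the Docker daemon. The AMI ID is a
# placeholder, and the drop-in content is illustrative only.
import boto3

USER_DATA = """#cloud-config
write_files:
  - path: /etc/systemd/system/docker.service.d/custom.conf
    owner: root:root
    permissions: '0644'
    content: |
      [Service]
      Environment="OPTIONS=--insecure-registry registry.internal:5000"
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, restart, docker]
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-00000000",    # placeholder: a CentOS Atomic Host AMI
    InstanceType="t2.small",
    MinCount=1, MaxCount=1,
    UserData=USER_DATA,        # boto3 base64-encodes this automatically
)
```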
