I’ve been spending some time recently with CentOS Atomic Host, the container-optimized version of CentOS (part of Project Atomic). By default, the Docker Engine on CentOS Atomic Host listens only on a local UNIX socket, and is not accessible over the network. While CentOS has its own particular way of configuring the Docker Engine, I wanted to see if I could—in a very “systemd-like” fashion—make Docker Engine on CentOS listen on a network socket as well as a local UNIX socket. So, I set out with an instance of CentOS Atomic Host and the Docker systemd docs to see what I could do.
The default configuration of Docker Engine on CentOS Atomic Host uses a systemd unit file that references an external environment file; specifically, it references values set in /etc/sysconfig/docker, as you can see from this snippet of the docker.service unit file:
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
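Given that ExecStart line, one “systemd-like” way to put the daemon on a network socket—without editing the packaged unit file—is a drop-in override. The sketch below is an assumption on my part, not taken from the packaged configuration: the drop-in filename, the listen address, and the choice of port 2375 (the conventional unencrypted Docker port) are all illustrative, and exposing an unauthenticated TCP socket like this is only sensible on a trusted network. Note that systemd requires an empty ExecStart= line to clear the original value before redefining it:

```ini
# /etc/systemd/system/docker.service.d/tcp.conf (hypothetical drop-in)
[Service]
# Clear the packaged ExecStart, then redefine it with -H listen flags added
ExecStart=
ExecStart=/usr/bin/dockerd-current \
  --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
  --default-runtime=docker-runc \
  --exec-opt native.cgroupdriver=systemd \
  --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
  -H unix:///var/run/docker.sock \
  -H tcp://0.0.0.0:2375 \
  $OPTIONS \
  $DOCKER_STORAGE_OPTIONS \
  $DOCKER_NETWORK_OPTIONS \
  $ADD_REGISTRY \
  $BLOCK_REGISTRY \
  $INSECURE_REGISTRY
```

After creating the drop-in, a `systemctl daemon-reload` followed by `systemctl restart docker` would pick up the change. Because passing any -H flag replaces the default socket, the UNIX socket has to be listed explicitly alongside the TCP one.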
The $OPTIONS variable, along with the other variables at the end of the ExecStart line, is defined in /etc/sysconfig/docker. Its default value looks like this:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
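Since the unit file already interpolates $OPTIONS into the daemon’s command line, the CentOS-style alternative to a systemd drop-in would be to append the listen flags to OPTIONS itself. This is a sketch under the same assumptions as above—the address and port are examples, and the unencrypted TCP socket should not be exposed on an untrusted network:

```ini
# /etc/sysconfig/docker (modified; the -H values are illustrative)
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375'
```

As with the drop-in approach, the UNIX socket must be named explicitly once any -H flag is present, and the change takes effect after restarting the docker service.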
The word Internet is short for internetwork. It’s just a network of networks. So the more places you can connect those networks, the more robust the whole system is. That’s what Internet Exchange Points (“IXPs”) are: the physical locations where networks interconnect, and a crucial part of the infrastructure of the Internet.
In Europe, IXPs have traditionally been independent and are often run as nonprofits, whereas in North America, they’ve typically been owned and operated by commercial colocation facility operators or Internet Service Providers (ISPs). In the last several years, though, there’s been a movement in the US to build more independent, community-focused IXPs. IX-Denver is part of that movement.
Just five years ago, the infrastructure space was awash in stories about the capabilities cooked into the Hadoop platform—something that was, even then, only a few pieces of code cobbled onto the core HDFS distributed storage with MapReduce serving as the processing engine for analytics at scale.
At the center of many of the stories was Cloudera, the startup that took Hadoop to the enterprise with its commercial distribution of the open source framework. As we described in a conversation last year marking the ten-year anniversary of Hadoop with Doug Cutting, one of its creators at Yahoo, the platform …
Looking Down The Long Enterprise Road With Hadoop was written by Nicole Hemsoth at The Next Platform.
There is an adage, not quite yet old, suggesting that compute is free but storage is not. Perhaps a more accurate and, as far as public clouds are concerned, apt adaptation of this saying might be that compute and storage are free, and so is inbound networking within a region, but moving data in a public cloud is brutally expensive, and it is even more costly spanning regions.
So much so that, at a certain scale, it makes sense to build your own datacenter and create your own infrastructure hardware and software stack that mimics the salient characteristics …
Bouncing Back To Private Clouds With OpenStack was written by Timothy Prickett Morgan at The Next Platform.