This post will provide a quick introduction to a tool called Vagrant. Unless you’ve been hiding under a rock—or, more likely, been too busy doing real work in your data center to pay attention—you’ve probably heard of Vagrant. Maybe, like me, you had some ideas about what Vagrant is (or isn’t) and what it does (or doesn’t). Hopefully I can clear up some of the confusion in this post.
In its simplest form, Vagrant is an automation tool with a domain-specific language (DSL) that is used to automate the creation of VMs and VM environments. The idea is that a user can create a set of instructions, using Vagrant’s DSL, that will set up one or more VMs and possibly configure those VMs. Every time the user runs that precreated set of instructions, the end result looks exactly the same. This can be beneficial for a number of use cases, including developers who want a consistent development environment or folks wanting to share a demo environment with other users.
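To make that concrete, here’s a minimal sketch of the workflow (my example, assuming the VirtualBox provider and the publicly available hashicorp/precise64 box; your box name will vary):

$ vagrant init hashicorp/precise64   # write a Vagrantfile describing the environment
$ vagrant up                         # create and boot the VM via the provider
$ vagrant ssh                        # log in to the running VM
$ vagrant destroy                    # tear it down; "vagrant up" rebuilds it identically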
Vagrant makes this work by using a number of different components:
Providers: These are the “back end” of Vagrant. Vagrant itself doesn’t provide any virtualization functionality; it relies on Continue reading
This is a liveblog for session DATS013, on microservers. I was running late to this session (my calendar must have been off—I thought I had 15 more minutes), so I wasn’t able to capture the titles or names of the speakers.
The first speaker starts out with a review of exactly what a microserver is; Intel sees microservers as a natural evolution from rack-mounted servers to blades to microservers. Key microserver technologies include: Intel Atom C2000 family of processors; Intel Xeon E5 v2 processor family; and Intel Ethernet Switch FM6000 series. Microservers share some common characteristics, such as highly integrated platforms (integrated networking, for example) and designs that emphasize efficiency. Efficiency might be more important than absolute performance.
Disaggregation of resources is a common platform option for microservers. (Once again this comes back to Intel’s rack-scale architecture work.) This leads the speaker to talk about a Technology Delivery Vehicle (TDV) being displayed here at the show; this is essentially a proof-of-concept product that Intel built that incorporates various microserver technologies and design patterns.
Upcoming microserver technologies that Intel has announced or is working on include:
This is a liveblog of IDF 2014 session DATS009, titled “Ceph: Open Source Storage Software Optimizations on Intel Architecture for Cloud Workloads.” (That’s a mouthful.) The speaker is Anjaneya “Reddy” Chagam, a Principal Engineer in the Intel Data Center Group.
Chagam starts by reviewing the agenda, which—as the name of the session implies—is primarily focused on Ceph. He next transitions into a review of the problem with storage in data centers today; specifically, that storage needs “are growing at a rate unsustainable with today’s infrastructure and labor costs.” Another problem, according to Chagam, is that today’s workloads end up using the same sets of data but in very different ways, and those different ways of using the data have very different performance profiles. Other problems with the “traditional” way of doing storage are that storage processing performance doesn’t scale out with capacity and that storage environments grow increasingly complex (which in turn makes management harder).
Chagam does admit that not all workloads are suited for distributed storage solutions. If you need high availability and high performance (like for databases), then the traditional scale-up model might work better. For “cloud workloads” (no additional context/information provided to qualify what a Continue reading
The concepts behind Amazon's Auto Scaling Groups (ASGs) are very promising. Who wouldn't want their infrastructure to scale automatically as demand rises and falls? Plenty of folks are using ASGs to do exactly that today. ASGs do bring their own challenges, though, and this series of blog posts will show solutions to those challenges using features of Ansible and Ansible Tower. Continue reading
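(A bit of added context that's mine, not the series': an ASG is ultimately just an AWS API object, and the raw call that tools like Ansible wrap might look something like this hypothetical AWS CLI sketch, where demo-asg and demo-lc are made-up names.)

$ aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name demo-asg \
    --launch-configuration-name demo-lc \
    --min-size 2 --max-size 10 --desired-capacity 2 \
    --availability-zones us-east-1a us-east-1b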
Following on from my IDF 2014 Day 1 recap, here’s a quick recap of day 2.
You can read the liveblog here if you want all the gory details. If we boil it down to the essentials, it’s actually pretty simple. First, deliver more computing power in the hardware, either through the addition of FPGAs to existing CPUs or through the continued march of CPU power (via more cores or faster clock speeds or both). Second, make the hardware programmable, through standard interfaces. Third, expand the use of “big data” and analytics.
I attended a couple of technical sessions today but didn’t manage to get any of them liveblogged. Sorry! I did tweet a few things from the sessions, in case you follow me on Twitter.
I did have an extremely productive conversation regarding Intel’s rack-scale architecture (RSA) efforts. I pushed the Intel folks on the show floor to really dive into what makes up RSA, and finally got some answers that I’ll share in a separate post. I will do my best to get a dedicated RSA piece published just as soon as I possibly can.
Also on the expo floor, I Continue reading
In this session, Paul Showalter & Karl Matthias from New Relic discuss how they successfully leveraged Docker to get consistent, isolated, custom distributed environments over which they have centralized control, making their continuous deployment processes easy and scalable.
This is a liveblog of the Data Center Mega-Session from day 2 of Intel Developer Forum (IDF) 2014 in San Francisco.
Diane Bryant, SVP and GM of the Data Center Group, takes the stage promptly at 9:30am to kick off the data center mega-session. Bryant starts the discussion by setting out the key drivers affecting the data center: new devices (and new volumes of devices) and new services (AWS, Netflix, Twitter, etc.). This is the “digital service economy,” and Bryant insists that today’s data centers aren’t prepared to handle the digital service economy.
Bryant posits that in the not-so-distant future:
Per Bryant, when you’re operating at scale, efficiency matters, and that will lead organizations to choose platforms selected specifically for the workload. This leads to a discussion of customized offerings, and Bryant talks about an announcement earlier in the summer that combined a Xeon processor and an FPGA (field-programmable gate array) on the same die.
Bryant then introduces Karl Triebes, EVP and CTO of F5 Networks, who takes the stage to talk about FPGAs in F5 and how the joint Xeon/FPGA integrated solution Continue reading
In case you hadn’t noticed, I’m at Intel Developer Forum (IDF) 2014 this week in San Francisco. Here’s a quick recap of day 1 (I should have published this last night—sorry for not getting it out sooner).
Here’s a liveblog of the IDF 2014 day 1 keynote.
The IDF keynotes are always a bit interesting for me. Intel has a very large consumer presence: PCs, ultrabooks, tablets, phones, 2-in-1/convertibles, all-in-1 devices. Naturally, this is a big part of the keynote. I don’t track or get involved in the consumer space; my focus is on the data center. It is kind of fun to see all the stuff going on in the consumer space, though. There were no major data center-centric announcements yesterday (day 1), but I suspect there will be some today (day 2) in a mega-session with Diane Bryant (SVP and GM of the Data Center Group at Intel). I’ll be liveblogging that mega-session, so stay tuned for details.
I was able to hit two technical sessions yesterday and liveblogged both of them:
Both were Continue reading
This is a live blog of session DATS004, titled “Bare-Metal, Docker Containers, and Virtualization: The Growing Choices for Cloud Applications.” The speaker is Nicholas Weaver (yes, that Nick Weaver, who now works at Intel).
Weaver starts his presentation by talking about “how we got here”, discussing the various technological shifts that have affected the computing landscape over the years. Weaver includes a discussion of the drivers behind virtualization as well as the pros and cons of virtualization.
That, naturally, leads to a discussion of containers. Containers are not all that new—Solaris Zones is a form of containers that existed back in 2004. Naturally, the recent hype associated with Docker has, according to Weaver, rejuvenated interest in the concept of containers.
Before Weaver gets too far into containers, he first provides a background on some of the core containerization pieces. This includes cgroups (the ability to control resource allocation/utilization), which is built into the Linux kernel. Namespace isolation is also important; it provides full process isolation (so that one process can’t see processes in another namespace). Namespace isolation isn’t just for processes; there’s also isolation for network entities, mounts, and users. LXC is a set of user-space tools that attempted Continue reading
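(As a quick aside, this is my own illustration, not Weaver’s: on a Linux host with util-linux’s unshare tool installed, you can see PID namespace isolation first-hand.)

$ sudo unshare --fork --pid --mount-proc ps -ef   # in a new PID namespace, ps sees only itself, as PID 1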
This is a liveblog of IDF 2014 session DATS002, titled “Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI)”. The speakers are Brian Johnson (Solutions Architect, Intel) and Jim Pinkerton (Windows Server Architect, Microsoft). I attended a similar session last year; I’m hoping for some new information this year.
Pinkerton starts the session with a discussion of why Microsoft is able to speak to network virtualization via their experience with large-scale web properties (Bing, XBox Live, Outlook.com, Office, etc.). To that point, Microsoft has over 100K servers across their cloud properties, with >200K diverse services, first-party applications, and third-party applications. This amounts to $15 billion in data center investments. Naturally, all of this runs on Windows Server and Windows Azure.
So why does networking need to be transformed for the cloud? According to Pinkerton, the goal is to drive agility and flexibility for your business. This is accomplished by pooling and automating network resources, ensuring tenant isolation, maximizing scale/performance, enabling seamless capacity expansion and workload mobility, and minimizing operational complexity.
Johnson takes over here to talk about how Intel is working to address the challenges and needs that Pinkerton just outlined. This breaks down into three core Continue reading
This is a liveblog for the day 1 keynote at Intel Developer Forum (IDF) 2014. The keynote starts with an interesting musical piece that shows how technology can be used to allow a single performer to emulate the sound of a full band, and then kicks off with a “pocket avatar” presentation by Brian Krzanich, CEO of Intel Corporation. Krzanich takes the stage in person a few minutes later.
Krzanich starts with a recap of some of the discussions from last year’s IDF, and he points out some of the results over the last year. Among the accomplishments Krzanich lists, he mentions that Intel was the #2 shipper of tablets last year. (One would assume that Apple is #1.) Krzanich clearly believes that Intel has a bright future; he points out that projections show as many as 50 billion x86-based devices by 2020 (just 6 years away). That’s pretty massive growth; there are only an estimated 2.2 billion x86-based devices today.
The line-up today includes talks from Diane Bryant (data center), Kirk Skaugen (clients), Doug Fisher (software and services), and a live Q&A by Krzanich.
Krzanich starts a discussion of wearables and related devices with a mention of Continue reading
Just a quick reminder: As of today you have only two weeks to sign up for the very first ever public Docker training class in San Francisco! Here’s your chance to rapidly get up to speed on Docker’s container technology with plenty of first-hand attention. The class, held in a small, intimate setting in downtown San Francisco, will be led by me and the legendary Jérôme Petazzoni. We are both Solutions Engineers at Docker Inc. with strong backgrounds in development and operations. The training will be held September 17th and 18th and will cover a wide range of topics from fundamentals to best practices to orchestration and beyond.
Click here to reserve your spot today!
Following the postmortem of a previous vulnerability announced on June 30th, the Docker team conducted a thorough audit of the platform code base and hired an outside consultancy to investigate the security of the Docker Registry and the Docker Hub. On the morning of 8/22 (all times PST), the security firm contacted our Security Team:
8/22 – Morning: Our Security Team was contacted regarding vulnerabilities that could be exploited to allow an attacker to bypass authorization constraints and modify container image tags stored on the Docker Hub Registry. Even though the reporting firm was unable to immediately provide a working proof of concept, our Security Team began to investigate.
8/22 – Afternoon: Our team confirms the vulnerabilities and begins preparing a fix.
8/22 – Evening: We roll out a hotfix release to production. Additional penetration tests are performed to ensure resolution of these new vulnerabilities. Later, it is discovered that this release introduced a regression preventing some authorized users from pulling their own private images.
8/23 – Morning: A new hotfix is deployed to production, addressing the regression and all known security issues. Our Security Team runs another set of penetration tests against the platform and confirms all issues have Continue reading
Today at VMworld we’re excited to announce a broad partnership with VMware. The objective is to provide enterprise IT customers with joint solutions that combine the application lifecycle speed and environment interoperability of the Docker platform with the security, reliability, and management of VMware infrastructure. To deliver this “better together” solution to customers, Docker and VMware are collaborating on a wide range of product, sales, and marketing initiatives.

Why join forces now? In its first 12 months Docker usage rapidly spread among startups and early adopters who valued the platform’s ability to separate the concerns of application development management from those of infrastructure provisioning, configuration, and operations. Docker gave these early users a new, faster way to build distributed apps as well as a “write once, run anywhere” choice of deployment from laptops to bare metal to VMs to private and public clouds. These benefits have been widely welcomed and embraced, as reflected in some of our adoption metrics:
In its second year, Docker usage continues to spread and is now experiencing mass adoption by enterprise IT organizations. These organizations span Continue reading
Next week, the gigantic VMworld conference kicks off at the Moscone Center in San Francisco, California. If you are attending the conference, come visit us at Docker booth #230, and make sure to attend the following Docker-related talks, demos, discussions, and meetups, where you can meet and chat with fellow Dockerites:
Monday, August 25th:
3:30 PM – 4:30 PM, Moscone West, Room 2014
VMware NSX for Docker, Containers & Mesos by Aaron Rosen (Staff Engineer, VMware) and Somik Behera (NSX Product Manager, VMware)
This session will provide a recipe for architecting massively elastic applications, be they big data applications or developer environments such as Jenkins, on top of VMware SDDC infrastructure. We will describe the use of app isolation technologies such as LXC & Docker together with resource managers such as Apache Mesos & YARN to deliver an open, elastic application platform & PaaS for mainstream apps such as Jenkins as well as specialized big data applications. We will cover a customer case study that leverages VMware SDDC to create an open, elastic PaaS using VMware NSX as the data communication fabric.
5:30 PM – 6:30 PM, Moscone West, Room 2006
VMware and Docker – Better Together by Ben Golub (CEO, Continue reading
In the last blog post about Fig, we showed how you could define and run a multi-container app locally.
We’re now going to show you how you can deploy this app to production. Here’s a screencast of the whole process:
Let’s continue from where we left off in the last blog post. First, we want to put the code we wrote onto GitHub. You’ll need to initialize and commit your code into a new Git repository.
$ git init
$ git add .
$ git commit -m "Initial commit"
Then create a new repository on GitHub and follow the instructions for adding it as a remote to your local repository. For example, if your repository were called bfirsh/figdemo, you’d run these commands:
$ git remote add origin git@github.com:bfirsh/figdemo.git
$ git push -u origin master
Next, you’ll need to get yourself a server to host your app. Any cloud provider will work, so long as it is running Ubuntu and available on a public IP address.
Log on to your server using SSH and follow the instructions for installing Docker and Fig on Ubuntu.
$ ssh root@[your server’s IP address]
# curl -sSL https://get.docker.io/ubuntu/ | Continue reading
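(To round out the picture with my own sketch, not part of the original post: once Docker and Fig are installed on the server, deploying the app would look something like this, reusing the bfirsh/figdemo example repository from above.)

$ git clone https://github.com/bfirsh/figdemo.git
$ cd figdemo
$ fig up -d   # build and start every container defined in fig.yml, detached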
The hardworking folks at Docker, Inc. are proud to announce the release of version 1.2.0 of Docker. We’ve made improvements throughout the Docker platform, including updates to Docker Engine, Docker Hub, and our documentation.
Highlights include these new features:
restart policies
We added a --restart flag to docker run to specify a restart policy for your container. Currently, there are three policies available:
no – do not restart the container if it dies (the default)
on-failure – restart the container if it exits with a non-zero exit code; optionally takes a maximum restart count (e.g., on-failure:5)
always – always restart the container, no matter what exit code is returned
This deprecates the --restart flag on the Docker daemon.
docker run --restart=always redis
docker run --restart=on-failure:5 redis
--cap-add / --cap-drop
Currently, Docker containers can either be given complete capabilities or they can follow a whitelist of allowed capabilities while dropping all others. Further, using --privileged previously granted all capabilities inside a container, rather than applying a whitelist. This was not Continue reading
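(Here’s a rough sketch of the new flags in action; these are my examples, and NET_ADMIN/CHOWN are just illustrative capability choices.)

docker run --cap-add=NET_ADMIN ubuntu sh -c "ip link set eth0 down"   # grant only the network-admin capability
docker run --cap-drop=ALL --cap-add=CHOWN ubuntu chown nobody /tmp    # drop everything, then whitelist one capability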