Archive

Category Archives for "Systems"

Carrier-Grade SDN-Based OpenStack Networking Solution

This session was titled “Carrier-Grade SDN Based OpenStack Networking Solution,” led by Daniel Park and Sangho Shin. Both Park and Shin are from SK Telecom (SKT), and (based on the description) this session is a follow-up to a session from the Boston summit where SK Telecom talked about an SDN-based networking solution they’d developed and released for use in their own 5G-based network.

Shin starts the session with some presenter introductions, and sets the stage for the presentation. Shin first provides some background on SKT, and discusses the steps that SKT has been taking to prepare their network for 5G infrastructure/services. This involves more extensive use of virtual network functions (VNFs) and software-defined infrastructure based on open software and open hardware. Shin reinforces that the SKT project (which is apparently called COSMOS?) exclusively leverages open source software.

Diving into a bit more detail, Shin talks about SONA Fabric (which is used to control the leaf/spine fabric used as the network underlay), SONA (which handles virtual network management), and TACO (which is an SKT-specific version of OpenStack). The network monitoring solution is called TINA, and this feeds into an integrated monitoring system known as 3DV.

TACO (stands for SKT All Continue reading

Can OpenStack Beat AWS in Price

This is a liveblog of the session titled “Can OpenStack Beat AWS in Price: The Trilogy”. The presenters are Rico Lin, Bruno Lago, and Jean-Daniel Bonnetot. The “trilogy” refers to the third iteration of this presentation; each time the comparison has been done in a different geographical region (first in Europe, then in North America, and finally here in Asia-Pacific).

Lago starts the presentation with an explanation of the session, and each of the presenters introduces themselves, their companies, and their backgrounds. In this particular case, the presenters are representing Catalyst (which runs Catalyst Cloud in New Zealand), OVH, and EasyStack—all three offer OpenStack-powered public clouds.

Lago explains that they’ll cover three common OpenStack scenarios:

  • Private cloud
  • Managed private cloud
  • OpenStack-powered public cloud

Lin takes point to talk a bit about price differences in different geographical regions. Focusing on AWS, Lin points out that AWS services are about 8% higher in Europe than in North America. Moving to APAC, AWS services are about 29% higher than in North America. With this 29% price increase, I can see where OpenStack might be much more competitive in APAC than in North America (and this, in turn, may explain why OpenStack seems much Continue reading

Lessons Learnt from Running a Container-Native Cloud

This is a liveblog of the session titled “Lessons Learnt from Running a Container-Native Cloud,” led by Xu Wang. Wang is the CTO and co-founder of Hyper.sh, a company that has been working on leveraging hypervisor isolation for containers. This session claims to discuss some lessons learned from running a cloud leveraging this sort of technology.

Wang starts with a brief overview of Hyper.sh. The information for this session comes from running a Hypernetes (Hyper.sh plus Kubernetes)-based cloud for a year.

So, what is a “container-native” cloud? Wang provides some criteria:

  • A container is a first-class citizen in the cloud. This means container-level APIs and the ability to launch containers without a VM.
  • The cloud offers container-centric resources (floating IPs, security groups, etc.).
  • The cloud offers container-based services (load balancing, scheduled jobs, functions, etc.).
  • Billing is handled on a per-container level (not on a VM level).

To be honest, I don’t see how any cloud other than Hyper.sh’s own offering could meet these criteria; none of the major public cloud providers (Microsoft Azure, AWS, GCP) currently satisfy Wang’s requirements. A “standard” OpenStack installation doesn’t meet these requirements. This makes the session more like a Continue reading

Make Your Application Serverless

This is a liveblog from the last day of the OpenStack Summit in Sydney, Australia. The session, titled “Make Your Application Serverless,” discusses Qinling, a project for serverless (Functions-as-a-Service, or FaaS) architectures/applications on OpenStack. The presenters for the session are Lingxian Kong and Feilong Wang from Catalyst Cloud.

Kong provides a brief background on himself and his co-presenter (Wang), and explains that Catalyst Cloud is an OpenStack-based public cloud based in New Zealand. Both presenters are active technical contributors to OpenStack projects.

Kong quickly transitions into the core content of the presentation, which focuses on serverless computing and Qinling, a project for implementing serverless architectures on OpenStack. Kong points out that serverless computing doesn’t mean there are no servers, only that the servers (typically VMs) are hidden from view; Functions-as-a-Service, or FaaS, is the term Kong prefers. He next provides an example of how a FaaS architecture may benefit applications, and contrasts solutions like AutoScaling Groups (or the equivalent in OpenStack) with FaaS.

Some key characteristics of serverless, as summarized by Kong:

  • No need to think about servers
  • Run your code, not the whole application
  • Highly available and horizontally scalable
  • Stateless/ephemeral
  • Lightweight/single-purpose functions
  • Event-driven Continue reading

How to Deploy 800 Servers in 8 Hours

This is a liveblog of the session titled “How to deploy 800 nodes in 8 hours automatically”, presented by Tao Chen with T2Cloud (Tencent).

Chen takes a few minutes, as is customary, to provide an overview of his employer, T2cloud, before getting into the core of the session’s content. Chen explains that the need to deploy such a large number of servers was driven in large part by the Spring Festival travel rush, an annual surge in travel that creates high traffic load for about 40 days.

The “800 servers” count included 3 controller nodes, 117 storage nodes, and 601 compute nodes, along with some additional bare metal nodes supporting Big Data workloads. All these nodes needed to be deployed in 8 hours or less in order to allow enough time for T2cloud’s customer, China Railway Corporation, to test and deploy applications to handle the Spring Festival travel rush.

To help with the deployment, T2cloud developed a “DevOps” platform consisting of six subsystems: CMDB, OS installation, OpenStack deployment, task management, automation testing, and health check/monitoring. Chen doesn’t go into great detail on any of these subsystems, but the slide he shows does give away some information: Continue reading

IPv6 Primer for Deployments

This is a liveblog of the OpenStack Summit Sydney session titled “IPv6 Primer for Deployments”, led by Trent Lloyd from Canonical. IPv6 is a topic with which I know I need to get more familiar, so attending this session seemed like a reasonable approach.

Lloyd starts with some history. IPv4 was released in 1980, and uses 32-bit addresses (with a total address space of around 4 billion). IPv4, as most people know, is still used for the majority of Internet traffic. IPv6 was released in 1998, and uses 128-bit addresses (for a theoretical total address space of 3.4 x 10 to the 38th power). IPv5 was an experimental protocol, which is why the IETF used IPv6 as the version number for the next production version of the IP protocol.
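
As a quick sanity check on those figures (my own arithmetic, not part of the talk), the address-space sizes fall directly out of the bit widths:

python3 -c 'print(2**32)'            # 4294967296, roughly 4.3 billion IPv4 addresses
python3 -c 'print(f"{2**128:.1e}")'  # 3.4e+38 possible IPv6 addresses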

Lloyd shows a graph of the depletion of IPv4 address space, to help attendees better understand the situation with IPv4 address allocation. The next graph Lloyd shows illustrates IPv6 adoption, which—according to Google—is now running around 20% or so. (Lloyd shared that he naively estimated IPv4 would be deprecated in 2010.) In Australia it’s still pretty difficult to get IPv6 support, according to Lloyd.

Next, Lloyd reviews decimal and Continue reading

Battle Scars from OpenStack Deployments

This is the first liveblog from day 2 of the OpenStack Summit in Sydney, Australia. The title of the session is “Battle Scars from OpenStack Deployments.” The speakers are Anupriya Ramraj, Rick Mathot, and Farhad Sayeed (two vendors and an end-user, respectively, if my information is correct). I’m hoping for some useful, practical, real-world information out of this session.

Ramraj starts the session, introducing the speakers and setting some context for the presentation. Ramraj and Mathot are with DXC, a managed services provider. Ramraj starts with a quick review of some of the tough battles in OpenStack deployments:

  • Months to deploy OpenStack at scale
  • Chaos during incidents due to lack of OpenStack skills and knowledge
  • Customers spend lengthy periods with support for troubleshooting basic issues
  • Applications do not get onboarded to OpenStack
  • Marooned on earlier version of OpenStack
  • OpenStack skills are hard to recruit and retain

Ramraj recommends using an OpenStack distribution versus “pure” upstream OpenStack, and recommends using new-ish hardware as opposed to older hardware. Given the last bullet above (the difficulty of recruiting and retaining OpenStack skills), rolling out OpenStack and resolving OpenStack issues becomes more complicated. A lack of DevOps skills and a lack of understanding around OpenStack APIs can impede the process of porting applications Continue reading

Kubernetes on OpenStack: The Technical Details

This is a liveblog of the OpenStack Summit session titled “Kubernetes on OpenStack: The Technical Details”. The speaker is Angus Lees from Bitnami. This is listed as an Advanced session, so I’m hoping we’ll get into some real depth in the session.

Lees starts out with a quick review of Bitnami, and briefly clarifies that this is not a talk about OpenStack on Kubernetes (i.e., using Kubernetes to host the OpenStack control plane); instead, this is about Kubernetes on OpenStack (OpenStack as IaaS, Kubernetes to do container orchestration on said IaaS).

Lees jumps quickly into the content, providing a “compare-and-contrast” of Kubernetes versus OpenStack. One of the key points is that Kubernetes is more application-focused, whereas OpenStack is more machine-focused. Kubernetes’ multi-tenancy story is shaky/immature, and the nature of containers means there is a larger attack surface (VMs provide a smaller attack surface than containers). Lees also points out that Kubernetes is implemented mostly in Golang (versus Python for OpenStack), although I’m not really sure why this matters (unless you are planning to contribute to one of these projects).

Lees next provides an overview of the Kubernetes architecture (Kubernetes master node containing API server talking to controller manager Continue reading

Issues with OpenStack That Are Not OpenStack Issues

This is a liveblog of the OpenStack Summit session on Monday afternoon titled “Issues with OpenStack that are not OpenStack Issues”. The speaker for the session is Sven Michels. The premise of the session, as I understand it, is to discuss issues that arise during OpenStack deployments that aren’t actually issues with OpenStack (but instead may be issues with process or culture).

Michels starts with a brief overview of his background, then proceeds to position today’s talk as a follow-up (of sorts) to a presentation he did in Boston. At the Boston Summit, Michels discussed choosing an OpenStack distribution for your particular needs; in this talk, Michels will talk about some of the challenges around “DIY” (Do It Yourself) OpenStack—that is, OpenStack that is not based on some commercial distribution/bundle.

Michels discusses that there are typically two approaches to DIY OpenStack:

  • The “Donald” approach leverages whatever is around, including older hardware.
  • The “Scrooge” approach is one in which money is available, which typically means newer hardware.

Each of these approaches has its own challenges. With older hardware, it’s possible you’ll run into older firmware that may not be supported by Linux, or hardware that no longer works as expected. With new hardware, Continue reading

To K8s or Not to K8s Your OpenStack Control Plane

This is a liveblog of the Monday afternoon OpenStack Summit session titled “To K8s or Not to K8s Your OpenStack Control Plane”. The speaker is Robert Starmer of Kumulus Technologies. This session is listed as a Beginner-level session, so I’m hoping it’s not too basic for me (and that readers will still get some value from the liveblog).

Starmer begins with a quick review of his background and expertise, and then proceeds to provide—as a baseline—an overview of containers and Kubernetes for container orchestration. Starmer covers terminology and concepts like Pods, Deployments (and Replica Sets), Services, StatefulSets, and Persistent Volumes. Starmer points out that StatefulSets and Persistent Volumes are particularly applicable to the discussion about using Kubernetes to handle the OpenStack control plane. Following the discussion of Kubernetes components, Starmer points out that the Kubernetes architecture is designed to be resilient, talking about the use of etcd as a distributed state storage system, multiple API servers, separate controller managers, etc.
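
To make the StatefulSet point a bit more concrete (a hypothetical example of my own, not from Starmer’s session; the namespace and service names are made up), a database run as a StatefulSet gets stable pod names and its own PersistentVolumeClaim per pod, which is exactly what a stateful OpenStack control plane service needs:

kubectl -n openstack get statefulset mariadb    # one StatefulSet for the database service
kubectl -n openstack get pods -l app=mariadb    # pods with stable identities: mariadb-0, mariadb-1, ...
kubectl -n openstack get pvc                    # one PersistentVolumeClaim per pod, retained across restarts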

Next, Starmer spends a few minutes talking about Kubernetes networking and some of the components involved, followed by a high-level discussion around persistent volumes and storage requirements, particularly for StatefulSets.

Having covered Kubernetes, Starmer now starts talking about the requirements Continue reading

OpenStack Summit Sydney Day 1 Keynote

This is a liveblog of the day 1 keynote here at the OpenStack Summit in Sydney, Australia. I think this is my third or fourth trip to Sydney, and this is the first time I’ve run into inclement weather; it’s cloudy, rainy, and wet here, and forecasted to remain that way for most of the Summit.

At 9:02am, the keynotes (there are actually a set of separate keynote presentations this morning) kick off with a video featuring Technical Committee members, community members, and others talking about the OpenStack community, the OpenStack projects, and the Summit itself. At 9:05am, the founders of the Australian OpenStack User Group—Tristan Goode and Tom Fifield—take the stage to kick off the general session. Goode and Fifield take a few minutes to talk about the history of the Australian OpenStack User Group and the evolution of the OpenStack community in Australia. Goode also takes a few moments to talk about his company, Aptira.

After a few minutes, Goode and Fifield turn the stage over to Mark Collier and Lauren Sell from the OpenStack Foundation. Collier and Sell set the stage for the upcoming presentations, do some housekeeping announcements, and talk about sponsors and support partners. Sell Continue reading

Modernizing Java Apps with Docker

Modernizing Traditional Applications, or MTA, was one of the themes at DockerCon EU 2017. Traditional applications were typically built a number of years ago but are still critical to business operations. The developer and operational skill set to maintain the application may be hard to find. The code base can be difficult to maintain, if it is available at all. The team that wrote the original app may not even be around. The applications go into maintenance mode, which may mean they are patched regularly for vulnerabilities. Any revisions to the code can take a significant number of hours to test and deploy, so updates are infrequent. It can also hold back infrastructure improvements as dependency management becomes a huge pain point.

Any modern application needs faster delivery that can adapt to changes in market conditions. The pipeline from pushing the code to a source code repository to the application delivery should be efficient, automated, secure and fast. The application should be able to scale to demand, typically done by horizontal scaling across multiple instances. Portability of the application across different infrastructure becomes key in that case. In case of a failure, MTTR should be short and Continue reading

A Sublime Text Keymap for Bracketeer

I’ve made no secret of the fact that I’m a fan of Sublime Text (ST). I’ve evaluated other editors, like Atom, but still find that ST offers the right blend of performance, functionality, customizability, and cross-platform support. One nice thing about ST (other editors have this too) is the ability to extend it via packages. Bracketeer is one of many packages that can be used to customize ST’s behavior; in this post, I’d like to share a keymap I’m using with Bracketeer that I’ve found very helpful.

Bracketeer is a package that modifies ST’s default bracketing behavior. I first started using Bracketeer to help with writing Markdown documents, as it makes adding brackets (or parentheses) around existing text easier (it automatically advances the insertion point after the closing bracket). After using Bracketeer for a little while, I realized I could extend the keymap for Bracketeer to have it also help me with “wrapping” text in backticks and a few other characters. I did this by adding this line to the default keymap:

{
  "keys": [ "`" ],
  "command": "bracketeer",
  "args": {
    "braces": "``",
    "pressed": "`"
  }
}

With this line in the keymap, I could select some text, press Continue reading

Intesa Sanpaolo Builds a Resilient Foundation for Banking With Docker Enterprise Edition

Intesa Sanpaolo is the largest bank in Italy and maintains a network of over 5,000 banking branches across Europe and North Africa. With nearly 19 million customers and €739 billion in assets, Intesa Sanpaolo is an integral part of the financial fabric and as such, Italian regulations require that they keep their business and applications online to serve their customers.

Intesa Sanpaolo can trace its roots back to the early 1800s, and the majority of its edge applications are still monolithic and hard to move between data centers, never mind migrate to the cloud. Diego Braga, Intesa IT Infrastructure Architect, looked to Docker Enterprise Edition (EE) at the recommendation of his Kiratech business partner Lorenzo Fontana to improve their application availability and portability, and to add cloud-friendly application delivery. With Docker EE, Intesa was able to consolidate infrastructure by nearly 60%, thus saving significant money over their previous design, while also enabling higher application availability across regional data centers and preparing themselves for the cloud.

Docker Enterprise

Prior to the Docker EE implementation, not only were the applications monoliths, but Intesa maintained two separate data centers as mirrors of each other to achieve high availability. This design required excess, cold standby hardware capacity in Continue reading

Tips and Tricks of the Docker Captains

My talk at DockerCon EU was designed to provide the audience with a bunch of tips for making the most of Docker. The tips were inspired by suggestions, blogs and presentations by other Docker Captains as well as members of the larger Docker community.

The motivation for the talk was to enable users to quickly gain a higher level of proficiency and understanding in Docker. The metaphor I use is with traditional carpentry tools; whilst a novice can pick up a saw and cut a piece of wood, an expert will be able to do the same job more quickly, more accurately, and with less frustration. The reason is partly experience, but also because the expert has a more thorough understanding of, and affinity with, her tools. The tips in my talk are designed both to reduce frustration and increase efficiency when working with Docker.

To give an example, one of the tips I present is on configuring the `docker ps` output format. By default `docker ps` prints out a really long line that looks messy except on the widest of terminals. You can fix this by using the `--format` argument to pick what fields you’re interested in e. Continue reading
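
To give a rough idea of what that looks like (a sketch of the kind of format string involved, not necessarily the exact one from my slides):

# Show only the columns you care about, laid out as a table
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Status}}"

# The same Go template can be made permanent via the "psFormat" key in ~/.docker/config.json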

Strange Error with the Azure CLI

Over the last week or so, I’ve been trying to spend more time with Microsoft Azure; specifically, around some of the interesting things that Azure is doing with containers and Kubernetes. Inspired by articles such as this one, I thought it would be a pretty straightforward process to use the Azure CLI to spin up a Kubernetes cluster and mess around a bit. Simple, right?

Alas, it turned out not to be so simple (if it had been simple, this blog post wouldn’t exist). The first problem I ran into was upgrading the Azure CLI from version 2.0.13 to version 2.0.20 (which is, to my understanding, the minimum version needed to do what I was trying to do). I’d installed the Azure CLI using this process, so pip install azure-cli --upgrade should take care of it. Unfortunately, on two out of three systems on which I attempted this, the Azure CLI failed to work after the upgrade. I was only able to fix the problem by completely removing the Azure CLI (which I’d installed into a virtualenv), and then re-installing it. First hurdle cleared!
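
For the record, the remove-and-reinstall fix amounted to something along these lines (a rough sketch; the virtualenv path is illustrative, and your Python/virtualenv tooling may differ):

# Delete the virtualenv holding the broken Azure CLI install, then rebuild it
rm -rf ~/.virtualenvs/azure-cli
python3 -m venv ~/.virtualenvs/azure-cli
source ~/.virtualenvs/azure-cli/bin/activate
pip install azure-cli
az --version    # should now report 2.0.20 or later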

With the Azure CLI upgraded, I proceeded to Continue reading

Modernizing Applications from PoC to Production with Docker Enterprise Edition

Containerizing a single legacy application with Docker Enterprise Edition (EE) can be quite simple and immediately makes the application more portable, scalable, and easier to manage and update. Taking this application to production requires additional planning, collaboration with security teams, and performance testing, and it likely requires detailed operations and disaster recovery plans. This part of the process often has less to do with the technology than with changes to the organization and governance model.

At DockerCon Europe, I presented a talk on the best practices and processes for taking a containerized legacy app or set of apps from a proof of concept (PoC) to production with these changes in mind. You can watch the full talk here:

A Prescriptive and Repeatable Methodology

The Modernize Traditional Apps (MTA) Program is the result of working with hundreds of companies over the years on deploying and using Docker Enterprise Edition (EE). Those experiences have been transformed into a prescriptive methodology with best practices and considerations to help you get successfully from PoC to production.

After the PoC there is a short assessment phase covering the existing organization, tools, and processes, followed by choosing the pilot application that is representative of the application Continue reading

How GlaxoSmithKline is Accelerating Science with Docker Enterprise Edition

GlaxoSmithKline is a global pharmaceutical company headquartered in the United Kingdom. Their company mission is “to help people do more, feel better, and live longer”. One way they are doing that is by using data science to find new drug formulations that can improve lives. At DockerCon Europe, Ranjith Raghunath, the director of Big Data Solutions, and Lindsay Edwards, the head of Respiratory Data Sciences at GlaxoSmithKline, presented how Docker Enterprise Edition (Docker EE) is helping them accelerate drug discovery through a project called Edge Node On Demand.

Letting Science Drive Technology at GlaxoSmithKline

Leveraging Data Science for Improved Outcomes

The biggest challenge in pharmaceutical research is that hundreds of drug formulations need to be created to take one successfully to market; only 3% of formulated molecules actually become medicine. Lindsay Edwards heads a Data Science group that is focused on respiratory illnesses like Chronic Obstructive Pulmonary Disease (COPD) and asthma. His group uses big data analytics to mine research data and previous patient trial data to arrive at results more rapidly.

However, data science is a new and emerging field. There are new software tools and open source data analytics solutions coming to market all the time and different hardware Continue reading

Meet Docker Enthusiasts Near You!

Docker Community

We’ve made it easier for you to find an event near you!

As we continue to grow, we’ve been thinking of ways to better serve the Docker community and give more visibility to all the different events and activities available and to the people who organize them. In collaboration with the Docker Community Leaders (formerly known as Meetup Organizers), we’re excited to launch a brand new event platform to better highlight local Docker communities and the awesome events happening all over the world.

Check out the Docker Community event site!

From meetups to DockerCon and Docker hosted webinars to workshops, you’ll be sure to find something just for you! Instead of going to multiple places to find event information, it’s now all here in one convenient location on docker.com. 

What is a Docker Local Chapter?

Docker Local Chapters are user groups that host free, community-led gatherings for Docker enthusiasts typically in the form of a meetup. The topics and agenda are curated by each local chapter’s community leader. The result: learning and best practices for everyone. Instead of visiting meetup.com, you can now check out a city chapter page to see who the leaders, campus ambassadors, Continue reading
