Archive

Category Archives for "Systems"

Getting Started With Automation Analytics


With the upcoming release of the Red Hat Ansible Automation Platform come new Software as a Service (SaaS) offerings, one of which is Automation Analytics. This application provides a visual dashboard, health notifications, and organization statistics for your Ansible automation. Automation Analytics works across multiple Ansible Tower clusters, allowing holistic analytics across your entire infrastructure.

When talking to the community and our customers, a question that often comes up is: “How do we measure success?” Automation Analytics provides key data on Job Template usage, Ansible module usage, organizational comparisons across your enterprise, and much more. This data can be used to assess usage, success criteria, and even chargebacks between different groups. This blog post will outline how to get started with Automation Analytics and begin collecting data right away.

 

What you need to get started:

  • Red Hat Ansible Tower 3.5.3 or newer
  • An active Red Hat Ansible Automation Platform subscription
  • A Red Hat Ansible Tower instance that can reach https://cloud.redhat.com 

    Ansible Automation Platform terminology

    There are some terms used in this blog post that may be unfamiliar Continue reading

    Getting Started With Automation Hub


    In the past, Ansible content such as roles, modules, and plugins was usually consumed in two ways: the modules were part of the Ansible package, and roles could be found in Galaxy. However, as time went on, this method of content distribution ran into scaling challenges for both contributors and consumers of Ansible content. Dylan described this in a blog post worth reading.

    Recent releases of Ansible started a journey towards better content management. In previous Ansible releases, each and every module was strictly tied to the release schedule of Ansible, and community, customer, and partner feedback demonstrated that the release schedule of content needed to evolve. Ansible Content Collections allow our Ansible contributors to create specialized content without being tied to a specific release cycle of the Ansible product, making it easier to plan and deliver. For Ansible newcomers, the collections come “pre-packaged” with modules and playbooks around common use cases like networking and security, making it easier to get off the ground with Ansible. If you want to learn more about Ansible Content Collections, check out our series about collections!

    The introduction of collections to the Ansible ecosystem solves a number of challenges for access to Continue reading

    Getting Started With Ansible Content Collections


    With the release of Red Hat Ansible Automation Platform, Ansible Content Collections are now fully supported. Ansible Content Collections, or collections, represent the new standard of distributing, maintaining and consuming automation. By combining multiple types of Ansible content (playbooks, roles, modules, and plugins), flexibility and scalability are greatly improved.

     

    Who Benefits?

    Everyone!

    Traditionally, module creators have had to wait for their modules to be marked for inclusion in an upcoming Ansible release, or have had to add them to roles, which made consumption and management more difficult. By shipping modules within Ansible Content Collections along with pertinent roles and documentation, and removing the barrier to entry, creators are now able to move as fast as the demand for their creations. For a public cloud provider, this means that new functionality for an existing service, or a new service altogether, can be rolled out along with the ability to automate it.

    For the automation consumer, this means that fresh content is continuously made available for consumption. Managing content in this manner also becomes easier as modules, plugins, roles, and docs are packaged and tagged with a collection version. Modules can be updated, renamed, improved upon; roles can be updated to Continue reading


    Docker’s Next Chapter: Advancing Developer Workflows for Modern Apps

    Today we start the next chapter in the Docker story, one that’s focused on developers. That we have the opportunity to write this next chapter is thanks to you, our community, for without you we wouldn’t be here. And while our focus on developers builds on recent history, it’s a focus also grounded in Docker’s beginning.

    In The Beginning

    When Solomon Hykes, Docker’s founder, unveiled the Docker project in 2013, he succinctly stated the problem Docker aimed to solve: “for a developer, shipping code to the server is hard.” To address this, Docker abstracted away the OS kernel’s complex container primitives, provided a developer-friendly, CLI-based workflow, and defined an immutable, portable image format. The result transformed how developers work, making it much easier to build, ship, and run their apps on any server. So while container primitives had existed for decades, Docker democratized them and made them as easy to use as

    docker run hello-world

    The rest is history. Over the last six years, Docker containerization catalyzed the growth of microservices-based applications, enabled development teams to ship apps many times faster and accelerated the migration of apps from the data center to the cloud. Far from a Docker-only effort, a Continue reading

    Using Kustomize with Cluster API Manifests

    A topic that’s been in the back of my mind since writing the Cluster API introduction post is how someone could use kustomize to modify the Cluster API manifests. Fortunately, this is reasonably straightforward. It doesn’t require any “hacks” like those needed to use kustomize with kubeadm configuration files, but similar to modifying kubeadm configuration files you’ll generally need to use the patching functionality of kustomize when working with Cluster API manifests. In this post, I’d like to take a fairly detailed look at how someone might go about using kustomize with Cluster API.

    By the way, readers who are unfamiliar with kustomize should probably read this introductory post first, and then read the post on using kustomize with kubeadm configuration files. I suggest reading the latter post because it provides an overview of how to use kustomize to patch a specific portion of a manifest, and you’ll use that functionality again when modifying Cluster API manifests.
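To make the patching idea concrete, here is a minimal sketch of the kind of overlay this involves. The file names, cluster name, and CIDR below are illustrative, and the apiVersion should be checked against your installed Cluster API release:

```yaml
# kustomization.yaml -- pulls in a base Cluster API manifest and patches it
resources:
  - cluster.yaml
patchesStrategicMerge:
  - cluster-patch.yaml
```

```yaml
# cluster-patch.yaml -- overrides only the pod network CIDR of the Cluster
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: workload-cluster-1   # must match the name used in cluster.yaml
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - "172.16.0.0/16"
```

Running `kustomize build .` in that directory emits the base manifest with only the patched fields changed.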

    A Fictional Use Case

    For this post, I’m going to build out a fictional use case/scenario for the use of kustomize and Cluster API. Here are the key points to this fictional use case:

    1. Three different clusters on AWS are needed. The Continue reading

    Celebrating Veterans Day: Docker Employee Profiles

    On Veterans Day, and every day, we give thanks to our veterans. We are fortunate to have Brent Salisbury, Siobhan Casey, and Johnny Gonzalez as Docker colleagues who served in the United States Marine Corps Reserve, the United States Army Reserve, and the United States Marine Corps, respectively. Thank you all for your service, hard work, and dedication. As a thank you for their service, we’re profiling them on our blog.

    Brent Salisbury, Software Alliance Engineer

    Brent Salisbury was in the United States Marine Corps Reserve from 1996-2002. Now, he is a Software Alliance Engineer at Docker. You can follow him on Twitter @networkstatic. 

    What is your job? 

    Software Alliance Engineer.

    How long have you worked at Docker?

    4.5 years.

    Is your current role one that you always intended on your career path? 

    Data Networking has been my passion since college. Working at Docker has afforded me the opportunity to help usher in a new software paradigm in what can be achieved in host networking and security versus the traditional proprietary hardware models of the past.

    What is your advice for someone entering the field?

    It may sound cliche, but find your passion. Everyone in technology is Continue reading

    A Roadmap for Building Modern Applications


    No matter what industry you’re in, your application modernization strategy matters. Overlooking or downplaying its importance is a quick way for customers to sour and competitors to gain an edge. It’s why 91% of executives believe their revenues will take a hit without successful digital transformation.

    The good news is modern applications offer a clear path forward. Creating a roadmap for your modern application strategy is a critical step toward a more agile and continuous model of software development and delivery – one that’s centered on delivering perpetually expanding value and new experiences to customers. 

    This is the first in a series of blog posts in which we will look at industry viewpoints, different approaches, underlying platforms, and real-world stories that are foundational to successful modern application development, in order to provide a roadmap for application modernization.

    What’s in Your Environment? 

    The technology inventory at companies today is as diverse, distributed and complex as ever. It includes a variety of technology stacks, application frameworks, services and languages. During a modernization process, new Open Source technologies are often integrated with legacy solutions. Existing applications need to be maintained and enhanced, modern applications need to be Continue reading

    Depend on Docker for Kubeflow

    Run Kubeflow natively on Docker Desktop for Mac or Windows

    This is a guest post by Alex Iankoulski, Docker Captain and full stack software and infrastructure architect at Shell New Energies. The views expressed here are his own and are neither opposed nor endorsed by Shell or Docker.

    In this blog, I will show you how to use Docker Desktop for Mac or Windows to run Kubeflow. To make this easier, I used my Depend on Docker project, which you can find on GitHub.

    Rationale

    Even though we are experiencing a tectonic shift of development workflows in the cloud era towards hosted and remote environments, a substantial amount of work and experimentation still happens on developers’ local machines. The ability to scale down allows us to mimic a cloud deployment locally and enables us to play, learn quickly, and make changes in a safe, isolated environment. A good example of this rationale is provided by Kubeflow and MiniKF.

    Overview

    Since Kubeflow was first released by Google in 2018, adoption has increased significantly, particularly in the data science world for orchestration of machine learning pipelines. There are various ways to deploy Kubeflow both on desktops and servers as described in Continue reading

    For Liberty Mutual, the Openness and Flexibility of the Cloud Means Better Business Outcomes

    We had the chance recently to sit down with the Liberty Mutual Insurance team at their Portsmouth, New Hampshire offices and talk about how they deliver better business outcomes with the cloud and containerization.

    At this point, Liberty Mutual has moved about 30 percent of their applications to the cloud. One of the big improvements the team has seen with the cloud and Docker is the speed at which developers can develop and deploy their applications. That means better business outcomes for Liberty Mutual and its customers.

    Here’s what they told us. You can also catch the highlights in this two-minute video:

    On how tech is central to Liberty Mutual’s business

    Mark Cressey, SVP and GM, IT Hosting Services: Tech and the digitization it’s allowed has really enabled Liberty Mutual to get deeply ingrained in our customers’ lives and support them through their major life journeys. We’re able to be more predictive of our customers’ needs and get in front of them as a proactive step. How can we help? How can we assist you? Is this the right coverage? And even to the point where, using real-time information, we can warn them about approaching windstorms or warn our Continue reading

    Docker’s Recommended Sessions for KubeCon 2019

    The Docker team is gearing up for another great KubeCon this year in San Diego, November 17-21. As a Platinum sponsor of this year’s event, we are excited to bring Docker employees, community members and Docker captains together to demonstrate and celebrate  the combined impact of Docker and Kubernetes.

    Stop by Booth P37 to learn how to leverage the Docker platform to securely build, share and run modern applications for any Kubernetes environment. We will demonstrate Docker Desktop Enterprise and how it accelerates container application development while supporting developer choice. Experts will be on hand to answer questions about Docker Kubernetes Service (DKS), a secure and production-ready Kubernetes environment. Or come to learn more about Docker’s contributions to Kubernetes while picking up some great Docker swag.

    Learn More from Docker Experts

    KubeCon will also provide a great opportunity to learn from industry experts and hear from people who run production applications on Kubernetes. Here’s a helpful guide from the Docker team of our recommended talks:

    Monday, Nov 18

    Tuesday, Nov 19

    Learn About Modern Apps on Azure with Docker at Microsoft Ignite

    The Docker team will be on the show floor at Microsoft Ignite the week of November 4. We’ll be talking about the state of modern application development, how to accelerate innovation efforts, and the role containerization, Docker, and Microsoft Azure play in powering these initiatives.

    Come by booth #2414 at Microsoft Ignite to check out the latest developments in the Docker platform. Learn why over 1.8 million developers build modern applications on Docker, and over 800 enterprises rely on Docker Enterprise for production workloads. 

    At Microsoft Ignite, we will be talking about:

    How to Develop and Deliver Modern Applications for Azure Kubernetes Service (AKS)

    Docker Enterprise 3.0 shipped back in April 2019, making it the first and only desktop-to-cloud container platform on the market that lets you build and share any application and securely run it anywhere – from hybrid cloud to the edge. At Microsoft Ignite, we’ll have demos that show how Docker Enterprise 3.0 simplifies Kubernetes for Azure Kubernetes Service (AKS) and enables companies to more easily build modern applications with Docker Desktop Enterprise and Docker Application

    Learn how to accelerate your journey to the cloud with Docker’s Dev Team Starter Bundle for Continue reading

    Don’t Be Scared of Kubernetes

    5 Reasons You Might Be Afraid to Get Started with Kubernetes

    Kubernetes has the broadest capabilities of any container orchestrator available today, which adds up to a lot of power and complexity. That can be overwhelming for a lot of people jumping in for the first time – enough to scare people off from getting started. There are a few reasons it can seem intimidating:

    • It’s complicated, isn’t it? As we noted in a previous post, jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious. If you’ve never done more than play a flight simulator game, it can be downright scary.
    • Is it production-ready? Everyone is talking about Kubernetes, but it’s only emerged as a major technology in the past few years. Many companies take a wait-and-see approach on new technologies. Building out a Kubernetes deployment on your own means solving challenging problems without enterprise support. 
    • Do I have the people and skills to support it? IT teams are just beginning to learn Kubernetes. If it’s complicated, it means you’ll need people with the right experience to support it. According to industry Continue reading

    Understanding Kubernetes Security on Docker Enterprise 3.0

    This is a guest post by Javier Ramírez, Docker Captain and IT Architect at Hopla Software. You can follow him on Twitter @frjaraur or on Github.

    Docker began including Kubernetes with Docker Enterprise 2.0 last year. The recent 3.0 release includes CNCF Certified Kubernetes 1.14, which has many additional security features. In this blog post, I will review Pod Security Policies and Admission Controllers.

    What are Kubernetes Pod Security Policies?

    Pod Security Policies are rules created in Kubernetes to control security in pods. A pod will only be scheduled on a Kubernetes cluster if it passes these rules. These rules are defined in the  “PodSecurityPolicy” resource and allow us to manage host namespace and filesystem usage, as well as privileged pod features. We can use the PodSecurityPolicy resource to make fine-grained security configurations, including:

    • Privileged containers.
    • Host namespaces (IPC, PID, network) and host ports.
    • Host paths, their permissions, and allowed volume types.
    • The user and group for container process execution, and setuid capabilities inside the container.
    • Default container capabilities.
    • The behaviour of Linux security modules.
    • Host kernel configurations via sysctl.
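As an illustrative example (not one of the UCP defaults), a restrictive policy touching most of the settings above might look like this:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false            # no privileged containers
  hostNetwork: false           # no host namespaces
  hostIPC: false
  hostPID: false
  volumes:                     # only these volume types are allowed
    - configMap
    - secret
    - emptyDir
  runAsUser:
    rule: MustRunAsNonRoot     # container processes may not run as root
  requiredDropCapabilities:
    - ALL                      # drop all default capabilities
  seLinux:
    rule: RunAsAny             # Linux security module behaviour
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```

A pod that requests anything the policy forbids simply fails admission and is never scheduled.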

    The Docker Universal Control Plane (UCP) 3.2 provides two Pod Security Policies by default – which is helpful Continue reading

    Programmatically Creating Kubernetes Manifests

    A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.

    The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)

    Here’s a very simple example:

    const service = new api.core.v1.Service('appService', {
        metadata: {
            namespace: 'appName',
            labels: {
                app: 'appName',
                team: 'blue',
            },
        },
        spec: {
            selector:  Continue reading

    Designing Docker Hub Two-Factor Authentication

    We recognize the central role that Docker Hub plays in modern application development and are working on many enhancements around security and content. In this blog post we will share how we are implementing two-factor authentication (2FA). 

    Using Time-Based One-Time Password (TOTP) Authentication

    Two-factor authentication increases the security of your accounts by requiring two different forms of validation. This helps ensure that you are the rightful account owner. For Docker Hub, that means providing something you know (your username and a strong password) and something you have in your possession. Since Docker Hub is used by millions of developers and organizations for storing and sharing content – sometimes company intellectual property – we chose to use one of the more secure models for 2FA: software token (TOTP) authentication. 

    TOTP authentication is more secure than SMS-based 2FA, which has many attack vectors and vulnerabilities. TOTP requires a little more upfront setup, but once enabled, it is just as simple as (if not simpler than) text message-based verification. It requires the use of an authenticator application, of which there are many available. These can be apps downloaded to your mobile device (e.g. Google Authenticator or Microsoft Authenticator) or it can Continue reading
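To make the mechanism concrete, here is a minimal sketch of how a software token computes its codes per RFC 6238. This is purely illustrative and says nothing about Docker Hub's actual implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // period))       # 8-byte big-endian time step
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890" at T=59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # -> 287082
```

Because the server and the authenticator app derive each code from the same shared secret and clock, no code ever travels over SMS.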

    Spousetivities in Barcelona at VMworld EMEA 2019

    Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be in Barcelona. In fact, registration is already open!

    If you’re bringing along a spouse, significant other, boyfriend/girlfriend, or just some family members, you owe it to them to look into Spousetivities. You’ll be able to focus at the conference knowing that your loved one(s) are not only safe, but enjoying some amazing activities in and around Barcelona. Here’s a quick peek at what Crystal and her team have lined up this year:

    • A wine tour of the Penedes region (southwest of Barcelona)—attendees will get to see some amazing wineries not frequented by tourists!
    • A walking tour of Barcelona
    • A tapas cooking class
    • A fantastic walking tour of Costa Brava, Pals, and Girona
    • A sailing tour (it’s a 3 hour tour, but it won’t end up like Gilligan’s)

    Lunch and private transportation are included for all activities, and all activities Continue reading

    The Potential Of Red Hat Plus Power Is Larger Than Exascale

    Red Hat is coming onto IBM’s books at just the right time, and to be honest, it might have been better for Big Blue if the deal to acquire the world’s largest supplier of support and packaging services for open source software had closed maybe one or two quarters ago.

    The Potential Of Red Hat Plus Power Is Larger Than Exascale was written by Timothy Prickett Morgan at The Next Platform.

    Attend a #LearnDocker Workshop This Fall

    Join a Docker for Developers Workshop Near You

    From October through December, Docker User Groups all over the world are hosting workshops for their local communities! Join us for an Introduction to Docker for Developers, a hands-on workshop we run on Play with Docker.

    This Docker 101 workshop for developers is designed to get you up and running with containers. You’ll learn how to build images, run containers, use volumes to persist data and mount in source code, and define your application using Docker Compose. We’ll even mix in a few advanced topics, such as networking and image building best practices. There is definitely something for everyone!

    Visit your local User Group page to see if there is a workshop scheduled in your area. Don’t see an event listed? Email the team by scrolling to the bottom of the chapter page and clicking the contact us button. Let them know you want to join in on the workshop fun! 

    Join the Docker Virtual Meetup Group

    Don’t see a user group in your area? Never fear, join the virtual meetup group for monthly meetups on all things Docker.  


    The #LearnDocker for #developers workshop series is coming to Continue reading

    Using Kustomize with Kubeadm Configuration Files

    Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.

    If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of this tool. For the base kubeadm configuration files to modify, I’ll use kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.

    While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition Continue reading
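As a small sketch of what that looks like in practice (file names are illustrative, and the exact patch mechanics vary with your kustomize version and kubeadm API version, so treat this as a starting point rather than a recipe):

```yaml
# kustomization.yaml -- base kubeadm configuration plus a patch overlay
resources:
  - kubeadm.yaml
patchesStrategicMerge:
  - cloud-provider-patch.yaml
```

```yaml
# cloud-provider-patch.yaml -- overrides only the API server's cloud-provider flag
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
```

Everything else in `kubeadm.yaml` passes through `kustomize build` unchanged; only the patched field is overwritten.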
