Archive

Category Archives for "Systems"

Checking Your Current Docker Pull Rate Limits and Status

Continuing with our move towards consumption-based limits, customers will see the new rate limits for Docker pulls of container images at each tier of Docker subscriptions starting from November 2, 2020. 

Anonymous free users will be limited to 100 pulls per six hours, and authenticated free users will be limited to 200 pulls per six hours. Docker Pro and Team subscribers can pull container images from Docker Hub without restriction as long as the quantities are not excessive or abusive.

In this article, we’ll take a look at determining where you currently fall within the rate limiting policy using some command line tools.

Determining your current rate limit

Requests to Docker Hub now include rate limit information in the response headers for requests that count towards the limit. These are named as follows:

  • RateLimit-Limit    
  • RateLimit-Remaining

The RateLimit-Limit header contains the total number of pulls that can be performed within a six-hour window. The RateLimit-Remaining header contains the number of pulls remaining for the six-hour rolling window.

Let’s take a look at these headers using the terminal. But before we can make a request to Docker Hub, we need to obtain a bearer token. We will then Continue reading
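
One way to look at these headers from a terminal is to request an anonymous token for a test repository and then make a request against its manifest endpoint. The sketch below is a minimal example that assumes curl and jq are installed and uses Docker's ratelimitpreview/test image; adjust the repository and credentials for your own setup.

# Request an anonymous bearer token scoped to pull the ratelimitpreview/test repository.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# Make a HEAD request for the manifest and print the rate limit response headers.
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

To check the limits tied to your Docker ID instead of the anonymous limits, add -u with your username and password (or access token) to the token request and repeat the second command.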

Setting Up Cloud Deployments Using Docker, Azure and Github Actions

A few weeks ago I shared a blog post about how to use GitHub Actions with Docker; prior to that, Guillaume shared his blog post on using Docker and ACI. I thought I would bring these two together to look at a single flow that goes from your code in GitHub all the way through to deploying on ACI using our new Docker-to-ACI experience!

To start, let’s remember where we were with our last GitHub Action. Last time we got to a point where our builds to master would be rebuilt and pushed to Docker Hub (and we used some caching to speed these up).

name: CI to Docker Hub
 
on:
 push:
   tags:
     - "v*.*.*"
 
jobs:
 
 build:
   runs-on: ubuntu-latest
   steps:
     -
       name: Checkout
       uses: actions/checkout@v2
     -      
       name: Set up Docker Buildx
       id: buildx
       uses: docker/setup-buildx-action@v1
     -    
       name: Cache Docker layers
       uses: actions/cache@v2
       with:
         path: /tmp/.buildx-cache
         key: ${{ runner.os }}-buildx-${{ github.sha }}
         restore-keys: |
           ${{ runner.os }}-buildx-
     -
       uses: docker/login-action@v1
       with:
         username: ${{ secrets.DOCKER_USERNAME }}
         password: ${{ secrets.DOCKER_PASSWORD }}
     -
       name: Build and push
       id: docker_build
       uses: docker/build-push-action@v2
       with:
         context: ./
         file: ./Dockerfile
          builder: ${{ steps.buildx.outputs.name }} Continue reading
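
For context, the ACI end of this flow looks roughly like the following when run from a local terminal with the Docker-to-ACI integration in Docker Desktop. This is only a rough sketch; the context name, port, and image are placeholders you would replace with your own values.

# Authenticate Docker against your Azure account.
docker login azure

# Create a Docker context backed by Azure Container Instances
# (you will be prompted to choose or create a resource group).
docker context create aci myacicontext

# Run the image that the GitHub Action pushed to Docker Hub as a container group on ACI.
docker --context myacicontext run -d -p 80:80 myorg/myapp:latest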

Docker’s Next Chapter: Our First Year

2020 has been quite the year. Pandemic, lockdowns, virtual conferences and back-to-back Zoom meetings. Global economic pressures, confinement and webcams aside, we at Docker have been focused on delivering what we set out to do when we announced Docker’s Next Chapter: Advancing Developer Workflows for Modern Apps in November 2019. I wish to thank the Docker team for their “can do!” spirit and efforts throughout this unprecedented year, as well as our community, our Docker Captains, our ecosystem partners, and our customers for their non-stop enthusiasm and support. We could not have had the year we had without you.

This next chapter is being jointly written with you, the developer, as so much of our motivation and inspiration comes from your sharing with us how you’re using Docker. Consider the Washington University School of Medicine (WUSM): WUSM’s team of bioinformatics developers uses Docker to build pipelines – consisting of up to 25 Docker images in some cases – for analyzing the genome sequence data of cancer patients to inform diagnosis and treatments. Furthermore, they collaborate with each other internally and with other cancer research institutions by sharing their Docker images through Docker Hub. In the words of WUSM’s Dr. Continue reading

Docker V2 GitHub Action is Now GA

Docker is happy to announce the GA of our V2 GitHub Action. We’ve been working with @crazy-max over the last few months, along with getting feedback from the wider community, on how we can improve our existing GitHub Action. We have now moved from our single action to a clearer division and an advanced set of options that not only let you build and push but also support features like multiple architectures and build cache.

The big change with the advent of our V2 action is also the expansion of the number of actions that Docker provides on GitHub. This more modular approach and the power of GitHub Actions have allowed us to make minimal UX changes to the original action while adding a lot more functionality.

We still have our more meta build/push action, which does not actually require all of these preconfiguration steps and can still be used to deliver the same workflow we had previously! To upgrade, the only changes are that we have split the login out into a new step and also now have a step to set up our builder.

      -
        name: Setup Docker Buildx
        uses: docker/setup-buildx-action@v1
This Continue reading

Technology Short Take 132

Welcome to Technology Short Take #132! My list of links and articles from around the web seems to be a bit heavy on security-related topics this time. Still, there’s a decent collection of networking, cloud computing, and virtualization articles as well as a smattering of other topics for you to peruse. I hope you find something useful!

Networking

  • I think a fair number of folks may not be aware that the Nginx ingress controller for Kubernetes—both the community version and the Nginx-maintained open source version—can suffer from timeouts and errors resulting from changes in the back-end application’s list of endpoints (think pods being added or removed). This performance testing post lays out all the details. In particular, see the section titled “Timeout and Error Results for the Dynamic Deployment.”
  • Ivan Pepelnjak attempts to answer the question, “How much do I need to know about Linux networking?”
  • Speaking of Linux networking…Marek Majkowski of Cloudflare digs deep into conntrack, used for stateful firewalling functionality.

Servers/Hardware

  • Normally I talk about server hardware and such here, but with so much moving to public cloud providers, let’s expand that focus a little bit: in this post, Jeramiah Dooley provides his perspective Continue reading

Docker Hub Image Retention Policy Delayed, Subscription Updates

Today we are announcing that we are pausing enforcement of the changes to image retention until mid-2021. Two months ago, we announced a change to Docker image retention policies to reduce overall resource consumption. As originally stated, this change, which was set to take effect on November 1, 2020, would result in the deletion of images for free Docker account users after six months of inactivity. After this announcement, we heard feedback from many members of the Docker community about the challenges this posed, both in adjusting to the policy without visibility and in the tooling needed to manage an organization’s Docker Hub images. Today’s announcement means Docker will not enforce image expiration on November 1. Instead, Docker is focusing on consumption-based subscriptions that meet the needs of all of our customers. In this model, as the needs of a developer grow, they can upgrade to a subscription that meets their requirements without limits.

This change means that developers will get a base level of consumption to start, and can extend their subscriptions as their needs grow and evolve, only paying for what is actually needed. The community of 6.7 million registered Docker developers is incredibly diverse–the Continue reading

Understanding Inner Loop Development and Pull Rates

We have heard feedback that, given the changes Docker introduced relating to network egress and the number of pulls for free users, there are questions about the best way to use Docker as part of your development workflow without hitting these limits. This blog post covers best practices that improve your experience and encourage sensible consumption of Docker to mitigate the risk of hitting these limits, as well as how to increase the limits depending on your use case.

If you are interested in how these limits are addressed in a CI/CD pipeline, please have a look at our post: Best Practices for using Docker Hub for CI/CD. If you are using GitHub Actions, have a look at our Docker GitHub Actions post.

Prerequisites

To complete this tutorial, you will need the following:

Determining Number of Pulls

Docker defines pull rate limits as the number of manifest requests to Docker Hub. Rate limits for Docker pulls are based Continue reading
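
As a small illustration of the two simplest mitigations, the sketch below authenticates so pulls count against your account's higher allowance rather than the anonymous per-IP limit, and then relies on the local image cache so a base image is only pulled once during inner loop development. The image tag is just an example.

# Log in so pulls are counted against your account rather than the anonymous per-IP limit.
docker login

# The first pull downloads the manifest and layers and counts toward the limit...
docker pull python:3.9-slim

# ...while later builds and runs that reference the same tag can reuse the local copy
# and avoid new manifest requests to Docker Hub.
docker image ls python:3.9-slim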

Docker and Snyk Extend Partnership to Docker Official and Certified Images

Today we are pleased to announce that Docker and Snyk have extended our existing partnership to bring vulnerability scanning to Docker Official and Certified images. As the exclusive scanning partner for these two image categories, Snyk will work with Docker to provide developers with insights into our most popular images. This builds on our announcement earlier this year that Snyk scanning was integrated into Docker Desktop and Docker Hub. It means that developers can now incorporate vulnerability assessment along each step of the container development and deployment process.

Docker Official images represent approximately 25% of all of the pull activity on Docker Hub. Docker Official images are used extensively by millions of developers and developer teams worldwide to build and run tens of millions of containerized applications. By integrating vulnerability scanning from Snyk, users are now able to get more visibility into the images and have a higher level of confidence that their applications are secure and ready for production.

Docker Official images that have been scanned by Snyk will be available early next year.

You can read more about it from Snyk here, and you can catch Docker CEO Scott Johnston and Snyk CEO Peter McKay Continue reading

Docker at SnykCon 2020

We are excited to be a gold sponsor of the inaugural SnykCon virtual conference, a free online event from Snyk taking place this week on October 21-22, 2020. The conference will look at best practices and technologies for integrating development and security teams, tools, and processes, with a specific nod to the secure use of containers, from images used as a starting point to apps shared with teams and the public.

At Docker, we know that security is vital to successful app development projects, and automating app security early in the development process ensures teams start with the right foundation and ship apps that have vulnerability scanning and remediation included by default. This year we announced a broad partnership with Snyk to incorporate their leading vulnerability scanning across the entire Docker app development lifecycle. At SnykCon, attendees will learn how to successfully incorporate security scanning into their entire Docker app delivery pipeline.

Some of the highlights from Docker at this event include:

  • Docker CEO Scott Johnston will join Snyk CEO Peter McKay in the keynote fireside chat on Thursday, October 22 at 8:30am PDT. Scott and Peter will talk about the partnership between Docker and Snyk and share Continue reading

Getting started with Ansible security automation: Threat Hunting

AnsibleFest has just wrapped up, with a whole track dedicated to security automation, our answer to the lack of integration across the IT security industry. If you’re looking for a use case to start with, our investigation enrichment blog will give you yet another example of where Ansible can help with the typical operational challenges of security practitioners.

Ansible security automation is about integrating various security technologies with each other. One part of this challenge is the technical complexity: different products, interfaces, workflows, etc. But another equally important part is getting the processes of different teams in the security organization aligned. After all, one sign of successful automation is the deployment across team boundaries.

This is especially true with threat hunting activities: when security analysts suspect malicious activity or want to prove a hypothesis, they need to work with rules and policies to fine-tune detection and identification. This involves changes and configurations on various target systems managed by different teams.

In this blog post, we will start with a typical day-to-day security operations challenge and walk through some example threat hunting steps - adding more teams and products along the way to finally show how Red Hat Ansible Automation Continue reading

Best of Fest: AnsibleFest 2020

Thank you to everyone who joined us over the past two days for the AnsibleFest 2020 virtual experience. We had such a great time connecting with Ansible lovers across the globe. In case you missed some of it (or all of it), we have some event highlights to share with you! If you want to go see what you may have missed, all the AnsibleFest 2020 content will be available on demand for a year. 

 

Community Updates

This year at AnsibleFest 2020, Ansible Community Architect Robyn Bergeron kicked off with her keynote on Tuesday morning. We heard how with Ansible Content Collections, it’s easier than ever to use Ansible the way you want or need to, as a contributor or an end user. Ansible 2.10 is now available, and Robyn explained how the feedback loop got us there. If you want to hear more about the Ansible community project, go watch Robyn’s keynote on demand.

 

Product Updates

Ansible’s own Richard Henshall talked about the Red Hat Ansible Automation Platform product updates and new releases. In 2018, we unveiled the Ansible certified partner program and now we have over 50 platforms certified. We are bridging traditional Continue reading

Deep Dive: ACL Configuration Management Using Ansible Network Automation Resource Modules

In October 2019, as part of the Red Hat Ansible Engine 2.9 release, the Ansible Network Automation team introduced the first resource modules. These opinionated network modules make network automation easier and more consistent for those automating various network platforms in production. The goal of resource modules is to avoid creating and maintaining overly complex Jinja2 templates for rendering and pushing network configuration.

This blog post covers the newly released ios_acls resource module and how to automate manual processes associated with switch and router configurations. These network automation modules are used for configuring routers and switches from popular vendors such as (but not limited to) Arista, Cisco, Juniper, and VyOS. The access control lists (ACLs) network resource modules can read ACL configuration from the network, modify it, and then push changes to the network device. I’ll walk through several examples and describe the use cases for each state parameter (including three newly released state types) and how these are used in real-world scenarios.

 

The Certified Content Collection

This blog uses the cisco.ios Continue reading
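
If you would like to follow along, the collection and the module documentation can be pulled down locally. This is a minimal sketch assuming Ansible 2.9 or later is already installed; the exact state values supported are listed in the module docs.

# Install the certified Cisco IOS collection from Ansible Galaxy (or Automation Hub).
ansible-galaxy collection install cisco.ios

# Review the ios_acls resource module documentation, including its supported state parameters.
ansible-doc cisco.ios.ios_acls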

Improve the Security of Hub Container Images with Automatic Vulnerability Scans

In yesterday’s blog about improvements to the end-to-end Docker developer experience, I was thrilled to share how we are integrating security into image development, and to announce the launch of vulnerability scanning for images pushed to the Hub. This release is one step in our collaboration with our partner Snyk where we are integrating their security testing technology into the Docker platform. Today, I want to expand on our announcements and show you how to get started with image scanning with Snyk. 

In this blog I will show you why scanning Hub images is important, how to configure the Hub pages to trigger Snyk vulnerability scans, and how to run your scans and understand the results. I will also provide suggestions for incorporating vulnerability scanning into your development workflows so that you include regular security checkpoints along each step of your application deployment.

Software vulnerability scanners have been around for a while to detect vulnerabilities that hackers use for software exploitation. Traditionally security teams ran scanners after developers thought that their work was done, frequently sending code back to developers to fix known vulnerabilities. In today’s “shift-left” paradigm, scanning is applied earlier during the development and CI cycles Continue reading
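
To make the loop concrete, a typical workflow looks something like the sketch below once scanning is switched on in the repository's settings on Docker Hub. The repository name is a placeholder, and the local docker scan step assumes a recent Docker Desktop with the Snyk-powered scan command available.

# Build and tag the image against your Docker Hub repository (placeholder name).
docker build -t myorg/myapp:1.0 .

# Optionally scan locally before pushing (requires being logged in to Docker Hub).
docker scan myorg/myapp:1.0

# Pushing to a repository with scanning enabled triggers a Snyk scan on Docker Hub;
# results appear in the tag's vulnerability view on the repository page.
docker push myorg/myapp:1.0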

IT Leader Channel at AnsibleFest 2020

Whether you have automated different domains within your business or are just getting started, creating a roadmap to automation that can be passed between teams and understood at different levels is critical to any automation strategy. 

We’ve brought back the IT Decision Maker track at AnsibleFest this year after its debut in 2019, featuring sessions that help uplevel the conversation about automation, create consensus between teams and get automation goals accomplished faster. 

 

What you can expect

There are a variety of sessions in the IT Decision Maker track. A few are focused on specific customer use cases of how they adopted and implemented Ansible. These sessions are great companions to our customer keynotes, including those from CarMax and PRA Health Sciences, that will dive into their Ansible implementation at a technical level. This track aims to cover the many constituents of automation within a business and how to bring the right types of teams together to extend your automation to these stakeholders.

Newcomers to AnsibleFest will get a lot out of this track, as many of the sessions are aimed at those with beginner-level knowledge of Ansible Automation Platform and its hosted services. Those Continue reading

Considerations for using IaC with Cluster API

In other posts on this site, I’ve talked about both infrastructure-as-code (see my posts on Terraform or my posts on Pulumi) and somewhat separately I’ve talked about Cluster API (see my posts on Cluster API). And while I’ve discussed the idea of using existing AWS infrastructure with Cluster API, in this post I wanted to try to think about how these two technologies play together, and provide some considerations for using them together.

I’ll focus here on AWS as the cloud provider/platform, but many of these considerations would also apply—in concept, at least—to other providers/platforms.

In no particular order, here are some considerations for using infrastructure-as-code and Cluster API (CAPI)—specifically, the Cluster API Provider for AWS (CAPA)—together:

  • If you’re going to need the CAPA workload clusters to have access to other AWS resources, like applications running on EC2 instances or managed services like RDS, you’ll need to use the additionalSecurityGroups functionality, as I described in this blog post.
  • The AWS cloud provider requires certain tags to be assigned to resources (see this post for more details), and CAPI automatically provisions new workload clusters with the AWS cloud provider when running on AWS. Thus, you’ll want to make Continue reading
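
As a concrete example of the tagging point in the last bullet, if your infrastructure-as-code tooling creates the VPC and subnets that CAPA will consume, those resources generally need the cluster ownership tag the AWS cloud provider looks for. The sketch below uses the AWS CLI with placeholder IDs and cluster name; in practice you would express the same tag directly in your Terraform or Pulumi code.

# Tag an existing subnet so the AWS cloud provider in the CAPA workload cluster
# named "workload-1" will recognize it ("shared" because other clusters may use it too).
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/workload-1,Value=shared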

Culture at AnsibleFest 2020

At Red Hat, we’ve long recognized that the power of collaboration enables communities to achieve more together than individuals can accomplish on their own. Developing an organizational culture that empowers communities to flourish and collaborate -- whether in an open source community or for an internal community of practice -- isn’t always straightforward. This year at AnsibleFest, the Culture topic aims to demystify some of these areas by sharing the stories, practices, and examples that can get you on your path to better collaboration. 

 

Culture at AnsibleFest: “Open” for participation

Because we recognize that culture is not a “one size fits all” topic, we’ve made sure to sprinkle nearly every track at AnsibleFest with relevant content to help every type of Ansible user (or manager of Ansible users!) participate in developing healthy cultures and communities of automation inside their organizations. 

Whether you’re interested in contributing to open source communities, learning how others have grown the use of Ansible inside their departments or organizations, or if you’re simply interested in building healthy, diverse, inclusive communities, inside or outside the workplace -- the Culture (cross) Channel at AnsibleFest has you covered. 

 

Be a Cultural Catalyst for Continue reading

AnsibleFest 2020 Live Q&A

We are less than a week away from AnsibleFest 2020! We can’t wait to connect with you and help you connect with other automation lovers. We have some great content lined up for this year’s virtual experience and that includes some amazing Live Q&A Sessions. This year, you will be able to get your questions answered from Ansible experts, Red Hatters and Ansible customers. Let’s dive into what you can expect. 

 

Tuesday, October 13

11am

Live Q&A: Get all your network automation questions answered with Brad Thornton, Iftikhar Khan and Sean Cavanaugh

In this session, a panel of experts discuss a wide range of use cases around network automation.  They will talk about the Red Hat Ansible Automation Platform and the product direction including Ansible Network Collections, resource modules and managing network devices in a GitOps model. Bring your questions for the architects and learn more about how Red Hat is helping organizations operationalize automation in their network while bridging gaps between different IT infrastructure teams.

 

Live Q&A: Bridging traditional, container, and edge platforms through automation with Joe Fitzgerald, Ashesh Badani, and Stefanie Chiras

Join this panel discussion, moderated by Kelly Fitzpatrick (Redmonk), to hear from Continue reading

New Collab, Support and Vulnerability Scanning Enhance Docker Pro and Team Subscriptions

Last March, we laid out our commitment to focus on developer experiences to help build, share, and run applications with confidence and efficiency. In the past few months we have delivered new features for the entire Docker platform that have built on the tooling and collaboration experiences to improve the development and app delivery process.

During this time, we have also learned a lot from our users about ways Docker can help improve developer confidence in delivering apps for more complicated use cases and how we can help larger teams improve their ability to deliver apps in a secure and repeatable manner. Over the next few weeks, you will see a number of new features delivered to Docker subscribers at the free, Pro and Team level that deliver on that vision for our customers.

Today, I’m excited to announce the first set of features: vulnerability scanning in Docker Hub for Pro and Team subscribers. This new release enables individual and team users to automatically monitor, identify and ultimately resolve security issues in their applications. We will also preview Desktop features that will roll out over the next several months.

We’ve heard in numerous interviews with team managers that Continue reading

Getting Started With AWS Ansible Module Development and Community Contribution

We often hear from cloud admins and developers that they’re interested in giving back to Ansible and using their knowledge to benefit the community, but they don’t know how to get started. Lots of folks may even already be carrying new Ansible modules or plugins in their local environments and are looking to get them included upstream for broader use.

Luckily, it doesn’t take much to get started as an Ansible contributor. If you’re already using the Ansible AWS modules, there are many ways to use your existing knowledge, skills and experience to contribute. If you need some ideas on where to contribute, take a look at the following:

  • Creating integration tests: Creating missing tests for modules is a great way to get started, and integration tests are just Ansible tasks!
  • Module porting: If you’re familiar with the boto3 Python library, there’s also a backlog of modules that need to be ported from boto2 to boto3.
  • Repository issue triage: And of course there are always open GitHub issues and pull requests. Testing bugs or patches and providing feedback on your use cases and experiences is very valuable.

The AWS Ansible Content Collections

Starting with Ansible 2.10, the AWS Continue reading
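
For contributors who want to work against the AWS content directly, it now ships as Galaxy collections. The sketch below assumes the core modules live in amazon.aws and the broader community-maintained set in community.aws, with an Ansible 2.10 environment already in place.

# Install the AWS collections from Ansible Galaxy.
ansible-galaxy collection install amazon.aws community.aws

# Confirm what was installed and at which versions.
ansible-galaxy collection list | grep -E "amazon.aws|community.aws"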

Technology Short Take 131

Welcome to Technology Short Take #131! I’m back with another collection of articles on various data center technologies. This time around the content is a tad heavy on the security side, but I’ve still managed to pull in articles on networking, cloud computing, applications, and some programming-related content. Here’s hoping you find something useful here!

Networking

  • This recent Ars Technica article points out that a feature in Chromium—the open source project leveraged by Chrome and Edge, among others—is having a significant impact on root DNS traffic. More technical details can be found in an associated APNIC blog post.
  • Here are a few details around Open Service Mesh.
  • Quentin Machu outlines a series of problems his company experienced using Weave Net as the CNI for their Kubernetes clusters, as well as describes the migration process to a new CNI. His blog post is well worth a read, IMO.

Security
