I’ve written a few different posts on WireGuard, the “simple yet fast and modern VPN” (as described by the WireGuard web site) that aims to supplant tools like IPSec and OpenVPN. My first post on WireGuard showed how to configure WireGuard on Linux, both on the client side as well as on the server side. After that, I followed it up with posts on using the GUI WireGuard app to configure WireGuard on macOS and—most recently—making WireGuard from Homebrew work on an M1-based Mac. In this post, I’m going to take a look at using WireGuard on macOS again, but this time via the CLI.
Some of this information is also found in this WireGuard quick start. Here I’ll focus only on using macOS as a WireGuard client, not as a server; refer to the WireGuard docs (or to my earlier post) for information on setting up a WireGuard server. I’ll also assume that you’ve installed WireGuard via Homebrew.
The first step is to generate the public/private keys you’ll need. If the /usr/local/etc/wireguard directory (or the /opt/homebrew/etc/wireguard directory for users on an M1-based Mac) doesn’t exist, you’ll need to first create that directory. (It didn’t exist Continue reading
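For reference, here’s a minimal sketch of that first step, assuming WireGuard tools installed via Homebrew on an M1-based Mac (substitute /usr/local/etc/wireguard on an Intel-based Mac); the privatekey and publickey file names are arbitrary choices for this example:

```bash
# Create the config directory if it doesn't already exist
# (sudo may be unnecessary if your user owns the Homebrew prefix)
sudo mkdir -p /opt/homebrew/etc/wireguard

# Generate a private key and derive the matching public key,
# keeping the files readable only by the current user
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```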
The Kuma community recently released version 1.2.0 of the open source Kuma service mesh, and along with it a corresponding version of kumactl, the command-line utility for interacting with Kuma. To make it easy for macOS users to get kumactl, the Kuma community maintains a Homebrew formula for the CLI utility. That includes providing M1-native (ARM64) macOS binaries for kumactl. Unfortunately, installing an earlier version of kumactl on an M1-based Mac using Homebrew is somewhat less than ideal. Here’s one way—probably not the only way—to work around some of the challenges.
Note that this post really only applies to users of M1-based Macs; users of Intel-based Macs can extract the kumactl binary from the release archive linked from the Kuma install docs. (The same goes for users of Linux distributions running on Intel-based hardware.) On the Kuma website, simply select the desired version of Kuma from the drop-down in the upper left, find the direct download link on the page, and off you go. This doesn’t work for M1-based Macs because, at the time this post was written, the Kuma community was not providing ARM64 binaries. This leaves Homebrew as the only way (aside Continue reading
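On an Intel-based Mac, the extraction itself is straightforward. The sketch below is only illustrative: the archive name and its internal layout are assumptions based on a typical Kuma release archive, so check the actual download link and contents for the version you pick:

```bash
# Hypothetical archive name; use the direct download link from the Kuma install docs
tar -xzf kuma-1.2.0-darwin-amd64.tar.gz

# Copy just the kumactl binary somewhere on your PATH
sudo cp kuma-1.2.0/bin/kumactl /usr/local/bin/

# Verify the installed version
kumactl version
```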
After writing the post on using WireGuard on macOS (using the official WireGuard GUI app from the Mac App Store), I found the GUI app’s behavior to be less than ideal. For example, tunnels marked as on-demand would later show up as no longer configured as an on-demand tunnel. When I decided to set up WireGuard on my M1-based MacBook Pro (see my review of the M1 MacBook Pro), I didn’t want to use the GUI app. Fortunately, Homebrew has formulas for WireGuard. Unfortunately, the WireGuard tools as installed by Homebrew on an M1-based Mac won’t work. Here’s how to fix that.
The key issues with WireGuard as installed by Homebrew on an M1-based Mac come down to the Homebrew prefix: on an M1-based Mac, Homebrew installs under the /opt/homebrew prefix, whereas it uses /usr/local on Intel-based Macs. Some of the WireGuard-related scripts are hard-coded to use /usr/local as the Homebrew prefix. Because the prefix has changed, though, these scripts now don’t work on an M1-based Mac.
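As a quick way to see where those hard-coded references live, you could grep the installed formula for the old prefix. This is just one illustrative approach (not from the original post), and it assumes the standard wireguard-tools formula name:

```bash
# Show where the Homebrew-installed WireGuard scripts still reference the old prefix
grep -rn '/usr/local' "$(brew --prefix wireguard-tools)"
```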
VMware vCenter Server tags are labels that can be applied to objects to indicate things like a system’s environment or usage, which makes them a useful asset-management tool and a natural fit for organizing systems in an Ansible inventory. Red Hat customers have regularly requested the ability to use vCenter tags in Red Hat Ansible Tower. This is now possible with an Ansible Tower inventory source that uses the vmware_vm_inventory plugin and supports tags.
Ansible Automation Platform 1.2 brings fully native Ansible inventory plugin support to Ansible Tower 3.8. In previous versions, inventory plugin configurations were built around the old inventory scripts: only a specific set of parameters (for example, cloud region) and a specific subset of the variables you could pass to those scripts were surfaced in Ansible Tower's user interface as options on the inventory source. To maintain compatibility with the old inventory scripts, new configuration parameters introduced by the Ansible inventory plugins were not supported.
The move to support native inventory plugins allows Red Hat Ansible Automation Platform customers to use all the configuration parameters available through Continue reading
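To illustrate what a native inventory plugin configuration looks like, here’s a hedged sketch of a vmware_vm_inventory source that groups VMs by their vCenter tags. The hostname, credentials, and keyed_groups mapping are placeholder assumptions for this example, not values from the post:

```bash
# Write a sample inventory source file for the vmware_vm_inventory plugin
# (requires the community.vmware collection and pyVmomi; tag support also
# requires the vSphere Automation SDK for Python)
cat > inventory.vmware.yml <<'EOF'
plugin: community.vmware.vmware_vm_inventory
hostname: vcenter.example.com          # hypothetical vCenter hostname
username: administrator@vsphere.local  # hypothetical account
password: changeme                     # use Vault or the VMWARE_PASSWORD env var in practice
validate_certs: false
with_tags: true                        # pull vCenter tags into hostvars
keyed_groups:
  - key: tags                          # create one group per tag
    prefix: tag
EOF

# Preview the inventory the plugin would build
ansible-inventory -i inventory.vmware.yml --list
```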
I recently came across something that wasn’t immediately intuitive with regard to terminating HTTPS traffic on an AWS Elastic Load Balancer (ELB) when using Kubernetes on AWS. At least, it wasn’t intuitive to me, and I’m guessing that it may not be intuitive to some other readers as well. Kudos to my teammates Hart Hoover and Brent Yarger for identifying the resolution, which I’m going to call out in this post.
This AWS Premium Support post outlines the basic scenario:
Consider the following YAML, taken directly from the previously-referenced AWS Premium Support article:
apiVersion: v1
kind: Service
metadata:
name: Continue reading
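The article’s manifest is truncated above, so purely as a rough illustration, here is a hedged sketch of what a Service that terminates HTTPS at the ELB typically looks like; the name, selector, certificate ARN, and port numbers are all placeholders rather than values from the article:

```bash
# Apply a LoadBalancer Service that terminates TLS at the AWS ELB
# (all names, the ACM certificate ARN, and the ports are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - name: https
      port: 443        # TLS terminates on the ELB
      targetPort: 8080 # plain HTTP from the ELB to the pods
EOF
```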
Welcome to Technology Short Take #141! This is the first Technology Short Take compiled, written, and published entirely on my M1-based MacBook Pro (see my review here). The collection of links shared below covers a fairly wide range of topics, from old Sun hardware to working with serverless frameworks in the public cloud. I hope that you find something useful here. Enjoy!
As part of an ongoing effort to refine my work environment, several months ago I switched to a Logitech Ergo K860 ergonomic keyboard. While I’m not a “keyboard snob,” I am somewhat particular about the feel of my keyboard, so I wasn’t sure how I would like the K860. In this post, I’ll provide my feedback, and provide some information on how well the keyboard works with both Linux and macOS.
Setting up the K860 is remarkably easy. The first system I tried to pair it with was an older Mac Pro workstation, and apparently the Bluetooth hardware on that particular workstation wasn’t new enough to support the K860 (Logitech indicates that Bluetooth 5.0 is needed; more on that in a moment). Instead, I popped in the USB-A wireless receiver, and was up and running with the K860 less than a minute later. This was using macOS, but the Mac Pro also dual-booted Linux, so I rebooted into Linux and found that the K860 with the Logitech-supplied USB receiver continued to work without any issues.
The key takeaway regarding Linux is this: if you’re interested in getting the K860 for use with Continue reading
Technological advancements are intended to bring more control, agility and velocity to organizations. However, adopting these new technologies and techniques, such as cloud computing and microservices, increases an organization’s security footprint, bringing greater risk of security breaches.
Cyberattacks potentially expose organizations to financial loss, reputational damage, legal liability, and business continuity risk. As a result, security teams are under increased pressure to help proactively protect organizations against cyberattacks and maintain a more consistent, rapid incident response framework to respond to security breaches.
In our previous blogs in this series, we explored how Ansible security automation enables security teams to automate and simplify investigation enrichment and threat hunting practices. We also discussed and provided our answer to the lack of integration across the IT security industry.
In this blog post, we’ll have a closer look at incident response and how Ansible security automation empowers security teams to respond effectively to security breaches.
Incident response is the approach and techniques that security departments implement to neutralize and mitigate cyberattacks, and is a core responsibility of the security team. Recent news headlines are rife with high-profile security breaches and Continue reading
As the adoption of containers and Kubernetes increases to drive application modernization, IT organizations must find ways to easily deploy and manage multiple Kubernetes clusters across regions, whether in the public cloud or on-premises, and all the way out to the edge. As such, we continue to expand the capabilities of our Certified Ansible Content Collection for kubernetes.core.
In this blog post, we’ll highlight some of the exciting new changes in the 2.0 release of this Collection.
Development on the kubernetes.core Collection had historically taken place in the community.kubernetes GitHub repository, which was built off community contributions before Red Hat supported it. That code base served as the source for both Collections. With this release, we have shifted all development to the kubernetes.core GitHub repository. Moving forward, the community.kubernetes namespace will simply redirect to the kubernetes.core Collection. If you are currently using the community.kubernetes namespace in your playbooks, we encourage you to begin switching over to kubernetes.core. This change better reflects that this codebase is a Red Hat supported Collection.
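If you’re still referencing the community.kubernetes namespace in your playbooks, the switch is largely a matter of updating fully qualified collection names. Here’s a small, hedged sketch; the playbook file name and the "demo" namespace are arbitrary, and it assumes the Collection, the Kubernetes Python client, and a working kubeconfig are available:

```bash
# Install the Collection, then run a playbook that uses the kubernetes.core.k8s module
ansible-galaxy collection install kubernetes.core

cat > ensure-namespace.yml <<'EOF'
---
- name: Use kubernetes.core instead of community.kubernetes
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure a namespace exists
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: demo   # hypothetical namespace
EOF

ansible-playbook ensure-namespace.yml
```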
One of the main objectives of our 2.0 release was to Continue reading
At the end of last year we launched vulnerability scanning options as part of the Docker platform. We worked together with our partner Snyk to include security testing options at multiple points of your inner loop. We incorporated scanning options into the Hub, so that you can configure your repositories to automatically scan all pushed images. We also added a scanning command to the Docker CLI on Docker Desktop for Mac and Windows, so that you can run vulnerability scans for images on your local machine. The earlier in your development that you find these vulnerabilities, the easier and cheaper it is to fix them. Vulnerability scan results also provide remediation guidance on things that you can do to remove the reported vulnerabilities, such as recommendations for alternative base images with lower vulnerability counts, or package upgrades that have already resolved the specified vulnerabilities.
We are now making another update in our security journey by bringing “docker scan” to the Docker CLI on Linux. The experience of scanning on Linux is identical to what we have already launched for the Docker Desktop CLI, with scanning support for linux/amd64 (x86-64) Docker images. The Continue reading
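For readers who haven’t tried it yet, usage looks like this; the image name is a placeholder, and on first use you may be prompted to accept the Snyk license:

```bash
# Scan a local image for known vulnerabilities (image name is a placeholder)
docker scan myorg/myapp:latest

# Pass the Dockerfile as well to get base image remediation recommendations
docker scan --file Dockerfile myorg/myapp:latest
```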
We are excited to announce the release of Docker Desktop 3.4.
This release includes several improvements to Docker Desktop, including our new Volume Management interface, the Compose v2 roll-out, and, based on your feedback, changes to how you skip an update to Docker Desktop.
Have you wanted a way to more easily manage and explore your volumes?
In this release we’re introducing a new capability in Docker Desktop that helps you to create and delete volumes from Desktop’s Dashboard as well as to see which ones are In Use.
For developers with Pro and Team Docker subscriptions, we’ll be bringing a richer experience to managing your volumes.
You’ll be able to explore the contents of the volumes so that you can more easily get an understanding of what’s taking up space within the volume.
You’ll also be able to easily see which specific containers are using any particular volume.
We’re also looking to add additional capabilities in the future, such as being able to easily download files from the volume, read-only view for text files, and more. We’d love to hear more about what you’d like to see us prioritize and focus on in improving the Continue reading
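As an aside, the day-to-day housekeeping this Dashboard view targets can also be approximated with the existing volume commands in the CLI (the volume name below is a placeholder):

```bash
# List volumes and see which containers use a particular one
docker volume ls
docker ps -a --filter volume=my-volume

# Inspect and clean up
docker volume inspect my-volume
docker volume rm my-volume   # fails if the volume is still in use
docker volume prune          # remove all unused volumes
```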
AnsibleFest will be a free, virtual two day event again this year on September 29-30. You can expect all the usual highlights, like customer keynotes, breakout sessions, direct access to Ansible experts and more. We will also be bringing back tracks from last year like Network, Security, Developer and more to give you exactly the type of information you need for wherever you are in your Ansible journey.
Do you have a story to share about how you're using Ansible?
The Call for Proposals will be open from June 8-29. We will be choosing a variety of sessions across all subjects and skill areas. Notifications about session approval status will be sent out in July. Share your automation story with us today!
Want to be the first to hear the latest updates about AnsibleFest? Then sign up to stay connected and up-to-date on all things on the AnsibleFest page.
As many of you are aware, it has been a difficult period for companies offering free cloud compute [1]. Unfortunately, Docker’s Autobuild service has been targeted by the same bad actors, so today we are disappointed to announce that we will be discontinuing Autobuilds on the free tier starting from June 18, 2021.
In the last few months we have seen massive growth in the number of bad actors taking advantage of this service with the goal of abusing it for crypto mining. For the last 7 years we have been proud to offer our Autobuild service to all our users as the simplest way to set up CI for containerized projects. In addition to increasing the cost of running the service, this type of abuse periodically impacts performance for paying Autobuild users and induces many sleepless nights for our team.
In April we saw the number of build hours spike to 2X our usual load, and by the end of the month we had already deactivated ~10,000 accounts due to mining abuse. The following week we had another ~2,200 miners spin up.
As a result of this we have made the hard choice to remove Autobuilds Continue reading
Nearly 80,000 participants registered for DockerCon Live 2021! There were fantastic keynotes, compelling sessions, thousands of interactions and everything in-between that a developer and development teams need to help solve their day-to-day application development challenges.
In all that excitement, you might have missed the new innovations that Docker announced to make it easier for developers to build, share and run your applications from code to cloud. These enhancements are a result of Docker’s continued investment and commitment to make sure developers have the best experience possible while making app development more efficient and secure.
Application security is directly tied to the software supply chain. Developers are realizing the importance of integrating security as early as possible in the development process. They must now consider the security directives of their organization and associated compliance rules while also enabling their teams to work in the most secure, efficient way possible.
These new product enhancements bolster security in a number of dimensions including scanning for vulnerabilities during different development stages and increasing team security by offering tools such as audit logs and scoped access tokens.
Take a look at what we announced:
Verified Publisher Program
Docker launched the Docker Verified Publisher program Continue reading
Docker Inc. started like many startups with engineers working from a single location. For us, this was in the Bay Area in the US. We were very office-centric, so the natural way to increase diversity and to get engineers from different cultures to work together was to open new offices in diverse locations. Right from the start, our goal was to mix American and European ways of producing software, giving us the best of both cultures.
In 2015, Docker started to open offices in Europe, starting with Cambridge in the United Kingdom and followed by Paris in France. With these two locations, the long road to gaining experience working with remote employees began.
Having multiple offices scattered around the world is different from being fully remote. But you still start experiencing some of the challenges of not having everybody in the same location simultaneously. We spent a great deal of our time on planes or trains visiting each other.
Despite the robust open-source culture of the company, which shows that you can build great software while not having everybody in the same room, we still had a very office-centric culture. A lot of the Continue reading
I hadn’t done a personal hardware refresh in a while; my laptop was a 2017-era MacBook Pro (with the much-disliked butterfly keyboard) and my tablet was a 2014-era iPad Air 2. Both were serviceable but starting to show their age, especially with regard to battery life. So, a little under a month ago, I placed an order for some new Apple equipment. Included in that order was a new 2020 13" MacBook Pro with the Apple-designed M1 CPU. In this post, I’d like to provide a brief review of the 2020 M1-based MacBook Pro based on the past month of usage.
The “TL;DR” of my review is this: the new M1-based MacBook Pro offers impressive performance and even more impressive battery life. While the raw performance may not “blow away” its 2020 Intel-based counterpart—at least, it didn’t in my real-world usage—the M1-based MacBook Pro offered consistently responsive performance with a battery life that easily blew past any other laptop I’ve ever used, bar none.
Read on for more details.
The build quality is really good, with a significant improvement in keyboard quality relative to the earlier butterfly keyboard models (such as my 2017-era MacBook Pro). However, the overall design Continue reading
As we all know, Ansible is a well-adapted tool for the end-to-end automation of IT infrastructures. At the same time, due to the addition of new features and developments within the project, the Ansible community is growing at an accelerated rate. To help structure the project and also to facilitate the change in direction, we are launching a Steering Committee for the Ansible Community Project.
The Steering Committee’s role is to provide guidance, suggestions, and ensure delivery of the Ansible Community package. The committee shall be broadly representative of the planning and approval areas.
The initial Steering Committee members, selected based on their wide knowledge of and active contributions to the Ansible project, are:
Members of the committee will work with community users plus Ansible teams within Red Hat to assist in the composition of idea proposals/new collection inclusion requests. Rather than advocating on behalf of particular interests or perspectives, the job of the Steering Committee members is to listen carefully to their fellow community members, discuss, Continue reading
It’s here! Ready or not, DockerCon — our free, one-day, all-digital event designed for developers by developers — has arrived. Registration is open until 9 a.m., so if you haven’t already done so, go ahead and sign up!
This is your chance to learn all you can about modern application delivery in a cloud-native world — including the application development technology, skills, tools and people you need to help solve the problems you face day to day.
Final reminders: Don’t forget to catch our line-up of keynote speakers including Docker CEO Scott Johnston, and to bring your questions to Live Panels hosted by Docker Captain Bret Fisher, as well as our two developer-focused panels and Hema Ganapathy’s women’s panel. Just put your questions on selected topics in chat, and the team will do their best to answer them.
If you still need guidance on what to focus on, here’s a reminder of what not to miss. And don’t forget to come celebrate our global community in Community Rooms — a first at DockerCon.
That’s it! Now go forth and carpe DockerCon!
DockerCon LIVE 2021
Join us for DockerCon LIVE 2021 on Thursday, May 27. DockerCon LIVE is Continue reading
With DockerCon just a day away, let’s not forget to give a big THANK YOU to all our sponsors.
As our ecosystem partners, they play a central role in our strategy to deliver the best developer experience from local desktop to cloud, and/or to offer best-in-class solutions to help you build apps faster, easier and more securely. Translation: We couldn’t do what we do without them.
So be sure to visit their virtual rooms and special sessions at DockerCon this Thursday, May 27. With more than 20 Platinum, Gold or Silver sponsors this year, you’ll have plenty to choose from.
For example, check out AWS’s virtual room and the session with AWS Principal Technologist Massimo Re Ferrè at 3:15 p.m.-3:45 p.m. PDT.
And check out Microsoft’s virtual room and any of the three sessions it’s offering — How to Package DevOps Tools Using Docker Containers (3:45 p.m.- 4:15 p.m.), Container-Based Development with Visual Studio Code (4:15 p.m.- 4:45 p.m.), and Supercharging Machine Learning Development with Azure Machine Learning and Containers in VS Code! (4:45 p.m.- 5:15 p.m.).
Or there’s Mirantis’ virtual room and their two Continue reading
At Docker, we feel strongly about embracing diversity and we are committed to being proactive with respect to inclusion. As an example of our support for diversity, we are hosting the Community Rooms during DockerCon with panels and sessions for our global audience in their native languages. We are also highlighting the contributions from our women Captains and community developers.
At DockerCon, the Women in Tech panel will focus on the breadth and depth of knowledge from our panelists and their experiences using Docker technology throughout their career. Join us as we discuss the career choices that led these women to become application developers and hear about key innovations that they are working on.
Women in Tech Panel 4:15 Pacific on May 27, 2021
This panel is just one part of a one-day event packed with demonstrations, product announcements, company updates and more, all of it focused on modern application delivery in a cloud-native world.
Our panelists and moderators include:
Hema Ganapathy – Moderator
Product Marketing, Docker
Hema is a highly seasoned technology professional with 30+ years of experience in software development, telecommunications, cloud computing and big data. She has held senior positions in Continue reading