Packet Pushers co-founders Ethan Banks and Greg Ferro join the 50th episode of IPv6 Buzz to talk about why network engineers haven't prioritized IPv6, and how to change that.
One of the common themes – one could even say the main theme – of The Next Platform is that some of the technologies developed by the high performance supercomputing centers (usually in conjunction with governments and academia), the hyperscalers, the big cloud builders, and a handful of big, innovative enterprises eventually get hardened, commercialized, and pushed out into the larger mainstream of information technology. …
“Build once, deploy anywhere” sounds great on paper, but if you want to use ARM targets to reduce your bill, such as Raspberry Pis and AWS A1 instances, or even keep using your old i386 servers, deploying everywhere becomes tricky: you need to build your software for each of these platforms. To solve this problem, Docker introduced multi-arch builds, and we’ll see how to use them and put them into production.
Quick setup
To be able to use the docker manifest command, you’ll have to enable the experimental features.
On macOS and Windows, it’s really simple. Open the Preferences > Command Line panel and just enable the experimental features.
On Linux, you’ll have to edit ~/.docker/config.json and restart the engine.
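On Linux, the edit itself is small. As a minimal sketch, assuming a fresh setup with no existing config file (if you already have a config.json, for example with registry logins in it, add the key to it instead of overwriting the file like this):

```shell
# Enable experimental CLI features (needed for `docker manifest`).
# The Docker CLI reads its config from ~/.docker/config.json by default.
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "experimental": "enabled"
}
EOF
```

After restarting, `docker manifest --help` should list the available subcommands instead of reporting an unknown command.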
Under the hood
OK, now we understand why multi-arch images are interesting, but how do we produce them? How do they work?
Each Docker image is represented by a manifest: a JSON file containing all the information about the image. This includes references to each of its layers and their sizes, the hash (digest) of the image, its total size, and the platform it’s supposed to work on. Continue reading
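For a multi-arch image, the registry stores a manifest list that points at one manifest per platform. A trimmed example of what `docker manifest inspect` returns looks roughly like this (the media types are the real Docker schema 2 values; the digests and sizes here are illustrative):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:aaaa…",
      "size": 527,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:bbbb…",
      "size": 527,
      "platform": { "architecture": "arm64", "os": "linux" }
    }
  ]
}
```

At pull time, the Docker engine picks the entry whose platform matches the host, which is how a single tag can serve every architecture.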
Nelso Rodríguez is a nurse and one of the founders of the community network in El Cuy, Patagonia, Argentina.
A little over a year ago, there was no Internet or mobile connection in my isolated town of El Cuy, population 540.
After several unsuccessful attempts by the provincial government to bring in satellite Internet – which was of very poor quality, at very high costs – our people began to demand an Internet for all.
After a newspaper article exposed our lack of connectivity, the Internet Society offered to help us set up a community network. There was a fiber-optic cable passing through a village just 50 kilometers away, which also had a tower with an Internet signal. So, a group of residents volunteered to dig the trenches for the anchors and lay the concrete. The Internet Society financed all the material and provided the technical expertise. After making a connection between two antennas, we managed to bring community Internet to El Cuy.
Our network has been up and running since February 2019, reaching almost 400 residents in El Cuy and another 100 in the nearby town of Cerro Policía. This has been spectacular in so many ways!
The proliferation of DDoS attacks of varying size, duration, and persistence has made DDoS protection a foundational part of every business and organization’s online presence. However, there are key considerations that security and risk management technical professionals need to evaluate when selecting a DDoS protection solution, including network capacity, management capabilities, global distribution, alerting, reporting, and support.
Gartner’s view of DDoS solutions: how did Cloudflare fare?
Gartner recently published the report Solution Comparison for DDoS Cloud Scrubbing Centers (ID G00467346), authored by Thomas Lintemuth, Patrick Hevesi and Sushil Aryal. This report lets customers view a side-by-side comparison of different DDoS cloud scrubbing centers measured against common assessment criteria. If you have a Gartner subscription, you can view the report here. Cloudflare received the greatest number of ‘High’ ratings compared to the 6 other DDoS vendors across the report’s 23 assessment criteria.
The vast landscape of DDoS attacks
From our perspective, the nature of DDoS attacks has transformed as the economics and ease of launching an attack have changed dramatically. With the rise of cost-effective ways to launch a DDoS attack, we have observed a rise in the number of under-10-Gbps DDoS Continue reading
Want to trigger linting of your Ansible code on every pull request?
In this blog, I will show you how to add some great automation into your Ansible code pipeline.
CI/CD is currently a pretty hot topic for developers, and operations teams can get started with some automated linting using GitHub Actions. If you use GitHub, you can lint your playbooks at different stages, including on pushes and pull requests.
If you’re following good git flow practices and have an approval committee reviewing pull requests, this type of automated testing can save you a lot of time and keep your Ansible code nice and clean.
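As a sketch of that pipeline, a minimal GitHub Actions workflow that runs Ansible Lint on every push and pull request could look like this (the file path and playbook glob are assumptions; adjust them to your repo layout):

```yaml
# .github/workflows/ansible-lint.yml (illustrative)
name: Ansible Lint
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install ansible-lint
        run: pip install ansible-lint
      - name: Lint playbooks
        run: ansible-lint playbooks/*.yml
```

Any lint failure fails the check on the pull request, so reviewers see problems before they ever read the diff.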
What is Ansible Lint?
Ansible Lint is an open source project that lints your Ansible code. The docs state that it checks playbooks for practices and behavior that could potentially be improved. It can be installed with pip and run manually against playbooks, or set up as a pre-commit hook that runs whenever you attempt a commit on your repo from the CLI.
The project can be found under the Ansible org on GitHub.
Last year, Cloudflare announced the planned expansion of our partner program to help managed and professional service partners efficiently engage with Cloudflare and join us in our mission to help build a better Internet. Today, we want to highlight some of those amazing partners and our growing support and training for MSPs around the globe. We want to make sure service partners have the enablement and resources they need to bring a more secure and performant Internet experience to their customers.
This partner program tier is specifically designed for professional service firms and Managed Service Providers (MSPs and MSSPs) that want to build value-added services and support Cloudflare customers. While Cloudflare is hyper-focused on building highly scalable and easy-to-use products, we recognize that some customers may want to engage a professional services firm to help them maximize the value of our offerings. Whether they’re building Cloudflare Workers, implementing multi-cloud load balancing, or managing WAF and DDoS events, our partner training and support enable sales and technical teams to position and support the Cloudflare platform as well as enhance their services businesses.
Training
Our training and certification are meant to help partners through each stage of Cloudflare adoption, Continue reading
As developers, operators, and DevOps people, we are all hungry for visibility and efficiency in our workflows. With Linux reigning over the “Open-Distributed-Virtualized-Software-Driven-Cloud-Era,” understanding what Linux offers in terms of observability is essential to our jobs and careers.
Linux Community and Ecosystem around Observability
More often than not, and depending on the size of the company, it’s hard to justify the cost of developing debug and tracing tools unless they’re part of a product you’re selling. Like any other Linux subsystem, the tracing and observability infrastructure and ecosystem continue to grow and advance thanks to mass innovation and the sheer advantage of distributed, accelerated development. Naturally, bringing native Linux networking to the open networking world makes these technologies readily available for networking.
There are many books and other resources available on Linux system observability today, so this may seem like more of the same. But this is a starter blog discussing some of the building blocks Linux provides for tracing and observability, with a focus on networking. It is not meant to be an in-depth tutorial on observability infrastructure, but a summary of the subsystems that are available today and are constantly being enhanced by the Linux networking community. Continue reading
DockerCon LIVE 2020 is the opportunity for the Docker Community to connect while socially distancing, have fun, and learn from each other. From beginner to expert, DockerCon speakers are getting ready to share their tips and tricks for becoming more productive and finding joy while building apps.
From the Docker team, engineer Dave Scott will share how recent updates to Docker Desktop help deliver quicker edit-compile-test cycles to developers. He’ll dive into the New Docker Desktop Filesharing Features that enabled this change and how to use them effectively to speed up your dev workflow.
And for PHP devs, Erika Heidi from DigitalOcean will demonstrate How to Create PHP Development Environments with Docker Compose, using a Laravel 6 application as a case study. She’ll demonstrate how to define and integrate services, share files between containers, and manage your environment with Docker Compose commands.
10ish ways to explore your network with Suzieq
Suzieq is new software for network observability. In this blog I will go over some of the things you can do with Suzieq to help you explore and understand your network. We have suzieq-data that you can use to investigate Suzieq…