We recently made available an experimental alpha Collection of generated modules that use the AWS Cloud Control API to interact with AWS services. This content is not intended for production use in its current state; we are making it available because we think it is important to share our research and get your feedback.
In this post, we’ll highlight how to try out this alpha release of the new amazon.cloud content Collection.
Launched in September 2021 and featured at AWS re:Invent, AWS Cloud Control API is a set of common application programming interfaces (APIs) that exposes five operations for creating, reading, updating, deleting, and listing (CRUDL) resources, making it easier for developers and partners to manage the lifecycle of AWS and third-party services in a standard way.
Cloud Control API supports hundreds of AWS resources today, with more resources across services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) to be added in the coming months.
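To make the CRUDL model concrete, here is a minimal sketch of the generic Cloud Control calls (via boto3's cloudcontrol client) that a generated collection like this can build on. This is not code from the amazon.cloud Collection itself, and the bucket name and region are placeholder values.

import json
import time

import boto3

cc = boto3.client("cloudcontrol", region_name="us-east-1")

# Create: every resource type goes through the same generic operation,
# with the desired state expressed as a JSON document.
create = cc.create_resource(
    TypeName="AWS::S3::Bucket",
    DesiredState=json.dumps({"BucketName": "my-example-bucket-1234"}),  # placeholder name
)

# Creation is asynchronous; poll the request status until it settles.
token = create["ProgressEvent"]["RequestToken"]
while True:
    status = cc.get_resource_request_status(RequestToken=token)
    if status["ProgressEvent"]["OperationStatus"] in ("SUCCESS", "FAILED"):
        break
    time.sleep(2)

# Read and list use the same uniform shape for any supported resource type.
bucket = cc.get_resource(TypeName="AWS::S3::Bucket", Identifier="my-example-bucket-1234")
print(bucket["ResourceDescription"]["Properties"])

for desc in cc.list_resources(TypeName="AWS::S3::Bucket")["ResourceDescriptions"]:
    print(desc["Identifier"])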
AWS delivers a broad and deep portfolio of cloud services. It started with Amazon Simple Storage Service (Amazon S3) and grew over Continue reading
It can be frustrating to get errors (SERVFAIL response codes) returned from your DNS queries. It can be even more frustrating if you don’t get enough information to understand why the error is occurring or what to do next. That’s why, back in 2020, we launched support for Extended DNS Error (EDE) codes on 1.1.1.1.
As a quick refresher, EDE codes are a proposed IETF standard enabled by the Extension Mechanisms for DNS (EDNS) spec. The codes return extra information about DNS or DNSSEC issues without touching the RCODE so that debugging is easier.
Now we’re happy to announce we will return more error code types and include additional helpful information to further improve your debugging experience. Let’s run through some examples of how these error codes can help you better understand the issues you may face.
To try this for yourself, you’ll need to run the dig or kdig command in a terminal. For dig, make sure you have version 9.11.20 or above. macOS 12.1 ships with dig 9.10.6 by default; install a newer version of BIND to get an up-to-date dig.
Let’s start with the output of an example Continue reading
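If you prefer to poke at this from a script rather than dig, here is a rough dnspython sketch. It assumes dnspython 2.2 or later, which parses EDE options into objects carrying an info code and optional extra text; dnssec-failed.org is a commonly used, intentionally broken DNSSEC test zone.

import dns.edns
import dns.message
import dns.query
import dns.rcode

# Ask 1.1.1.1 for a deliberately DNSSEC-broken name, with EDNS and the DO bit set.
query = dns.message.make_query("dnssec-failed.org", "A", use_edns=0, want_dnssec=True)
response = dns.query.udp(query, "1.1.1.1", timeout=5)

print("rcode:", dns.rcode.to_text(response.rcode()))
for option in response.options:
    if option.otype == 15:  # EDE option code; parsed as EDEOption in dnspython >= 2.2
        print("EDE", int(option.code), option.text or "")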
Since my previous blog about Secondary DNS, Cloudflare's DNS traffic has more than doubled from 15.8 trillion DNS queries per month to 38.7 trillion. Our network now spans over 270 cities in over 100 countries, interconnecting with more than 10,000 networks globally. According to w3 stats, “Cloudflare is used as a DNS server provider by 15.3% of all the websites.” This means we have an enormous responsibility to serve DNS in the fastest and most reliable way possible.
Although the response time of our DNS queries is the most important performance metric, another metric sometimes goes unnoticed: DNS record propagation time, that is, how long it takes a change submitted to our API to be reflected in our DNS query responses. Every millisecond counts here, because it lets customers change configuration quickly and makes their systems much more agile. Although our DNS propagation pipeline was already known to be very fast, we had identified several improvements that, if implemented, would massively improve performance. In this blog post I’ll explain how we managed to drastically improve our DNS record propagation speed, and the Continue reading
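As a rough illustration of what propagation time means in practice, the sketch below (not Cloudflare’s own tooling) updates a TXT record through the standard dns_records API endpoint and then polls one of the zone’s assigned Cloudflare nameservers until the new value appears. The zone ID, record ID, hostname, token, and nameserver are all placeholders you would fill in.

import socket
import time

import dns.resolver
import requests

ZONE_ID = "your-zone-id"                       # placeholder
RECORD_ID = "your-record-id"                   # placeholder
NAME = "probe.example.com"                     # placeholder record name
TOKEN = "your-api-token"                       # placeholder API token
AUTH_NS = "your-assigned.ns.cloudflare.com"    # one of the zone's nameservers

new_value = f"propagation-test-{int(time.time())}"

# Push the change through the API.
resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"type": "TXT", "name": NAME, "content": new_value, "ttl": 60},
    timeout=10,
)
resp.raise_for_status()
start = time.monotonic()

# Poll the authoritative nameserver directly (no resolver cache in the way).
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [socket.gethostbyname(AUTH_NS)]
while True:
    answers = resolver.resolve(NAME, "TXT")
    if any(new_value in rr.to_text() for rr in answers):
        break
    time.sleep(0.05)

print(f"change visible after {(time.monotonic() - start) * 1000:.0f} ms")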
Scientists at the Institute for Computational Cosmology (ICC) in England spend most of their time trying to unravel the secrets held in the deepest parts of space, from the Big Bang and the origins of the universe to mysteries like dark matter. …
Switchless Network Steers COSMA7 Supercomputer Down Congested Pathways was written by Jeffrey Burt at The Next Platform.
Sponsored Feature Over the course of two decades, Intel drove the most important architectural change in the datacenter since the advent of the System/360 mainframe in the 1960s and the rise of RISC/Unix servers in the 1980s. …
Covering All The Hardware – And Software – Bases With IPUs was written by Timothy Prickett Morgan at The Next Platform.
I migrated my blog to Hugo two years ago, and never regretted the decision. At the same time I implemented version control with Git, and started using GitHub (and GitLab for a convoluted set of reasons) to host the blog repository.
After hesitating for way too long, I decided to go one step further and made the blog repository public. The next time a blatant error of mine annoys you, fork the repository, fix my blunder(s), and submit a pull request (or write a comment and I’ll fix stuff like I did in the past).
I recently got fiber to my house. Yay! So after getting hooked up, I started checking that everything looked sane and performant.
I encountered two issues. Normal people would not notice or be bothered by either of them. But I’m not normal people.
I’m still working on one of the issues (and may not be able to disclose the details anyway, as the root cause may be confidential), so today’s issue is traceroute.
In summary: A bad MPLS config can break traceroute outside of the MPLS network.
$ traceroute -q 1 seattle.gov
traceroute to seattle.gov (156.74.251.21), 30 hops max, 60 byte packets
1 192.168.x.x (192.168.x.x) 0.302 ms <-- my router
2 194.6.x.x.g.network (194.6.x.x) 3.347 ms
3 10.102.3.45 (10.102.3.45) 3.391 ms
4 10.102.2.29 (10.102.2.29) 2.841 ms
5 10.102.2.25 (10.102.2.25) 2.321 ms
6 10.102.1.0 (10.102.1.0) 3.454 ms
7 10.200.200.4 (10.200.200.4) 2. Continue reading
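For context on why MPLS handling matters here: traceroute works by sending probes with steadily increasing TTLs and recording which router returns an ICMP Time Exceeded for each one, so anything in the path that hides or mangles TTL expiry changes what you see. Below is a bare-bones Python illustration of that mechanism (a sketch, not the tool used above; it needs root for the raw ICMP socket and may pick up unrelated ICMP traffic).

import socket

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # One UDP probe per TTL; the hop where the TTL expires answers
        # with ICMP Time Exceeded, revealing its address.
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recv_sock.settimeout(timeout)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv_sock.recvfrom(512)  # source of the ICMP reply is the hop
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send_sock.close()
            recv_sock.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:
            break

traceroute("seattle.gov")

Whether the hops inside a label-switched core show up at all depends on how the MPLS network handles TTL propagation, which is exactly the kind of configuration issue the post goes on to describe.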
Calico Open Source is an industry standard for container security and networking that offers high-performance cloud-native scalability and supports Kubernetes workloads, non-Kubernetes workloads, and legacy workloads. Created and maintained by Tigera, Calico Open Source offers a wide range of support for your choice of data plane whether it’s Windows, eBPF, Linux, or VPP.
We’re excited to announce our new certification course for Azure, Certified Calico Operator: Azure Expert! This free, self-paced course is the latest in our series of four courses. If you haven’t had a chance to complete our previous courses, I highly recommend enrolling in them in the following order (or as you prefer).
Whether you have little to no experience with cloud concepts, have entry-level DevOps and engineering experience, are keen to learn more about Azure, or are already an Azure expert looking for a cloud networking and security solution, you will benefit from this course.
The course provides an introduction to the Azure cloud, covers managed, self-managed, and hybrid cluster deployment using Calico in Azure, and offers hands-on labs to help you explore most of Continue reading
As is well known, we like feeds and speeds and slots and watts metrics here at The Next Platform for any kind of gear that runs in the datacenter. …
The Value Proposition For Amazon’s Graviton3 Server Chip was written by Timothy Prickett Morgan at The Next Platform.
This installment of Russ White’s BGP course discusses how the BGP protocol calculates the best path for a route. Topics include routes to discard, weighting, shortest AS path, origin type, Multi-Exit Discriminator (MED), oldest eBGP path, and tiebreakers. You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a […]
The post Learning BGP Module 2 Lesson 4: Best Path – Video appeared first on Packet Pushers.
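As a companion to the decision order listed above, here is a toy Python sketch of how such an ordered comparison can be expressed. It follows only the attributes named in this lesson and leaves out steps a real BGP implementation performs (local preference, IGP metric to the next hop, and more), so treat it as an illustration rather than the algorithm from the video.

from dataclasses import dataclass

@dataclass
class Path:
    next_hop: str
    weight: int = 0              # Cisco-style weight: higher wins, local to the router
    as_path: tuple = ()          # shorter AS path wins
    origin: int = 2              # 0 = IGP, 1 = EGP, 2 = incomplete; lower wins
    med: int = 0                 # lower wins (ignoring comparability rules here)
    received_at: float = 0.0     # older eBGP path wins
    router_id: str = "0.0.0.0"   # final tiebreaker: lowest wins

def best_path(paths):
    """Pick the best of several valid paths using the lesson's ordering.
    'Routes to discard' (e.g. unreachable next hops) are assumed filtered out already."""
    return min(
        paths,
        key=lambda p: (
            -p.weight,            # 1. highest weight
            len(p.as_path),       # 2. shortest AS path
            p.origin,             # 3. lowest origin type
            p.med,                # 4. lowest MED
            p.received_at,        # 5. oldest eBGP path
            tuple(int(o) for o in p.router_id.split(".")),  # 6. tiebreaker
        ),
    )

print(best_path([
    Path("10.0.0.1", as_path=(65001, 65002), received_at=100.0, router_id="10.0.0.1"),
    Path("10.0.0.2", as_path=(65003,), received_at=200.0, router_id="10.0.0.2"),
]).next_hop)   # prints 10.0.0.2, which wins on the shorter AS path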