
Check out our twenty-first edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.
Sign up below to have The Serverlist sent directly to your mailbox.
Continuing our commitment to helping organizations around the world deliver a public cloud experience in the data center through VMware’s Virtual Cloud Network, we’re excited to announce the general availability of VMware NSX-T™ 3.1. This latest release of our full-stack Layer 2-7 networking and security platform delivers capabilities that allow you to build modern networks at cloud scale while simplifying operations and strengthening security for east-west traffic inside the data center.
As we continue to adapt to new realities, organizations need to build modern networks that can deliver any application, to any user, anywhere at any time, over any infrastructure — all while ensuring performance and connectivity objectives are met. And they need to do this at public cloud scale. NSX-T 3.1 gives organizations a way to simplify modern networks and replace legacy appliances that congest data center traffic. The Virtual Cloud Network powered by NSX-T enables you to achieve a stronger security posture and run virtual and containerized workloads anywhere.
On August 13th, we announced the implementation of rate limiting for Docker container pulls for some users. Beginning November 2, Docker will begin phasing in limits of Docker container pull requests for anonymous and free authenticated users. The limits will be gradually reduced over a number of weeks until the final levels (where anonymous users are limited to 100 container pulls per six hours and free users limited to 200 container pulls per six hours) are reached. All paid Docker accounts (Pro, Team or Legacy subscribers) are exempt from rate limiting.
The rationale behind the phased implementation is to let our anonymous and free-tier users and integrators see where anonymous CI/CD processes are pulling container images. Docker users can then address the limits in one of two ways: upgrade to an unlimited Docker Pro or Docker Team subscription, or adjust application pipelines to accommodate the container image request limits. After a lot of thought and discussion, we’ve decided on this gradual, phased rollout over the upcoming weeks instead of an abrupt implementation of the policy. An up-to-date status update on rate limitations is available at https://www.docker.com/increase-rate-limits.
Docker users Continue reading
When it comes to data-intensive supercomputing, few centers have the challenges Pawsey Supercomputing Centre faces. …
Pawsey Finds I/O Sweet Spots for Data-Intensive Supercomputing was written by Nicole Hemsoth at The Next Platform.
Continuing with our move towards consumption-based limits, customers will see the new rate limits for Docker pulls of container images at each tier of Docker subscriptions starting from November 2, 2020.
Anonymous free users will be limited to 100 pulls per six hours, and authenticated free users will be limited to 200 pulls per six hours. Docker Pro and Team subscribers can pull container images from Docker Hub without restriction as long as the quantities are not excessive or abusive.
In this article, we’ll take a look at determining where you currently fall within the rate limiting policy using some command line tools.
Requests to Docker Hub now include rate limit information in the response headers for requests that count towards the limit. These are named as follows:
The RateLimit-Limit header contains the total number of pulls that can be performed within a six hour window. The RateLimit-Remaining header contains the number of pulls remaining for the six hour rolling window.
Let’s take a look at these headers using the terminal. But before we can make a request to Docker Hub, we need to obtain a bearer token. We will then Continue reading
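As a sketch of what those headers carry: Docker Hub returns values along the lines of `ratelimit-limit: 100;w=21600`, a pull count followed by the window length in seconds (21600 seconds is six hours). A minimal Python sketch of parsing such a value, assuming that header format; the helper name is ours, not Docker's:

```python
def parse_ratelimit(value: str) -> dict:
    """Parse a Docker Hub RateLimit-style header value.

    Assumed format (e.g. "100;w=21600"): a pull count,
    then attributes such as the window length in seconds.
    """
    count_part, _, attrs_part = value.partition(";")
    result = {"pulls": int(count_part)}
    for attr in attrs_part.split(";"):
        if attr.startswith("w="):
            result["window_seconds"] = int(attr[2:])
    return result

# Example: a six-hour window allowing 100 pulls
limits = parse_ratelimit("100;w=21600")
print(limits)  # {'pulls': 100, 'window_seconds': 21600}
```

Comparing the parsed `RateLimit-Remaining` value against `RateLimit-Limit` after each request tells you how close a pipeline is to the cap.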
Today’s Heavy Networking show dives into Digital Experience Monitoring (DEM) with sponsor Catchpoint. Catchpoint combines synthetic testing with end user device monitoring to provide greater visibility into the end user experience while helping network engineers and IT admins support and troubleshoot a distributed workforce. Our guests from Catchpoint are Nik Koutsoukos, CMO; and Tony Ferelli, VP Operations.
The post Heavy Networking 547: Building And Monitoring A User-Centric Digital Experience With Catchpoint (Sponsored) appeared first on Packet Pushers.
If hardware doesn’t scale well, either up into a more capacious shared memory system or out across a network of modestly powered nodes, there isn’t much you can do about it. …
Where Latency Is Key And Throughput Is Of Value was written by Timothy Prickett Morgan at The Next Platform.

Can you hear me? Are you listening to me? Those two questions are used frequently to see if someone is paying attention to what you’re saying. Their connotations are very different, though. One merely asks whether you can tell that words are coming out of someone’s mouth: is the language something you can process? The other question is all about understanding.
“Seek first to understand, then to be understood.” – Stephen Covey
Listening is hard. Like super hard. How often do you find yourself on a conference call with your mind wandering to other things you need to take care of? How many times have we seen someone shopping online for shoes or camping gear instead of taking notes on the call they should be paying attention to? The answer is more often than we’d like to admit.
Attention spans are hard for everyone, whether you’re affected by attention disorders or have normal brain chemistry. Our minds hate being bored. They’re always looking for a way to escape to something more exciting and stimulating. You know you can feel it when there’s a topic that seriously interests you and pulls you in versus the same old Continue reading

We recently released a new version of Cloudflare Resolver which adds a piece of information called “Extended DNS Errors” (EDE) along with the response code under certain circumstances. This will be helpful in tracing DNS resolution errors and figuring out what went wrong behind the scenes.

The DNS protocol was designed to map domain names to IP addresses. To inform the client about the result of a lookup, the protocol has a 4-bit field, called the response code (RCODE). The logic to serve a response might look something like this:
function lookup(domain) {
    ...
    switch result {
    case "No error condition":
        return NOERROR with client expected answer
    case "No record for the request type":
        return NOERROR
    case "The requested domain does not exist":
        return NXDOMAIN
    case "Refuse to perform the specified operation for policy reasons":
        return REFUSED
    default("Server failure: unable to process this query due to a problem with the name server"):
        return SERVFAIL
    }
}

try {
    lookup(domain)
} catch {
    return SERVFAIL
}
Although the context hasn't changed much, protocol extensions such as DNSSEC have been added, which has left the 4-bit RCODE without enough space to express the server's internal Continue reading
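On the wire, an Extended DNS Error travels as an EDNS0 option (option code 15 per RFC 8914): a 2-byte info-code followed by optional UTF-8 extra text. A minimal Python sketch of pulling EDE options out of OPT record RDATA, assuming well-formed input; the function name is ours for illustration:

```python
import struct

def parse_ede_options(rdata: bytes):
    """Collect Extended DNS Error options (RFC 8914) from
    EDNS0 OPT RDATA.

    Each EDNS option is: 2-byte option code, 2-byte length,
    then the body. An EDE body (option code 15) is a 2-byte
    info-code followed by optional UTF-8 extra text.
    """
    errors = []
    offset = 0
    while offset + 4 <= len(rdata):
        code, length = struct.unpack_from("!HH", rdata, offset)
        offset += 4
        body = rdata[offset:offset + length]
        offset += length
        if code == 15 and len(body) >= 2:  # EDE option
            (info_code,) = struct.unpack_from("!H", body, 0)
            extra_text = body[2:].decode("utf-8", errors="replace")
            errors.append((info_code, extra_text))
    return errors

# Example: info-code 6 ("DNSSEC Bogus") with extra text
rdata = struct.pack("!HHH", 15, 2 + len(b"bogus"), 6) + b"bogus"
print(parse_ede_options(rdata))  # [(6, 'bogus')]
```

A resolver that attaches such an option alongside SERVFAIL lets the client distinguish, say, a DNSSEC validation failure from an unreachable authority.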
After describing Cisco SD-WAN architecture and routing capabilities, David Penaloza focused on the onboarding process and tasks performed by the Cisco SD-WAN solution (encryption, tunnel establishment, and device onboarding) in its so-called Orchestration Plane.