Intro Guide to Dockerfile Best Practices

There are over one million Dockerfiles on GitHub today, but not all Dockerfiles are created equal. Efficiency is critical, and this blog series will cover five areas of Dockerfile best practices to help you write better Dockerfiles: incremental build time, image size, maintainability, security, and repeatability. If you’re just beginning with Docker, this first blog post is for you! The next posts in the series will be more advanced.

Important note: the tips below follow the journey of ever-improving Dockerfiles for an example Java project based on Maven. The last Dockerfile is thus the recommended Dockerfile, while all intermediate ones are there only to illustrate specific best practices.

Incremental build time

In a development cycle, when building a Docker image, making code changes, then rebuilding, it is important to leverage caching. Caching helps you avoid re-running build steps whose inputs have not changed.

Tip #1: Order matters for caching

However, the order of the build steps (Dockerfile instructions) matters: when a step’s cache is invalidated, either by changed files or by modified lines in the Dockerfile, the cache for every subsequent step is invalidated as well. Order your steps from least to most frequently changing to optimize caching.
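To make the ordering concrete, here is a minimal sketch of how a Maven-based project like the series’ example might arrange its steps so the slow, rarely-changing ones come first. The base image tag, paths, and the dependency:go-offline step are illustrative assumptions, not the article’s actual Dockerfile.

    # Illustrative sketch only: steps ordered from least to most frequently changing.
    FROM maven:3-jdk-8
    WORKDIR /app

    # pom.xml changes rarely, so copy it alone and resolve dependencies first;
    # this layer stays cached across ordinary source-code edits.
    COPY pom.xml .
    RUN mvn -B dependency:go-offline

    # Source code changes often; copying it last means a code change only
    # invalidates the cache from this step onward.
    COPY src ./src
    RUN mvn -B package

With this ordering, editing a .java file re-runs only the final COPY and package steps; copying the whole project before resolving dependencies would instead force the dependency download on every code change.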

Tip #2: Continue reading

Understanding RTs and RDs

One of the items that continues to come up in my conversations with folks learning about MPLS VPNs is defining what a Route Target (RT) and a Route Distinguisher (RD) are. More specifically, most seem to understand their purpose – but often they don’t quite understand the application. I (and many others – just google “Understanding RDs and RTs”) have written about this in the past, but I’m hoping to put a finer point on the topic in this post.

If someone were to ask me to summarize what route targets and route distinguishers were – I’d probably define them like this…

Route Distinguishers – serve to make routes unique
Route Targets – metadata used to make route import decisions

Now – I’ll grant that those definitions are awfully terse, but I also feel like this is a topic that is often overcomplicated. So let’s spend some time talking about RTs and RDs separately and then bring it all together in a lab so you can see what’s really happening.
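As a quick preview of where each one lives, here is a minimal, hypothetical IOS-style VRF snippet; the VRF name, ASN, and values are made up purely for illustration.

    ! Hypothetical IOS-style VRF configuration; names and values are illustrative.
    ip vrf CUSTOMER-A
     ! The RD is prepended to each prefix, so 10.1.1.0/24 learned in this VRF is
     ! carried through MP-BGP as the unique VPNv4 route 65000:100:10.1.1.0/24.
     rd 65000:100
     ! RTs are extended-community metadata attached to routes exported from this VRF;
     ! a PE imports a received route into the VRF only if it carries a matching import RT.
     route-target export 65000:100
     route-target import 65000:100

Note that the RD only makes the route unique; the import decision is driven entirely by the RTs.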

Route Distinguishers

As I said, a route distinguisher serves to make routes look unique. So why do we care about making routes look unique? I’d argue one of Continue reading

Cloudflare outage caused by bad software deploy (updated)

This is a short placeholder blog and will be replaced with a full post-mortem and disclosure of what happened today.

For about 30 minutes today, visitors to Cloudflare sites received 502 errors caused by a massive spike in CPU utilization on our network. This CPU spike was caused by a bad software deploy that was rolled back. Once rolled back, the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.

This was not an attack (as some have speculated), and we are incredibly sorry that this incident occurred. Internal teams are meeting as I write this, performing a full post-mortem to understand how this occurred and how we prevent it from ever happening again.


Update at 2009 UTC:

Starting at 1342 UTC today we experienced a global outage across our network that resulted in visitors to Cloudflare-proxied domains being shown 502 errors (“Bad Gateway”). The cause of this outage was deployment of a single misconfigured rule within the Cloudflare Web Application Firewall (WAF) during a routine deployment of new Cloudflare WAF Managed rules.

The intent of these new rules was to improve the blocking of inline JavaScript that is used in attacks. These rules were Continue reading

Transparency, Fairness, and Respect: The Policy Brief on Responsible Data Handling

It’s been a little over a year since the European Union’s General Data Protection Regulation (GDPR) was implemented, but almost immediately, people noticed its impact. First, there was the flurry of emails seeking users’ consent to the collection and use of their data. Since then, there’s also been an increase in the number of sites that invite the user to consent to tracking by clicking “Yes to everything,” or to reject it by going through a laborious process of clicking “No” for each individual category. (Though some non-EU sites simply broadcast “if we think you’re visiting from the EU, we can’t let you access our content.”) There was also the headline-grabbing €50 million fine imposed on Google by the French supervisory authority.

In its summary of the year, the EU Data Protection Board (EDPB) reported an increase in the number of complaints received under GDPR, compared to the previous year, and a “perceived rise in awareness about data protection rights among individuals.” Users are more informed and want more control over the collection and use of their personal data.

They’re probably irritated by the current crop of consent panels, and either ignore, bypass, or click through them Continue reading

BrandPost: “Shift Left” to Push Customer Support into Overdrive

The “Shift Left” concept is all about efficiency and quality. In software development, shifting left means performing testing early and often in the project lifecycle instead of waiting until the end. By discovering and addressing errors and bugs earlier, teams can ultimately deliver a higher-quality product, one that is better aligned with customers’ needs. In support, it means shifting requests as close to the customer as possible – which includes offering the ability to self-serve. Moving solutions closer to the operational frontline and to the point of the first issue allows customers to get answers more quickly and organizations to close tickets faster. There are three big benefits to taking this approach. To read this article in full, please click here

BrandPost: SD-WAN Buyers Should Think Application Performance as well as Resiliency

As an industry analyst, not since the days of WAN Optimization have I seen a technology gain as much interest as I am seeing with SD-WANs today. Although full deployments are still limited, nearly every network manager, and many of the IT leaders I talk to, are interested in it. The reason for this is two-fold – the WAN has grown in importance for cloud-first enterprises and is badly in need of an overhaul. This hasn’t gone unnoticed by the vendor community, as there has been an explosion of companies bringing a broad range of SD-WAN offerings to market. The great news for buyers is that there is no shortage of choices. The bad news is that there are too many choices, which makes the right decision difficult. To read this article in full, please click here

Light Board Video Series: VMware NSX Cloud

Over the last decade there has been a gradual, continuous shift of enterprise software applications away from the data center and towards one or multiple public clouds. As more and more applications are built natively in public clouds like AWS or Azure, the management of networking and security for those workloads becomes more complex: each cloud has its own set of unique constructs that must be managed independently of those in the data center.

What if there was a way to unify all of those workloads under one consistent networking fabric that can manage one standard set of networking and security policies across both on-premises and public clouds? This is where VMware NSX Cloud comes in.

What is NSX Cloud?

Designed specifically for public-cloud-native workloads, NSX Cloud extends VMware NSX software-defined networking and security from the data center to multiple public clouds, enabling consistent policy management from a single NSX interface.

To explain what NSX Cloud is and how it can deliver consistent hybrid networking and security for you, we asked our product manager Shiva Somasundaram to record a three-part lightboard video series.

Part 1: NSX Cloud Overview

Shiva gives a high-level overview of what NSX Cloud is and how Continue reading

The Week in Internet News: Small Routing Error Has Big Consequences

Bad route: A small routing error led to Internet outages in the Northeastern United States on June 24, Inc.com reports. Small network services provider DQE Communications shared inaccurate routing information with Verizon, which then passed it along to the wider network. Internet services were flaky for about two hours, with Verizon Fios phone and Internet services in Virginia, Massachusetts, New York, New Jersey, Pennsylvania, and other states affected, the Washington Post said. Server issues also affected Reddit, Twitch, and video gaming service Discord.

Attacking encryption? U.S. President Donald Trump’s National Security Council recently discussed ways to prohibit companies from offering customers unbreakable encryption, Politico reports. Officials debated whether to ask Congress to effectively outlaw end-to-end encryption, according to anonymous sources.

Embrace the dark side: Government entities looking to improve Internet speeds in their areas should consider dark fiber when it’s available, advises AmericanCityandCounty.com. Switching to dark fiber can offer both performance improvement and cost savings, but the transition can demand a major overhaul.

Service restored, for one guy: Sudan’s three-week Internet shutdown keeps going, except for one lawyer, who won a lawsuit against telecom operator Zain Sudan over the blackout ordered by the country’s military rulers, the Continue reading

BrandPost: How does David battle Goliath? With great strategy and the technology to implement it

The classic story of David battling Goliath resonates with any successful entrepreneur. At some point, small companies must confront large, entrenched rivals. Those big companies possess clear advantages: brand recognition, economies of scale, financial leverage and many others. Customers need a compelling reason to switch providers. How do would-be Davids compete? They need to develop their own modernized slingshot. Technology provides virtually endless possibilities for competitive advantage. Like David, though, you need to size up your opponent and adopt the right strategy before choosing your weapon. In the United Kingdom, a company called Ocado did just that in the exotic, sophisticated market of … grocery stores. To read this article in full, please click here

Network Break 241: Extreme Buys Aerohive; Sloppy BGP Plumbing Causes Route Leak

Extreme Networks spends approx. $227 million to buy Aerohive Networks to add a cloud-managed WLAN to its portfolio, a route leak resulted in cascading failures on June 24th, Oracle will retire Dyn managed DNS services, Mist Systems rolls out a new 11ax AP, and more tech news on today's Network Break podcast.

The post Network Break 241: Extreme Buys Aerohive; Sloppy BGP Plumbing Causes Route Leak appeared first on Packet Pushers.

Leaving Comments in Code Expressed Artefacts

The week of 24th June 2019 was interesting. We had #ferrogate, which made a lot of network engineers very unhappy, and also an ongoing social media thread on code comments. For this discussion, I’m going with the title of "leaving comments in code expressed artefacts" because code represents more than writing software. I feel quite passionately about this, having been on the raw end of no code comments and also being guilty of leaving plenty of crappy and unhelpful comments too.

The Mystic Arts

Let’s set a scene. You’ve had a long day and you’re buckled in for what can only be described as a mentally exhausting night. The system architecture is clearly formed in your head and you’re beginning to see issues ahead of time. You can’t quite justify any premature optimisation, but you know this current design has a ceiling. You also know there are system-wide intricacies that are not obvious at the component level.

Normality in these scenarios is to insert context-based comments, which make perfect sense at 2am, but at 9am the next day, exhausted, you may be confused as to what on earth happened in the early hours. We’ve all been there.

There are multiple trains Continue reading