What are data centers? How they work and how they are changing in size and scope

A data center is a physical facility that enterprises use to house their business-critical applications and information. As data centers evolve from centralized on-premises facilities to edge deployments and public cloud services, it’s important to think long-term about how to maintain their reliability and security.

What is a data center?

Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements. These can be broken down into three categories:

Compute: The memory and processing power to run the applications, generally provided by high-end servers.
Storage: Important enterprise data is generally housed in a data center, on media ranging from tape to solid-state drives, with multiple backups.
Networking: Interconnections between data center components and to the outside world, including routers, switches, application-delivery controllers, and more.

These are the components that IT needs to store and manage the most critical resources that are vital to the continuous operations of an organization. Because of this, the reliability, efficiency, security, and constant evolution of data centers are typically a top priority. Both software and hardware security measures are a must.

The Week in Internet News: New York City Sued for Homework Gap

A virtual gap: Homeless advocates and legal groups have sued New York City over a lack of reliable Internet access in the city’s 27 homeless shelters, Reuters reports via WTVBam.com. Thousands of students living in the homeless shelters are struggling to keep up with virtual school during the COVID-19 pandemic, the plaintiffs say. The city has promised to install Wi-Fi service in the shelters. New York City recently returned to virtual school after COVID-19 rates ticked up.

Repair it yourself: The European Parliament has voted to make it easier to have electronic devices repaired by someone other than the company that sold them, Euronews.com says. The legislation would allow independent repairs without hurting the value of the device during trade-in, a move that’s a “major blow” to big device makers.

Device spying: The Singapore-based developer of Muslim Pro, a smartphone application targeted at Muslim users, has denied allegations that it is selling users’ personal data to the U.S. military, The Straits Times reports. However, developer Bitsmedia says it is immediately ending relationships with its data partners. Vice.com recently reported that the app was among several selling personal data to the U.S. military.

Facebook fined: The South Korean government Continue reading

Improving the Resiliency of Our Infrastructure DNS Zone

In this blog post we will discuss how we made our infrastructure DNS zone more reliable by using multiple primary nameservers to leverage our own DNS product running on our edge as well as a third-party DNS provider.

Authoritative Nameservers

You can think of an authoritative nameserver as the source of truth for the records of a given DNS zone. When a recursive resolver wants to look up a record, it will eventually need to talk to the authoritative nameserver(s) for the zone in question. If you’d like to read more on the topic, our learning center provides some additional information.

Here’s an example of our authoritative nameservers (replacing our actual domain with example.com):

~$ dig NS example.com +short
ns1.example.com.
ns2.example.com.
ns3.example.com.

As you can see, there are three nameservers listed. You’ll notice that the nameservers happen to reside in the same zone, but they don’t have to. Those three nameservers point to six anycasted IP addresses (3 x IPv4, 3 x IPv6) announced from our edge, which comprises data centers in 200+ cities around the world.
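If you prefer to poke at this from code rather than dig, here is a minimal sketch using the third-party dnspython package (an assumption on our part; any resolver library would do) that lists the NS records for a zone and then resolves each nameserver to its IPv4 and IPv6 addresses, illustrating how three names can map to six addresses. As above, example.com stands in for our real domain.

# Minimal sketch: enumerate a zone's nameservers and their addresses.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

zone = "example.com"  # placeholder zone, as in the dig example above

# The zone's NS records name its authoritative nameservers.
nameservers = [str(r.target) for r in dns.resolver.resolve(zone, "NS")]

for ns in sorted(nameservers):
    for rrtype in ("A", "AAAA"):  # IPv4 and IPv6 addresses of each nameserver
        try:
            addrs = [r.address for r in dns.resolver.resolve(ns, rrtype)]
        except dns.resolver.NoAnswer:
            addrs = []
        print(ns, rrtype, addrs)

Against a real zone you would expect each nameserver to return both an IPv4 and an IPv6 address – the six anycast addresses mentioned above.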

The Problem

We store the hostnames for all of our machines, both the ones at the Continue reading

Internet Society Continues Strong Support for the IETF’s Critical Work on Open Standards

Open standards and the role they play are an important part of what makes the Internet the Internet. A fundamental building block of the Internet and everything it enables, open standards allow devices, services, and applications to work together across the interconnected networks that make up the Internet that we depend on every day. 

In fact, every moment you are online, even just reading this blog post, you are relying on open standards such as DNS, HTTP, and TLS. They are a critical property of what we call the Internet Way of Networking.

Since its inception, the Internet Engineering Task Force (IETF) – a global community of thousands of engineers working each day to create and improve open standards to make the Internet work better – has been at the center of technical innovation for the global Internet. In addition to the standards themselves, the open processes and principles through which they are developed ensure the evolution of Internet technologies that meet the needs of the growing number of devices and uses that empower people around the world to connect, share, learn, and more. This places the work of the IETF, and other groups focused on open Continue reading

Zero trust planning: Key factors for IT pros to consider

Moving away from VPNs as a means to protect corporate networks at the perimeter, and toward zero-trust network access (ZTNA), requires careful enterprise planning and may mean implementing technologies that are new to individual organizations.

ZTNA employs identity-based authentication to establish trust with entities trying to access the network, and it grants each authorized entity access only to the data and applications it requires to accomplish its tasks. It also provides new tools for IT to control access to sensitive data by those entities that are deemed trusted.
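To make the least-privilege idea concrete, here is a deliberately simplified sketch – not any particular ZTNA product’s API, and the identities, resources, and policy structure are invented for illustration – of the per-entity, per-resource decision a zero-trust broker makes on every request, in contrast to the network-wide access a perimeter VPN grants.

# Toy illustration of a ZTNA-style, deny-by-default access decision.
# The policy model and all names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    user: str
    device_compliant: bool  # e.g. disk encrypted, OS patched

# Each authorized identity is granted only the specific apps and actions it needs.
POLICY = {
    "alice@example.com": {"payroll-app": {"read"}, "wiki": {"read", "write"}},
    "bob@example.com": {"wiki": {"read"}},
}

def is_allowed(identity: Identity, resource: str, action: str) -> bool:
    """Deny by default; allow only explicit identity/resource/action grants."""
    if not identity.device_compliant:       # device-posture check
        return False
    grants = POLICY.get(identity.user, {})  # identity-based lookup
    return action in grants.get(resource, set())

print(is_allowed(Identity("alice@example.com", True), "payroll-app", "read"))   # True
print(is_allowed(Identity("bob@example.com", True), "payroll-app", "read"))     # False
print(is_allowed(Identity("alice@example.com", False), "wiki", "write"))        # False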

Startup EdgeQ offers 5G and AI for the edge

A new startup has emerged from stealth mode with a design that converges 5G connectivity and AI compute onto a system-on-a-chip (SoC) aimed at edge networks. Founded in 2018 by former executives from Broadcom, Intel, and Qualcomm, EdgeQ has racked up $51 million in funding.

EdgeQ's AI-5G SoC is aimed at 5G private wireless networks for the Industrial Internet of Things (IIoT). EdgeQ says its chip will allow enterprises in manufacturing, energy, automotive, telco, and other verticals to harness private networking for disruptive applications, intelligent services, and new business models.

Worth Exploring: Pluginized Protocols

Remember my BGP route selection rules are a clear failure of intent-based networking paradigm blog post? I wrote it almost three years ago, so maybe you want to start by rereading it…

Making a long story short: every large network is a unique snowflake, and every sufficiently convoluted network architect has unique ideas of how BGP route selection should work, resulting in all sorts of crazy extended BGP communities, dozens if not hundreds of nerd knobs, and 2000+ pages of BGP documentation for a recent network operating system (no, unfortunately I’m not joking).

Seeing is believing: a client-centric specification of database isolation

Seeing is believing: a client-centric specification of database isolation, Crooks et al., PODC’17.

Last week we looked at Elle, which detects isolation anomalies by setting things up so that the inner workings of the database, in the form of the direct serialization graph (DSG), can be externally recovered. Today’s paper choice, ‘Seeing is believing’, also deals with the externally observable effects of a database, in this case the return values of read operations, but instead of doing this in order to detect isolation anomalies, Crooks et al. use this perspective to create new definitions of isolation levels.

It’s one of those ideas that, once it’s pointed out to you, seems incredibly obvious (a hallmark of a great idea!). Isolation guarantees are a promise that a database makes to its clients. We should therefore define them in terms of effects visible to clients – as part of the specification of the external interface offered by the database. How the database internally fulfils that contract is of no concern to the client, so long as it does. And yet, until Crooks et al., all the definitions, including Adya’s, had been based on implementation concerns!
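A tiny worked example (ours, not the paper’s formalism) shows why the client’s view is enough. Suppose the application relies on the invariant that accounts x and y always sum to 100, and a reader’s two reads straddle a concurrent transfer; the reader can conclude that isolation was violated purely from the values returned, without knowing anything about the database’s internals.

# Toy illustration: an isolation anomaly visible purely from the values
# a client reads back, with no reference to database internals.
db = {"x": 50, "y": 50}   # application invariant: x + y == 100

read_x = db["x"]          # reader transaction reads x ... sees 50

db["x"] -= 20             # ... a concurrent transfer of 20 from x to y
db["y"] += 20             # commits in between ...

read_y = db["y"]          # ... and the reader then reads y: sees 70

if read_x + read_y != 100:
    print("anomaly: no single database state satisfies both reads")

Under a serializable (or snapshot-isolated) execution, no state of the database could have produced this pair of return values, and that is exactly the style of definition – in terms of values a client could legitimately observe – that the paper formalizes.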

In theory defining isolation levels Continue reading

Primer: How XDP and eBPF Speed Network Traffic via the Linux Kernel

Every so often, however, a new buzzword or acronym comes around that really has weight behind it. Such is the case with XDP (eXpress Data Path), which uses the eBPF programming language to gain access to the lower-level kernel hook. That hook is then implemented by the network device driver within the ingress traffic processing function, before a socket buffer can be allocated for the incoming packet. Let’s look at how these two work together. This outstanding example comes from Jeremy Erickson, a senior R&D developer.
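The excerpt above points to a fuller worked example; as a quick taste of what an XDP program looks like, here is a minimal sketch of ours using the BCC toolkit’s Python bindings (assuming bcc is installed and the script runs as root; the interface name is a placeholder). It attaches a trivial program at the driver’s ingress hook that passes every packet up the stack; a real program would parse the packet and might return XDP_DROP instead.

#!/usr/bin/env python3
# Minimal XDP sketch using the BCC toolkit (assumes bcc is installed; run as root).
# The program runs at the driver's ingress hook, before an skb is allocated.
from bcc import BPF
import time

prog = r"""
#include <uapi/linux/bpf.h>

int xdp_pass_all(struct xdp_md *ctx) {
    // A real program would parse ctx->data .. ctx->data_end here
    // and could return XDP_DROP to discard the packet this early.
    return XDP_PASS;
}
"""

device = "eth0"              # placeholder interface name
b = BPF(text=prog)
fn = b.load_func("xdp_pass_all", BPF.XDP)
b.attach_xdp(device, fn, 0)  # hook the program into the driver's RX path

try:
    print("XDP program attached to %s; Ctrl-C to detach" % device)
    time.sleep(60)
finally:
    b.remove_xdp(device, 0)  # always detach on exit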

A Thanksgiving 2020 Reading List

While our colleagues in the US are celebrating Thanksgiving this week and taking a long weekend off, there is a lot going on at Cloudflare. The EMEA team is having a full day on CloudflareTV with a series of live shows celebrating #CloudflareCareersDay.

So if you want to relax in an active and learning way this weekend, here are some of the topics we’ve covered on the Cloudflare blog this past week that you may find interesting.

Improving Performance and Search Rankings with Cloudflare for Fun and Profit

Making things fast is one of the things we do at Cloudflare. More responsive websites, apps, APIs, and networks directly translate into improved conversion and user experience. On November 10, Google announced that Google Search will directly take web performance and page experience data into account when ranking results on their search engine results pages (SERPs), beginning in May 2021.

Rustam Lalkaka and Rita Kozlov explain in this blog post how Google Search will prioritize results based on how pages score on Core Web Vitals, a measurement methodology that Cloudflare has worked closely with Google to establish and that we have implemented support for in our analytics tools. Read the full blog post.

Getting Continue reading

Fun Times: Another Broken Linux ALG

Dealing with protocols that embed network-layer addresses into application-layer messages (like FTP or SIP) is great fun, more so if the said protocol traverses a NAT device that has to find the IP addresses embedded in application messages while translating the addresses in IP headers. For whatever reason, the content rewriting functionality is called an application-level gateway (ALG).

Even when we’re faced with a monstrosity like FTP or SIP that should have been killed with napalm a microsecond after it was created, there’s a proper way of doing things and a fast way of doing things. You could implement a protocol-level proxy that would intercept control-plane sessions… or you could implement a hack that tries to snoop TCP payload without tracking TCP session state.

Not surprisingly, the fast way of doing things usually results in a wonderful attack surface, more so if the attacker is smart enough to construct HTTP requests that look like SIP messages. Enjoy ;)
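To see why this is so messy, consider classic active-mode FTP: the client sends a PORT command on the control connection that embeds its own IP address and port as six decimal numbers, and a NAT device has to find that text and rewrite it, then fix up TCP sequence numbers if the length of the line changed. Here is a toy sketch of just the rewriting step (our illustration, not how any particular kernel ALG is implemented), which hints at why doing this by snooping payloads without proper TCP state tracking falls apart so easily.

# Toy illustration of what an FTP ALG has to do to a PORT command.
# Not how any real kernel ALG is implemented; for illustration only.
import re

PORT_RE = re.compile(r"^PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\r\n$")

def rewrite_port(line: str, public_ip: str, public_port: int) -> str:
    """Replace the private IP/port embedded in a PORT command with public ones."""
    if not PORT_RE.match(line):
        return line                    # not a PORT command, pass through untouched
    h1, h2, h3, h4 = public_ip.split(".")
    p1, p2 = divmod(public_port, 256)  # the port is encoded as p1*256 + p2
    return "PORT %s,%s,%s,%s,%d,%d\r\n" % (h1, h2, h3, h4, p1, p2)

inside = "PORT 10,0,0,5,78,52\r\n"     # client at 10.0.0.5, port 78*256+52 = 20020
print(rewrite_port(inside, "198.51.100.7", 20020))  # PORT 198,51,100,7,78,52

The rewritten line can be longer or shorter than the original, which is why a proper ALG or proxy must also adjust TCP sequence and acknowledgement numbers on the session – exactly the state tracking that the “fast” payload-snooping approach skips.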

What’s Your Work From Home DR Plan?

It’s almost December and the signs are pointing to a continuation of the current state of working from home for a lot of people out there. Whether it’s a surge in cases that is causing businesses to close again or a change in the way your company looks at offices and remote work, you’re likely going to ring in the new year at your home keyboard in your pajamas with a cup of something steaming next to your desk.

We have all spent a lot of time and money investing in better conditions for ourselves at home. Perhaps it was a fancy new mesh chair or a more ergonomic keyboard. It could have been a bigger monitor with a resolution increase or a better webcam for the dozen or so Zoom meetings that have replaced the water cooler. There may even be more equipment in store, such as a better home wireless setup or even a corporate SD-WAN solution to help with network latency. However, have you considered what might happen if it all goes wrong and you need to be online?

In and Outage

Outages happen more often than we realize. That’s never been more evident than the situation Continue reading