This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.
CC BY-SA 2.0 image by Vassilis
Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.
Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to be targeted by an attack. We've seen small businesses survive massive attacks only to be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more if you came under attack. That feels barely a Continue reading
When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.
A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.
The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.
The bandwidth problem is easy to see. As DDoS attacks have scaled beyond 1 Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple terabits per second of bandwidth for DDoS mitigation is expensive and complicated, and the capacity needs to be in the right place on the Internet to receive and absorb an attack. If it isn't, attack traffic must be received at one location, scrubbed, and the clean traffic forwarded to the real server; with a limited number of locations, that can introduce enormous delays.
In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.
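The way ECMP spreads flows across servers can be sketched as a hash over each packet's 5-tuple. The sketch below is illustrative only, not Cloudflare's actual implementation; the server names and field layout are assumptions.

```python
import hashlib

def ecmp_pick(five_tuple, servers):
    """Pick a server for a flow by hashing its 5-tuple.

    Packets belonging to the same flow always hash to the same
    server, so connections stay intact while load spreads evenly
    across the pool.
    """
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]

# Hypothetical edge pool and flow for illustration.
servers = [f"edge-{i}" for i in range(8)]
flow = ("198.51.100.7", "203.0.113.1", 49152, 443, "tcp")
assert ecmp_pick(flow, servers) == ecmp_pick(flow, servers)  # stable per flow
```

Because the choice depends only on the flow's 5-tuple, no per-connection state needs to be shared between routers or servers, which is what lets the scheme scale out.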
We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.
During normal operations our attitude toward attacks is rather pragmatic. Since inbound traffic is distributed across hundreds of servers, we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events, especially since kernel 4.4, when the performance of SYN cookies was greatly improved.
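The idea behind SYN cookies can be illustrated with a toy encoder: instead of storing per-connection state for each half-open connection, the server folds the connection details and a secret into the TCP initial sequence number, then validates the value when the final ACK returns. This is a simplified sketch of the concept, not the kernel's actual algorithm; the secret, counter, and MSS table are assumptions.

```python
import hashlib
import hmac

SECRET = b"server-boot-secret"        # assumed per-boot secret
MSS_TABLE = [536, 1300, 1440, 1460]   # candidate MSS values, indexed by 2 bits

def make_cookie(saddr, sport, daddr, dport, mss_index, counter):
    """Encode connection info into a 32-bit ISN (toy SYN cookie)."""
    msg = f"{saddr}:{sport}-{daddr}:{dport}-{counter}".encode()
    mac = int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")
    # Low 2 bits carry the MSS index; the remaining bits carry the MAC.
    return ((mac & 0xFFFFFFFC) | mss_index) & 0xFFFFFFFF

def check_cookie(cookie, saddr, sport, daddr, dport, counter):
    """Validate a returning ACK's cookie; recover the encoded MSS, or None."""
    mss_index = cookie & 0x3
    expected = make_cookie(saddr, sport, daddr, dport, mss_index, counter)
    return MSS_TABLE[mss_index] if cookie == expected else None
```

Because the server can reconstruct everything it needs from the returning ACK, a SYN flood consumes no connection table memory, which is why SYN cookies matter under attack.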
But at some point, malicious traffic volume Continue reading
In this video, Tony Fortunato demonstrates how he used Wireshark to analyze the behavior of a traceroute utility.
Organizations can save money by negotiating better deals on a common piece of network equipment, analysts say.
Fig 1.1 - Brocade Fiber Switch with Cisco Nexus 5K Switch Testing
Fig 1.2 - Brocade VCS Fabric Extension Over Brocade 6510 Switch
Fig 1.1 - Cisco DSL Topology
This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps.
QWERTYUIOP
— Text of the first email ever sent, 1971
The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team could be used by others. One thing that was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meetings. The purpose of a computer was to do massive computation, to augment our memories and empower our minds.
Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.
The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices Continue reading
Back in May last year, one of my colleagues blogged about the introduction of our Python binding for the Cloudflare API and mentioned our other bindings in Go and Node. Today we are complementing this range by introducing a new official binding, this time in PHP.
This binding is available via Packagist as cloudflare/sdk; you can install it using Composer by running composer require cloudflare/sdk. We have documented various use cases in our "Cloudflare PHP API Binding" KB article to help you get started.
Alternatively, should you wish to contribute, or just give us a star on GitHub, feel free to browse the cloudflare-php source code.
PHP is a controversial language, and there is no doubt there are elements of bad design within it (as is the case with many other languages). However, love it or hate it, PHP is a language of high adoption; as of September 2017, W3Techs reports that PHP is used by 82.8% of all websites whose server-side programming language is known. In creating this binding the question clearly wasn't about the merits of PHP, but whether we wanted to help drive improvements to the developer experience for Continue reading
Class is in session! This week, we are excited to announce that the new networking how-to video series is live on the Cumulus Networks website. Join our highly-qualified instructors as they school you on everything you need to know about web-scale networking. No backpack or homework required — learn everything you need from the comfort of your couch.
So, what’s on the syllabus for web-scale 101? Our goals this semester are to make open networking accessible to everyone, to teach the basics and beyond of Linux, and to demonstrate exactly what you gain from leaving behind traditional networking. Are you confused by configurations? Or have you ever wondered what APT stands for? Our instructors will answer all of your questions. After watching these how-to video tutorials, you’ll be a web-scale scholar!
These video tutorials cover topics such as:
What’s the difference between configuring IP addresses with Juniper or Cumulus Linux? We’ll let you decide that for yourself. Head over to our how-to video page and begin your educational journey. No need to worry about tuition — this priceless educational experience is Continue reading
Dell EMC, Cisco, and HPE are all gunning for the throne.
Internet access is often a challenge associated with developing countries. But while many of us in North America have the privilege of access at our fingertips, it’s still a huge barrier to success for many rural and remote Indigenous communities in Canada and the United States.
According to the 2016 Broadband Progress Report, 10% of Americans lack access to broadband. The contrast is even more striking in rural areas, where 39% lack access to 25/4 Mbps broadband, compared with 4% in urban areas.
Many Canadian rural and remote communities face similar access issues. In December 2016, the Canadian Radio-television and Telecommunications Commission (CRTC) set targets for Internet service providers (ISPs) to offer customers in all parts of the country broadband at 50/10 Mbps with the option of unlimited data. The CRTC estimates that two million households, or roughly 18% of Canadians, don't have access to those speeds or data.
Let those figures sink in for a minute. Today in 2017, millions of people in North America still don’t have access to broadband Internet.
It’s an even harder pill to swallow when you realize how disproportionately and gravely it affects Indigenous communities, many of which are Continue reading
The post Worth Reading: Distrusting Symantec Certificates appeared first on rule 11 reader.