Archive

Category Archives for "Networking"

Your next servers might be a no-name brand

For years, white box PCs have accounted for a significant chunk of desktop sales. It was the same wherever I went: small mom-and-pop shops built their own PCs using components shipped in from Taiwan, and if there was a logo on it, it was for the PC store (affectionately referred to as “screwdriver shops”) that built the thing. On the server side, though, it remained a name-brand business. Data centers were filled with racks of servers that bore the logos of IBM (now Lenovo), Dell and HP.

However, that’s changing. In its latest sales figures for the second quarter of 2017, IDC says ODM sales now account for the largest group of server sales, surpassing HPE. In the second calendar quarter of 2017, worldwide server sales increased 6.3 percent year over year to $15.7 billion, thanks in part to new Intel Skylake processors. Continue reading

Unmetered Mitigation: DDoS Protection Without Limits

This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.

CC BY-SA 2.0 image by Vassilis

Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.

Surge Pricing

Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to get targeted by an attack. We've seen examples of small businesses that survive massive attacks to then be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more if you came under an attack. That feels barely a Continue reading

No Scrubs: The Architecture That Made Unmetered Mitigation Possible

When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.

A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.

Three Problems With Scrubbers

The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.

The bandwidth problem is easy to see. As DDoS attacks have scaled to >1Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple Tbps of bandwidth for DDoS mitigation is expensive and complicated. And it needs to be located in the right place on the Internet to receive and absorb an attack. If it’s not, then attack traffic will need to be received at one location, scrubbed, and the clean traffic then forwarded to the real server: that can introduce enormous delays with a limited number of locations.

Continue reading

Meet Gatebot – a bot that allows us to sleep

In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.

We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.

During normal operations our attitude to attacks is rather pragmatic. Since the inbound traffic is distributed across hundreds of servers we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events. This is especially true since kernel 4.4 when the performance of SYN cookies was greatly improved.

But at some point, malicious traffic volume Continue reading

IDG Contributor Network: How chip design is evolving in response to IoT development

The rapid development of the so-called Internet of Things (IoT) has pushed many old industries to the brink, forcing most companies to fundamentally reevaluate how they do business. Few have felt the reverberations of the IoT more than the microchip industry, one of the vital drivers of the IoT that has both enabled it and evolved alongside it. So how exactly is chip design evolving to keep up with the IoT’s breakneck proliferation? A quick glance at the inner workings of the industry that enables all of our beloved digital devices to work shows just how innovative it must be to keep up with today’s ever-evolving world.

Building a silicon brain

Developing microchips, which are so diverse that they’re used to power coffee makers and fighter jets alike, is no simple task. In order to meet the massive processing demands of today’s digital gadgets, chip design has been forced to take some tips from the most efficient computer known to man: the human brain. Continue reading

How a data center works, today and tomorrow

A data center is a physical facility that enterprises use to house their business-critical applications and information, so as they evolve, it’s important to think long-term about how to maintain their reliability and security.

Data center components

Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements such as routers, switches, security devices, storage systems, servers, application delivery controllers and more. These are the components that IT needs to store and manage the most critical systems that are vital to the continuous operations of a company. Because of this, the reliability, efficiency, security and constant evolution of a data center are typically a top priority. Continue reading

Introduction to Brocade 6510 Switch


Today I am going to talk about the Brocade 6510 switch, covering its specifications and details. The Brocade 6510 features up to 48 ports of Gen 5 Fibre Channel technology, with specifications suitable for hyper-scale, private cloud, virtualized, and other high-bandwidth Fibre Channel environments.

Fig 1.1- Brocade Fiber Switch with Cisco Nexus 5K Switch Testing

With an aggregate throughput of 768 Gb/s and an 18-inch-deep 1U footprint, the 6510 supports 2, 4, 8, 10, or 16 Gb/s Fibre Channel across 24, 36, or 48 ports of connectivity, with a feature set that can be extended via add-on licenses for a wide variety of usage scenarios.

The Brocade 6510 represents best-in-class Fibre Channel SAN switching, making it an important asset to the Storage Review Enterprise Storage Lab for ensuring that network components do not bottleneck storage devices during SAN benchmarks.

Fig 1.2- Brocade VCS Fabric Extension Over Brocade 6510 Switch
Let's talk about the Brocade 6510 switch specifications in detail. Above is a sample diagram showing the use of Brocade VCS, and below are the specifications of the switch.

Brocade 6510 Switch Specifications
  • Fibre Channel ports: Switch mode (default): 24-, 36-, and 48-port configurations Continue reading

Easy and Simple 11 Steps to configure Cisco DSL Router

Today I am going to talk about 11 easy and simple steps to configure a Cisco DSL router. Below is the basic setup of the DSL router in the network.


Fig 1.1- Cisco DSL Topology
The diagram shown above is just an example of a DSL connection, and below are sample configurations on the Cisco routers. Let's talk about the 11 steps to configure the Cisco DSL router now.


Step 1
Configure service timestamps to properly log and display debug output in the troubleshooting section.

ttlbits_router#configure terminal
ttlbits_router(config)#service timestamps debug datetime msec
ttlbits_router(config)#service timestamps log datetime msec
ttlbits_router(config)#end

Step 2
Disable the logging console on your Cisco DSL Router to suppress console messages that may be triggered while you are configuring the router.

ttlbits_router#configure terminal
ttlbits_router(config)#no logging console
ttlbits_router(config)#end

Step 3
Configure IP routing, ip subnet-zero, and ip classless to provide flexibility in routing configuration options.

ttlbits_router#configure terminal
ttlbits_router(config)#ip routing
ttlbits_router(config)#ip subnet-zero
ttlbits_router(config)#ip classless
ttlbits_router(config)#end

Step 4
Configure an IP address and subnet mask on the Cisco DSL Router Ethernet interface. Enable NAT inside on the Ethernet interface.

ttlbits_router#configure terminal
ttlbits_router(config)#interface ethernet 0
ttlbits_router Continue reading

The History of Email

This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps.

QWERTYUIOP

— Text of the first email ever sent, 1971

The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team would be usable by others. One thing which was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meeting. The purpose of a computer was to do massive computation, to augment our memories and empower our minds.

Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.

The computers which sent (and received) the first email.

The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices Continue reading

KEMP Presented Some Interesting Features at NFD16

KEMP Technologies presented at Network Field Day 16, where I was privileged to be a delegate. Who paid for what? Answers here.


Three facets of the KEMP presentation stood out to me:


The KEMP Management UI Can Manage Non-KEMP Devices

KEMP's centralized management UI, the KEMP 360 Controller, can manage/monitor other load balancers (ahem, Application Delivery Controllers) including AWS ELB, HAProxy, NGINX and F5 BIG-IP.

This is pretty clever: If KEMP gets into an enterprise, perhaps because it's dipping a toe into the cloud at Azure, they may manage to worm their way deeper than would otherwise have been possible. Nice work, KEMPers.

VS Motion Can Streamline Manual Deployment Workflows

KEMP's VS Motion feature allows easy service migrations between KEMP instances by copying service definitions from one box to another. It's probably appropriate when replicating services between production instances and when promoting configurations between dev/test/prod. The mechanism is described in some detail here:


The interface is pretty straightforward. It looks just like the balance transfer UI at my bank: Select the From instance, the To instance, what you want transferred (which virtual service) and then hit the Move button. The interface also sports a Copy button, so in that Continue reading

BrandPost: SD-WAN Benefits: More Than Eliminating MPLS

Most of the discussion to date on the benefit of SD-WANs has focused on how an SD-WAN enables a network organization to reduce or eliminate its spend on expensive MPLS circuits. That is clearly an important benefit. However, as many early adopters of SD-WANs can attest, SD-WANs have other important benefits.

I am going to use this blog to summarize an interview I recently had with an IT professional who is in the midst of rolling out an SD-WAN solution. As described below, the benefits of the new solution include better performance, better visibility and the reduced cost and complexity that comes from removing Cisco routers. Continue reading

A New API Binding: cloudflare-php

Back in May last year, one of my colleagues blogged about the introduction of our Python binding for the Cloudflare API and drew reference to our other bindings in Go and Node. Today we are complementing this range by introducing a new official binding, this time in PHP.

This binding is available via Packagist as cloudflare/sdk; you can install it using Composer simply by running composer require cloudflare/sdk. We have documented various use-cases in our “Cloudflare PHP API Binding” KB article to help you get started.

Alternatively, should you wish to help contribute, or just give us a star on GitHub, feel free to browse the cloudflare-php source code.
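
To give a quick sense of the binding before you dig into the KB article, here is a minimal sketch that authenticates and prints the ID of the authenticated user; the class names follow the package’s published examples, and the email address and API key are placeholders:

<?php
// Minimal sketch using cloudflare/sdk (installed via: composer require cloudflare/sdk).
// The email address and API key are placeholders - substitute your own credentials.
require_once 'vendor/autoload.php';

$key     = new Cloudflare\API\Auth\APIKey('user@example.com', 'apiKey');
$adapter = new Cloudflare\API\Adapter\Guzzle($key);
$user    = new Cloudflare\API\Endpoints\User($adapter);

// Print the Cloudflare user ID associated with the supplied credentials.
echo $user->getUserID();

The same $adapter can be passed to the SDK’s other endpoint classes, so authentication is configured once and reused across calls.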

PHP is a controversial language, and there is no doubt that there are elements of bad design within it (as is the case with many other languages). However, love it or hate it, PHP is a language of high adoption; as of September 2017, W3Techs reports that PHP is used by 82.8% of all the websites whose server-side programming language is known. In creating this binding, the question clearly wasn't about the merits of PHP, but whether we wanted to help drive improvements to the developer experience for Continue reading

Announcing our new how-to video series!

Class is in session! This week, we are excited to announce that the new networking how-to video series is live on the Cumulus Networks website. Join our highly qualified instructors as they school you on everything you need to know about web-scale networking. No backpack or homework required — learn everything you need from the comfort of your couch.

So, what’s on the syllabus for web-scale 101? Our goals this semester are to make open networking accessible to everyone, to teach the basics and beyond of Linux, and to demonstrate exactly what you gain from leaving behind traditional networking. Are you confused by configurations? Or have you ever wondered what APT stands for? Our instructors will answer all of your questions. After watching these how-to video tutorials, you’ll be a web-scale scholar!

These video tutorials cover topics such as:

  • Configuring trunks and access ports
  • How Linux networking differs from traditional networking
  • Automating your data center
  • …And much more!

What’s the difference between configuring IP addresses with Juniper or Cumulus Linux? We’ll let you decide that for yourself. Head over to our how-to video page and begin your educational journey. No need to worry about tuition — this priceless educational experience is Continue reading

Connecting Indigenous Communities

Internet access is often a challenge associated with developing countries. But while many of us in North America have the privilege of access at our fingertips, it’s still a huge barrier to success for many rural and remote Indigenous communities in Canada and the United States.

According to the 2016 Broadband Progress Report, 10% of Americans lack access to broadband. The contrast is even more striking when you look at Internet access in rural areas, with 39% lacking access to broadband of 25/4 Mbps, compared to 4% in urban areas.

Many Canadian rural and remote communities face similar access issues. In December 2016, the Canadian Radio-television and Telecommunications Commission (CRTC) set targets for Internet service providers (ISPs) to offer customers in all parts of the country broadband at 50/10 Mbps with the option of unlimited data. CRTC estimates two million households, or roughly 18% of Canadians, don’t have access to those speeds or data.

Let those figures sink in for a minute. Today in 2017, millions of people in North America still don’t have access to broadband Internet.

It’s an even harder pill to swallow when you realize how disproportionately and gravely it affects Indigenous communities, many of which are Continue reading