Archive

Category Archives for "Networking"

Internet Impacts of Hurricanes Harvey, Irma, and Maria

Devastation caused by several storms during the 2017 Atlantic hurricane season has been significant, as Hurricanes Harvey, Irma, and Maria destroyed property and took lives across a number of Caribbean island nations, as well as Texas and Florida in the United States. The strength of these storms has made timely communication of information all the more important, from evacuation orders, to pleas for help and related coordination among first responders and civilian rescuers, to insight into open shelters, fuel stations, and grocery stores. The Internet has become a critical component of this communication, with mobile weather applications providing real-time insight into storm conditions and locations, social media tools like Facebook and Twitter used to contact loved ones or ask for assistance, “walkie talkie” apps like Zello used to coordinate rescue efforts, and “gas tracker” apps like GasBuddy used to crowdsource information about open fuel stations, gas availability, and current prices.

As the Internet has come to play a more pivotal role here, the availability and performance of Internet services have become more important as well. While some “core” Internet components remained available during these storms thanks to hardened data center infrastructure, backup power generators, and comprehensive disaster planning, local infrastructure Continue reading

IoT poised to impact quality, capabilities of healthcare

Everything about the modern doctor’s office feels primitive. It’s one of the few businesses that requires that I use my telephone for scheduling — unless it’s about lab results. For that, they prefer fax. Even the doctor’s tools, such as the blood pressure cuff, scale and stethoscope, are largely the same as the equipment used in my childhood. I get that the industry needs to be cautious regarding change and that legal requirements further complicate matters, but changes are overdue. Because medical professionals are unlikely to adopt unproven tech, the evolution will most likely come from existing tech being used in other applications. Let’s take a look at how things might change in healthcare technology. To read this article in full or to leave a comment, please click here

IDG Contributor Network: Office 365: What’s your network deployment architecture?

I recently gave a webinar on how to best architect your network for Office 365. It comes on the heels of a number of complaints from customers about their struggles deploying responsive Office 365 implementations. SharePoint doesn’t quite work; Skype calls are unclear. And forget about OneDrive for Business. It’s incredibly slow.

Latency and Office 365

Ensuring a smooth transition to Office 365, or for that matter any cloud deployment, involves a solid understanding of which Office 365 applications are being deployed. Here latency matters. Microsoft recommends that round-trip latency for Office 365 not exceed 275 ms, but those metrics change significantly depending on the Office 365 application. Latency should not exceed 50 ms with Exchange Online and 25 ms with SharePoint. (Check out my “ultimate” list of Office 365 networking tools for help with your O365 deployment.) To read this article in full or to leave a comment, please click here
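
As a rough sanity check on those numbers before a deployment, you can time TCP connections to the service endpoints from each office. Below is a minimal Python sketch; the hostnames and per-application budgets are illustrative placeholders based on the figures above, not an official Microsoft list.

import socket
import time

# Hypothetical per-application latency budgets (ms), from the figures above.
THRESHOLDS_MS = {
    "outlook.office365.com": 50,        # Exchange Online
    "yourtenant.sharepoint.com": 25,    # SharePoint Online (placeholder host)
}

def tcp_connect_ms(host, port=443, timeout=5.0):
    # TCP connect time is a rough proxy for round-trip latency.
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

for host, budget in THRESHOLDS_MS.items():
    rtt = tcp_connect_ms(host)
    status = "OK" if rtt <= budget else "OVER BUDGET"
    print(f"{host}: {rtt:.1f} ms (budget {budget} ms) {status}")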

The Value of Configuration Consistency (Thwack)

It’s one thing to have a stable network, but it’s another to have consistency in device configurations across the network. Does that even matter?

On the Solarwinds Thwack Geek Speak blog I looked at some reasons why it might be important to maintain certain configuration standards across all devices. Please do take a trip to Thwack and check out my post, “The Value of Configuration Consistency“.
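
To make the idea concrete, here is a minimal Python sketch of the kind of check the post argues for: auditing each saved device configuration against a baseline of lines every device should carry. The baseline lines, directory layout, and file naming are invented for illustration.

from pathlib import Path

# Hypothetical baseline: lines every device configuration should contain.
BASELINE = {
    "service timestamps log datetime msec",
    "no ip http server",
    "logging host 192.0.2.10",
}

def missing_lines(config_text):
    # Return any baseline lines absent from one device's configuration.
    present = {line.strip() for line in config_text.splitlines()}
    return BASELINE - present

# Assumes one saved config per device in a local "configs" directory.
for path in sorted(Path("configs").glob("*.cfg")):
    gaps = missing_lines(path.read_text())
    if gaps:
        print(f"{path.name} is missing: {', '.join(sorted(gaps))}")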

Please see my Disclosures page for more information about my role as a Solarwinds Ambassador.

If you liked this post, please do click through to the source at The Value of Configuration Consistency (Thwack) and give me a share/like. Thank you!

Your next servers might be a no-name brand

For years, white box PCs have accounted for a significant chunk of desktop sales. It was the same wherever I went: small mom-and-pop shops built their own PCs using components shipped in from Taiwan, and if there was a logo on it, it was for the PC store (affectionately referred to as “screwdriver shops”) that built the thing. On the server side, though, it remained a name-brand business. Data centers were filled with racks of servers that bore the logos of IBM (now Lenovo), Dell and HP. + Also on Network World: How a data center works, today and tomorrow + However, that’s changing. In its latest sales figures for the second quarter of 2017, IDC says ODM sales now account for the largest group of server sales, surpassing HPE. In the second calendar quarter of 2017, worldwide server sales increased 6.3 percent year over year to $15.7 billion thanks in part to new Intel Skylake processors. To read this article in full or to leave a comment, please click here

Unmetered Mitigation: DDoS Protection Without Limits

This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.

CC BY-SA 2.0 image by Vassilis

Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.

Surge Pricing

Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to get targeted by an attack. We've seen examples of small businesses that survive massive attacks to then be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more if you came under an attack. That feels barely a Continue reading

No Scrubs: The Architecture That Made Unmetered Mitigation Possible

When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.

A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.
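
As a toy illustration of that filtering step, the Python sketch below separates good packets from bad with a deliberately simplistic rule; the packet fields and the heuristic are invented for the example, and real scrubbers combine far more signals.

def looks_malicious(packet):
    # Toy heuristic only: flag packets with suspicious TTLs or empty payloads.
    return packet.get("ttl", 64) < 5 or packet.get("payload_len", 0) == 0

def scrub(packets):
    # Forward only packets that pass the filter, as a scrubbing server
    # would before relaying clean traffic to the protected application.
    return [p for p in packets if not looks_malicious(p)]

traffic = [
    {"src": "203.0.113.7", "ttl": 3, "payload_len": 0},      # dropped
    {"src": "198.51.100.2", "ttl": 57, "payload_len": 512},  # forwarded
]
print(scrub(traffic))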

Three Problems With Scrubbers

The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.

The bandwidth problem is easy to see. As DDoS attacks have scaled beyond 1 Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple terabits per second of bandwidth for DDoS mitigation is expensive and complicated. And it needs to be located in the right place on the Internet to receive and absorb an attack. If it’s not, attack traffic must be received at one location, scrubbed, and the clean traffic then forwarded to the real server: with a limited number of locations, that can introduce enormous delays.

Continue reading

Meet Gatebot – a bot that allows us to sleep

In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.
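
ECMP spreads traffic by hashing each packet’s flow identifiers, so one flow always lands on the same next hop while different flows spread across all of them. A minimal Python sketch of that idea (the server names and hash choice are illustrative, not Cloudflare’s actual implementation):

import hashlib

SERVERS = ["edge-1", "edge-2", "edge-3", "edge-4"]  # hypothetical edge servers

def pick_server(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    # Hash the 5-tuple: packets of one flow always reach the same server,
    # while distinct flows are spread roughly evenly across the pool.
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

print(pick_server("203.0.113.9", 51514, "198.51.100.1", 443))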

We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.

During normal operations our attitude to attacks is rather pragmatic. Since the inbound traffic is distributed across hundreds of servers we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events. This is especially true since kernel 4.4 when the performance of SYN cookies was greatly improved.
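
SYN cookies sidestep per-connection state during a flood by encoding the connection details into the initial sequence number the server sends back, then verifying them when the client’s ACK returns. A simplified Python sketch of the idea follows; real kernels pack a timestamp and an MSS index into specific bits of the sequence number, and the secret here is a placeholder.

import hashlib
import time

SECRET = b"per-boot-secret"  # placeholder; kernels use a random per-boot key

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    # Derive an initial sequence number from the 4-tuple, a coarse
    # time counter, and a secret, instead of storing connection state.
    t = int(time.time()) >> 6  # 64-second validity window
    msg = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{t}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:4], "big")

def cookie_valid(ack_seq, src_ip, src_port, dst_ip, dst_port):
    # The returning ACK must acknowledge cookie + 1 (checked within the
    # same time window in this simplified version).
    return ack_seq == (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32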

But at some point, malicious traffic volume Continue reading

IDG Contributor Network: How chip design is evolving in response to IoT development

The rapid development of the so-called Internet of Things (IoT) has pushed many old industries to the brink, forcing most companies to fundamentally reevaluate how they do business. Few have felt the reverberations of the IoT more than the microchip industry, one of the vital drivers of the IoT that has both enabled it and evolved alongside it. So how exactly is chip design evolving to keep up with the IoT’s breakneck proliferation? A quick glance at the inner workings of the industry that enables all of our beloved digital devices to work shows just how innovative it must be to keep up with today’s ever-evolving world.

Building a silicon brain

Developing microchips, which are so diverse that they’re used to power coffee makers and fighter jets alike, is no simple task. In order to meet the massive processing demands of today’s digital gadgets, chip design has been forced to take some tips from the most efficient computer known to man: the human brain. To read this article in full or to leave a comment, please click here

How a data center works, today and tomorrow

A data center is a physical facility that enterprises use to house their business-critical applications and information, so as they evolve, it’s important to think long-term about how to maintain their reliability and security.

Data center components

Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements such as routers, switches, security devices, storage systems, servers, application delivery controllers and more. These are the components that IT needs to store and manage the most critical systems that are vital to the continuous operations of a company. Because of this, the reliability, efficiency, security and constant evolution of a data center are typically a top priority. To read this article in full or to leave a comment, please click here

Introduction to Brocade 6510 Switch


Today I am going to talk about the Brocade 6510 switch and its specifications. The Brocade 6510 features up to 48 ports of Gen 5 Fibre Channel technology, with specifications suitable for hyper-scale, private cloud, virtualized, and other high-bandwidth Fibre Channel environments.

Fig 1.1- Brocade Fiber Switch with Cisco Nexus 5K Switch Testing

With an aggregate 768 Gb/s of throughput in an 18-inch-deep 1U footprint, the 6510 supports 2, 4, 8, 10, or 16 Gb/s Fibre Channel across 24, 36, or 48 ports of connectivity, with a feature set that can be extended via add-on licenses for a wide variety of usage scenarios.
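
That aggregate figure is simply the full port count running at the top line rate: 48 ports × 16 Gb/s per port = 768 Gb/s.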

The Brocade 6510 represents best-in-class Fibre Channel SAN switching, which makes it an important asset to the Storage Review Enterprise Storage Lab for ensuring that network components do not bottleneck storage devices during SAN benchmarks.

Fig 1.2- Brocade VCS Fabric Extension Over Brocade 6510 Switch
Let's talk about the Brocade switch specifications in detail. Above is a sample diagram showing the use of Brocade VCS, and below are the specifications of the switch.

Brocade 6510 Switch Specifications
  • Fibre Channel ports: Switch mode (default): 24-, 36-, and 48-port configurations Continue reading

Easy and Simple 11 Steps to configure Cisco DSL Router

Today I am going to talk about 11 easy and simple steps to configure a Cisco DSL router. Below is the basic setup of the DSL router in the network.


Fig 1.1- Cisco DSL Topology
The diagram shown above is just an example of a DSL connection, and below are the sample configurations on the Cisco routers. Let's now walk through the 11 steps to configure the Cisco DSL router.


Step 1
Configure service timestamps to properly log and display debug output in the troubleshooting section.

ttlbits_router#configure terminal
ttlbits_router(config)#service timestamps debug datetime msec
ttlbits_router(config)#service timestamps log datetime msec
ttlbits_router(config)#end

Step 2
Disable the logging console on your Cisco DSL Router to suppress console messages that may be triggered while you are configuring the router.

ttlbits_router#configure terminal
ttlbits_router(config)#no logging console
ttlbits_router(config)#end

Step 3
Configure ip routing, ip subnet-zero, and ip classless to provide flexibility in routing configuration options.

ttlbits_router#configure terminal
ttlbits_router(config)#ip routing
ttlbits_router(config)#ip subnet-zero
ttlbits_router(config)#ip classless
ttlbits_router(config)#end

Step 4
Configure an IP address and subnet mask on the Cisco DSL Router Ethernet interface. Enable NAT inside on the Ethernet interface.

ttlbits_router#configure terminal
ttlbits_router(config)#interface ethernet 0
ttlbits_router Continue reading

The History of Email

This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps.

QWERTYUIOP

— Text of the first email ever sent, 1971

The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team could be used by others. One thing that was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meeting. The purpose of a computer was to do massive computation, to augment our memories and empower our minds.

Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.

The computers which sent (and received) the first email.

The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices Continue reading