Archive

Category Archives for "Networking"

Light/No Blogging this Week

I’m trying to get through the final bits of this new book (which should publish at the end of December, from what I understand), and the work required for a pair of PhD seminars (a bit over 50 pages of writing). I probably won’t post anything this week so I can get caught up a little, and I might not be posting heavily next week.

I’ll be at SDxE in Austin Tuesday and Wednesday, if anyone wants to find me there.

The post Light/No Blogging this Week appeared first on rule 11 reader.

Making good use of the files in /proc

The /proc file system first made its way into some Unix operating systems (such as Solaris) in the mid-1990s, promising to give users more and easier access to the kernel and to running processes. It was a very welcome enhancement: it looks and acts like a regular file system, but delivers hooks into the kernel and the ability to treat processes as files. It went well beyond what we could do with ps and other common commands for examining processes and the system they run on.

When it first appeared, /proc took a lot of us by surprise. We were used to devices as files, but access to processes as files was new and exciting. In the years since, /proc has become more of a go-to source for process information, but it retains an element of mystery because of the incredible detail it provides.
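As a quick illustration of that file-like interface, here is a minimal Python sketch that parses the `Key:\tvalue` format used by files such as /proc/[pid]/status. The sample text is illustrative (so the sketch also runs off-Linux); on a Linux box you could feed it `open("/proc/self/status").read()` instead.

```python
def parse_proc_status(text):
    """Parse the 'Key:\\tvalue' lines found in /proc/[pid]/status."""
    info = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key:
            info[key.strip()] = value.strip()
    return info

# Sample text in the same format a real /proc/self/status file uses.
sample = "Name:\tbash\nPid:\t1234\nPPid:\t1\nVmRSS:\t  5200 kB\n"
status = parse_proc_status(sample)
print(status["Name"], status["Pid"], status["VmRSS"])  # bash 1234 5200 kB
```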

Internet Impacts of Hurricanes Harvey, Irma, and Maria

Devastation caused by several storms during the 2017 Atlantic hurricane season has been significant, as Hurricanes Harvey, Irma, and Maria destroyed property and took lives across a number of Caribbean island nations, as well as Texas and Florida in the United States. The strength of these storms has made timely communication of information all the more important, from evacuation orders, to pleas for help and related coordination among first responders and civilian rescuers, to insight into open shelters, fuel stations, and grocery stores. The Internet has become a critical component of this communication, with mobile weather applications providing real-time insight into storm conditions and locations, social media tools like Facebook and Twitter used to contact loved ones or ask for assistance, “walkie talkie” apps like Zello used to coordinate rescue efforts, and “gas tracker” apps like GasBuddy used to crowdsource information about open fuel stations, gas availability, and current prices.

As the Internet has come to play a more pivotal role here, the availability and performance of Internet services has become more important as well.  While some “core” Internet components remained available during these storms thanks to hardened data center infrastructure, backup power generators, and comprehensive disaster planning, local infrastructure Continue reading

IoT poised to impact quality, capabilities of healthcare

Everything about the modern doctor’s office feels primitive. It’s one of the few businesses that requires I use my telephone for scheduling — unless it’s about lab results. For that, they prefer fax. Even the doctor’s tools, such as the blood pressure cuff, scale and stethoscope, are largely the same as the equipment used in my childhood.

I get that the industry needs to be cautious regarding change, and that legal requirements further complicate matters, but changes are overdue. Because medical professionals are unlikely to adopt unproven tech, the evolution will most likely come from existing tech being used in other applications.

Let’s take a look at how things might change in healthcare technology.

IDG Contributor Network: Office 365: What’s your network deployment architecture?

I recently gave a webinar on how to best architect your network for Office 365. It comes on the heels of a number of complaints from customers about their struggles deploying responsive Office 365 implementations. SharePoint doesn’t quite work; Skype calls are unclear. And forget about OneDrive for Business. It’s incredibly slow.

Latency and Office 365

Ensuring a smooth transition to Office 365, or for that matter any cloud deployment, involves a solid understanding of which Office 365 applications are being deployed. Here latency matters. Microsoft recommends that round-trip latency for Office 365 not exceed 275 ms, but those metrics change significantly depending on the Office 365 application. Latency should not exceed 50 ms with Exchange Online and 25 ms with SharePoint. (Check out my “ultimate” list of Office 365 networking tools for help with your O365 deployment.)
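To make those numbers concrete, here is a small Python sketch that checks a measured round-trip time against the per-application recommendations. The thresholds come from the figures above; the dictionary and function names are my own.

```python
# Recommended maximum round-trip latency in milliseconds, per the
# figures cited above (275 ms overall, 50 ms Exchange Online,
# 25 ms SharePoint).
LATENCY_BUDGET_MS = {
    "office365": 275,
    "exchange_online": 50,
    "sharepoint": 25,
}

def within_budget(app, measured_rtt_ms):
    """Return True if the measured RTT meets the recommendation for app."""
    return measured_rtt_ms <= LATENCY_BUDGET_MS[app]

print(within_budget("sharepoint", 20))       # True
print(within_budget("exchange_online", 80))  # False
```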

The Value of Configuration Consistency (Thwack)

It’s one thing to have a stable network, but it’s another to have consistency in device configurations across the network. Does that even matter?

On the SolarWinds Thwack Geek Speak blog, I looked at some reasons why it might be important to maintain certain configuration standards across all devices. Please do take a trip to Thwack and check out my post, “The Value of Configuration Consistency.”

Please see my Disclosures page for more information about my role as a SolarWinds Ambassador.

If you liked this post, please do click through to the source at The Value of Configuration Consistency (Thwack) and give me a share/like. Thank you!

Your next servers might be a no-name brand

For years, white box PCs have accounted for a significant chunk of desktop sales. It was the same wherever I went: small mom-and-pop shops built their own PCs using components shipped in from Taiwan, and if there was a logo on it, it was for the PC store (affectionately referred to as “screwdriver shops”) that built the thing. On the server side, though, it remained a name-brand business. Data centers were filled with racks of servers that bore the logos of IBM (now Lenovo), Dell and HP.

However, that’s changing. In its latest sales figures, for the second quarter of 2017, IDC says ODM sales now account for the largest group of server sales, surpassing HPE. In the second calendar quarter of 2017, worldwide server sales increased 6.3 percent year over year to $15.7 billion, thanks in part to new Intel Skylake processors.

Unmetered Mitigation: DDoS Protection Without Limits

This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.

Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.

Surge Pricing

Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to get targeted by an attack. We've seen examples of small businesses that survive massive attacks to then be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more if you came under an attack. That feels barely a Continue reading

No Scrubs: The Architecture That Made Unmetered Mitigation Possible

When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.

A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.
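As a toy illustration of what a scrubbing server does (my own simplification, not Cloudflare’s actual logic), here is a Python sketch that forwards only the packets passing a simple per-source rate limit and drops the rest:

```python
from collections import Counter

def scrub(packets, max_per_source=3):
    """Toy scrubbing filter: forward each (src, payload) packet only
    while its source IP stays under a per-batch budget; drop the rest."""
    seen = Counter()
    forwarded = []
    for src, payload in packets:
        seen[src] += 1
        if seen[src] <= max_per_source:
            forwarded.append((src, payload))
    return forwarded

packets = [("10.0.0.1", "ok")] * 2 + [("198.51.100.9", "flood")] * 10
clean = scrub(packets)
print(len(clean))  # 5: two legitimate packets plus three from the flooder
```

Real scrubbers use far richer signals (protocol anomalies, signatures, reputation), but the shape of the job is the same: classify, forward the good, drop the bad.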

Three Problems With Scrubbers

The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.

The bandwidth problem is easy to see. As DDoS attacks have scaled beyond 1 Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple Tbps of bandwidth for DDoS mitigation is expensive and complicated. And it needs to be located in the right place on the Internet to receive and absorb an attack. If it isn’t, attack traffic will need to be received at one location, scrubbed, and then clean traffic forwarded to the real server: that can introduce enormous delays with a limited number of locations.

Continue reading

Meet Gatebot – a bot that allows us to sleep

In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.

We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.

During normal operations our attitude to attacks is rather pragmatic. Since the inbound traffic is distributed across hundreds of servers, we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events. This is especially true since kernel 4.4, when the performance of SYN cookies was greatly improved.
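A drastically simplified sketch of the kind of decision such a bot automates (assumed logic for illustration, not Cloudflare’s code): aggregate per-second packet counts across the edge servers and flag a spike when the total exceeds a multiple of the normal baseline.

```python
def detect_spike(samples, baseline_pps, factor=10):
    """Flag an attack when the total packets-per-second summed across
    all edge servers exceeds `factor` times the normal baseline."""
    total = sum(samples.values())
    return total > factor * baseline_pps

normal = {"edge1": 900, "edge2": 1100, "edge3": 1000}
attack = {"edge1": 40_000, "edge2": 38_000, "edge3": 41_000}
print(detect_spike(normal, baseline_pps=3000))  # False
print(detect_spike(attack, baseline_pps=3000))  # True
```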

But at some point, malicious traffic volume Continue reading

IDG Contributor Network: How chip design is evolving in response to IoT development

The rapid development of the so-called Internet of Things (IoT) has pushed many old industries to the brink, forcing most companies to fundamentally reevaluate how they do business. Few have felt the reverberations of the IoT more than the microchip industry, one of the vital drivers of the IoT that has both enabled it and evolved alongside it.

So how exactly is chip design evolving to keep up with the IoT’s breakneck proliferation? A quick glance at the inner workings of the industry that enables all of our beloved digital devices to work shows just how innovative it must be to keep up with today’s ever-evolving world.

Building a silicon brain

Developing microchips, which are so diverse that they’re used to power coffee makers and fighter jets alike, is no simple task. In order to meet the massive processing demands of today’s digital gadgets, chip design has been forced to take some tips from the most efficient computer known to man: the human brain.

How a data center works, today and tomorrow

A data center is a physical facility that enterprises use to house their business-critical applications and information, so as data centers evolve, it’s important to think long-term about how to maintain their reliability and security.

Data center components

Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements, such as routers, switches, security devices, storage systems, servers, application delivery controllers and more. These are the components IT needs to store and manage the most critical systems that are vital to the continuous operation of a company. Because of this, the reliability, efficiency, security and constant evolution of a data center are typically a top priority.