Archive

Category Archives for "Networking"

Puerto Rico’s Slow Internet Recovery

On 20 September 2017, Hurricane Maria made landfall in Puerto Rico.  Two and a half months later, the island is still recovering from the resulting devastation.  This extended phase of recovery is reflected in the state of the local internet and reveals how far Puerto Rico still has to go to make itself whole again.

While most of the BGP routes for Puerto Rico have returned, DNS query volumes from the island are still only a fraction of what they were on September 19th  — the day before the storm hit.  DNS activity is a better indicator of actual internet use (or lack thereof) than the simple announcements of BGP routes.

We have been analyzing the impacts of natural disasters such as hurricanes and earthquakes going back to Hurricane Katrina in 2005.  Compared to the earthquake near Japan in 2011, Hurricane Sandy in 2012, or the earthquake in Nepal in 2015, Puerto Rico’s disaster stands alone with respect to its prolonged and widespread impact on internet access.  The following analysis tells that story.

DNS statistics

Queries from Puerto Rico to our Internet Guide recursive DNS service have still not recovered to pre-hurricane levels Continue reading
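As a rough illustration of how such a recovery metric can be computed, the sketch below (Python) expresses each day's query count as a fraction of the pre-storm baseline. The query counts are invented placeholders for illustration, not our actual measurements.

    from datetime import date

    # Hypothetical daily DNS query counts from Puerto Rico to a recursive
    # resolver; the numbers are illustrative placeholders, not real data.
    daily_queries = {
        date(2017, 9, 19): 1_000_000,   # baseline: the day before landfall
        date(2017, 10, 19): 150_000,    # one month after the storm
        date(2017, 12, 4): 400_000,     # about two and a half months later
    }

    baseline = daily_queries[date(2017, 9, 19)]
    for day, count in sorted(daily_queries.items()):
        print(f"{day}: {count:>9,} queries ({count / baseline:.0%} of pre-storm volume)")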

New global internet reliability concerns emerge

Undersea internet-carrying cables are not well enough protected, and there is no alternative in place should they fail. That's according to a new report from U.K.-based Policy Exchange, which outlines the potentially catastrophic effects that a simple cut to the hosepipe-sized underwater infrastructure could create. Tsunamis, a vessel dragging an anchor, or even saw-wielding Russians could bring down the global financial system or cripple a single nation's internet access, Policy Exchange says in its new study (pdf).

Arista brings the benefits of leaf-spine to routing

About a decade ago, almost all data centers were built on a traditional three-tier (or sometimes deeper) architecture that used the Spanning Tree Protocol (STP). STP prevented Layer 2 forwarding loops, but it also deactivated all the backup links, which accounted for almost half the ports in large environments, causing organizations to significantly overspend on their networks. Leaf-spine networks, on the other hand, have only two tiers, are much flatter, and use equal-cost multipath (ECMP) routing, so all paths are active, creating a much more efficient network that is more agile and costs less. The traditional three-tier data center was designed to scale up, which was the key requirement in the client/server era. Leaf-spine is optimized for rapid scale-out, which has become critical in data centers today as more and more traffic moves in an east-west direction.
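To make the ECMP idea concrete, here is a minimal sketch of flow-based equal-cost path selection, assuming a leaf switch with four spine uplinks. Real switches use vendor-specific hardware hash functions, so the hash and the next-hop names here are purely illustrative.

    import hashlib

    # Equal-cost next hops from a leaf switch up to the spine layer
    # (hypothetical names, for illustration only).
    spine_next_hops = ["spine1", "spine2", "spine3", "spine4"]

    def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
        """Pick a next hop by hashing the flow's 5-tuple.

        This stand-in hash shows two properties of ECMP: every uplink
        stays active, and all packets of one flow take the same path,
        which avoids reordering.
        """
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        digest = int(hashlib.sha256(key).hexdigest(), 16)
        return spine_next_hops[digest % len(spine_next_hops)]

    # Different flows spread across all four uplinks instead of the single
    # active link that STP would have left forwarding.
    for port in (49152, 49153, 49154, 49155):
        print(ecmp_next_hop("10.0.1.5", "10.0.2.9", port, 443))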

Arista brings the benefits of leaf-spine to routing

About a decade ago almost all data centers were built on a traditional three- (or sometimes more) tier architectures that used the spanning tree protocol (STP). That prevented routing loops but also deactivated all the backup links, which accounted for almost half the ports in large environments. This caused organizations to significantly overspend on their networks.Leaf-spine networks, on the other hand, have only two tiers, are much flatter and use something called ECMP (equal cost multi-pathing). So all routes are active, creating a much more efficient network that more agile and costs less.Also on Network World: 10 Most important open source networking projects The traditional three-tier data center was designed to scale up, which was the key requirement in the client/server era. Leaf-spine is optimized for rapid scale out, which has become critical in data centers today, as more and more traffic is moving in an East-West direction. To read this article in full, please click here

CAA of the Wild: Supporting a New Standard

One thing we take pride in at Cloudflare is embracing new protocols and standards that help make the Internet faster and safer. Sometimes this means that we’ll launch support for experimental features or standards still under active development, as we did with TLS 1.3. Due to the not-quite-final nature of some of these features, we limit their availability at the outset to only the most ardent users so we can observe how these cutting-edge features behave in the wild. Some of our observations have helped the community propose revisions to the corresponding RFCs.

We began supporting the DNS Certification Authority Authorization (CAA) Resource Record in June behind a beta flag. Our goal in doing so was to see how the presence of these records would affect SSL certificate issuance by publicly-trusted certification authorities. We also wanted to do so in advance of the 8 September 2017 enforcement date for mandatory CAA checking at certificate issuance time, without introducing a new and externally unproven behavior to millions of Cloudflare customers at once. This beta period has provided invaluable insight as to how CAA records have changed and will continue to change the commercial public-key infrastructure (PKI) ecosystem.

As of today, Continue reading
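For readers who want to inspect CAA records themselves, here is a minimal sketch assuming the dnspython library (2.x, installed via pip install dnspython). Note that a real certification authority must also walk up the DNS tree to the closest ancestor that has CAA records, per RFC 6844; this sketch only checks the exact name queried.

    import dns.resolver  # assumes dnspython 2.x: pip install dnspython

    def allowed_issuers(domain):
        """Return the CA domains that 'issue' CAA records authorize."""
        try:
            answer = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return None  # no CAA records at this name: issuance not restricted here
        return [rr.value.decode() for rr in answer if rr.tag == b"issue"]

    print(allowed_issuers("cloudflare.com"))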

J-AUT Course

Hi,

I have enrolled in the Juniper J-AUT course and am looking forward to it.

The details are below. It's a five-day course, and I'm expecting a lot out of it.

https://learningportal.juniper.net/juniper/user_activity_info.aspx?id=5186

My main interest lies in YAML/JSON use cases with Juniper devices and how the two interact; a small taste of what I mean is sketched below. I will let you know how the course goes as the days progress, and report on its overall effectiveness.
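As a taste of the kind of YAML-to-JSON round trip I have in mind, here is a minimal sketch. The interface inventory is something I made up for illustration, not a Juniper-defined schema, and it assumes PyYAML is installed (pip install pyyaml).

    import json
    import yaml  # pip install pyyaml

    # A hypothetical, hand-written inventory of interfaces; the keys and
    # values are illustrative, not an official Juniper data model.
    interfaces_yaml = """
    interfaces:
      - name: ge-0/0/0
        description: uplink-to-core
        unit: 0
        address: 192.0.2.1/30
      - name: ge-0/0/1
        description: access-vlan-10
        unit: 0
        address: 198.51.100.1/24
    """

    data = yaml.safe_load(interfaces_yaml)

    # The same structure serialized as JSON, e.g. as template variables
    # for a tool like PyEZ or Ansible, or as a REST request body.
    print(json.dumps(data, indent=2))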


-Rakesh

BrandPost: Where Do You Rank? IDC Lists Top Drivers for SD-WAN Adoption

IDC concluded a worldwide survey in September 2017 to learn and report on the key factors driving SD-WAN deployments for enterprises. I'm pleased to see how closely the findings align with what I wrote in my previous article, SD-WAN Delivers Real Business Outcomes to Cloud-first Enterprises, back in September. The survey identified the following top four drivers for deploying an enterprise SD-WAN solution.

Reaction: Science and Engineering

Are you a scientist, or an engineer? This question does not seem to occur to most engineers, but science does seem to have “taken the lead role” in recent history, with engineers sometimes (or perhaps often) seen as “the folks who figure out how to make use of what scientists are discovering.” There are few fields where this seems closer to the truth than computing. Peter Denning has written an insightful article over at the ACM on this topic; a few reactions are in order.

Denning separates engineers from scientists by saying:

The first concerns the nature of their work. Engineers design and build technologies that serve useful purposes, whereas scientists search for laws explaining phenomena.

While this does seem like a useful starting point, I’m not at all certain the two fields can be cleanly separated in this way. The reality is that there is probably a continuum, starting from what might be called “meta-engineers,” whose primary goal is to implement a technology designed by someone else by mentally reverse engineering what that “someone else” has done, to the deeply focused “pure scientist,” who really does not care about practical application but is simply searching Continue reading

AMD scores its first big server win with Azure

AMD built it, and now the OEM has come. In this case, its Epyc server processors have scored their first big public win, with Microsoft announcing Azure instances based on AMD's Epyc server microprocessors. AMD was first to a 64-bit x86 design, with Athlon on the desktop and Opteron on servers. Once Microsoft ported Windows Server to 64 bits, the benefit became immediately apparent: gone was the 4GB memory limit of 32-bit processors, replaced by a 16-exabyte address space we won't exhaust in our lifetimes (famous last words, I know). When Microsoft published a white paper in 2005 detailing how it consolidated 250 32-bit MSN servers onto 25 64-bit servers, thanks to the added memory and the additional connections per machine it allowed, the ball started rolling for AMD. Within a few years, Opteron had 20 percent of the server market.
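The jump from the 32-bit limit to the 64-bit address space is simple arithmetic, 2^32 bytes versus 2^64 bytes, as this quick sketch shows:

    # Addressable memory is 2**bits bytes.
    GIB = 1024 ** 3  # bytes per gibibyte
    EIB = 1024 ** 6  # bytes per exbibyte

    print(f"32-bit: {2**32:,} bytes = {2**32 // GIB} GiB")   # 4 GiB
    print(f"64-bit: {2**64:,} bytes = {2**64 // EIB} EiB")   # 16 EiB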