The archive platform is extensible enough to work across any cloud provider.
Joel Knight published his blogging toolkit and the processes he uses to write blog posts. Definitely worth reading even if you never plan to blog, as he nicely documents how to sync a creative process across multiple platforms.
I’m trying to get through the final bits of this new book (which should publish at the end of December, from what I understand), and the work required for a pair of PhD seminars (a bit over 50 pages of writing). I probably won’t post anything this week so I can get caught up a little, and I might not be posting heavily next week.
I’ll be at SDxE in Austin Tuesday and Wednesday, if anyone wants to find me there.
The post Light/No Blogging this Week appeared first on rule 11 reader.
Devastation caused by several storms during the 2017 Atlantic hurricane season has been significant, as Hurricanes Harvey, Irma, and Maria destroyed property and took lives across a number of Caribbean island nations, as well as Texas and Florida in the United States. The strength of these storms has made timely communication of information all the more important, from evacuation orders, to pleas for help and related coordination among first responders and civilian rescuers, to insight into open shelters, fuel stations, and grocery stores. The Internet has become a critical component of this communication, with mobile weather applications providing real-time insight into storm conditions and locations, social media tools like Facebook and Twitter used to contact loved ones or ask for assistance, “walkie talkie” apps like Zello used to coordinate rescue efforts, and “gas tracker” apps like GasBuddy used to crowdsource information about open fuel stations, gas availability, and current prices.
As the Internet has come to play a more pivotal role here, the availability and performance of Internet services has become more important as well. While some “core” Internet components remained available during these storms thanks to hardened data center infrastructure, backup power generators, and comprehensive disaster planning, local infrastructure Continue reading
Two years to deploy 160 Tbps of bandwidth
The post Completion of 160Tb Trans-Atlantic Subsea Cable In Two Years – Microsoft appeared first on EtherealMind.
It’s one thing to have a stable network, but it’s another to have consistency in device configurations across the network. Does that even matter?
On the Solarwinds Thwack Geek Speak blog I looked at some reasons why it might be important to maintain certain configuration standards across all devices. Please do take a trip to Thwack and check out my post, “The Value of Configuration Consistency”.
Please see my Disclosures page for more information about my role as a Solarwinds Ambassador.
If you liked this post, please do click through to the source at The Value of Configuration Consistency (Thwack) and give me a share/like. Thank you!
This is the week of Cloudflare's seventh birthday. It's become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We're beginning with one I'm especially proud of: Unmetered Mitigation.
CC BY-SA 2.0 image by Vassilis
Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That's more than the publicly announced capacity of every other DDoS mitigation service we're aware of combined. And we're continuing to invest in our network to expand capacity at an accelerating rate.
Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to get targeted by an attack. We've seen examples of small businesses that survive massive attacks to then be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare's history, it never felt right that you should have to pay more if you came under an attack. That feels barely a Continue reading
When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.
A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.
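To make that idea concrete, here is a minimal, hypothetical sketch in Go of the core decision a scrubbing server makes: receive a packet, classify it, and forward it only if it looks legitimate. This is not Cloudflare's code or anything described in the original post; the `Packet` fields and the per-source rate threshold are illustrative assumptions, and a real scrubber layers many more signals (protocol conformance, payload signatures, reputation data) on top of a simple rate limit.

```go
// Conceptual sketch of a scrubbing decision: rate-limit packets per source IP
// over a sliding window and forward only traffic under the threshold.
package main

import (
	"fmt"
	"time"
)

// Packet is a simplified stand-in for a parsed network packet.
type Packet struct {
	SrcIP   string
	DstPort int
	SYN     bool
}

// Scrubber tracks per-source packet timestamps within a sliding window.
type Scrubber struct {
	window    time.Duration
	threshold int
	seen      map[string][]time.Time
}

func NewScrubber(window time.Duration, threshold int) *Scrubber {
	return &Scrubber{window: window, threshold: threshold, seen: make(map[string][]time.Time)}
}

// Allow reports whether the packet should be forwarded to the origin.
// It drops timestamps older than the window, records the new arrival,
// and rejects the packet once the source exceeds the threshold.
func (s *Scrubber) Allow(p Packet, now time.Time) bool {
	cutoff := now.Add(-s.window)
	recent := s.seen[p.SrcIP][:0]
	for _, t := range s.seen[p.SrcIP] {
		if t.After(cutoff) {
			recent = append(recent, t)
		}
	}
	recent = append(recent, now)
	s.seen[p.SrcIP] = recent
	return len(recent) <= s.threshold
}

func main() {
	s := NewScrubber(time.Second, 3)
	now := time.Now()
	for i := 0; i < 5; i++ {
		p := Packet{SrcIP: "203.0.113.7", DstPort: 443, SYN: true}
		fmt.Printf("packet %d forwarded: %v\n", i+1, s.Allow(p, now))
	}
}
```

Deciding what counts as "bad" is exactly where the knowledge problem mentioned below comes in.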
The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.
The bandwidth problem is easy to see. With DDoS attacks now exceeding 1 Tbps, having that much network capacity available is a real challenge. Provisioning and maintaining multiple terabits per second of bandwidth for DDoS mitigation is expensive and complicated, and it needs to be located in the right places on the Internet to receive and absorb an attack. If it isn't, attack traffic has to be received at one location, scrubbed, and the clean traffic forwarded on to the real server; with only a limited number of scrubbing locations, that detour can introduce enormous delays.
In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.
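For illustration only, here is a small Go sketch of the flow-hashing idea that ECMP relies on: hash a packet's 5-tuple and use the result to pick one of several equal-cost next hops (here, edge servers), so packets of one flow always land on the same server while different flows spread across all of them. The FNV hash and the `FlowTuple` fields are assumptions for the example; real routers use their own hash implementations, and this is not Cloudflare's code.

```go
// Sketch of ECMP-style flow hashing: deterministic per-flow, spread across flows.
package main

import (
	"fmt"
	"hash/fnv"
)

// FlowTuple identifies a flow the way an ECMP hash typically does.
type FlowTuple struct {
	SrcIP, DstIP     string
	SrcPort, DstPort uint16
	Proto            uint8
}

// pickNextHop hashes the tuple and maps it onto one of n equal-cost paths.
func pickNextHop(t FlowTuple, n int) int {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%s|%d|%d|%d", t.SrcIP, t.DstIP, t.SrcPort, t.DstPort, t.Proto)
	return int(h.Sum32() % uint32(n))
}

func main() {
	servers := 8 // number of edge servers behind the anycast address (illustrative)
	flows := []FlowTuple{
		{"198.51.100.10", "192.0.2.1", 50123, 443, 6},
		{"198.51.100.11", "192.0.2.1", 50124, 443, 6},
		{"203.0.113.99", "192.0.2.1", 40000, 443, 6},
	}
	for _, f := range flows {
		fmt.Printf("flow %s:%d -> %s:%d lands on server %d\n",
			f.SrcIP, f.SrcPort, f.DstIP, f.DstPort, pickNextHop(f, servers))
	}
}
```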
We don't use separate scrubbing boxes or specialized hardware - every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely - in smaller datacenters we don't have to overinvest in expensive dedicated hardware.
During normal operations our attitude to attacks is rather pragmatic. Since the inbound traffic is distributed across hundreds of servers, we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events. This has been especially true since kernel 4.4, when the performance of SYN cookies was greatly improved.
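As a small aside, here is a hedged Go example (not from the original post) that checks whether SYN cookies are enabled on a Linux host by reading the standard `net.ipv4.tcp_syncookies` sysctl; the interpretation of the values follows the kernel's documented behavior.

```go
// Read /proc/sys/net/ipv4/tcp_syncookies and report the SYN cookie setting.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/sys/net/ipv4/tcp_syncookies")
	if err != nil {
		fmt.Println("could not read sysctl (not Linux, or insufficient permissions):", err)
		return
	}
	switch strings.TrimSpace(string(data)) {
	case "0":
		fmt.Println("SYN cookies disabled")
	case "1":
		fmt.Println("SYN cookies enabled when the SYN backlog overflows (default)")
	case "2":
		fmt.Println("SYN cookies always enabled")
	default:
		fmt.Println("unexpected value:", strings.TrimSpace(string(data)))
	}
}
```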
But at some point, malicious traffic volume Continue reading
In this video, Tony Fortunato demonstrates how he used Wireshark to analyze the behavior of a traceroute utility.
Organizations can save money by negotiating better deals on a common piece of network equipment, analysts say.