Alphabet’s Project Loon Delivers Internet in Puerto Rico
The balloon-based connectivity project is working with AT&T.
The WiFi technology is expected to help support video consumption ... eventually.
From time to time, someone publishes a new blog post lauding the wonderfulness of BGPsec, such as this one over at the Internet Society. In return, I sometimes feel like I am a broken record discussing the problems with the basic idea of BGPsec—while it can solve some problems, it creates a lot of new ones. Overall, BGPsec, as defined by the IETF Secure Inter-Domain Routing (SIDR) working group, is a “bad idea,” a classic study in the power of unintended consequences, and the fond hope that more processing power can solve everything.

To begin, a quick review of the operation of BGPsec might be in order. Essentially, each AS in the AS Path signs the “BGP update” as it passes through the internetwork, as shown below.

In this diagram, assume AS65000 is originating some route at A, and advertising it to AS65001 and AS65002 at B and C. At B, the route is advertised with a cryptographic signature “covering” the first two hops in the AS Path, AS65000 and AS65001. At C, the route is advertised with a cryptographic signature “covering” the first two hops in the AS Path, AS65000 and AS65002. When F advertises this route to H, at Continue reading
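To make the chaining concrete, here is a rough Python sketch of the idea only. This is not the RFC 8205 wire format, and real BGPsec uses ECDSA keys certified through the RPKI rather than shared secrets; the prefix, the per-AS keys, and the extra AS65003 hop below are invented for illustration.

```python
# Conceptual sketch of BGPsec-style path signing. This is NOT the RFC 8205
# wire format: real BGPsec uses ECDSA keys certified through the RPKI, while
# an HMAC stands in here for "a signature only that AS could have produced".
import hashlib
import hmac

# Hypothetical per-AS secrets standing in for each AS's private key.
AS_KEYS = {65000: b"key-65000", 65001: b"key-65001", 65002: b"key-65002"}


def sign_hop(prefix, signer_as, target_as, prev_signatures):
    """Sign the prefix, the AS being advertised to, and every signature
    already on the update, so no intermediate AS can rewrite the path."""
    msg = prefix.encode() + signer_as.to_bytes(4, "big") + target_as.to_bytes(4, "big")
    for sig in prev_signatures:
        msg += sig
    return hmac.new(AS_KEYS[signer_as], msg, hashlib.sha256).digest()


# AS65000 originates 192.0.2.0/24 (an example prefix) at A and advertises it
# toward AS65001 (via B) and AS65002 (via C), each with its own signature.
sigs_via_b = [sign_hop("192.0.2.0/24", 65000, 65001, [])]
sigs_via_c = [sign_hop("192.0.2.0/24", 65000, 65002, [])]

# AS65001 then re-advertises the route toward a hypothetical AS65003,
# adding a second signature that covers the one from AS65000.
sigs_via_b.append(sign_hop("192.0.2.0/24", 65001, 65003, sigs_via_b))
print(len(sigs_via_b), "signature blocks now travel with the update")
```

The point of the chain is that every AS signs over everything already on the update, so a downstream AS can verify the whole path. It also means every router carries and validates one signature per AS hop, which is where the post's concern about processing power comes from.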

This is a guest post by Michele Palmia, now @EyeEm, good times @IBM, @UniPd and @UCC.
We’ve now been running computer vision models in production at EyeEm for more than three years - on literally billions of images. As an engineer involved in building the infrastructure behind it from scratch, I both enjoyed and suffered the many technical challenges this task raised. This journey has also taught me a lot about managing processes and relationships with different teams, tasks of an especially challenging nature in a dynamic startup environment.
What follows is an attempt to consolidate the computer vision pipeline history at EyeEm, some of the challenges we had to face, some of the lessons we’ve learned, and a glimpse into its future.
What is the cloud? Why is it called a cloud? How does the cloud work? What does it mean when something is 'in the cloud'?
I wrote a new book: Explain the Cloud Like I'm 10, answering those questions for the complete beginner. It makes the perfect gift for Halloween. And Thanksgiving. And Christmas. Oh, and birthdays too!
The irony is, if you read HighScalability, you're not the target audience :-) Explain the Cloud Like I'm 10 is for people who hear about the cloud everyday and have wondered what it is.
Talking with people outside the tech bubble, I've found the cloud is still a mystery. I think that's because almost every explanation of the cloud I could find was a rewording of the same unhelpful technobabble.
In Explain the Cloud Like I'm 10 I've used a lot of pictures and a lot of examples. I go slow and easy. I try really hard to build up an intuitive understanding of what the cloud is and how it works.
If you know of anyone who might benefit from a book like this, I'd appreciate it if you'd pass it on.
thanks!

The post Worth Reading: The Economics of DDoS appeared first on rule 11 reader.

Scaling up TCP servers is usually straightforward. Most deployments start by using a single process setup. When the need arises, more worker processes are added. This is a scalability model for many applications, including HTTP servers like Apache, NGINX or Lighttpd.
CC BY-SA 2.0 image by Paul Townsend
Increasing the number of worker processes is a great way to overcome a single CPU core bottleneck, but opens a whole new set of problems.
There are generally three ways of designing a TCP server with regard to performance:
(a) Single listen socket, single worker process.
(b) Single listen socket, multiple worker processes.
(c) Multiple worker processes, each with separate listen socket.

(a) Single listen socket, single worker process

This is the simplest model, where processing is limited to a single CPU. A single worker process is doing both accept() calls to receive the new connections and processing of the requests themselves. This model is the preferred Lighttpd setup.

(b) Single listen socket, multiple worker processes

The new connections sit in a single kernel data structure (the listen socket). Multiple worker processes are doing both the accept() calls and processing of the requests. This model enables some spreading of the inbound Continue reading
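As a rough illustration of model (b), here is a minimal Python sketch in which several forked workers share one listen socket and each call accept() on it; the port, the worker count, and the one-line response are arbitrary choices for the example.

```python
# Minimal sketch of model (b): one listen socket shared by several forked
# worker processes, each calling accept() on it.
import os
import socket

NUM_WORKERS = 4

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 8080))
sock.listen(128)  # pending connections queue up in this single kernel structure

for _ in range(NUM_WORKERS):
    if os.fork() == 0:  # child process inherits the listen socket
        while True:
            conn, addr = sock.accept()  # the kernel hands each connection to one worker
            conn.sendall(b"hello\n")
            conn.close()

# Parent just waits on the workers in this toy example.
for _ in range(NUM_WORKERS):
    os.wait()
```

Model (c) would instead have every worker create and bind its own socket with the SO_REUSEPORT option set.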
As some readers may already know, this site has been running on a static site generator since late 2014/early 2015, when I migrated from WordPress to Jekyll on GitHub Pages. I’ve since migrated again, this time to Hugo on S3/CloudFront. Along the way, I’ve taken an interest in using make and a Makefile to help automate certain tasks at the CLI. In this post, I’ll share how I’m using a Makefile to help with publishing blog articles.
If you’re not familiar with make or its use of a Makefile, have a look at this article I wrote on using a Makefile with Markdown documents, then come back here.
In general, the process for publishing a blog post using Hugo and S3/CloudFront basically looks like this:
- Create the new post’s Markdown file (in the content/post directory.)
- Build the site with hugo.

Some of these steps Continue reading
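The post goes on to drive these steps from a Makefile. Purely as an illustration of the same build-and-upload idea, a hypothetical Python wrapper might look like the sketch below; it assumes the hugo and aws CLIs are installed, and the bucket name and CloudFront distribution ID are placeholders.

```python
# Hypothetical sketch of automating a Hugo -> S3/CloudFront publish in Python.
# The bucket name and distribution ID are placeholders; the original post
# drives these same steps from a Makefile rather than a script.
import subprocess

BUCKET = "s3://example-blog-bucket"   # placeholder bucket
DISTRIBUTION_ID = "EDFDVBD6EXAMPLE"   # placeholder CloudFront distribution


def publish():
    # Build the static site into Hugo's default "public/" output directory.
    subprocess.run(["hugo"], check=True)
    # Sync the generated files to S3, deleting anything removed locally.
    subprocess.run(["aws", "s3", "sync", "public/", BUCKET, "--delete"], check=True)
    # Invalidate the CloudFront cache so the new content is served immediately.
    subprocess.run(
        ["aws", "cloudfront", "create-invalidation",
         "--distribution-id", DISTRIBUTION_ID, "--paths", "/*"],
        check=True,
    )


if __name__ == "__main__":
    publish()
```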
In this video, learn about a free alternative tool to ping for network analysis.
Omer asked a pretty common question about BFD on one of my blog posts (slightly reworded):
Would you still use BFD even if you have a direct router-to-router physical link without L2 transport in the middle to detect if there is some kind of software failure on the other side?
Sander Steffann quickly replied:
Read more ...

The most important part of writing quality software is testing. Writing unit tests provides assurance that the changes you’re making aren’t going to break anything in your software application. Sounds pretty great, right? Why is it that in networking operations we’re still mainly using ping, traceroute, and human verification for network validation and testing?
I’ve written in the past that deploying configurations faster, or more generally, configuration management, is just one small piece of what network automation is. A major component much less talked about is automated testing. Automated testing starts with data collection and quickly evolves to include verification. It’s quite a simple idea and one that we recommend as the best place to start with automation, as it’s much lower risk than deploying configurations faster.
In our example, the network is the application, and unit tests need to be written to verify our application (as network operators) has valid configurations before each change is implemented, but integration tests are also needed to ensure our application is operating as expected after each change.
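As a minimal sketch of what such a test might look like, the Python example below uses the standard unittest module; the collect_bgp_summary() helper, device names, and expected neighbor states are made up for illustration and would be wired to whatever data-collection tooling you use (NAPALM, Netmiko, a controller API, and so on).

```python
# Sketch of an automated post-change check using Python's unittest module.
# collect_bgp_summary() is a placeholder for whatever tooling collects state
# from the device; the hostnames and expected neighbor states are made up.
import unittest


def collect_bgp_summary(device):
    """Placeholder: return {neighbor_ip: session_state} for the given device."""
    raise NotImplementedError("wire this up to your data-collection tooling")


EXPECTED_NEIGHBORS = {
    "edge-router-1": {"10.0.0.2": "Established", "10.0.0.6": "Established"},
}


class PostChangeTests(unittest.TestCase):
    def test_bgp_sessions_established(self):
        for device, expected in EXPECTED_NEIGHBORS.items():
            actual = collect_bgp_summary(device)
            for neighbor, state in expected.items():
                self.assertEqual(
                    actual.get(neighbor), state,
                    f"{device}: neighbor {neighbor} is not {state}",
                )


if __name__ == "__main__":
    unittest.main()
```

The same structure works as a pre-change check: run the suite before the change to confirm the network is in a known-good state, then again afterward to verify the change behaved as expected.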
If you choose to go down the DIY path for network automation, which could involve using an open source Continue reading