This is a guest repost by Ken Fromm, a 3x tech co-founder — Vivid Studios, Loomia, and Iron.io. Here's Part 1.
Job processing at scale and at high concurrency across a distributed infrastructure is a complicated feat. There are many components involved: servers and controllers to process and monitor jobs, controllers to autoscale and manage servers, controllers to distribute jobs across the set of servers, queues to buffer jobs, and a whole host of other components that ensure jobs complete and/or are retried, along with other critical pieces that help maintain high service levels. This section peels back the layers a bit to provide insight into the inner workings of a serverless platform.
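As a rough illustration only (not Iron.io's actual implementation), the components named above can be sketched in a few dozen lines: a queue to buffer jobs, a pool of workers to process them concurrently, and a retry budget so transient failures are retried rather than dropped. The `MAX_RETRIES` value and function names here are assumptions for the sketch.

```python
# Hypothetical sketch of buffered, concurrent job processing with retries.
import queue
import threading

MAX_RETRIES = 3  # assumed retry budget, not a real platform default


def run_jobs(jobs, num_workers=4):
    """Process callables concurrently; retry each up to MAX_RETRIES times.

    Returns (results, failed): results from successful jobs, and the
    jobs that exhausted their retry budget.
    """
    buf = queue.Queue()          # the queue that buffers jobs
    results, failed = [], []
    lock = threading.Lock()      # guards the shared result lists

    for job in jobs:
        buf.put(job)
    for _ in range(num_workers): # one sentinel per worker to signal shutdown
        buf.put(None)

    def worker():
        while True:
            job = buf.get()
            if job is None:      # sentinel: no more jobs
                return
            for attempt in range(MAX_RETRIES):
                try:
                    out = job()
                    with lock:
                        results.append(out)
                    break        # success: stop retrying
                except Exception:
                    if attempt == MAX_RETRIES - 1:
                        with lock:
                            failed.append(job)  # retry budget exhausted

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, failed
```

A real platform would distribute the queue and workers across machines and persist job state, but the control flow (buffer, dispatch, retry, record) is the same shape.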
Throughput has always been the coin of the realm in computer processing — how quickly can events, requests, and workloads be processed? In the context of a serverless architecture, I’ll break throughput down further when discussing both latency and concurrency. At a base level, however, a serverless architecture holds a throughput advantage over legacy applications and large web apps because it provides far better resource utilization.
In a post by Travis Reeder on What is Serverless Computing and Why is …
The Network and Distributed System Security Symposium (NDSS 2017) is just around the corner (26 February - 1 March), and details of the program are quickly coming into focus. The full slate of activities includes two keynotes, two workshops, and a full program of excellent peer-reviewed academic research papers.