In this Day Two Cloud podcast clip, we discuss whether the code we use to manage our infrastructure and the code we use for our applications should be stored in different repositories. To hear the entire episode, go to Day Two Cloud 085: Hosting Your Infrastructure Code In The Cloud. Hosts Ned Bellavance and Ethan […]
Over the past year, COVID-19 underlined the importance of a secure and resilient Internet to ensure we stay connected online. For MANRS, this meant even more incentive to work with network operators, Internet exchange points (IXPs), and content delivery network (CDN) and cloud providers to ensure data went where it was supposed to go via secure paths.
2020 saw strong growth in MANRS participation across all three programs.
MANRS contributed to the decline in reported routing incidents from more than 5,000 in 2017 to below 4,000 in 2020, making the entire Internet more secure for everyone. While we cannot claim full credit, we can attribute the decline to the growing number of network operators implementing routing best practices.
The year also saw us launch a new program for CDN and cloud providers in collaboration with eight founding participants: Akamai, Amazon Web Services, Azion, Cloudflare, Facebook, Google, […]
Complex cloud architectures do not need to be insecure. They just need to be built with the same level of oversight as the systems and architectures they are replacing.
Clarified in the introduction that this privacy policy applies to sites from both the Internet Society and the Internet Society Foundation. Previously, it said only “Internet Society”.
References to “Chief Administrative Officer” were changed to “Legal Department”.
Under “Can I Choose not to Receive Commercial Email Communications?”, the mention of “the OTA member preference center” was removed as that functionality was merged into the Internet Society membership portal.
On today’s Heavy Networking, we explore how to get network data you reference all the time and store it in a CSV using Ansible, the Genie parser, and Jinja2. Our guide for how to assemble these gears and get them cranking is John Capobianco, automation maven and Sr. IT Planner and Integrator for the House of Commons in the Canadian Parliament.
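To make that workflow concrete, here is a minimal sketch of the Jinja2-to-CSV step, assuming the Genie parser has already turned device output into a Python dict (the interface data below is made up for illustration):

```python
# Minimal sketch of the Jinja2-to-CSV step, assuming the Genie parser has
# already produced a dict of parsed "show interfaces" data (values invented).
from jinja2 import Template

parsed_interfaces = {
    "GigabitEthernet1": {"status": "up", "mtu": 1500},
    "GigabitEthernet2": {"status": "down", "mtu": 9000},
}

csv_template = Template(
    "interface,status,mtu\n"
    "{% for name, intf in interfaces.items() %}"
    "{{ name }},{{ intf.status }},{{ intf.mtu }}\n"
    "{% endfor %}"
)

# Render the parsed data through the template and write the CSV to disk.
with open("interfaces.csv", "w") as f:
    f.write(csv_template.render(interfaces=parsed_interfaces))
```

In the episode the same rendering step is driven from Ansible; the sketch just shows the template logic in isolation.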
This is a guest post by Paddy Byers, Co-founder and CTO at Ably, a realtime data delivery platform. You can view the original article on Ably's blog.
Users need to know that they can depend on the service that is provided to them. In practice, individual elements will inevitably fail from time to time, so the service has to be able to continue operating in spite of those failures.
In this article, we discuss the concepts of dependability and fault tolerance in detail and explain how the Ably platform is designed with fault tolerant approaches to uphold its dependability guarantees.
As a basis for that discussion, first some definitions:
Dependability: The degree to which a product or service can be relied upon. Availability and Reliability are forms of dependability.
Availability: The degree to which a product or service is available for use when required. This often boils down to provisioning sufficient redundancy of resources with statistically independent failures.
Reliability: The degree to which the product or service conforms to its specification when in use. This means a system that is not merely available but is also engineered with extensive redundant measures to continue to work as its […]
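The availability definition lends itself to a quick worked example. Assuming replicas fail independently, each additional redundant replica multiplies the unavailability by the per-replica failure probability:

```python
# Back-of-envelope illustration of redundancy with statistically independent
# failures: if one replica is available with probability a, then at least
# one of n replicas is available with probability 1 - (1 - a)**n.
def combined_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

for n in (1, 2, 3):
    print(n, combined_availability(0.99, n))
# 1 -> 0.99      ("two nines")
# 2 -> 0.9999    ("four nines")
# 3 -> 0.999999  ("six nines")
```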
Check Point sponsored this post.
Lior Sonntag
Lior is a Security Researcher at Check Point Software Technologies. He is a security enthusiast who loves to break stuff and put it back together. He's passionate about various InfoSec topics such as Cloud Security, Offensive Security, Vulnerability Research and Reverse Engineering.
The biggest cyberattack in recent times came in the form of what seems like a […]
It’s amazing to me that it’s been ten years since I attended my first Tech Field Day event. I remember being excited to be invited to Tech Field Day 5, then having to rush out of town a day early to beat a blizzard so I could attend. Given that we just went through another blizzard here, I thought the timing was appropriate.
How did attending an industry event change my life? How could something with only a dozen people over a couple of days change the way I looked at my career? I know I’ve mentioned parts of this to people in the past but I feel like it’s important to talk about how each piece of the puzzle built on the rest to get me to where I am today.
Voices Carry
The first thing Tech Field Day did to change my life was to show me that I mattered. I grew up in a very small town and spent most of my formative school years being bored. The Internet didn’t exist in a usable form for me. I devoured information wherever I could find it. And I languished as I realized that I needed more […]
The Managed Rules team was recently given the task of allowing Enterprise users to debug Firewall Rules by viewing the part of a request that matched the rule. This makes it easier to determine which specific attacks a rule is stopping, why a request was a false positive, and what refinements could improve the rule.
The fundamental problem, though, was how to securely store this debugging data as it may contain sensitive data such as personally identifiable information from submissions, cookies, and other parts of the request. We needed to store this data in such a way that only the user who is allowed to access it can do so. Even Cloudflare shouldn't be able to see the data, following our philosophy that any personally identifiable information that passes through our network is a toxic asset.
This meant we needed to encrypt the data in such a way that the user can decrypt it, but Cloudflare cannot. That calls for public key encryption.
Next, we needed to decide which encryption algorithm to use. We came up with some questions to help us evaluate the candidates:
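The questions themselves are cut off here, but the property being evaluated can be sketched. As an illustration only (not Cloudflare’s actual scheme), libsodium sealed boxes via PyNaCl provide exactly the “user can decrypt, we cannot” guarantee:

```python
# Illustration only (not Cloudflare's actual scheme): libsodium "sealed
# boxes" via PyNaCl give the property described above -- anyone holding the
# public key can encrypt, but only the private key holder can decrypt.
from nacl.public import PrivateKey, SealedBox

user_key = PrivateKey.generate()        # private key never leaves the user

# Encrypt using only the user's public key...
ciphertext = SealedBox(user_key.public_key).encrypt(b"matched request data")

# ...and decrypt with the private key; nobody else (including the
# service operator) can recover the plaintext.
assert SealedBox(user_key).decrypt(ciphertext) == b"matched request data"
```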
A few months back, Samsung and Xilinx co-introduced an SSD with a Xilinx FPGA processor on-board, making computational storage very real. The SSD meant data could be processed where it resided rather than moving it to and from memory. Now they’ve introduced High Bandwidth Memory (HBM) integrated with an artificial intelligence (AI) processor, called the HBM-PIM. The new processing-in-memory (PIM) architecture brings AI processing capabilities inside the memory rather than moving contents in and out to the processor, to accelerate large-scale processing in data centers, high-performance computing (HPC) systems, and AI-enabled mobile applications.
In this Linux tip, learn how to use the rig command. It randomly generates name, address and phone number listings. It's useful when you're testing an application and need hundreds or thousands of addresses to make sure that it works correctly.
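As a rough sketch, you could also drive rig from a Python test harness (this assumes rig is installed and uses its -c option to request a batch of identities):

```python
# Generate fake test identities by shelling out to rig (assumes the rig
# utility is installed; -c asks it to print a given number of identities).
import subprocess

fake_people = subprocess.run(
    ["rig", "-c", "100"], capture_output=True, text=True, check=True
).stdout
print(fake_people)
```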
In January, Jason Edelman kindly invited me for a chat about the state of (software defined) networking and network automation in particular. The recording was recently published on Network Collective.
Greg Kurtzer, one of the co-founders of the CentOS Linux distribution, the creator of the Singularity container environment for HPC workloads, the founder of the new Rocky Linux distribution that seeks to replace the now defunct CentOS, and an HPC guru in his own right, is on a mission. …
Caching is a magic trick. Instead of a customer’s origin responding to every request, Cloudflare’s 200+ data centers around the world respond with content that is cached geographically close to visitors. This dramatically improves the load performance for web pages while decreasing the bandwidth costs by having Cloudflare respond to a request with cached content.
However, if content is not in cache, Cloudflare data centers must contact the origin server to receive the content. This isn’t as fast as delivering content from cache, it places load on the origin server, and it is more costly than serving directly from cache. These issues can be amplified depending on the geographic distribution of a website’s visitors, the number of data centers contacting the origin, and the available origin resources for responding to requests.
To decrease the number of times our network of data centers communicates with an origin, we organize data centers into tiers so that only upper-tier data centers can request content from an origin, then spread that content to the lower tiers. This means content loads faster for visitors, is cheaper to serve, and consumes fewer origin resources.
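A toy model (illustrative, not Cloudflare’s code) shows why tiering cuts origin traffic: two lower-tier data centers missing on the same asset still produce only one origin fetch:

```python
# Toy model of the tiering described above: lower tiers check their own
# cache, then an upper tier; only the upper tier ever contacts the origin.
origin_calls = []

def origin(key):
    origin_calls.append(key)            # track how often the origin is hit
    return f"content for {key}"

def fetch(key, lower_cache, upper_cache):
    if key in lower_cache:
        return lower_cache[key]         # lower-tier hit
    if key not in upper_cache:
        upper_cache[key] = origin(key)  # only the upper tier asks the origin
    lower_cache[key] = upper_cache[key] # spread content down to the lower tier
    return lower_cache[key]

upper = {}
fetch("/logo.png", {}, upper)           # first lower-tier data center: miss
fetch("/logo.png", {}, upper)           # a different lower-tier data center
print(origin_calls)                     # ['/logo.png'] -- one origin fetch total
```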
Today, I’m thrilled to announce a fundamental improvement to Argo […]
Tetrate sponsored this post.
Jimmy Song
Jimmy is a developer advocate at Tetrate, a CNCF Ambassador, and a co-founder of ServiceMesher and the Cloud Native Community (China). He mainly focuses on Kubernetes, Istio, and cloud native architectures.
Different companies or software providers have devised countless ways to control user access to functions or resources, such as Discretionary Access Control (DAC), Mandatory Access Control (MAC), Role-Based Access Control (RBAC), and Attribute-Based Access Control (ABAC). In essence, whatever the type of access control model, three basic elements can be abstracted: user, system/application, and policy.
In this article, we will introduce ABAC, RBAC, and a new access control model — Next Generation Access Control (NGAC) — and compare the similarities and differences between the three, as well as why you should consider NGAC.
What Is RBAC?
Ignasi Barrera
Ignasi is a founding engineer at Tetrate and is a member of the Apache Software Foundation.
RBAC, or Role-Based Access Control, takes an approach whereby users are granted (or denied) access to resources based on their role in the organization. Every role is assigned a collection of permissions and restrictions, which is great because you don’t need to keep track of every system user and their attributes. You just […]
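A bare-bones sketch of the idea (role and permission names below are invented) shows how access checks reference roles rather than individual users and their attributes:

```python
# Bare-bones RBAC sketch: permissions attach to roles, users carry roles,
# and the access check never mentions individual users' attributes.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def is_allowed(user: str, permission: str) -> bool:
    # A user is allowed if any of their roles grants the permission.
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "delete"))  # True
print(is_allowed("bob", "write"))     # False
```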
A few years ago, we released Argo to help make the Internet faster and more efficient. Argo observes network conditions and finds the optimal route across the Internet for origin server requests, avoiding congestion along the way.
Tiered Cache is an Argo feature that reduces the number of data centers responsible for requesting assets from the origin. With Tiered Cache active, a request in South Africa won’t go directly to an origin in North America; instead, it will first check whether a large, nearby data center already has the requested data in cache. The number and location of the data centers used by Tiered Cache are controlled by a piece of configuration called the topology. By default, we use a generic topology for every customer that strikes a balance between cache hit ratios and latency, suitable for most users.
Today we’re introducing Smart Topology, which maximizes cache hit ratios by building on Argo’s internal infrastructure to identify the single best data center for making requests to the origin.
Standard Cache
The standard method for caching assets is to let each data center be a reverse proxy for the origin server. In this scheme, a miss in any […]
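Even from the truncated description, the standard scheme is clear enough to sketch: each data center caches independently, so every data center’s first miss reaches the origin (contrast this with the tiered model sketched earlier):

```python
# Sketch of the "standard" scheme above: each data center is an independent
# reverse proxy, so each one's first miss for an asset goes to the origin.
origin_calls = []

def origin(key):
    origin_calls.append(key)            # track how often the origin is hit
    return f"content for {key}"

class CachingProxy:
    def __init__(self):
        self.cache = {}

    def get(self, key):
        if key not in self.cache:       # miss: this data center asks the origin
            self.cache[key] = origin(key)
        return self.cache[key]

data_centers = [CachingProxy() for _ in range(3)]
for dc in data_centers:
    dc.get("/logo.png")
print(len(origin_calls))                # 3 -- one origin fetch per data center
```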