On Tuesday, November 8, 2022, voters cast their ballots in the 2022 US midterm elections, which included races for all 435 seats in the House of Representatives, 35 of the 100 seats in the Senate, and many gubernatorial races, including those in Florida, Michigan, and Pennsylvania. Preparing for elections is a giant task, and states and localities have their work cut out for them: corralling poll workers, setting up polling places, and managing the physical security of ballots and voting machines.
We at Cloudflare are proud to be able to play a role in helping safeguard the integrity of the electoral process. Through our Impact programs, we provide cyber security products to help protect access to authoritative voting information and the security of sensitive voter data.
We have reported on our work in the election space with the Athenian Project, dedicated to protecting state and local governments that run elections; Cloudflare for Campaigns, a project with a suite of Cloudflare products to secure political campaigns’ and state parties’ websites and internal teams; and Project Galileo, in which we have helped voting rights organizations and election results sites stay online during traffic spikes.
On today's Day Two Cloud we talk about testing. While developers do the testing, operators may be responsible for setting up testing environments, which can be a lot of work. That work increases with microservices because of all the complexities and dependencies that come with connecting and orchestrating microservices-based applications. Today we talk about how to address testing challenges with Arjun Iyer, and explore a solution he's developed for simplifying end-to-end microservices testing in a Kubernetes environment. This is not a sponsored show, but we do talk about Signadot, a startup Arjun founded in the testing space.
The post Day Two Cloud 171: The Challenges Of Scaling Microservices Testing appeared first on Packet Pushers.
Applications generally assume the network provides near-real-time packet transmission without regard for what the application is trying to do or what kind of traffic is being transmitted. Back in the real world, it's often important for the network to coordinate with applications so it can carry the offered traffic more efficiently. The Path Aware Research Group (PANRG) in the Internet Research Task Force (IRTF) is looking at the problems involved in understanding path characteristics and signaling them to applications.
In this episode of the Hedge, Brian Trammell joins Tom Ammon and Russ White to discuss the current work on path-aware networking.
< MEDIUM: https://raaki-88.medium.com/aws-direct-connect-site-link-a-very-excellent-service-10c13a389c8d >
SiteLink is a really nice extension to the Direct Connect Gateway offering. Let me simplify it.
Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-direct-connect-sitelink/ (I can't recommend this enough; it is a very nice read.)
A few important points
Problem: I want to connect my two data centres to each other through a Direct Connect Gateway, over the AWS backbone.
Let's look at a reference architecture.
Replicating the above scenario
A few important aspects
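The post walks through the setup in the console; as a rough sketch of the same idea in code, here is how SiteLink might be enabled on the two data-centre virtual interfaces with boto3. The enableSiteLink parameter of UpdateVirtualInterfaceAttributes and the dxvif IDs shown are assumptions for illustration, not details taken from the post.

```python
# Minimal sketch (illustrative, not from the original post): enable SiteLink
# on existing Direct Connect virtual interfaces so site-to-site traffic can
# ride the AWS backbone between the two data centres.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

def enable_sitelink(vif_id: str) -> dict:
    """Turn on SiteLink for one virtual interface attached to a DX Gateway."""
    return dx.update_virtual_interface_attributes(
        virtualInterfaceId=vif_id,
        enableSiteLink=True,   # assumed API flag; verify against current AWS docs
    )

if __name__ == "__main__":
    # Hypothetical virtual interface IDs for the two data-centre VIFs.
    for vif in ("dxvif-example-dc1", "dxvif-example-dc2"):
        response = enable_sitelink(vif)
        print(vif, "siteLinkEnabled:", response.get("siteLinkEnabled"))
```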
There’s no better way to start this blog post than with a widespread myth: we don’t need MLAG now that most vendors have implemented EVPN multihoming.
TL&DR: This myth is close to the “not even wrong” category.
As we discussed in the MLAG System Overview blog post, every MLAG implementation needs at least three functional components:
Container environments are highly dynamic and require continuous monitoring, observability, and security. Since container security is a continuous practice, it should be fully integrated into the entire development and deployment cycle. Implementing security as an integral part of this cycle allows you to mitigate risk and reduce the number of vulnerabilities across the dynamic and complex attack surface containers present.
Let’s take a look at three best practices for ensuring containers remain secure during build, deployment, and runtime.
Securing containers during the build and deployment stages is all about vulnerability management. It's important to continuously scan for vulnerabilities and misconfigurations in software before deployment, and to block deployments that fail to meet security requirements. Assess container and registry images by scanning first- and third-party images for vulnerabilities and misconfigurations, using a tool that scans multiple registries and draws on vulnerability databases such as the NVD. You also need to continuously monitor images, workloads, and infrastructure against common configuration security standards (e.g., CIS Benchmarks). This lets you meet internal and external compliance standards, and also quickly detect and remediate misconfigurations in your environment, eliminating potential attack vectors.
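To make the "block deployments that fail to meet security requirements" step concrete, here is a small hypothetical sketch (not from the article, and not tied to any particular scanner) of a policy gate that reads a scan report and refuses to ship an image with high-severity findings or failed CIS-style checks. The report format, field names, and severity threshold are illustrative assumptions.

```python
# Hypothetical policy gate: inspect a scanner's JSON report before an image
# is allowed to deploy. Exits non-zero (blocking the pipeline) on violations.
import json
import sys

ALLOWED_SEVERITIES = {"LOW", "MEDIUM"}   # anything above this blocks the deploy

def deployment_allowed(report: dict) -> bool:
    """Return True only if no finding exceeds the allowed severity and no check fails."""
    for finding in report.get("vulnerabilities", []):
        if finding.get("severity", "UNKNOWN").upper() not in ALLOWED_SEVERITIES:
            print(f"BLOCK: {finding.get('id')} ({finding.get('severity')}) "
                  f"in {finding.get('package')}")
            return False
    for check in report.get("config_checks", []):   # e.g. CIS-style benchmark results
        if check.get("status") == "FAIL":
            print(f"BLOCK: misconfiguration {check.get('id')}: {check.get('title')}")
            return False
    return True

if __name__ == "__main__":
    with open(sys.argv[1]) as f:    # path to a scan report, e.g. report.json
        sys.exit(0 if deployment_allowed(json.load(f)) else 1)
```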
During the discussion of the On Applicability of MPLS Segment Routing (SR-MPLS) blog post on LinkedIn someone made an off-the-cuff remark that…
SRv6 as an host2host overlay - in some cases not a bad idea
It’s probably just my myopic view, but I fail to see the above idea as anything but another tiny chapter in the “Solution in Search of a Problem” SRv6 saga.