A microservices architecture gives developers and DevOps engineers significant agility, helping them move at the pace of the business. Breaking monolithic applications into smaller components accelerates development, streamlines scaling, and improves fault isolation. However, it also introduces security complexities: microservices communicate with one another frequently, primarily through HTTP-based APIs, which broadens the application’s attack surface. The effect is similar to breaking a chunk of ice into smaller pieces, which increases its total surface area. Enterprises must address these security challenges before they can fully benefit from adopting a microservices architecture.
Kubernetes is the de facto standard for microservices orchestration. However, as organizations increasingly adopt Kubernetes, they run the risk of inadvertently introducing security gaps. This is often the result of attempts to integrate traditional security tooling into a cloud-native ecosystem that is highly dynamic, ephemeral, and non-deterministic. Instead of implementing security around the platform, DevOps, security, and platform teams must look at enforcing defenses through the platform.
Let’s look at the example of a web application firewall (WAF), which is typically deployed at the ingress of a network or application. As shown in the diagram below, HTTP traffic is Continue reading
Cloudflare's Zero Trust platform helps organizations map out and adopt a strong security posture. Its capabilities range from Zero Trust Network Access and a Secure Web Gateway that filters traffic, to a Cloud Access Security Broker and Data Loss Prevention that protect data in transit and in the cloud. Customers use Cloudflare to verify, isolate, and inspect all devices managed by IT. Our composable, in-line solutions offer a simplified approach to security and a comprehensive set of logs.
We’ve heard from many of our customers that they aggregate these logs into Datadog’s Cloud SIEM product. Datadog Cloud SIEM provides threat detection, investigation, and automated response for dynamic, cloud-scale environments. Cloud SIEM analyzes operational and security logs in real time – regardless of volume – using out-of-the-box integrations and rules to detect and investigate threats. It also automates response and remediation through out-of-the-box workflow blueprints. Developers, security, and operations teams can also leverage detailed observability data and collaborate efficiently to accelerate security investigations in a single, unified platform. We previously made an out-of-the-box dashboard for Cloudflare CDN available on Datadog, which helps our customers gain valuable insights into product usage and performance metrics such as response times, HTTP status codes, and cache hit rate. Continue reading
This year at Summit, an attendee posed a question about how to work with setting facts and changing data in Ansible. Many times we’ve come across people using task after task to manipulate data: turning items into lists, filtering out options, doing heavy data manipulation, and transforming data from one format into another. Trying to make these programmatic changes using a mixture of YAML and Jinja inside of roles and playbooks is a headache of its own. While many of these options will work, they aren’t very efficient or easy to implement. Ansible Playbooks were never meant for programming.
One solution that is usually overlooked is to do the manipulation in Python inside of a module or a filter. This article will detail how to create a filter to manipulate data. In addition, a repository for all code referenced in this article has been created.
This example was first developed as a module. However, after review, it was determined that these data transformations are best done as filters. Filters can take multiple data inputs, perform the programmatic operations, and then be used inline as input to other tasks or set as a fact. Continue reading
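To make the approach concrete, here is a minimal sketch of a custom filter plugin; the file name, filter name, and logic are illustrative and are not taken from the referenced repository. A filter plugin is just a Python file placed in a filter_plugins/ directory next to the playbook or role:

```python
# filter_plugins/example_filters.py
# Minimal custom filter plugin sketch; filter name and logic are illustrative.

def extract_values(items, key="name"):
    """Pull one key out of a list of dictionaries and return a flat list."""
    return [item[key] for item in items if key in item]


class FilterModule(object):
    """Entry point Ansible looks for when loading filter plugins."""

    def filters(self):
        return {"extract_values": extract_values}
```

In a playbook the filter can then be used inline or with set_fact, for example `{{ interfaces | extract_values('description') }}`, instead of chaining several tasks of YAML and Jinja manipulation.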
Long story short: I decided to create open-source BGP configuration labs, and (so far) created a superset of labs we used in an ancient Advanced BGP Configuration and Troubleshooting (ABCT) course.
Approximately 30 years ago I managed to persuade the powers-that-be within Cisco’s European training organization that they needed a deep-dive BGP course, resulting in a 3-day (later 5-day) Advanced BGP Configuration and Troubleshooting (ABCT) course. I delivered that course for close to a decade, and gradually built a decent story explaining the reasoning and use cases behind most of the (then available) BGP features, from simple EBGP sessions to BGP route reflectors and communities.
Now imagine having more than a dozen hands-on labs that go with the “BGP from rookie to hero” story, available for any platform of your choice. I plan to make that work (eventually) as an open-source project that you’ll be able to download and run free of charge.
When you combine the forces of open source and the wide and deep semiconductor experience of legendary chip architect Jim Keller, something interesting is bound to happen. …
The post Unleashing An Open Source Torrent On CPUs And AI Engines first appeared on The Next Platform.
Unleashing An Open Source Torrent On CPUs And AI Engines was written by Timothy Prickett Morgan at The Next Platform.
In this tutorial, I will share my experience installing the web-based user interface for Stable […]
The post Stable Diffusion Web UI on Linux first appeared on Brezular's Blog.
Today's Day Two Cloud peers inside the box of quantum computing. We explore how it works, what qubits are and why they matter, the current state of quantum computing hardware, what problems could be solved with quantum computing, and how you can get involved via the Qiskit open-source project. Our guest is Abby Mitchell, Quantum Developer Advocate at IBM.
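For listeners who want to experiment before (or after) the episode, below is a minimal sketch of a two-qubit Bell-state circuit run on a local simulator; it assumes the qiskit and qiskit-aer packages are installed and is not drawn from the episode itself:

```python
# Minimal Bell-state example; assumes `pip install qiskit qiskit-aer`.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # measure both qubits

sim = AerSimulator()
counts = sim.run(qc, shots=1024).result().get_counts()
print(counts)                # roughly half '00' and half '11'
```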
The post Day Two Cloud 205: States Of Quantum Computing With Abby Mitchell appeared first on Packet Pushers.
The use of physical infrastructure has changed substantially in the last three years. Data centres are scaling down, and offices and branches are being reconsidered. One view is that offices are ‘playgrounds’ where white-collar workers gather to chat, socialise, drink free coffee, and have face-to-face time for one or two days a week. An opposing view is that it’s a legacy way of working, but it will take time for people to adapt to remote work.
The post HS053 IT Facilities in 2023 appeared first on Packet Pushers.
Over the last couple of months, Workers KV has suffered from a series of incidents, culminating in three back-to-back incidents during the week of July 17th, 2023. These incidents have directly impacted customers that rely on KV — and this isn’t good enough.
We’re going to share the work we have done to understand why KV has had such a spate of incidents and, more importantly, share in depth what we’re doing to dramatically improve how we deploy changes to KV going forward.
Workers KV — or just “KV” — is a key-value service for storing data: specifically, data with high read throughput requirements. It’s especially useful for user configuration, service routing, small assets and/or authentication data.
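For readers who haven't used KV, the sketch below shows one way to write and read a key-value pair from outside a Worker via Cloudflare's REST API. The account and namespace identifiers, environment variable names, and key are placeholders, and the endpoint path reflects our understanding of the public KV API rather than anything stated in this post:

```python
# Hedged sketch: write and read a Workers KV pair through Cloudflare's REST API.
# CF_ACCOUNT_ID, CF_KV_NAMESPACE_ID, and CF_API_TOKEN are hypothetical env vars.
import os
import requests

base = (
    "https://api.cloudflare.com/client/v4/accounts/"
    f"{os.environ['CF_ACCOUNT_ID']}/storage/kv/namespaces/"
    f"{os.environ['CF_KV_NAMESPACE_ID']}/values"
)
headers = {"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"}

# Store a small configuration blob under a key.
requests.put(f"{base}/feature-flags", headers=headers,
             data=b'{"beta": true}').raise_for_status()

# Read it back; the response body is the raw stored value.
resp = requests.get(f"{base}/feature-flags", headers=headers)
resp.raise_for_status()
print(resp.content)
```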
We use KV extensively inside Cloudflare too, with Cloudflare Access (part of our Zero Trust suite) and Cloudflare Pages being some of our highest profile internal customers. Both teams benefit from KV’s ability to keep regularly accessed key-value pairs close to where they’re accessed, as well as its ability to scale out horizontally without any need to become an expert in operating KV.
Given Cloudflare’s extensive use of KV, it wasn’t just external customers who were impacted. Our own internal teams felt the pain Continue reading