Over the last few years, there has been a rise in the number of attacks that affect how a computer boots. Most modern computers use a specification called Unified Extensible Firmware Interface (UEFI), which defines a software interface between an operating system (e.g. Windows) and platform firmware (e.g. the firmware of disk drives and video cards). UEFI includes security mechanisms that allow platform firmware to be cryptographically validated so that the system boots securely through an application called a bootloader. This firmware is stored in non-volatile SPI flash memory on the motherboard, so it persists on the system even if the operating system is reinstalled and the drives are replaced.
This creates a ‘trust anchor’ used to validate each stage of the boot process, but, unfortunately, this trust anchor is also a target for attack. In these UEFI attacks, malicious code is loaded onto a compromised device early in the boot process. This means that malware can change configuration data, establish persistence by ‘implanting’ itself, and bypass security measures that are only loaded at the operating system stage. So, while UEFI-anchored secure boot protects against bootloader attacks, it does not protect the UEFI firmware itself.
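As a quick, hedged illustration (not from the original article): on a Linux system you can usually see whether UEFI Secure Boot is currently enforced, either with mokutil or by reading the SecureBoot EFI variable directly; exact tooling and paths vary by distribution.

# Sketch: check Secure Boot state on an EFI-booted Linux system.
mokutil --sb-state

# Or read the SecureBoot variable directly; after four attribute bytes,
# the final byte is 1 when Secure Boot is enabled.
od -An -tu1 /sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c | awk '{print $NF}'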
I usually post links to my blog posts on LinkedIn, and often get extraordinary comments. Unfortunately, those comments usually get lost in the mists of social media fog after a few weeks, so I’m trying to save them by reposting them as blog posts (always with the original author’s permission). Here’s a comment David Sun left on my Network Automation Expert Beginners blog post:
The most successful automation I’ve seen comes from orgs who start with proper software requirements specifications and more importantly, the proper organizational/leadership backing to document and support said infrastructure automation tooling.
Several Cloudflare services became unavailable for 121 minutes on January 24th, 2023 due to an error releasing code that manages service tokens. The incident degraded a wide range of Cloudflare products including aspects of our Workers platform, our Zero Trust solution, and control plane functions in our content delivery network (CDN).
Cloudflare provides a service token functionality to allow automated services to authenticate to other services. Customers can use service tokens to secure the interaction between an application running in a data center and a resource in a public cloud provider, for example. As part of the release, we intended to introduce a feature that showed administrators the time that a token was last used, giving users the ability to safely clean up unused tokens. The change inadvertently overwrote other metadata about the service tokens and rendered the tokens of impacted accounts invalid for the duration of the incident.
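For context, here is a hedged sketch of how a service token is typically presented by an automated client; the hostname and credentials are placeholders and this is illustrative rather than taken from the incident write-up.

# Hypothetical client calling an Access-protected service with a service token.
curl https://internal-app.example.com/api/status \
  -H "CF-Access-Client-Id: <client-id>" \
  -H "CF-Access-Client-Secret: <client-secret>"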
The reason a single release caused so much damage is because Cloudflare runs on Cloudflare. Service tokens impact the ability for accounts to authenticate, and two of the impacted accounts power multiple Cloudflare services. When these accounts’ service tokens were overwritten, the services that run on these accounts began to experience Continue reading
If memory bandwidth is holding back the performance of some of your applications, there is something that you can do about it other than to just suffer. …
Building The Perfect Memory Bandwidth Beast was written by Timothy Prickett Morgan at The Next Platform.
For a long time I’ve wanted something Raspberry Pi-like but with RISC-V. And finally there is one, at a defensible price! Especially with the Raspberry Pi 4 shortage, this seemed like a good idea.
This post covers my first impressions and setup steps.
When I was in my late teens I was playing with different architectures, mostly using discarded university computers. It was fun to have such different types of computers. Back then it was SPARC (and UltraSPARC), Alpha, and x86, plus maybe access to some HPPA. I even had a MIPS (SGI Indigo 2).
Nowadays instead of SPARC, Alpha, and x86 it’s ARM, RISC-V, and x64.
Luckily they can be smaller nowadays. Before I left home my room had more towers of computers than it had furniture. In my first flat I had a full-size rack!
# Write the image to the target disk; pv shows progress while dd does the writing.
pv starfive-jh7110-VF2_515_v2.5.0-69-minimal-desktop.img \
  | sudo dd of=/dev/sda
We need to repartition, because the boot partition is way too small: it only fits one kernel/initrd, which quickly became a problem for me.
Unfortunately gparted doesn’t seem to work on disk images. It Continue reading
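One common workaround, offered here only as a hedged sketch of mine rather than necessarily what the author did, is to attach the image to a loop device so partitioning tools see it as an ordinary block device:

# Sketch: map the image (and its partitions) to a loop device.
sudo losetup -fP --show starfive-jh7110-VF2_515_v2.5.0-69-minimal-desktop.img
# Prints e.g. /dev/loop0; partitions show up as /dev/loop0p1, /dev/loop0p2, ...
sudo parted /dev/loop0 print
# Detach when finished.
sudo losetup -d /dev/loop0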
Role-Based Access Control, or RBAC, lets you set permissions around who can access a system and at what level. RBAC is a basic, but essential, security function. This video looks at RBAC for Kubernetes from two perspectives: in native Kubernetes and in platforms such as Azure Active Directory. Host Michael Levan brings his background in system […]
The post Kubernetes Security And Networking 2: Getting Started With Role-Based Access Control (RBAC) – Video appeared first on Packet Pushers.
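As a small taste of what native Kubernetes RBAC looks like in practice, here is a minimal sketch of my own (the role, user, and namespace names are made up, not from the video): create a namespaced role, bind it to a user, and check the result.

# Grant read-only access to pods in the dev namespace.
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n dev
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane -n dev
# Verify what the user can (and cannot) do.
kubectl auth can-i list pods -n dev --as=jane
kubectl auth can-i delete pods -n dev --as=jane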
By 2025, Gartner estimates that over 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021. The momentum behind these workloads and solutions presents a significant opportunity for companies that can meet the challenges of this burgeoning industry.
As digitalization continues pushing applications and services to the cloud, many companies discover that traditional security, compliance, and observability approaches do not transfer directly to cloud-native architectures. This is the primary takeaway from Tigera’s recent The State of Cloud-Native Security report. As 75% of companies surveyed are focusing on cloud-native application development, it is imperative that leaders understand the differences, challenges, and opportunities of cloud-native environments to ensure they reap the efficiency, flexibility, and speed that these architectures offer.
The flexibility container workloads provide makes the traditional ‘castle and moat’ approach to security obsolete. Cloud-native architectures do not have a single vulnerable entry point but many potential attack vectors because of the increased attack surface. Sixty-seven percent of companies named security as the top challenge affecting the speed of their deployment cycles. Further, 69% of companies identified container-level firewall capabilities, such as intrusion detection and prevention, web application firewall, protection from “Denial of Service” Continue reading
We introduced resource modules in Ansible 2.9, which provided a path for users to ease network management, especially across multiple product vendors. This announcement was significant because these resource modules added a well-structured representation of device configurations and made it easy to manage common network configurations.
At AnsibleFest 2022, we announced a new addition to the content ecosystem offered through the platform: Ansible validated content. Ansible validated content is use case-focused automation packaged as Collections. They contain Ansible plugins, roles, and playbooks that you can use as an automation job through Red Hat Ansible Automation Platform.
The Ansible validated content for network base focuses on abstract platform-agnostic network automation and enhances the experience of resource module consumption by providing production-ready content. This network base content acts as the core to the other network validated content, which I will explain more about in the examples below.
The network.base Collection acts as a core for other Ansible validated content, as it provides a platform-agnostic role called Resource Manager, which is the entry point for managing all of the resources supported for a given network OS. It includes the Continue reading
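As a rough, hedged sketch of how this content is typically consumed (the playbook name below is hypothetical, and where the Collection is hosted depends on your automation hub or Galaxy configuration):

# Install the validated Collection from your configured content source.
ansible-galaxy collection install network.base
# Run a playbook that calls its platform-agnostic Resource Manager role.
ansible-playbook -i inventory.ini run_resource_manager.yml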
At Cloudflare, we take steps to ensure we are resilient against failure at all levels of our infrastructure. This includes Kafka, which we use for critical workflows such as sending time-sensitive emails and alerts.
We learned a lot about keeping our applications that leverage Kafka healthy, so they can always be operational. Application health checks are notoriously hard to implement: what determines whether an application is healthy? How can we keep services operational at all times?
Health checks can be implemented in many ways. We’ll talk about an approach that allows us to considerably reduce incidents involving unhealthy applications while requiring less manual intervention.
Cloudflare is a big adopter of Kafka. We use Kafka as a way to decouple services due to its asynchronous nature and reliability. It allows different teams to work effectively without creating dependencies on one another. You can also read more about how other teams at Cloudflare use Kafka in this post.
Kafka is used to send and receive messages. Messages represent some kind of event like a credit card payment or details of a new user created in your platform. These messages can be represented in multiple ways: JSON, Protobuf, Avro and so on.
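One widely used health signal for Kafka-consuming applications, offered here only as a hedged sketch rather than a description of Cloudflare’s setup, is consumer group lag, which the stock Kafka tooling can report (the broker address and group name below are placeholders):

# Describe a consumer group and inspect its LAG column.
kafka-consumer-groups.sh --bootstrap-server broker:9092 \
  --describe --group email-alerts-consumer
# A lag that keeps growing usually means the application is unhealthy
# or cannot keep up with the topic.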
It’s easy to get excited about what seems to be a new technology and conclude that it will forever change the way we do things. For example, I’ve seen claims that SmartNICs (also known as Data Processing Units – DPU) will forever change the network.
TL&DR: Of course they won’t.
Before we start discussing the details, it’s worth remembering what a DPU is: it’s another server with its own CPU, memory, and network interface card (NIC) that happens to have PCI hardware that emulates the host interface cards. It might also have dedicated FPGA or ASICs.