

Last year, R2 made its debut, providing developers with object storage while eliminating the burden of egress fees. (For many, egress costs account for over half of their object storage bills!) Since R2’s launch, tens of thousands of developers have chosen it to store data for many different types of applications.
But for some applications, data stored in R2 doesn’t need to be retained forever. Over time, as this data grows, it can lead to unnecessarily high storage costs. Today, we’re excited to announce that Object Lifecycle Management for R2 is generally available, allowing you to effectively manage object expiration, all from the R2 dashboard or via our API.
Object lifecycles give you the ability to define rules (up to 1,000) that determine how long objects uploaded to your bucket are kept. For example, by implementing an object lifecycle rule that deletes objects after 30 days, you could automatically delete outdated logs or temporary files. You can also define rules to abort unfinished multipart uploads that are sitting around and contributing to storage costs.
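For those who prefer to manage rules in code rather than the dashboard, here is a minimal sketch in Python using boto3 against the bucket’s S3-compatible endpoint. It assumes the endpoint accepts the standard PutBucketLifecycleConfiguration call; the account ID, credentials, and bucket name below are placeholders.

```python
# Minimal sketch: set lifecycle rules on a bucket via the S3-compatible API.
# Assumes the endpoint accepts PutBucketLifecycleConfiguration; all IDs,
# credentials, and bucket/prefix names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",  # placeholder
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="my-logs-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                # Delete outdated logs 30 days after upload.
                "ID": "expire-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Expiration": {"Days": 30},
            },
            {
                # Clean up multipart uploads that were started but never completed.
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
        ]
    },
)
```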

I got this comment on one of my ChatGPT-related posts:
It does save time for things like converting output to YAML (I do not feed it proprietary information), having it write scripts in various languages, or converting configs from one vendor to another. Although often they are not complete or correct, they save time, so regardless of what we think of it, it is an efficiency multiplier.
I received similar feedback several times, but found that the real answer (as is too often the case) is It Depends.
Organizations can reduce security risk in containerized applications by actively managing vulnerabilities: scanning images, automating image deployment, tracking runtime risk, and deploying mitigating controls.
Kubernetes and containers have become de facto standards for cloud-native application development due to their ability to accelerate the pace of innovation and codify best practices for production deployments, but such acceleration can introduce risk if not operationalized properly.
Containerized applications are highly dynamic and distributed across cloud environments: some containers are ephemeral and short-lived, while others are long-running. Traditional approaches to securing applications do not carry over to cloud-native workloads, because containers are typically deployed automatically by an orchestrator, which removes the manual checkpoint from the process. Automated deployment therefore requires that images be scanned continuously, so vulnerabilities are identified at development time and the risk of exploitation at runtime is reduced.
In addition to these challenges, the software supply chain adds complexity to vulnerability scanning and remediation. Applications increasingly depend on containers and components from third-party vendors and projects. As a result, it can take weeks or longer to patch the affected components and release new software.
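As a concrete illustration of scanning images before they ever reach the orchestrator, here is a minimal sketch of a CI gate written in Python. It assumes the open-source Trivy scanner is installed on the build host; the image name and severity threshold are illustrative, not prescriptive.

```python
# Minimal sketch of a CI gate: scan a freshly built image and block the push
# if serious vulnerabilities are found. Assumes the Trivy CLI is installed;
# the image tag below is a placeholder.
import subprocess
import sys

IMAGE = "registry.example.com/payments-api:1.4.2"  # placeholder image tag

result = subprocess.run(
    [
        "trivy", "image",
        "--severity", "HIGH,CRITICAL",  # only fail on serious findings
        "--exit-code", "1",             # non-zero exit when findings match
        IMAGE,
    ],
    capture_output=True,
    text=True,
)

print(result.stdout)

if result.returncode != 0:
    # Stop the pipeline so a vulnerable image never reaches the orchestrator.
    sys.exit("Image has HIGH/CRITICAL vulnerabilities; refusing to push.")
```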
When I reposted a link to the xBGP: Faster Innovation in Routing Protocols paper, someone immediately replied
Quite interesting, but it feels like this could become the proverbial 15th standard.
xBGP is an API that allows BGP users to implement routing policies (route selection, filtering, or propagation) that use attributes or mechanisms defined in newer IETF RFCs or drafts, so the proverbial 15th standard is not that far off the mark. However, we must remember that what we call BGP is more than just a set of competing standards.
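To make the idea concrete: the sketch below is not the xBGP interface, just a plain-Python illustration of the kind of per-route policy the paper has in mind, here accepting only routes tagged with a particular BGP large community (RFC 8092). The Route type and the community value are hypothetical.

```python
# Purely illustrative sketch (not the xBGP API) of attribute-based route
# filtering: accept a received route only if it carries a required large
# community. The Route class and community value are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Route:
    prefix: str
    as_path: list[int]
    large_communities: set[tuple[int, int, int]] = field(default_factory=set)


REQUIRED_COMMUNITY = (64512, 0, 100)  # hypothetical "customer route" marker


def import_filter(route: Route) -> bool:
    """Accept only routes tagged with the required large community."""
    return REQUIRED_COMMUNITY in route.large_communities


# A route without the marker is rejected; one carrying it is accepted.
print(import_filter(Route("192.0.2.0/24", [65001])))                    # False
print(import_filter(Route("198.51.100.0/24", [65001], {(64512, 0, 100)})))  # True
```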