Since its inception, Cloudflare Zaraz, the server-side third-party manager built for speed, privacy and security, has strived to offer a way for marketers and developers alike to get the data they need to understand their user journeys, without compromising on page performance. Cloudflare Zaraz makes it easy to transition from traditional client-side data collection based on marketing pixels in users’ browsers, to a server-side paradigm that shares events with vendors from the edge.
When implementing data collection on websites or mobile applications, analysts and digital marketers usually first define the set of interactions and attributes they want to measure, formalizing those requirements alongside technical specifications in a central document (a “tagging plan”). Developers later implement the code required to make those attributes available for the third-party manager to pick up. For instance, an analyst may want to analyze page views based on an internal name instead of the page title or page pathname. They would therefore define a “page name” attribute that the developer would need to make available in the context of the page. From there, the analyst would configure the tag management system to pick the attribute’s Continue reading
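As a rough illustration of the developer side of such a tagging plan, here is a minimal sketch that assumes the Cloudflare Zaraz Web API (`zaraz.track`) is available on the page; the event name, the `page_name` property, and its value are hypothetical examples from the tagging plan described above, not required field names.

```typescript
// Minimal sketch: expose an analyst-defined attribute to Cloudflare Zaraz.
// Assumes the Zaraz Web API is loaded on the page; the property name and
// value come from the hypothetical tagging plan, not from the API itself.
declare const zaraz: {
  track: (eventName: string, eventProperties?: Record<string, unknown>) => Promise<void>;
};

// Developer code: fire a pageview event that carries the "page name"
// attribute, which Zaraz can then forward to the configured vendors.
zaraz.track("pageview", { page_name: "product-detail" });
```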
After introducing the routing protocols and explaining the basics of link-state routing, it was time for implementation considerations, including:
It’s time for Eyvonne, Tom, and Russ to talk about some current stories in the world of networking—the May roundtable. Yes, I know it’s already June, and I’m a day late, but … This month we talk about the IT worker shortage, Infiniband, and the “next big thing.”
So draw up a place to sit and hang out with us as we chat.
Kubernetes has become the de facto standard for container orchestration, providing a powerful platform for deploying and managing containerized applications at scale. As more organizations adopt Kubernetes for their production workloads, ensuring the security and privacy of data in transit has become increasingly critical. Encrypting traffic within a Kubernetes cluster is one of the most effective components of a multi-layered defence for protecting sensitive data from interception and unauthorized access. Here, we will explore why encrypting traffic in Kubernetes is important and how it addresses compliance needs.
Two encryption methods are commonly adopted for protecting data integrity and confidentiality: encryption at rest and encryption in transit. Encryption at rest refers to encrypting stored data, e.g. in your cloud provider’s managed disk solution, so that if the data were simply copied and extracted, the raw information would be unintelligible without the cryptographic keys needed to decrypt it.
Encrypting data in transit is an effective security mechanism and a critical requirement for organizational compliance and regulatory frameworks, as it helps protect sensitive information from unauthorized access and interception while it is being transmitted over the network. We will dive deeper into this requirement.
Encrypting data in transit Continue reading
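To make the at-rest/in-transit distinction concrete, here is a minimal, generic sketch of encryption in transit using Node’s built-in `tls` module. It is not a Kubernetes-specific mechanism (clusters typically get this from an mTLS-capable service mesh or WireGuard at the CNI layer), and the host, port, and servername are placeholder values.

```typescript
import * as tls from "node:tls";

// Encryption in transit, in miniature: the application payload is encrypted
// with TLS before it leaves the process, so an on-path observer sees only
// ciphertext. Host, port, and servername below are placeholder values.
const socket = tls.connect(
  { host: "service.internal.example", port: 8443, servername: "service.internal.example" },
  () => {
    console.log("TLS established, cipher:", socket.getCipher().name);
    socket.write("sensitive payload\n");
    socket.end();
  }
);

socket.on("error", (err) => console.error("connection failed:", err.message));
```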
In today's Kubernetes Unpacked podcast, Michael and Kristina chat about KubeCon EU, which took place in April 2023 in Amsterdam. They explore the latest and greatest technologies that are coming, the value of in-person gatherings, and why conference codes of conduct matter. They also share their top 3 KubeCon takeaways.
The post Kubernetes Unpacked 027: KubeCon EU 2023 Recap appeared first on Packet Pushers.
In this episode, Ed and Tom interview Scott on the topic of IPv6 security and firewalls. This is one of Scott's many areas of expertise, as he is the co-author of IPv6 Security from Cisco Press. They discuss firewall strategies, design and operational considerations, pros and cons of a dual-stack approach, and more.
The post IPv6 Buzz 127: IPv6 Security And Firewalls appeared first on Packet Pushers.
Cloudflare will deprecate the Railgun product on January 31, 2024. At that time, existing Railgun deployments and connections will stop functioning. Customers have the next eight months to migrate to a supported Cloudflare alternative, which will vary based on use case.
Cloudflare first launched Railgun more than ten years ago. Since then, we have released several products in different areas that better address the problems that Railgun set out to solve. However, we shied away from the work to formally deprecate Railgun.
That reluctance led to Railgun stagnating, and customers suffered the consequences. We did not invest time in better support for Railgun. Feature requests never moved. Maintenance work needed to occur, and it stole resources away from improving the Railgun replacements. We allowed customers to deploy a zombie product and, starting with this deprecation, we are excited to correct that by helping teams move to significantly better alternatives that are now available in Cloudflare’s network.
We know that this will require migration effort from Railgun customers over the next eight months. We want to make that as smooth as possible. Today’s announcement features recommendations on how to choose a replacement, how to get started, and guidance on where you Continue reading
Today we’re excited to announce an update to our Tiered Cache offering: Regional Tiered Cache.
Tiered Cache allows customers to organize Cloudflare data centers into tiers so that only some “upper-tier” data centers can request content from an origin server, and then send content to “lower-tiers” closer to visitors. Tiered Cache helps content load faster for visitors, makes it cheaper to serve, and reduces origin resource consumption.
Regional Tiered Cache provides an additional layer of caching for Enterprise customers who have a global traffic footprint and want to serve content faster by avoiding the network latency incurred when a cache miss in a lower tier results in a fetch from an upper-tier data center located far away. In our trials, customers who have enabled Regional Tiered Cache have seen a 50-100ms improvement in tail cache hit response times from Cloudflare’s CDN.
First, a quick refresher on caching: a request for content is initiated from a visitor on their phone or computer. This request is generally routed to the closest Cloudflare data center. When the request arrives, we look to see if we have the content cached to respond to Continue reading
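The tier-by-tier escalation can be pictured with a short conceptual sketch; this is not Cloudflare’s implementation, and the tier names and the `fetchFrom` helper are assumptions made purely for illustration.

```typescript
// Conceptual sketch of tiered caching: a lower-tier miss is retried against
// a regional tier, then an upper tier, and only the final fallback reaches
// the origin. Tier names and fetchFrom are illustrative, not Cloudflare APIs.
type Tier = "regional" | "upper" | "origin";

async function resolveRequest(
  url: string,
  localCache: Map<string, string>,
  fetchFrom: (tier: Tier, url: string) => Promise<string | undefined>
): Promise<string> {
  const hit = localCache.get(url);
  if (hit !== undefined) return hit; // lower-tier hit: serve directly

  // Miss: with Regional Tiered Cache, ask a nearby regional tier before
  // crossing a long-haul link to the upper tier.
  for (const tier of ["regional", "upper"] as const) {
    const body = await fetchFrom(tier, url);
    if (body !== undefined) {
      localCache.set(url, body);
      return body;
    }
  }

  // Every cache tier missed: the upper tier fetches from the origin.
  const body = await fetchFrom("origin", url);
  localCache.set(url, body ?? "");
  return body ?? "";
}
```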
One of my readers sent me this (paraphrased) question:
What I have seen in my network are multicast packets with the IP source address set to 0.0.0.0 and source port set to 0. Is that considered acceptable? Could I use a multicast IP address as a source address?
TL&DR: **** NO!!!
It also seemed like a good question to test ChatGPT, and this time it did a pretty good job.
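For illustration only, here is a tiny sanity check that encodes the rule behind that answer: multicast addresses (224.0.0.0/4) must never appear as an IP source address, and 0.0.0.0 is only legitimate as a source while a host has no address yet (e.g. a DHCP discover), not on multicast traffic. The function name is made up for this sketch.

```typescript
// Sketch: reject source addresses that should never appear on ordinary
// multicast traffic. IPv4 only; purely illustrative.
function isAcceptableSourceAddress(addr: string): boolean {
  const octets = addr.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => !Number.isInteger(o) || o < 0 || o > 255)) {
    return false; // not a well-formed dotted-quad IPv4 address
  }
  if (octets.every((o) => o === 0)) return false;          // 0.0.0.0: unspecified address
  if (octets[0] >= 224 && octets[0] <= 239) return false;  // 224.0.0.0/4: multicast, never a source
  return true;
}

console.log(isAcceptableSourceAddress("192.0.2.10")); // true
console.log(isAcceptableSourceAddress("0.0.0.0"));    // false
console.log(isAcceptableSourceAddress("239.1.1.1"));  // false
```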