
This is a sort of companion piece to my post last week, because I saw a very short post here about doing less. It really hit home with me because I’m just as bad as Shawn about wanting everything to be perfect when I write or create it.
One of the things that I’ve noticed in a lot of content that I’ve been consuming recently is the inclusion of mistakes. When you’re writing you have ample access to a backspace key, so typos shouldn’t exist (and autocorrect can bugger off). But in video and audio content you can often make a mistake and not even realize it. Flubbing a word or needing to do a retake happens quite often, even if you never see or hear it.
What has me curious is that more of those quick errors are making it into the final cut. These are things that could easily be fixed in post-production, and yet they stay. It’s almost like the creators are admitting that mistakes happen and that it’s hard to read scripts perfectly every time like some kind of robot. Honest mistakes over things like pronunciation or difficult word combinations Continue reading
Today on Day Two Cloud we explore what it takes to transition from traditional networking to a career as a cloud network engineer. Guest Kam Agahian shares insights from his own career journey about what's the same and what's different between on-prem and cloud networking, which skills you might want to pick up to make the transition, recommended certifications, and more.
The post Day Two Cloud 189: The Cloud Network Engineer Career Path With Kam Agahian appeared first on Packet Pushers.
Containerized applications are complex, which is why an effective container security strategy is difficult to design and execute. As digitalization continues to push applications and services to the cloud, bad actors’ attack techniques have also become more sophisticated, which further challenges container security solutions available on the market.
Despite the debate in the cloud security landscape over agent vs. agentless solutions and which type is better, the most valuable solution is one that provides a wide breadth of coverage. Calico is unique in that it is already installed as part of the underlying platform and provides the dataplane for a Kubernetes cluster. When Calico Cloud or Calico Enterprise is deployed, security and observability capabilities can be enabled on top of these core components. We provide a simple plug-and-play active security solution that focuses on securing workloads and the Kubernetes platform with the least amount of complexity and configuration.
Cloud-native applications are susceptible to many attack vectors. We have broken them down into eight, as seen in the following illustration:

In previous blogs, we have explained how the use of vulnerability management, zero-trust workload security, and microsegmentation can help reduce the Continue reading
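To make the microsegmentation idea above a bit more concrete, here is a minimal sketch (not a Calico-specific API; just a plain Kubernetes NetworkPolicy applied with the official Python client, which a Calico dataplane would then enforce). The namespace, labels, and port are hypothetical placeholders.

```python
from kubernetes import client, config

# Load kubeconfig from the default location (use load_incluster_config() inside a pod).
config.load_kube_config()

# Allow only pods labelled app=frontend to reach pods labelled app=api on TCP/8080;
# all other ingress to the api pods is denied once a policy selects them.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-api", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}
                        )
                    )
                ],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)
```

The same segmentation intent could equally be expressed with Calico's own policy resources, which add features such as global scope and tiered ordering on top of this baseline model.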
Years ago I loved ranting about the stupidities of building stretched VLANs to run high-availability network services clusters with two nodes (be it firewalls, load balancers, or data center switches with centralized control plane) across multiple sites.
I collected pointers to those blog posts and other ipSpace.net HA cluster resources on the new High Availability Service Clusters page.

Just after midnight (UTC) on April 4, subscribers to UK ISP Virgin Media (AS5089) began experiencing an Internet outage, with subscriber complaints multiplying rapidly on platforms including Twitter and Reddit.
Cloudflare Radar data shows Virgin Media traffic dropping to near-zero around 00:30 UTC, as seen in the figure below. Connectivity showed some signs of recovery around 02:30 UTC, but fell again an hour later. Further nominal recovery was seen around 04:45 UTC, before another complete outage between approximately 05:45-06:45 UTC, after which traffic began to recover, reaching expected levels around 07:30 UTC.
After the initial set of early-morning disruptions, Virgin Media experienced another round of issues in the afternoon. Cloudflare observed instability in traffic from Virgin Media’s network, AS5089 (called an autonomous system in Internet jargon), starting around 15:00 UTC, with a significant drop just before 16:00 UTC. However, in this case it did not appear to be a complete outage, with traffic recovering approximately a half hour later.

Virgin Media’s Twitter account acknowledged the early morning disruption several hours after it began, posting responses stating “We’re aware of an issue that is affecting broadband services for Virgin Media customers as well as our contact centres. Our teams Continue reading