The massive outage caused by a failed CrowdStrike update pushed to 8.5 million Windows hosts will live in Internet history for years to come. The failure will be studied by engineering teams and college classes to understand what went wrong and how we can stop it from happening in the future. Derick Winkworth (@cloudtoad), Eyvonne Sharp, Tom Ammon, and Russ White hang out at the hedge to talk about what happened and lessons learned from a network engineering perspective.
The reality is a bit muddier (in the VXLAN world) as we still need transit VLANs and router MAC addresses; the best way to explore what’s going on behind the scenes is to build a simple lab.
On today’s Day Two DevOps we talk with Jen Stone, a technical security assessor and aerial arts competition organizer. Jen shares her journey from the IT service desk to becoming a security assessor. She emphasizes the importance of creativity and empathy in regulatory compliance while advocating for a collaborative approach to assessments and auditing. Episode Guest:... Read more »
Today on Heavy Wireless we discuss the LATYS Focus device. This innovative RF technology amplifies and directs signals without traditional networking layers. Our guest is Artmiz Golkaramnay, Founder of LATYS. Artmiz explains the device’s functionality, which includes a directional antenna for focused signal amplification; its technical specifications; practical applications in industrial settings; cost benefits; and ease... Read more »
On today’s Packet Protector we answer listener questions about Wi-Fi security with guest Stephen Orr. Stephen is Chair of the Security Technical Task Group for the Wi-Fi Alliance and a Distinguished Solutions Engineer at Cisco. Questions include what recommendations Stephen would make for using multiple SSIDs vs. role-based device segmentation, what he sees as the... Read more »
COMMISSIONED: Whether you’re using one of the leading large language models (LLM), emerging open-source models or a combination of both, the output of your generative AI service hinges on the data and the foundation that supports it. …
The modern use of "cloud" arguably traces its origins to the cloud icon, omnipresent in network diagrams for decades. A cloud was used to represent the vast and intricate infrastructure components required to deliver network or Internet services without going into depth about the underlying complexities. At Cloudflare, we embody this principle by providing critical infrastructure solutions in a user-friendly and easy-to-use way. Our logo, featuring the cloud symbol, reflects our commitment to simplifying the complexities of Internet infrastructure for all our users.
This blog post provides an update about our infrastructure, focusing on our global backbone in 2024, and highlights its benefits for our customers, our competitive edge in the market, and the impact on our mission of helping build a better Internet. Since the time of our last backbone-related blog post in 2021, we have increased our backbone capacity (Tbps) by more than 500%, unlocking new use cases, as well as reliability and performance benefits for all our customers.
A snapshot of Cloudflare’s infrastructure
As of July 2024, Cloudflare has data centers in 330 cities across more than 120 countries, each running Cloudflare equipment and services. The goal of delivering Cloudflare products and services everywhere remains consistent, although Continue reading
The majority of networks running in the enterprise today are built on traditional VLANs, and the migration paths are limited. Really limited. How will a business transition from traditional VLANs to whatever comes next?
The only sane choice I have found so far in the data center environment (and I know it has been embraced by many organizations facing that conundrum) is to build a parallel fabric (preferably when the organization is doing a server refresh) and connect the new fabric to the old one with a layer-3 link (in the ideal world) or an MLAG link bundle.
Hi all, welcome back to yet another Palo Alto Automation post. If you work with firewalls, you know that one of the most time-consuming tasks is decommissioning a single resource or an entire subnet from the firewall (aka removing all the references related to the resource). Let's imagine you have thousands of address objects and several hundred security policies. Now, suppose we decommissioned a server and our task is to remove any references we may have in the firewall. It could be an address object, a member in an address group, or a reference in a security policy. Removing this manually would take a lot of time, so in this blog post, let's look at a very simple way of cleaning this up using pan-os-php.
The Problem with the Manual Approach
Here is the problem with the manual approach. Let’s say we are trying to decommission the IP address 10.10.10.25. Let's say we have an address object created for this server, and it has been referenced in a few address groups and several security policies. If you try to remove the address object first, the firewall will complain because the object is being referenced elsewhere. So, you Continue reading
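The "find everything that references this object" step is what pan-os-php automates. To illustrate the idea behind it (this is not pan-os-php itself; the XML layout and element names below are a simplified, hypothetical sketch of an exported PAN-OS-style config), here is a minimal reference scan in Python:

```python
# Illustration of the reference-scanning step that pan-os-php automates.
# NOT pan-os-php itself: the XML below is a simplified, hypothetical
# sketch of a PAN-OS-style exported configuration.
import xml.etree.ElementTree as ET

CONFIG = """
<config>
  <address>
    <entry name="srv-app01"><ip-netmask>10.10.10.25/32</ip-netmask></entry>
  </address>
  <address-group>
    <entry name="app-servers"><static><member>srv-app01</member></static></entry>
  </address-group>
  <rules>
    <entry name="allow-app"><destination><member>srv-app01</member></destination></entry>
  </rules>
</config>
"""

def find_references(xml_text: str, ip: str) -> dict:
    """Return the address object matching `ip` and everything that references it."""
    root = ET.fromstring(xml_text)
    # Step 1: find the address object whose value matches the IP.
    obj_name = None
    for entry in root.iter("entry"):
        netmask = entry.find("ip-netmask")
        if netmask is not None and netmask.text.split("/")[0] == ip:
            obj_name = entry.get("name")
    if obj_name is None:
        return {"object": None, "referenced_in": []}
    # Step 2: find every <member> that points at that object
    # (address groups, security policies, and so on).
    refs = [
        entry.get("name")
        for entry in root.iter("entry")
        for member in entry.iter("member")
        if member.text == obj_name and entry.get("name") != obj_name
    ]
    return {"object": obj_name, "referenced_in": refs}

print(find_references(CONFIG, "10.10.10.25"))
```

Only after every reference is removed (step 2) can the address object itself (step 1) be deleted without the firewall complaining, which is exactly the ordering pan-os-php handles for you.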
For those that aren’t aware, Talos Linux is a purpose-built Linux distribution designed for running Kubernetes. Bootstrapping a Talos Linux cluster is normally done via the Talos API, but this requires direct network access to the Talos Linux nodes. What happens if you don’t have direct network access to the nodes? In this post, I’ll share with you how to bootstrap a Talos Linux cluster over SSH.
In all honesty, if you can establish direct network access to the Talos Linux nodes then that’s the recommended approach. Consider what I’m going to share here as a workaround—a backup plan, if you will—in the event you can’t establish direct network access. I figured out how to bootstrap a Talos Linux cluster over SSH only because I was not able to establish direct network access.
Before getting into the details, I think it’s useful to point out that I’m talking about using SSH port forwarding (SSH tunneling) here, but Talos Linux doesn’t support SSH (as in, you can’t SSH into a Talos Linux system). In this case, you could do the same thing I did and use an SSH bastion host to handle the port forwarding.
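The workaround can be sketched roughly as follows. The bastion hostname and node IP are placeholders for your environment; the Talos API listens on TCP port 50000, and depending on your machine config you may need 127.0.0.1 in the node's certificate SANs (`machine.certSANs`) for the authenticated call to succeed:

```shell
# Forward the Talos API port (50000) of an unreachable node
# through an SSH bastion host. "bastion" and 10.0.0.10 are placeholders.
ssh -N -L 50000:10.0.0.10:50000 user@bastion &

# Point talosctl at the local end of the tunnel instead of the node.
talosctl apply-config --insecure --nodes 127.0.0.1 --file controlplane.yaml

# Once the node has its config, bootstrap etcd through the same tunnel.
talosctl bootstrap --nodes 127.0.0.1 --endpoints 127.0.0.1
```

The key observation is that talosctl only needs TCP reachability to port 50000, so any port-forwarding mechanism that delivers that works.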
Take a Network Break! This week we discuss a proposed class action lawsuit against CrowdStrike, while Delta investigates options to seek damages from CrowdStrike and Microsoft. Microsoft Azure goes down after a DDoS defense error, campus switch sales are forecast to drop significantly in 2024, and DigiCert warns customers that an error it made will... Read more »
It’s been a while since I last did any tech blog writing. This weekend I powered on my EVE-NG based lab running on a Dell R820 edge server and explored some of the mysteries behind the BGP Route Reflector. Please read through it on my GitHub (https://github.com/kashif-nawaz/Understanding-BGP-Route-Reflector-Mysteries).
The Secure Shell (SSH) isn’t just for remoting into servers to tackle admin tasks. Thanks to this secure networking protocol, you can also mount remote directories with the help of the SSH File System (SSHFS).
SSHFS uses SFTP (SSH File Transfer Protocol) to mount remote directories on a local machine using secure encryption, which makes the connection far more secure than standard FTP. And once a remote directory is mounted, it can be used as if it were on the local machine.
Consider SSHFS a more secure way of creating network shares; the only difference is that SSHFS must be installed on every machine that needs to connect to the share (whereas with Samba, it only needs to be installed on the machine hosting the share).
Let’s walk through the process of getting SSHFS up and running, so you can securely mount remote directories to your local machine.
What You’ll Need
To make this work, you’ll need at least two Linux machines. These machines can be Ubuntu or Fedora-based, because SSHFS is found in the standard repositories for most Linux distributions. You’ll also need a user with Continue reading
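Once the prerequisites are in place, the basic flow looks like this (the hostname, username, and paths are placeholders for your own environment):

```shell
# Install sshfs (Ubuntu/Debian shown; use dnf on Fedora).
sudo apt-get install -y sshfs

# Create a local mount point and mount the remote directory over SFTP.
mkdir -p ~/remote-share
sshfs user@server:/srv/share ~/remote-share

# Work with ~/remote-share as if it were local, then unmount when done.
fusermount -u ~/remote-share
```

Because the transport is SSH, authentication works exactly as it does for a normal SSH login, including key-based auth.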
Last month, I took a good look at the Gowin R86S based on the Jasper Lake (N6005) CPU [ref], which is a really neat little 10G (and, if you fiddle with it a little bit, 25G!) router that runs off of USB-C power and can be rack mounted if you print a bracket. Check out my findings in this [article].
David from Gowin reached out and asked me if I was willing to also take a look at their Alder Lake (N305) CPU model, which comes in a 19” rack mountable chassis, runs off of 110V/220V AC mains power, and also sports a 2x25G ConnectX-4 network card. Why not! For critical readers: David sent me this machine, but made no attempt to influence this article.
Hardware Specs
There are a few differences between this 19” model and the compact mini-PC R86S. The most obvious difference is the form factor. The R86S is super compact and not inherently rack mountable, although I 3D printed a bracket for it. Looking inside, the motherboard is mostly obscured by a large cooling block with fins that are flush with the top plate. There are 5 copper ports in the front: 2x Intel i226-V (these Continue reading
In this blog post, let's look at a common scenario where users face two MFA prompts when trying to connect to the GlobalProtect VPN. Typically, this happens because MFA has been set up for both the portal and the gateway.
When a user connects to GlobalProtect, they first log into the portal and complete MFA; then they automatically attempt to connect to the gateway, which triggers another prompt. We'll look at how to prevent the double prompt using authentication cookies, so the user only needs to complete MFA once.
GlobalProtect Cookie Authentication
Cookie authentication simplifies the authentication process because users no longer need to log in to both the portal and the gateway in succession, or complete MFA for each. This improves the user experience by minimizing the number of times users enter credentials.
To keep things simple: when a user logs into GlobalProtect, we can configure it to generate a 'cookie.' This cookie allows the user to re-authenticate automatically without having to re-enter their credentials or go through MFA again. It's similar to how web browsers remember your login details for websites; once Continue reading
In a previous blog post, I talked about how Terraform's native capabilities don't fully cover comprehensive IP address management, which can make network configurations a bit tricky.
In this post, I’m going to dive into a practical approach for handling IP addresses in Terraform. I'll show you how to leverage an external data source and use a Python script to process IP address operations, then integrate the results back into Terraform.
Introduction to External Data Source
In Terraform, a data source allows you to retrieve information from external systems or services, which you can then use in your configurations. Unlike resources, which are used to manage the lifecycle of infrastructure components, data sources are read-only. They provide a way to fetch data that you might need when setting up or configuring your infrastructure. This is especially useful when you want to incorporate existing information without directly managing the components within your Terraform scripts.
A simple data source in Terraform looks like this:
data "external" "ip" {
  # The external data source requires a "program" argument: a command
  # that reads a JSON object on stdin and prints a JSON map of strings
  # on stdout. The script path here is just an example.
  program = ["python3", "${path.module}/ip.py"]
}
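On the Python side, the external provider's contract is that the program reads a JSON object of strings on stdin and prints a JSON object of strings on stdout. A minimal sketch of such a script (the script name and the query keys "cidr" and "index" are my own choices for this example):

```python
# Minimal helper for Terraform's "external" data source: reads
# {"cidr": ..., "index": ...} on stdin and prints the selected host
# address as JSON on stdout. The query keys are assumptions for this example.
import ipaddress
import json
import sys

def pick_host(query: dict) -> dict:
    """Return the nth address of a CIDR block as a map of strings."""
    network = ipaddress.ip_network(query["cidr"])
    # The external provider requires string values only in the result.
    return {"ip": str(network[int(query["index"])])}

if __name__ == "__main__":
    print(json.dumps(pick_host(json.load(sys.stdin))))
```

Wired up via the external provider's `program` argument (e.g. `program = ["python3", "${path.module}/ip.py"]`), the computed value then becomes available in Terraform as `data.external.ip.result.ip`.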
Sample External Data Source
Many providers offer data sources to interact with their systems and retrieve configuration state. A data source in Terraform can range from a Continue reading
AI is making its way into network automation. Maybe the thought of a hallucinating ChatGPT getting its six-fingered hands on your network makes you want to run the other way. But the story of AI for IT operations is more nuanced than the hot takes we get about the confidently dumb results that Large Language... Read more »
Intel’s second quarter is pretty much a carbon copy of the first three months of 2024 when it comes to revenues across its newly constituted groups, and with an operating loss that is twice as big. …