Archive

Category Archives for "Networking"

Customers get increased integration with Cloudflare Email Security and Zero Trust through expanded partnership with CrowdStrike

Today, we’re excited to expand our recent Unified Risk Posture announcement with more information on our latest integrations with CrowdStrike. We previously shared that our CrowdStrike Falcon Next-Gen SIEM integration enables deeper analysis and further investigation by unifying first- and third-party data, native threat intelligence, AI, and workflow automation, freeing your security teams to focus on the work that matters.

This post explains how Falcon Next-Gen SIEM allows customers to identify and investigate risky user behavior and analyze data combined with other log sources to uncover hidden threats. By combining Cloudflare and CrowdStrike, organizations are better equipped to manage risk and decisively take action to stop cyberattacks.

By combining Cloudflare’s email security and Zero Trust logging capabilities with CrowdStrike’s dashboards and custom workflows, organizations gain better visibility into their environments and can remediate potential threats. Happy Cog, a full-service digital agency, currently leverages the integration. Co-Founder and President Matthew Weinberg said:

"The integration of Cloudflare’s robust Zero Trust capabilities with CrowdStrike Falcon Next-Gen SIEM enables organizations to gain a more comprehensive view of the threat landscape and take action to mitigate both internal and external risks posed by today’s security Continue reading

PMTUD in MPLS-enabled Networks

In the previous post on MSS, MSS Clamping, PMTUD, and MTU, we learned how PMTUD works by setting the Don’t Fragment (DF) flag in the IP header, which causes the device that would need to fragment the packet to drop it and send an ICMP Fragmentation Needed message toward the source. In MPLS-enabled networks, it’s not always possible to send the ICMP packet straight toward the source, as the P routers have no knowledge of the customer-specific networks. RFC 3032 – MPLS Label Stack Encoding describes such a scenario:

Suppose one is using MPLS to "tunnel" through a transit routing
domain, where the external routes are not leaked into the domain's
interior routers. For example, the interior routers may be running
OSPF, and may only know how to reach destinations within that OSPF
domain. The domain might contain several Autonomous System Border
Routers (ASBRs), which talk BGP to each other. However, in this
example the routes from BGP are not distributed into OSPF, and the
LSRs which are not ASBRs do not run BGP.

In this example, only an ASBR will know how to route to the source of
some arbitrary packet. If an interior router needs Continue reading
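As a back-of-the-envelope illustration (not from the post itself), each MPLS label shim adds 4 bytes in front of the IP packet, so a small hypothetical helper can show when a full-size packet with the DF bit set would no longer fit a core link:

```python
# Each MPLS label shim adds 4 bytes in front of the IP packet.
MPLS_LABEL_SIZE = 4

def fits_core_link(ip_packet_size: int, label_stack_depth: int,
                   core_mtu: int = 1500) -> bool:
    """Return True if the labeled packet fits the core link MTU.

    A False result for a packet with the DF bit set means the LSR must
    drop it and attempt to send ICMP Fragmentation Needed to the source.
    """
    return ip_packet_size + label_stack_depth * MPLS_LABEL_SIZE <= core_mtu

# A full-size 1500-byte packet no longer fits once two labels are pushed,
# while a 1492-byte packet still does.
print(fits_core_link(1500, 2))
print(fits_core_link(1492, 2))
```

This is why MSS clamping or a reduced edge MTU is commonly used in MPLS cores where the ICMP path back to the source is unreliable.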

PP030: Volt Typhoon On the Attack, Starlink Joins the Navy, and More Security News

Today’s Packet Protector is an all-news episode. We cover the Volt Typhoon hacker group exploiting a zero-day in Versa Networks gear and a multitude of vulnerabilities in Zyxel network products. We also debate whether Microsoft’s endpoint security summit will be more than a public relations exercise, a serious backdoor in RFID cards used in offices... Read more »

HS082: Citizen Coders: Boon Or Bane?

The low-code/no-code movement means business users who aren’t programmers can create software. This capability might make these citizen coders more efficient and productive, but could also pose risks due to a lack of formal training in software development and security. Is citizen coding a boon or bane to business? Johna Johnson and John Burke discuss... Read more »

A good day to trie-hard: saving compute 1% at a time

Cloudflare’s global network handles a lot of HTTP requests – over 60 million per second on average. That in and of itself is not news, but it is the starting point of an adventure that began a few months ago and ends with the announcement of a new open-source Rust crate that we are using to reduce our CPU utilization, enabling our CDN to handle even more of the world’s ever-increasing Web traffic.

Motivation

Let’s start at the beginning. You may recall a few months ago we released Pingora (the heart of our Rust-based proxy services) as an open-source project on GitHub. I work on the team that maintains the Pingora framework, as well as Cloudflare’s production services built upon it. One of those services is responsible for the final step in transmitting users’ (non-cached) requests to their true destination. Internally, we call the request’s destination server its “origin”, so our service has the (unimaginative) name of “pingora-origin”.

One of the many responsibilities of pingora-origin is to ensure that when a request leaves our infrastructure, it has been cleaned to remove the internal information we use to route, measure, and optimize traffic for our customers. This has to be Continue reading
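The trie-hard crate itself is Rust, but the underlying idea translates directly: store the set of internal header names in a trie so each header is checked in a single character-by-character pass. A language-agnostic sketch (the header names below are made up, not Cloudflare’s actual internal headers):

```python
def build_trie(words):
    """Build a nested-dict trie; '$' marks the end of a stored word."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def contains(trie, word):
    """Walk the trie one character at a time; O(len(word)) per lookup."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

# Hypothetical internal header names to strip before forwarding upstream.
internal = build_trie(["x-internal-route", "x-internal-trace"])
headers = ["host", "x-internal-route", "accept"]
forwarded = [h for h in headers if not contains(internal, h)]
print(forwarded)  # ['host', 'accept']
```

The win over a hash set is that a lookup can bail out on the first character that cannot possibly match, which matters when the vast majority of headers are not internal.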

Emulating congestion with Containerlab

The Containerlab dashboard above shows variation in throughput in a leaf and spine network due to large "Elephant" flow collisions in an emulated network; see Leaf and spine traffic engineering using segment routing and SDN for a demonstration of the issue using physical switches.

This article describes the steps needed to emulate realistic network performance problems using Containerlab. First, building the topology with the open source FRRouting (FRR) router provides a lightweight, high-performance routing implementation that can efficiently emulate large numbers of routers using the native Linux dataplane for packet forwarding. Second, the containerlab tools netem set command can introduce packet loss, delay, and jitter, or restrict the bandwidth of ports.
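For example, a small helper can assemble the netem invocation for a given node and interface (the flag names here follow the containerlab CLI as I recall it; treat them as an assumption and check containerlab tools netem set --help on your version):

```python
def netem_set_cmd(node, interface, delay=None, jitter=None, loss=None):
    """Build a 'containerlab tools netem set' command line as a string.

    Flag names are assumptions based on containerlab's CLI; verify
    against your installed version before running the result.
    """
    cmd = ["containerlab", "tools", "netem", "set", "-n", node, "-i", interface]
    if delay is not None:
        cmd += ["--delay", delay]
    if jitter is not None:
        cmd += ["--jitter", jitter]
    if loss is not None:
        cmd += ["--loss", str(loss)]
    return " ".join(cmd)

# Add 5ms delay, 1ms jitter, and 1% loss to leaf1's eth1 port.
print(netem_set_cmd("leaf1", "eth1", delay="5ms", jitter="1ms", loss=1))
```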

The netem tool makes use of the Linux tc (traffic control) module. Unfortunately, if you are using Docker Desktop, the minimal virtual machine used to run containers does not include the tc module.

Instead, use Multipass as a convenient way to create and start an Ubuntu virtual machine with Docker support on your laptop. If you are already on a Linux system with Docker installed, skip forward to the git clone step.

multipass launch docker

List the Multipass virtual machines.

multipass ls
 Continue reading

Live: BGP Labs and Netlab Testing @ SINOG 8

Later today at the SINOG 8 meeting, I’ll talk about the BGP labs and the behind-the-scenes magic that ensures the lab configurations are correct (selecting the English version of the website is counter-intuitive; choose English from the drop-down field on the right-hand side of the page).

The SINOG 8 presentations will be live-streamed; I should start around 13:15 Central European Time (11:15 GMT; figuring out the local time is left as an exercise for the reader).

More NPM packages on Cloudflare Workers: Combining polyfills and native code to support Node.js APIs

Today, we are excited to announce a preview of improved Node.js compatibility for Workers and Pages. Broader compatibility lets you use more NPM packages and take advantage of the JavaScript ecosystem when writing your Workers.

Our newest version of Node.js compatibility combines the best features of our previous efforts. Cloudflare Workers have supported Node.js in some form for quite a while. We first announced polyfill support in 2021, and later added built-in support for parts of the Node.js API, which has expanded over time.

The latest changes make it even better:

To give it a try, add the following flag to wrangler.toml, and deploy your Worker with Wrangler:

compatibility_flags = ["nodejs_compat_v2"]

Packages that could not be imported with nodejs_compat, even as a dependency of another package, will now load. This Continue reading

Looking for 240/4 Addresses

In the IANA IPv4 Address Registry, the block 240.0.0.0/4 is marked as reserved for "Future Use". If we have run out of available IPv4 addresses, then why are some quarter of a billion IPv4 addresses still sitting idle in an IANA registry, waiting for an undefined Future Use?
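The "quarter of a billion" figure is easy to check: a /4 covers 2^28 addresses. Python’s standard ipaddress module does the arithmetic:

```python
import ipaddress

# The reserved "Future Use" block from the IANA IPv4 Address Space Registry.
future_use = ipaddress.ip_network("240.0.0.0/4")

# A /4 leaves 28 host bits, so the block holds 2**28 addresses --
# roughly a quarter of a billion.
print(future_use.num_addresses)  # 268435456
print(f"{future_use.num_addresses / 1e6:.0f} million")  # 268 million
```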

Optimizing OSPF RFC Access with Serper API and Large Language Models

Disclaimer: All Writings And Opinions Are My Own And Are Interpreted Solely From My Understanding. Please Contact The Concerned Support Teams For A Professional Opinion, As Technology And Features Change Rapidly.

This series of blog posts will focus on one feature at a time to simplify understanding.

At this point, ChatGPT—or any Large Language Model (LLM)—needs no introduction. I’ve been exploring GPTs with relative success, and I’ve found that API interaction makes them even more effective.

But how can we turn this into a workflow, even a simple one? What are our use cases and advantages? For simplicity, we’ll use the OpenAI API rather than open-source, self-hosted LLMs like Meta’s Llama.

Let’s consider an example: searching for all OSPF-related RFCs on the web. Technically, we’ll use a popular search engine, but to do this programmatically, I’ll use Serper. You can find more details at https://serper.dev. Serper is a powerful search API that allows developers to programmatically access search engine results. It provides a simple interface to retrieve structured data from search queries, making it easier to integrate search functionality into applications and workflows.

Let’s build the first building block and try to fetch results using Serper. When you sign Continue reading
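As a sketch of that first building block: a Serper query is an HTTP POST with a JSON body and an API-key header. The endpoint and header names below are taken from Serper’s public examples as I recall them, so verify them against the documentation at https://serper.dev; the request is built separately from the send so it is easy to inspect:

```python
import json

SERPER_URL = "https://google.serper.dev/search"  # assumed endpoint

def build_serper_request(query: str, api_key: str, num: int = 10):
    """Return the URL, headers, and JSON body for a Serper search call.

    Kept separate from the actual HTTP call so it can be inspected and
    tested; send it with requests.post(url, headers=headers, data=body).
    """
    headers = {
        "X-API-KEY": api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"q": query, "num": num})
    return SERPER_URL, headers, body

url, headers, body = build_serper_request("OSPF RFC site:rfc-editor.org", "demo-key")
print(url)
print(json.loads(body)["q"])
```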

Network CI/CD – Configuration Management with Napalm and Nornir

Hi all, welcome back to part 4 of the Network CI/CD blog series. So far, we've covered the purpose of a Network CI/CD pipeline, the problems it solves for Network Engineers, and how to set up GitLab, including creating projects, installing runners, and understanding GitLab executors. We also looked at how to use GitLab variables to securely hide secrets.

In this part, we'll explore how to manage a campus network using Nornir and Napalm and deploy configurations through a CI/CD pipeline. Let's get to it!

As I mentioned previously, I'm not a CI/CD expert at all and I'm still learning. The reason for creating this series is to share what I learn with the community. The pipeline we are building is far from perfect, but that's okay. The goal here is to create a simple pipeline that works and then build upon it as we go. This way, you can start small and gradually Continue reading
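The series deploys configuration with Nornir and Napalm; as a minimal, runs-anywhere stand-in for the rendering step that precedes the push (the VLAN data and names here are made up for illustration), structured data can be turned into device configuration before Nornir hands it to the napalm_configure task:

```python
def render_vlan_config(vlans):
    """Render IOS-style VLAN configuration from structured data.

    In the pipeline itself, the rendered text would be pushed
    per-device by Nornir using Napalm; this only shows the
    data-to-config step.
    """
    lines = []
    for vlan_id, name in sorted(vlans.items()):
        lines.append(f"vlan {vlan_id}")
        lines.append(f" name {name}")
    return "\n".join(lines)

config = render_vlan_config({10: "USERS", 20: "VOICE"})
print(config)
```

Keeping the rendering pure like this makes it trivial to unit-test in an early pipeline stage, before any device is touched.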

HN748: How AI and HPC Are Changing Data Center Networks

On today’s episode of Heavy Networking, Rob Sherwood joins us to discuss the impact that High Performance Computing (HPC) and artificial intelligence computing are having on data center network design. It’s not just a story about leaf/spine architecture. That’s the boring part. There are also power and cooling issues, massive bandwidth requirements, and changes in how we... Read more »

TNO002: How Simplification Helps MSPs Scale

Simplification is the theme of today’s episode. Host Scott Robohn and guest Jack Maxfield explore the operational impacts of simplification for a Managed Service Provider (MSP). They discuss the challenges of managing multi-vendor environments and how to use templating and tools to simplify the management process. Proactive client communication and the integration of network and... Read more »

Network CI/CD Pipeline – GitLab Variables

Hi all, welcome back to our Network CI/CD blog series. In the previous posts, we covered what CI/CD is and why you need it for Network Automation. We also covered GitLab basics and how to set up your first pipeline. In this post, we’ll look into how to keep your credentials secure by hiding them from the repository and using GitLab variables. Let’s get to it!

GitLab Variables

In GitLab CI/CD, variables play an important role in managing dynamic values throughout your pipeline. These variables can store anything from environment-specific settings to sensitive information like credentials. By using variables, you can easily manage and change values without hardcoding them in your scripts or playbooks.

GitLab provides a secure way to store sensitive data such as passwords or API tokens. You can define these variables in your project’s Settings > CI/CD > Variables section, and they will be securely injected into your pipeline during runtime.
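Inside a job, those variables simply appear as environment variables, so a script can read them instead of hardcoding secrets. A small sketch (NET_USERNAME and NET_PASSWORD are example variable names, not GitLab built-ins; define them under Settings > CI/CD > Variables):

```python
import os

def get_device_credentials():
    """Read credentials injected into the job as GitLab CI/CD variables.

    NET_USERNAME / NET_PASSWORD are example names chosen for this
    sketch; use whatever variable names you defined in your project.
    """
    username = os.environ.get("NET_USERNAME")
    password = os.environ.get("NET_PASSWORD")
    if not username or not password:
        raise RuntimeError(
            "NET_USERNAME/NET_PASSWORD not set; define them as "
            "GitLab CI/CD variables for this project"
        )
    return username, password
```

Failing fast with a clear error when the variables are missing makes a misconfigured pipeline obvious in the job log instead of surfacing later as a cryptic device login failure.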

If you recall, in our previous examples, we had the username and password hardcoded in the Ansible variables file. This is not secure at all, and you should never expose sensitive information like credentials directly in your repository. By using GitLab variables, you can securely Continue reading