Maybe it’s just me, but I always need a few extra devices in my virtual labs to have endpoints I could ping to/from or to have external routing information sources. We used VRF- and VLAN tricks in the days when we had to use physical devices to carve out a dozen hosts out of a single Cisco 2501, and life became much easier when you could spin up a few additional virtual machines in a virtual lab instead.
Unfortunately, those virtual machines eat precious resources. For example, netlab allocates 1GB to every Linux virtual machine when you only need bash and ping. Wouldn’t it be great if you could start that ping in a busybox container instead?
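As a sketch of the idea (assuming a netlab release with containerlab support; device and node names here are purely illustrative), a topology file could mix a VM-based router with a container-based host:

```yaml
# Illustrative netlab topology: r1 runs as a VM under libvirt, while the
# host h1 runs as a lightweight container under containerlab, avoiding
# the 1GB-per-VM overhead for an endpoint that only needs ping.
provider: libvirt
nodes:
  r1:
    device: eos          # router under test (VM)
  h1:
    device: linux
    provider: clab       # run this host as a container instead
links:
- r1-h1
```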
This chapter introduces the Azure Virtual WAN (vWAN) service. It offers a single deployment, management, and monitoring pane for connectivity services such as Inter-VNet, Site-to-Site VPN, and ExpressRoute. In this chapter, we focus on S2S VPN and VNet connections. The Site-to-Site VPN solution in vWAN differs from the traditional model, where we create resources as individual components. In this solution, we only deploy a vWAN resource and manage everything else through its management view.

Figure 11-1 illustrates our example topology and deployment order. The first step is to implement a vWAN resource. Then we deploy a vHub: an Azure-managed VNet to which we assign a CIDR, just as we do with a traditional VNet. We can deploy a vHub as an empty VNet without associating any connection. The vHub deployment process launches a pair of redundant routers, which exchange reachability information with the VNet Gateway router and VGW instances using BGP.

We intend to allow Inter-VNet data flows between vnet-swe1 and vnet-swe2, as well as Branch-to-VNet traffic. For Site-to-Site VPN, we deploy a VPN Gateway (VGW) into the vHub. The VGW started in the vHub creates two instances, instance0 and instance1, in active/active mode. We don’t deploy a GatewaySubnet for the VGW…
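That deployment order can be sketched with the Azure CLI (a hedged illustration only: resource names, the region, and the 10.100.0.0/24 hub prefix are made up, and exact flags may vary by CLI version):

```
# 1. Create the vWAN resource
az network vwan create -g lab-rg -n demo-vwan -l westeurope

# 2. Deploy a vHub into it, assigning a CIDR just like a traditional VNet
az network vhub create -g lab-rg -n demo-vhub --vwan demo-vwan \
    --address-prefix 10.100.0.0/24 -l westeurope

# 3. Start the VPN Gateway (VGW) inside the vHub for Site-to-Site VPN
az network vpn-gateway create -g lab-rg -n demo-vgw --vhub demo-vhub
```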
This blog post was written in collaboration with:
Aloys Augustin, Nathan Skrzypczak, Hedi Bouattour, Onong Tayeng, and Jerome Tollet at Cisco. Aloys and Nathan are part of a team of external contributors to Calico Open Source that has been working on an integration between Calico Open Source and the FD.io VPP dataplane technology for the last couple of years.
Mrittika Ganguli, principal engineer and architect at Intel’s Network and Edge (NEX). Ganguli leads a team with Qian Q Xu, Ping Yu, and Xiaobing Qian to enhance the performance of Calico and VPP through software and hardware acceleration.
This blog will cover what the Calico/VPP dataplane is and demonstrate the performance and flexibility advantages of using the VPP dataplane through a benchmarking setup. By the end of this blog post, you will have a clear understanding of how the Calico/VPP dataplane, with the help of DPDK and accelerated memif interfaces, can provide high-throughput, low-latency Kubernetes cluster networking for your environment. Additionally, you will learn how these technologies can be used to reduce CPU utilization by transferring packets directly in memory between different hosts, making it an efficient solution for building distributed network functions with lightning-fast speeds.
Robert Graham published a blog post describing how his IDS/IPS system handled 2 Mpps on a Pentium III CPU 20 years ago… and yet some people keep claiming that “Driving a 100 Gbps network at 80% utilization in both directions consumes 10–20 cores just in the networking stack” (in 2023). I guess a suboptimal-enough implementation can still consume all the CPU cycles it can get and then some.
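Back-of-the-envelope arithmetic shows what that claim implies (assuming full-size 1500-byte frames; Ethernet preamble and inter-frame gap are ignored for simplicity):

```python
# Packets per second needed to drive 100 Gbps at 80% in both directions.
LINK_BPS = 100e9          # 100 Gbps link
UTILIZATION = 0.8
DIRECTIONS = 2
PACKET_BYTES = 1500       # assume full-size Ethernet frames

bits_per_packet = PACKET_BYTES * 8
pps = LINK_BPS * UTILIZATION * DIRECTIONS / bits_per_packet
print(f"{pps / 1e6:.1f} Mpps")   # ≈ 13.3 Mpps total

# Split across 10-20 cores, that is well under 1.5 Mpps per core --
# below what Graham's Pentium III handled two decades earlier.
per_core_low = pps / 20
per_core_high = pps / 10
print(f"{per_core_low / 1e6:.2f}-{per_core_high / 1e6:.2f} Mpps per core")
```

With minimum-size packets the total rate would be far higher, but the point stands: the per-core budget implied by the quoted claim is modest by historical standards.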
Today's Heavy Networking is a forward-looking episode about semantic networking. Semantic networking aims to make decisions on how to route packets based on more than just the destination address and give network operators more routing choices based on considerations such as bandwidth, cost, performance, application type, and so on. But how do you add semantic information to IP headers? How do you program routers and networking hardware to consume semantics? Do we really need this? Guests Adrian Farrel and Hannes Gredler join Greg Ferro and Ethan Banks to discuss and debate.
The post Heavy Networking 664: Semantic Networking – Science Project Or Networking’s Future? appeared first on Packet Pushers.
For this week’s episode of the Hedge, Tom Ammon and Russ White are joined by Chris Romeo to talk about the importance of the human element in threat modeling. If you’ve ever wondered about the importance of threat modeling or how to get started in threat modeling, this episode will guide you on your way.
Supermicro has always been an interesting IT supplier for the datacenter, and it is getting more interesting by the year as it continues to grow very fast and has set itself a goal of breaking $10 billion in annual sales, which would put Supermicro behind only Dell and Hewlett Packard Enterprise, on par with Inspur, and ahead of …
Since the advent of the X86 server market thirty years ago, Supermicro has been unique among its server making peers in a number of ways. …
Supermicro Aspires To Be A $10 Billion Server Maker was written by Timothy Prickett Morgan at The Next Platform.
As a network operator, I want to describe in plain language what I need a network to do, and the network is configured accordingly. Then I want the network to monitor itself, and when things aren’t going well, the network will repair itself with no involvement from me. Hey, daydreaming is fun.
In the real world, plain language describing my network requirements isn’t going to conjure a relevant network. I must perform hard work to create a network design that’s useful for a business. I have to think through issues like capacity needs under peak load, redundancy to survive a network failure, and resiliency to support business operations in the face of a catastrophic outage. I need to understand individual application requirements, and be sure the network can support those requirements. I have to consider modularity, repeatability, and supportability. I must work within a budget.
My design will translate into an arcane collection of devices, interfaces, interconnections, protocols, and topologies. I’ll rely on education, experience, and experimentation to fine-tune the design, and then I’ll put it into production. Depending on your personality, this arduous task likely falls somewhere between “fun” and “frightening” for you. But no matter who you are, …
“It’s always the network” is a refrain that causes operations teams to shudder. No matter what your flavor of networking might be, it’s always your fault, even if the actual problem is DNS, a global BGP outage, or some issue with the SaaS provider. Why do we always get blamed? And how can you prevent this from happening to you?
Users don’t know about the world outside of their devices. As soon as they click on something in a browser window they expect it to work. It’s a lot like ordering a package and having it delivered. It’s expected that the package arrives. You don’t concern yourself with the details of how it needs to be shipped, what routes it will take, and how factors that exist half a world away could cause disruptions to your schedule at home.
The network is the same to the users. If something doesn’t work with a website or a remote application, it must be the “network” that is at fault. Because your users believe that everything not inside of their computer is the network. Networking is the way that stuff happens everywhere else. As professionals we know the differences between …
Before identity-driven Zero Trust rules, some SaaS applications on the public Internet relied on the IP address of a connecting user as a security model. Users would connect from known office locations, with fixed IP address ranges, and the SaaS application would check their address in addition to their login credentials.
Many systems still offer that second factor method. Customers of Cloudflare One can use a dedicated egress IP for this purpose as part of their journey to a Zero Trust model. Unlike other solutions, customers using this option do not need to deploy any infrastructure of their own. However, not all traffic needs to use those dedicated egress IPs.
Today, we are announcing policies that give administrators control over when Cloudflare uses their dedicated egress IPs. Specifically, administrators can use a rule builder in the Cloudflare dashboard to determine which egress IP is used and when, based on attributes like identity, application, IP address, and geolocation. This capability is available to any enterprise-contracted customer that adds on dedicated egress IPs to their Zero Trust subscription.
In today’s hybrid work environment, organizations aspire for more consistent security and IT experiences to manage their employees’ traffic …
Today we’re excited to be announcing more flexibility to HTTP alerting, enabling customers to customize the types of activity they’re alerted on and how those alerts are organized.
Prior to today, HTTP alerts at Cloudflare have been very generic. You could choose which Internet properties you wanted and what sensitivity you wanted to be alerted on, but you couldn’t choose anything else. You couldn’t, for example, exclude the IP addresses you use to test things. You couldn’t choose to monitor only a specific path. You couldn’t choose which HTTP statuses you wanted to be alerted on. You couldn’t even choose to monitor your entire account instead of specific zones.
Our customers leverage the Cloudflare network for a myriad of use cases ranging from decreasing bandwidth costs and accelerating asset delivery with Cloudflare CDN to protecting their applications against brute force attacks with Cloudflare Bot Management. Whether the reasons for routing traffic through the Cloudflare network are simple or complex, one powerful capability that comes for free is observability.
With traffic flowing through the network, we can monitor and alert customers about anomalous events such as spikes in origin error rates, enabling them to investigate further and mitigate any issues as …
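The mechanics behind that kind of alert can be sketched in a few lines. This is a minimal illustration of spike detection on origin error rates, not Cloudflare’s implementation; the thresholds and window size are invented for the example:

```python
# Minimal sketch of error-rate spike detection: alert only when the 5xx
# share of requests in a window exceeds both an absolute floor and a
# multiple of the historical baseline rate.
def should_alert(window_requests, window_errors,
                 baseline_error_rate, min_rate=0.05, spike_factor=3.0):
    """Return True if the current window looks like an error-rate spike."""
    if window_requests == 0:
        return False
    rate = window_errors / window_requests
    return rate >= min_rate and rate >= spike_factor * baseline_error_rate

# Quiet baseline of 1% errors; a window at 12% errors trips the alert.
print(should_alert(1000, 120, baseline_error_rate=0.01))  # True
# A small blip at 2% stays below both thresholds, so no alert fires.
print(should_alert(1000, 20, baseline_error_rate=0.01))   # False
```

Requiring both conditions is what keeps such alerts quiet on low-traffic zones, where a handful of errors would otherwise produce a large percentage swing.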