Category Archives for "Networking"

Process monitoring: How you can detect malicious behavior in your containers

The default pod provisioning mechanism in Kubernetes has a substantial attack surface, making it susceptible to malevolent exploits and container breakouts. To achieve effective runtime security, your containerized workloads in Kubernetes require multi-layer process monitoring within the container.

In this article, I will introduce you to process monitoring and guide you through a Kubernetes-native approach that will help you enforce runtime security controls and detect unauthorized access to host resources.

What is process monitoring?

When you run a containerized workload in Kubernetes, several layers should be taken into account when you begin monitoring the processes within a container. These include container process logs and artifacts, Kubernetes and cloud-infrastructure artifacts, filesystem access, network connections, the system calls required, and kernel permissions (for specialized workloads). Your security posture depends on how effectively your solutions can correlate disparate log sources and metadata from these various layers. Without effective workload runtime security in place, your Kubernetes workloads, with their large attack surface, can easily be exploited by adversaries and suffer container breakouts.
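Runtime-security engines such as Falco are one common way to implement this kind of in-container process monitoring. The rule below is a minimal sketch, not taken from this article: it flags an interactive shell spawned inside a container, a frequent indicator of unauthorized access. The condition and output fields follow Falco's documented syntax, but the rule name, output string, and tags are illustrative.

```yaml
# Illustrative Falco-style rule (a sketch, not a production policy):
# alert when a shell process is started inside any container at runtime.
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a container
  condition: >
    container.id != host and
    evt.type = execve and
    proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [container, shell, process_monitoring]
```

A rule like this correlates two of the layers mentioned above: the system call (execve) and the container metadata attached to the event.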

Traditional monitoring systems

Before I dive into the details on how to monitor your processes and detect malicious activities within your container platform, let us first take a look at some of Continue reading

Cisco chips away at product backlog but challenges remain

Cisco is getting more products out the door, thanks to significant product redesigns and relentless efforts by its supply-chain team to address component shortages, but the situation is still challenging.

“While components for a few product areas remain highly constrained, we did see an overall improvement in the supply chain,” said Cisco CEO Chuck Robbins during a call with financial analysts to discuss the vendor's second-quarter results. Cisco reduced its backlog 6% sequentially in the second quarter, though total backlog grew year over year, Robbins said; he didn't cite an exact dollar figure. The company still expects to end the year with a backlog roughly double what it would normally be. (In February of last year, Cisco said its product backlog was valued at nearly $14 billion.)

Cisco streamlines SD-WAN hardware and software at the edge

Cisco is adding compute power and streamlining edge hardware and software offerings to make SD-WAN easier to deploy and manage.

Taken together, the enhancements are aimed at helping customers better handle growing distributed enterprises and at simplifying environments: the hardware by allowing users to collapse multiple devices into one, and the software by easing configuration and management of SD-WANs.

On the hardware side, Cisco is adding the 3U Catalyst 8500-20X6C edge platform to its Catalyst 8000 Edge Platforms Family. It is an edge-aggregation device built on Cisco's QuantumFlow Processor (QFP) ASIC, and it promises more than three times the performance of the existing high-end Catalyst 8500 Series Edge Platforms, according to Archana Khetan, head of products in Cisco's Enterprise Routing group. “With the increased power, customers can support more users and collapse the number of boxes they need to support edge applications as needed,” Khetan said.

Feedback: Designing Active/Active and Disaster Recovery Data Centers

In the Designing Active-Active and Disaster Recovery Data Centers webinar, I tried to give networking engineers a high-level overview of the challenges one might face when designing a highly available application stack, and used that information to show why common “solutions” like stretched VLANs make little sense if one cares about application availability (as opposed to an auditor's report). Some (customer) engineers like that approach; here's the feedback I received not long ago:

As ever, Ivan cuts to the quick and provides not just the logical basis for a given design, but a wealth of advice, pointers, gotchas stemming from his extensive real-world experience. What is most valuable to me are those “gotchas” and what NOT to do, again, logically explained. You won’t find better material IMHO.

Please note that I’m talking about generic multi-site scenarios. From the high-level connectivity and application architecture perspective there’s not much difference between a multi-site on-premises (or collocation) deployment, a hybrid cloud, or a multicloud deployment.

AMD gains share in server market while overall x86 sales take a hit

AMD continues to gain ground in the data center, grabbing CPU market share from leader Intel despite a significant decline in server-processor shipments.

Overall, the processor market took a hit in the fourth quarter of 2022, as well as for the full year, due to lower demand, ongoing inventory corrections, and a slowing economy, according to analyst firm Mercury Research.

For 2022, total unit shipments (client and server, excluding ARM) were 374 million and revenues came in at $65 billion, down 21% and 19%, respectively, compared to 2021.

Specific to server processors, sales for the year came in at 36.1 million units, down 4.2% from 37.7 million in 2021. Revenues were $24 billion in 2022, down 7.7% from $26 billion in 2021. Mercury's principal analyst Dean McCarron attributes the sharper drop in revenue versus units to a decline in the average selling price (ASP).
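McCarron's ASP point can be checked against the figures above: dividing each year's server revenue by its unit count gives the average selling price, and the 2022 figure comes out lower. A quick shell-arithmetic sketch:

```shell
# Server-CPU ASP = revenue / units, using the figures cited above:
# 2021: $26B over 37.7M units; 2022: $24B over 36.1M units.
echo "2021 ASP: \$$(( 26000000000 / 37700000 ))"   # roughly $689 per unit
echo "2022 ASP: \$$(( 24000000000 / 36100000 ))"   # roughly $664 per unit
```

With revenue falling faster than units, the per-unit price had to drop, which is exactly the ASP decline Mercury describes.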

Many ways to use the echo command on Linux

The echo command (a bash built-in) is one of the very basic commands on Linux. As with ls and pwd, you can't sit on the command line very long without using it. At the same time, echo has quite a few uses that many of us never take advantage of. So, this post looks into the many ways you can use this command.

What is the echo command on Linux?

Basically, echo is a command that will display any text that you ask it to display. However, when you type “echo hello”, the echo command isn't only spitting out those five letters; it's actually sending out six characters, the last one being a linefeed. Let's look at a couple of commands that make this obvious.
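The extra character is easy to see with wc, which counts bytes, and od, which shows each character including invisible ones:

```shell
# "echo hello" emits six characters: h, e, l, l, o, and a trailing linefeed
echo hello | wc -c        # prints 6

# od -c makes the invisible linefeed (\n) visible:
echo hello | od -c        # 0000000   h   e   l   l   o  \n

# The -n option tells echo to omit the trailing linefeed:
echo -n hello | wc -c     # prints 5
```

The -n option is handy in scripts when you want to build up a line of output from several commands without inserting line breaks.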

Anuta Networks Adds Synthetic Tests For On-Demand Network Performance Monitoring

Anuta Networks has added an active assurance capability to ATOM, its network automation and orchestration software. Active assurance lets engineers run synthetic tests on demand using software agents. For example, if a service provider wants to test the performance of a link or network segment to see if it’s meeting SLAs, it can run a […]

The post Anuta Networks Adds Synthetic Tests For On-Demand Network Performance Monitoring appeared first on Packet Pushers.

Qualcomm introduces 5G modem system for small, low-power IoT devices

5G connectivity is being used in a wide range of products, but one thing they have in common is that they tend to be very high-powered devices. Up until now, there has been little emphasis on low-power devices, such as small edge/IoT devices or consumer products.

Qualcomm is addressing that with the Snapdragon X35 5G Modem-RF, a 5G New Radio (NR) processor designed to serve the low-power and low-bandwidth markets.

The company claims it is the first NR-Light processor based on the 3GPP Release 17 spec, which was written to support low-power 5G IoT devices over any of the broadcast frequencies allocated to 5G. At the same time, it is compatible with legacy LTE (4G). The spec is also known as “reduced capacity,” or RedCap.

Upcoming Course: How Routers Really Work

I’m teaching a course on router internals over at Safari Books Online on the 24th (in 10 days). From the description:

A network device—such as a router, switch, or firewall—is often seen as a single “thing,” an abstract appliance that is purchased, deployed, managed, and removed from service as a single unit. While network devices do connect to other devices, receiving and forwarding packets and participating in a unified control plane, they are not seen as a “system” in themselves.

The course is three hours. I’m in the process of updating the slides … or rather, I need to get to updating the slides in the next couple of days.

Register here.

Real-time flow analytics with Containerlab templates

The GitHub sflow-rt/containerlab project contains example network topologies for the Containerlab network emulation tool that demonstrate real-time streaming telemetry in realistic data center topologies and network configurations. The examples use the same FRRouting (FRR) engine that is part of SONiC, NVIDIA Cumulus Linux, and DENT. Containerlab can be used to experiment before deploying solutions into production. Examples include: tracing ECMP flows in leaf and spine topologies, EVPN visibility, and automated DDoS mitigation using BGP Flowspec and RTBH controls.

This article describes an experiment with Containerlab's advanced Generated topologies capability, taking the three-stage Clos topology shown above and creating a template that can be used to generate topologies with any number of leaf and spine switches.

The clos3.yml topology file specifies the two-leaf, two-spine topology shown above:

name: clos3
mgmt:
  network: fixedips
  ipv4_subnet: 172.100.100.0/24
  ipv6_subnet: 2001:172:100:100::/80

topology:
  defaults:
    env:
      COLLECTOR: 172.100.100.8
  nodes:
    leaf1:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.2
      mgmt_ipv6: 2001:172:100:100::2
      env:
        LOCAL_AS: 65001
        NEIGHBORS: eth1 eth2
        HOSTPORT: eth3
        HOSTNET: "172.16.1.1/24"
        HOSTNET6: "2001:172:16:1::1/64"
      exec:
        - touch /tmp/initialized
    leaf2:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.3
      mgmt_ipv6: 2001:172:100:100::3
      env:
        LOCAL_AS: 65002
        NEIGHBORS: Continue reading

13 Years Later, the Bad Bugs of DNS Linger on

It’s 2023, and we are still copying code without fully debugging it. Did we not learn from the Great DNS Vulnerability of 2008? Internet godfather Paul Vixie spoke about the cost of open source dependencies in a talk at Open Source Summit Europe in Dublin. In 2008, Dan Kaminsky discovered a fundamental design flaw in DNS code that allowed for arbitrary cache poisoning and affected nearly every DNS server on the planet. The patch was released in July 2008, followed by the permanent solution, Domain Name System Security Extensions (DNSSEC), in 2010. The Domain Name System is the basic name-based global addressing system for the Internet, so vulnerabilities in DNS spell major trouble for pretty much everyone on the Internet. Vixie and Kaminsky “set [their] hair on fire” to build a security fix that, “13 years later, is not widely enough deployed to solve this problem,” Vixie said. All of this software is open source and inspectable, yet DNS bugs are still being brought to Vixie’s attention to this day. “This is never going to stop if we don’t start writing down the lessons people should know before they write software,” Vixie said.

How Did This Happen?

It’s our fault: “the call is coming from inside the house.” Before internet commercialization and the dawn of the home computer room, the publishers of the Berkeley Software Distribution (BSD) of UNIX decided to support the then-new DNS protocol. “Spinning up a new release, making mag tapes, and putting them all in shipping containers was a lot of work,” so they published DNS as a patch, posted it to Usenet newsgroups, and made it available via an FTP server and mailing list to anyone who wanted it. By the time Vixie began working on DNS at Berkeley, DNS was for all intents and purposes abandonware: all the original creators had since moved on.

Since there was no concept of importing code and declaring dependencies, embedded-systems vendors copied the original code and changed the API names to suit their local engineering needs. Does this sound familiar? And then Linux came along, and the internet exploded. You get an AOL account. And you get an AOL account… Distros had to build their first C library and copied some version of the old Berkeley code, whether they knew what it was or not. Each kept a copy of a copy that some other distro was using, a local version forever divorced from the upstream. DSL modems are an early example of this. Now the Internet of Things is everywhere, and “all of this DNS code in all of the billions of devices are running on some fork of a fork of a fork of code that Berkeley published in 1986.”

Why does any of this matter? The original DNS bugs were written and shipped by Vixie. He went on to fix them in the 90s, but some still appear today. “For embedded systems today to still have that problem, any of those problems, means that whatever I did to fix it wasn’t enough. I didn’t have a way of telling people.”

Where Do We Go from Here?

“Sure would have been nice if we already had an internet when we were building one,” Vixie said. But we can’t go backward; we can only go forward. Vixie made it very clear: “if you can’t afford to do these things [below] then free software is too expensive for you.”

Here is some of Vixie’s advice for software producers:

- Do the best you can with the tools you have, but “try to anticipate what you’re going to have.”
- Assume all software has bugs, “not just because it always has, but because that’s the safe position to take.”
- Make updates machine-readable, because “you can’t rely on a human to monitor a mailing list.”
- Version numbers are must-haves for your downstream. “The people who are depending on you need to know something more than what you thought worked on Tuesday.” It doesn’t matter what the scheme is, as long as it uniquely identifies the bug level of the software.
- Cite code sources in README files and in source-code comments. It will help anyone using your code and chasing bugs.
- Automate monitoring of your upstreams, review all changes, and integrate patches. “This isn’t optional.”
- Let your downstream know about changes automatically, “otherwise these bugs are going to do what the DNS bugs are doing.”

And here is his advice for software consumers:

- Your software’s dependencies are your dependencies. “As a consumer when you import something, remember that you’re also importing everything it depends on… So when you check your dependencies, you’d have to do it recursively; you have to go all the way up.”
- Uncontracted dependencies can make free software incredibly expensive, but they are an acceptable operating risk because “we need the software that everybody else is writing.”
- Orphaned dependencies require local maintenance, and therein lies the risk, because that is a much higher cost than monitoring the developments coming out of other teams. “It’s either expensive because you hire enough people and build enough automation or it’s expensive because you don’t.”
- Automate dependency upgrades (mostly), because sometimes “the license will change from one you could live with to one that you can’t or at some point someone decides they’d like to get paid” [insert adventure here].
- Specify acceptable version numbers. If versions 5+ have the fix needed for your software, say so, to make sure you don’t accidentally get an older one.
- Monitor your supply chain and ticket every release. Have an engineer review every update to determine whether it’s “set my hair on fire, work over the weekend” or “we’ll just get to it when we get to it” priority.
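Vixie's advice about specifying acceptable version numbers can be expressed directly in a dependency manifest. The fragment below is a hypothetical pip requirements entry (the package name somelib is made up, purely for illustration) that captures the "versions 5+ have the fix" situation:

```text
# requirements.txt (hypothetical package, for illustration only)
# Accept any 5.x release -- 5.0 contains the security fix we depend on --
# but refuse to float silently onto a future 6.x with unreviewed changes.
somelib>=5.0,<6.0
```

The lower bound guarantees the fix is present; the upper bound forces an engineer to review the next major release before it lands, which is exactly the "ticket every release" discipline described above.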
He closed with “we are all in this together but I think we could get it organized better than we have.” And that is certainly one way to say it. It takes a certain humility and grace to have been on the tiny team that prevented a potential DNS collapse, to have led the field for over a generation, and still to have your early-career bugs, ones you fixed 30 years ago, brought to your attention at regular intervals because adopters aren’t inspecting the source code.

The post 13 Years Later, the Bad Bugs of DNS Linger on appeared first on The New Stack.

Twenty-five open-source network emulators and simulators you can use in 2023

I surveyed the current state of the art in open-source network emulation and simulation. I also reviewed the development and support status of all the network emulators and network simulators previously featured in my blog.

Of all the network emulators and network simulators I mentioned in my blog over the years, I found that eighteen of them are still active projects. I also found seven new projects that you can try. See below for a brief update about each tool.

Active projects

Below is a list of the tools previously featured in my blog that are, in my opinion, still actively supported.

Cloonix

Cloonix version 28 was released in January 2023. Cloonix stitches together Linux networking tools to make it easy to emulate complex networks by linking virtual machines and containers. Cloonix has both a command-line interface and a graphical user interface.

The Cloonix website has a new address, clownix.net, and the Cloonix project now hosts its code on GitHub. Cloonix has adopted a new release-numbering scheme since I reviewed it in 2017, so it is now at “v28”.

CloudSim

CloudSim is still maintained. It is a simulator that enables modeling, simulation, and experimentation of emerging Cloud computing Continue reading