The default pod provisioning mechanism in Kubernetes has a substantial attack surface, leaving it susceptible to malicious exploits and container breakouts. To achieve effective runtime security, your containerized workloads in Kubernetes require multi-layer process monitoring within the container.
In this article, I will introduce you to process monitoring and guide you through a Kubernetes-native approach that will help you enforce runtime security controls and detect unauthorized access to host resources.
When you run a containerized workload in Kubernetes, several layers should be taken into account when you begin monitoring the process within a container. These include container process logs and artifacts, Kubernetes and cloud infrastructure artifacts, filesystem access, network connections, required system calls, and kernel permissions (for specialized workloads). Your security posture depends on how effectively your solutions can correlate disparate log sources and metadata from these layers. Without effective workload runtime security in place, your Kubernetes workloads, with their large attack surface, can easily be exploited by adversaries and suffer container breakouts.
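As a rough illustration of the kind of runtime controls discussed here, the pod spec below shows one way to constrain a container's runtime privileges with a default seccomp profile, dropped Linux capabilities, and a read-only root filesystem. This is a minimal sketch of my own; the pod name, image, and values are illustrative assumptions, not taken from the article.

# Illustrative only: a minimal pod spec constraining the container's runtime privileges
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app          # hypothetical pod name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault      # block syscalls outside the runtime's default profile
  containers:
    - name: app
      image: nginx:1.25         # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]         # drop all Linux capabilities not explicitly required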
Before I dive into the details on how to monitor your processes and detect malicious activities within your container platform, let us first take a look at some of Continue reading
In the Designing Active-Active and Disaster Recovery Data Centers webinar I tried to give networking engineers a high-level overview of the challenges one might face when designing a highly-available application stack, and used that information to show why common “solutions” like stretched VLANs make little sense if one cares about application availability (as opposed to an auditor's report). Some (customer) engineers like that approach; here’s the feedback I received not long ago:
As ever, Ivan cuts to the quick and provides not just the logical basis for a given design, but a wealth of advice, pointers, gotchas stemming from his extensive real-world experience. What is most valuable to me are those “gotchas” and what NOT to do, again, logically explained. You won’t find better material IMHO.
Please note that I’m talking about generic multi-site scenarios. From the high-level connectivity and application architecture perspective there’s not much difference between a multi-site on-premises (or collocation) deployment, a hybrid cloud, or a multicloud deployment.
The topic of AI in the networking space has quickly moved from “what if” to “how soon”. Every major networking vendor is leveraging some aspect of...
The post Selector is evolving the way we operate networks appeared first on /overlaid.
One of my readers sent me a question along these lines:
Do I have to have an IBGP session between Customer Edge (CE) routers in a multihomed site if they run EBGP with the upstream provider(s)?
Let’s start with a simple diagram and a refactoring of the question:
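To make the scenario concrete, a minimal FRR-style configuration sketch for one of the CE routers might look like the following. The AS numbers and addresses are illustrative assumptions, not taken from the original article; the point is simply to show the EBGP session to the provider alongside the IBGP session in question.

router bgp 65000
 ! EBGP session to the upstream provider (assumed provider ASN 64501)
 neighbor 198.51.100.1 remote-as 64501
 ! The IBGP session the question is about: this CE router to the other CE router in the same AS
 neighbor 192.0.2.2 remote-as 65000
 address-family ipv4 unicast
  network 203.0.113.0/24
 exit-address-family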
Anuta Networks has added an active assurance capability to ATOM, its network automation and orchestration software. Active assurance lets engineers run synthetic tests on demand using software agents. For example, if a service provider wants to test the performance of a link or network segment to see if it’s meeting SLAs, it can run a […]
The post Anuta Networks Adds Synthetic Tests For On-Demand Network Performance Monitoring appeared first on Packet Pushers.
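In general terms, a synthetic test is just an agent probing a target on demand and comparing the result against an SLA threshold. The short Python sketch below is a generic illustration of that idea, not Anuta's ATOM API; the target host, port, and threshold are assumptions.

#!/usr/bin/env python3
"""Generic synthetic-test sketch (illustrative only, unrelated to Anuta ATOM):
measure TCP connect latency to a target and flag SLA violations."""

import socket
import time

SLA_MS = 50.0  # assumed SLA threshold in milliseconds


def probe(host, port=443, count=5):
    """Return TCP connect latencies (in milliseconds) for the given host and port."""
    samples = []
    for _ in range(count):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2):
            pass
        samples.append((time.monotonic() - start) * 1000)
    return samples


if __name__ == "__main__":
    latencies = probe("example.com")  # assumed test target
    worst = max(latencies)
    status = "OK" if worst <= SLA_MS else "SLA VIOLATION"
    print(f"max connect latency {worst:.1f} ms -> {status}")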
I’m teaching a course on router internals over at Safari Books Online on the 24th (in 10 days). From the description:
A network device—such as a router, switch, or firewall—is often seen as a single “thing,” an abstract appliance that is purchased, deployed, managed, and removed from service as a single unit. While network devices do connect to other devices, receiving and forwarding packets and participating in a unified control plane, they are not seen as a “system” in themselves.
The course is three hours. I’m in the process of updating the slides … or rather, I need to get to updating the slides in the next couple of days.
This article describes an experiment with Containerlab's advanced Generated topologies capability, taking the 3-stage Clos topology shown above and creating a template that can be used to generate topologies with any number of leaf and spine switches.
The clos3.yml topology file specifies the 2-leaf, 2-spine topology shown above:
name: clos3
mgmt:
  network: fixedips
  ipv4_subnet: 172.100.100.0/24
  ipv6_subnet: 2001:172:100:100::/80
topology:
  defaults:
    env:
      COLLECTOR: 172.100.100.8
  nodes:
    leaf1:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.2
      mgmt_ipv6: 2001:172:100:100::2
      env:
        LOCAL_AS: 65001
        NEIGHBORS: eth1 eth2
        HOSTPORT: eth3
        HOSTNET: "172.16.1.1/24"
        HOSTNET6: "2001:172:16:1::1/64"
      exec:
        - touch /tmp/initialized
    leaf2:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.3
      mgmt_ipv6: 2001:172:100:100::3
      env:
        LOCAL_AS: 65002
        NEIGHBORS: Continue reading
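The rest of the topology file continues the same pattern for the remaining leaf and spine nodes. As a rough sketch of the generation idea (my own illustration, not the template mechanism the article describes), the short Python script below emits a Containerlab topology for any number of leaf and spine switches, reusing the kind and image from clos3.yml; the AS numbering and link naming are assumptions.

#!/usr/bin/env python3
"""Illustrative sketch (not the article's template): emit a Containerlab topology
with an arbitrary number of leaf and spine switches, loosely following the
naming pattern of clos3.yml above."""

import yaml  # pip install pyyaml


def generate(leaves, spines):
    """Build a Containerlab topology dictionary for a leaf-and-spine fabric."""
    nodes = {}
    links = []
    for l in range(1, leaves + 1):
        nodes[f"leaf{l}"] = {
            "kind": "linux",
            "image": "sflow/clab-frr",
            "env": {"LOCAL_AS": str(65000 + l)},   # assumed AS numbering
        }
    for s in range(1, spines + 1):
        nodes[f"spine{s}"] = {
            "kind": "linux",
            "image": "sflow/clab-frr",
            "env": {"LOCAL_AS": "65100"},          # assumed shared spine AS
        }
    # Connect every leaf to every spine: leaf port eth<s> <-> spine port eth<l>
    for l in range(1, leaves + 1):
        for s in range(1, spines + 1):
            links.append({"endpoints": [f"leaf{l}:eth{s}", f"spine{s}:eth{l}"]})
    return {"name": f"clos{leaves + spines}", "topology": {"nodes": nodes, "links": links}}


if __name__ == "__main__":
    print(yaml.safe_dump(generate(leaves=4, spines=2), sort_keys=False))

The generated file can then be deployed in the usual way with containerlab deploy -t <file>.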
I surveyed the current state of the art in open-source network emulation and simulation. I also reviewed the development and support status of all the network emulators and network simulators previously featured in my blog.
Of all the network emulators and network simulators I mentioned in my blog over the years, I found that eighteen of them are still active projects. I also found seven new projects that you can try. See below for a brief update about each tool.
Below is a list of the tools previously featured in my blog that are, in my opinion, still actively supported.
Cloonix version 28 was released in January 2023. Cloonix stitches together Linux networking tools to make it easy to emulate complex networks by linking virtual machines and containers. Cloonix has both a command-line interface and a graphical user interface.
The Cloonix web site now has a new address at clownix.net, and the Cloonix project now hosts its code on GitHub. Cloonix has adopted a new release numbering scheme since I reviewed it in 2017, so it is now at “v28”.
CloudSim is still maintained. CloudSim is a network simulator that enables modeling, simulation, and experimentation of emerging cloud computing Continue reading