
I’m forever seeing announcements like this in the software defined networking world—
But you need to note one specific thing about this announcement. How did they achieve these forwarding rates? By using DPDK to offload the actual, well, forwarding to a custom ASIC on a NIC. The reality is that we’ve always done the control plane in software, and we’ve always done the forwarding in hardware. There have been precious few router platforms over the years where the forwarding plane is actually an “embedded system.”
Certainly we’re seeing a world where open source operating systems are learning to interact with commodity ASICs, so it’s possible to separate the software from the hardware, and the operating system from the control plane, and this is all to the good. But if this is software defined networking, then we’ve been doing it since sometime in the 1990s, perhaps even earlier…
Perhaps we’ve become so accustomed to considering the network operating Continue reading
Software Defined Networking & self-driving cars are closely related because both are automated systems. When automated systems go wrong, who takes responsibility?
The post Network Monitoring, Moral Hazards and Crumple Zones appeared first on EtherealMind.
It doesn't compete with MANO.
How do you ensure interoperability between system elements of today and tomorrow? Brocade says the answer can be found through an open source approach to MANO.
In this Infotrek episode, we talk about network design elements and options for small businesses with guest Steven Sedory.
The post Infotrek Episode 3: Small Shop Design Part 1 appeared first on Packet Pushers.
“Got a core dump file? Please contact your TAC engineers for further support!” This is the common statement that we all see on vendors’ websites and in their recommendations, and it definitely is true. The reason is that a core dump file contains information that requires deep knowledge of the code of the cored …
The post Troubleshoot#2: Core Dumps for Network Engineers appeared first on Networkers-online.com.
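The post is truncated here, but one practical prerequisite is worth sketching: a crashing process only leaves a core file at all if its RLIMIT_CORE resource limit allows it. A minimal Python illustration of raising that limit (an assumption about the troubleshooting workflow, not content from the original post):

```python
import resource  # Unix-only stdlib module for process resource limits

def enable_core_dumps():
    """Raise the soft core-file size limit to the hard limit so a
    crashing process can actually write the core dump that a TAC
    engineer would later analyze. Returns (soft, hard) after the change."""
    _, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_CORE)

soft, hard = enable_core_dumps()
print(soft == hard)  # → True
```

Whether the core file then lands in the working directory, or is routed through a handler such as systemd-coredumpd, depends on the kernel's `kernel.core_pattern` setting.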
Dell, VCE, and Nutanix walk into a data center...
Back in November we wrote a blog post about one latency spike. Today I'd like to share a continuation of that story. As it turns out, the misconfigured rmem setting wasn't the only source of added latency.
It looked like Mr Wolf hadn't finished his job.
After adjusting the previously discussed rmem sysctl we continued monitoring our systems' latency. Among other things we measured ping times to our edge servers. While the worst case improved and we didn't see 1000ms+ pings anymore, the line still wasn't flat. Here's a graph of ping latency between an idling internal machine and a production server. The test was done within the datacenter, the packets never went to the public internet. The Y axis of the chart shows ping times in milliseconds, the X axis is the time of the measurement. Measurements were taken every second for over 6 hours:

As you can see most pings finished below 1ms. But out of 21,600 measurements about 20 had high latency of up to 100ms. Not ideal, is it?
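The methodology above (one ping per second, roughly 21,600 samples, with a handful exceeding the sub-millisecond norm) can be sketched as follows. This is an illustrative reconstruction, not CloudFlare's actual tooling, and the synthetic spike data only mimics the shape of the numbers in the post:

```python
import re
import subprocess

# Matches the RTT field in iputils ping output, e.g. "time=0.214 ms".
RTT_RE = re.compile(r"time[=<]([\d.]+)\s*ms")

def parse_rtt_ms(ping_output):
    """Extract the round-trip time in milliseconds from `ping` output."""
    m = RTT_RE.search(ping_output)
    return float(m.group(1)) if m else None

def ping_once(host, timeout_s=1):
    """One ICMP probe via the system ping binary; returns RTT in ms or None."""
    proc = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                          capture_output=True, text=True)
    return parse_rtt_ms(proc.stdout)

def outliers(samples_ms, threshold_ms=1.0):
    """Samples exceeding the threshold (the post's 'high latency' pings)."""
    return [s for s in samples_ms if s is not None and s > threshold_ms]

# Synthetic data shaped like the post's numbers: 21,600 one-second
# samples, mostly sub-millisecond, with about 20 spikes up to 100 ms.
samples = [0.2] * 21_600
for i in range(20):
    samples[i * 1000] = 100.0

spikes = outliers(samples)
print(len(spikes), max(spikes))  # → 20 100.0
```

The parsing function can be exercised without sending a live probe, e.g. `parse_rtt_ms("64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms")` returns `0.214`.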
The latency occurred within our datacenter and the packets weren't lost. This suggested a kernel issue again. Linux responds to ICMP pings from its soft Continue reading