In the Business Impact of Network Automation podcast, Ethan Banks asked an interesting question: “What will happen with older networking engineers who are not willing to embrace automation?”
The response somewhat surprised me: Alejandro Salisas said something along the lines of “they’ll be just fine” (for a while).
Let me recap his argument and add a few twists of my own:
I have to admit I LOVE MPLS. I admit, I didn’t love it so much when I was first learning. I found it kinda hard at first. But then I absolutely loved it once I “saw” it. Newer to MPLS...
The post MPLS L3VPN: Label Following Fun with Fish appeared first on Networking with FISH.
Russ White wrote a great article along the lines of what we discussed a while ago. My favorite part:
There are companies who consider the network an asset, and companies that consider the network a necessary evil.
Enjoy!
On a tangential topic: Russ will talk about network complexity in the Building Next-Generation Data Center online course starting on April 25th.
Whether we like it or not, the era of DevOps is upon us, fellow network engineers, and with it come opportunities to approach and solve common networking problems in new, innovative ways. One such problem is automated network change validation and testing in virtual environments, something I wrote about a few years ago. The biggest problem with my original approach was that I had to create a custom REST API SDK to work with a network simulation environment (UnetLab) that was never designed to be interacted with in a programmatic way. On the other hand, technologies like Docker have been very interesting since they were built around the idea of non-interactive lifecycle management and came with all API batteries already included. However, Docker was never intended to be used for network simulations, and its support for multiple network interfaces is… somewhat problematic.
The easiest way to understand the problem is to see it. Let’s start with a blank Docker host and create a few networks:
docker network create net1
docker network create net2
docker network create net3
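Before going further, you can verify that all three networks exist with a simple listing:
docker network ls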
Now let’s see what prefixes have been allocated to those networks:
docker network inspect -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}" net1 net2 net3
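To get a feel for the multi-interface issue mentioned earlier, here is a minimal sketch (the container name r1 and the alpine image are arbitrary choices for illustration) that attaches a single container to all three networks:
docker run -d --name r1 --network net1 alpine sleep 86400   # start a long-running container on net1
docker network connect net2 r1                              # attach the same container to net2
docker network connect net3 r1                              # ...and to net3
docker exec r1 ip addr                                      # list the resulting interfaces
Note that the eth0/eth1/eth2 numbering inside the container does not necessarily follow the order in which the networks were connected, which is one example of why multi-interface topologies are awkward to build this way.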
Cisco and VMware change the SD-WAN playing field.
Minjar sells a service that compares costs across public clouds.
There was concern in some scientific quarters last year that President Trump’s election could mean budget cuts to the Department of Energy (DoE) that could cascade down to the country’s exascale program at a time when China was ramping up investments in its own initiatives.
The worry was that any cuts that could slow down the work of the Exascale Computing Project would hand the advantage to China in this critical race that will have far-reaching implications in a wide range of scientific and commercial fields like oil and gas exploration, financial services, high-end healthcare, national security and the military. …
U.S. Exascale Efforts Benefit in FY 2019 Budget was written by Jeffrey Burt at The Next Platform.
In my previous post, NSX Layer 2 VPN: Migrating workloads between Datacentres, I described the process and theory behind using an NSX Layer 2 VPN (L2VPN) to migrate workloads from a soon-to-be-retired VLAN-backed datacentre to an NSX Managed logical-switch-backed datacentre. In this post I will take you through the deployment of the L2VPN in my lab environment, following these high-level steps:
The Lab environment I am using currently reflects the diagram below, with two VMs deployed onto VLAN 20 within my “remote” site (my remote site is actually just a separate cluster from my “NSX Managed Site”, which is my workload cluster). In my NSX Managed site I have a Provider Logical Router (PLR) and Distributed Logical Router (DLR) configured.
To prepare the NSX Managed Site, the L2VPN-Server needs to be connected to a “trunk” interface, which allows multiple VLANs or Logical Switches to be configured as sub-interfaces, rather than having a separate interface in each VLAN/Logical Switch.
On today’s episode of “The Interview” with The Next Platform we talk with Doug Miles, who runs the PGI compilers and tools team at Nvidia, about the past, present, and future of OpenACC, with an emphasis on what lies ahead in the next release.
Over the last few years we have described momentum with OpenACC in a number of articles covering everything from what it means in the wake of new memory options to how it pushes OpenMP to develop alongside it. Today, however, we take a step back with an HPC code veteran for the bigger picture and …
OpenACC Developments: Past, Present, and Future was written by Nicole Hemsoth at The Next Platform.
Peter Welcher examines the pros and cons of Network Address Translation and describes design scenarios.