Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/
The post History Of Networking – Dino Farinacci – History of LISP appeared first on Network Collective.
Nvidia caused a shift in high-end computing more than a decade ago when it introduced its general-purpose GPUs and CUDA development platform to work alongside CPUs, boosting the performance of compute-intensive workloads in HPC and other environments and driving greater energy efficiency in datacenters.
Nvidia, and to a lesser extent AMD with its Radeon GPUs, took advantage of the growing demand for more speed and lower power consumption to build out their portfolios of GPU accelerators and expand their use in a range of systems, to the point where in the most recent Top500 list of the world’s fastest …
Dell EMC and Fujitsu Roll Intel FPGAs Into Servers was written by Jeffrey Burt at The Next Platform.
It may be tempting to take drastic measures to protect the network, but the results can be a bit problematic.
EMA research found that enterprises use network analytics technology to automate a variety of networking tasks for increased uptime and other benefits.
Remember the IPv6 elephant in the room – the inability to do small-site multihoming without NAT (well, NPT66)? IPv6 is old enough to buy its own beer, but the elephant is still hogging the room. Tons of ideas have been thrown around in the IETF (mostly based on source address selection tricks), but none of that spaghetti stuck to the wall.
Read more ...
In our last post, we removed our last piece of static configuration and replaced static routes with BGP. We’re going to pick up right where we left off and discuss another use case for MPLS – MPLS VPNs. Really – we’re talking about two different things here. The first is the BGP VPNv4 address family, used for route advertisement. The second is using MPLS as a data plane to reach the prefixes being announced by the VPNv4 address family. If that doesn’t make sense yet – don’t worry – it will be pretty clear by the end of the post. So as usual – let’s jump right into this and talk about our lab setup.
As I mentioned in the last post, setting up BGP was a prerequisite to this post – so since that’s the case – I’m going to pick up right where I left off. So I’ll post the lab topology picture here for the sake of posting a lab topology – but if you want to get your configuration prepped – take a look at the last post. At the end of the last post we had our Continue reading
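To make those two pieces concrete before the lab walkthrough, here is a minimal sketch (not the configuration from this lab) of an IOS-style PE router rendered from a Python template: a VPNv4 address family toward a BGP neighbor for route advertisement, and MPLS enabled on the core-facing interface for the data plane. The VRF name, RD/RT values, ASN, neighbor address, and interface name are all hypothetical.

```python
# A minimal sketch (not the lab from the post) of the two pieces described above:
# the VPNv4 address family handles route advertisement between PE routers, while
# MPLS on the core-facing interface provides the data plane. All values below
# (VRF name, RD/RT, ASN, neighbor, interface) are hypothetical.
PE_TEMPLATE = """
vrf definition {vrf}
 rd {rd}
 route-target export {rt}
 route-target import {rt}
 address-family ipv4
 exit-address-family
!
interface {core_if}
 mpls ip
!
router bgp {asn}
 neighbor {peer} remote-as {asn}
 neighbor {peer} update-source Loopback0
 address-family vpnv4
  neighbor {peer} activate
  neighbor {peer} send-community extended
 exit-address-family
 address-family ipv4 vrf {vrf}
  redistribute connected
 exit-address-family
"""

# Render an IOS-style PE configuration from the template.
print(PE_TEMPLATE.format(vrf="CUSTOMER_A", rd="65000:1", rt="65000:1",
                         core_if="GigabitEthernet0/1", asn=65000,
                         peer="10.0.0.2"))
```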
Special thanks to Kylie Liang from the Microsoft Azure DevEx team for giving us a closer look at one of the new Azure module features.
---
For this blog entry, we wanted to share a step-by-step guide to using the Azure Container Instance module that has been included in Ansible 2.5.
The Container Instance service is a PaaS offering on Azure that is designed to let users run containers without managing any of the underlying infrastructure. The Ansible Azure Container Instance module allows users to create, update and delete an Azure Container Instance.
For the purposes of this blog, we’ll assume that you are new to Azure and Ansible and want to automate the Container Instance service. This tutorial will guide you through automating the following steps:
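As a quick taste of the end result, here is a minimal sketch of the operation the module automates (creating a container instance), driven here through the Azure CLI from Python rather than a playbook. The resource group, instance name, and image are hypothetical examples, not values from the guide.

```python
# A minimal sketch of the operation the Container Instance module automates,
# expressed through the Azure CLI from Python. Assumes the CLI is installed,
# you have run `az login`, and the resource group already exists.
# The resource group, instance name, and image are hypothetical examples.
import subprocess

resource_group = "myResourceGroup"   # hypothetical
instance_name = "aci-demo"           # hypothetical

subprocess.run(
    [
        "az", "container", "create",
        "--resource-group", resource_group,
        "--name", instance_name,
        "--image", "nginx:latest",    # any public container image
        "--ports", "80",
        "--ip-address", "Public",     # expose the container on a public IP
    ],
    check=True,
)
```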

Moving a monolithic application to a modern cloud architecture can be difficult and often result in a greenfield development effort. However, it is possible to move towards a cloud architecture using Docker Enterprise Edition (EE) with no code changes and gain portability, security and efficiency in the process.

To conclude the series, in part 5 I use the message service’s REST endpoint to replace one part of the application UI with a JavaScript client. The original application client UI was written in JavaServer Pages (JSP), so any UI change required the application to be recompiled and redeployed. I can use modern web tools and frameworks such as React.js to write a new client interface. I’ll build the new client using a multi-stage build and deploy it by adding the container to the Docker Compose file. I’ll also show how to deploy the entire application from your development environment to Docker EE to make it available for testing.
Modernizing Java Apps for Developers shows how to take an existing Java N-tier application and run it in containers using the Docker platform to modernize the architecture. The source code for each part of this series is available on GitHub and Continue reading
I’ve seen a number of blogs and articles describing what network automation is and what it entails, and in many cases, the descriptions end up frightening people who haven’t yet started down an automation path. The biggest question when starting any of these sorts of projects is the simplest: should you automate at all?
My answer to that first question (Spoiler alert: it’s yes, but let me explain why) is that it depends on your network itself. For years, before I was involved with networking at the operating system level, I worked on network management and automation products. Often, I’d tell my customers that if they were happy with the status quo, then I certainly wouldn’t force them down a particular path or to use a particular product. However, if you’re a bit fed up with the manual steps involved in updating a device operating system or configuring a device, then you should look into automation to save yourself time and headaches. Of course, if you only have three devices and they get updated yearly, maybe don’t bother. But if you believe automation will provide the solutions you’re looking for, there are some first steps for automation that you Continue reading
The company also launched an integrated data center security architecture that includes four Cisco products.
We are excited to announce that the Docker Registry HTTP API V2 specification will be adopted in the Open Container Initiative (OCI), the organization under the Linux Foundation that provides the standards that fuel the containerization industry. The Docker team is proud to see another aspect of our technology stack become a de facto standard. As we’ve done with our image format, we are happy to formally share and collaborate with the container ecosystem as part of the OCI community. Our distribution protocol is the underpinning of all container registries on the market and is so robust that it is leveraged over a billion times every two weeks as container content is distributed across the globe.
Putting the protocol into perspective, part of the core functionality of Docker is the ability to push and pull images. From the first “Hello, World” moment, this concept is introduced to every user and is a large part of the Docker experience. While we normally sit back in our armchairs and marvel at this magical occurrence, the amount of design and consideration that has gone into that simple capability can easily be overlooked.
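To give a feel for the distribution protocol, below is a minimal sketch in Python of the pull side of the V2 API (version check, tag listing, manifest fetch) against a local registry; the repository and tag names are hypothetical.

```python
# A minimal sketch of the pull-side endpoints in the Registry HTTP API V2
# specification, run against a local registry started with:
#   docker run -d -p 5000:5000 registry:2
# The repository and tag names are hypothetical.
import requests

REGISTRY = "http://localhost:5000"
REPO, TAG = "myorg/hello", "latest"   # hypothetical repository and tag

# Version check: a V2-compliant registry answers 200 on /v2/
print(requests.get(f"{REGISTRY}/v2/").status_code)

# List the tags available for a repository
print(requests.get(f"{REGISTRY}/v2/{REPO}/tags/list").json())

# Fetch the image manifest, asking for the schema 2 media type
manifest = requests.get(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
)
print(manifest.headers.get("Docker-Content-Digest"))
```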
When Docker was first released, the team Continue reading