Episode 28 – For the Love of NAT

When it comes to NAT, network engineers either love it, hate it, or love to hate it. In this episode, Tom Hollingsworth and Nick Buraglio join us to talk about NAT, why it exists, and its continued role in networking.

We would like to thank Core BTS for sponsoring this episode of Network Collective. Core BTS focuses on partnering with your company to deliver technical solutions that enhance and drive your business. If you’re looking for a partner to help your technology teams take the next step, you can reach out to Core BTS by email.


Nick Buraglio – Guest
Tom Hollingsworth – Guest
Jordan Martin – Co-Host
Eyvonne Sharp – Co-Host


Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/


Blockchain, service-centric networking key to IoT success

Connecting and securing the Internet of Things (IoT) should be achieved with a combination of service-centric networking (SCN) and blockchain, researchers say. A multi-university, multi-discipline group led by Zhongxing Ming, a visiting scholar at Princeton University, says IoT adoption will face an uphill battle due in part to bottlenecks between potentially billions of devices, along with the mobile nature of many of them. The scientists, who call their IoT architecture Blockcloud, presented their ideas at GENESIS C.A.T., a blockchain-innovation event held recently in Tokyo. To read this article in full, please click here

Don’t Miss Keith Bogart’s Live Webinar! Deciphering Spanning-Tree Technologies

Tune in tomorrow, May 31st, at 10 am PDT / 1 pm EDT for a FREE live webinar with expert instructor Keith Bogart (CCIE #4923).

About This Webinar:

Understanding the logic of 802.1d and how it builds a loop-free “tree” is critical to passing any Cisco certification exam. Presented by INE instructor Keith Bogart (CCIE #4923), this session will take you through that logic so that, given any bridged/switched layer-2 network, you can predict what tree will be formed. Ask questions live with an experienced industry expert!
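As a small taste of that logic, consider a hypothetical three-switch network (the names, priorities, and MAC addresses below are invented for illustration and are not taken from the webinar). The first thing 802.1d does is elect a root bridge: the switch with the lowest bridge ID, compared by priority first and MAC address as the tie-breaker, wins, and every other switch then forwards on its lowest-cost path toward that root while blocking redundant links.

# Hypothetical three-switch topology, shown as annotated YAML purely for readability.
bridges:
  - name: SW1
    priority: 32768              # default priority
    mac: "00:1a:00:00:00:10"
  - name: SW2
    priority: 4096               # lowest priority wins the election...
    mac: "00:1a:00:00:00:20"
  - name: SW3
    priority: 4096               # ...and the lower MAC address breaks the tie
    mac: "00:1a:00:00:00:05"
# Bridge ID = priority + MAC, compared as a single value:
#   SW3 (4096 / ...:05)  <  SW2 (4096 / ...:20)  <  SW1 (32768 / ...:10)
# Result: SW3 is the root bridge. SW1 and SW2 each pick the port with the lowest
# path cost toward SW3 as their root port, and any remaining redundant link is
# placed in the blocking state so the resulting tree stays loop-free.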

Typical EVPN BGP Routing Designs

As discussed in a previous blog post, the IETF designed EVPN to be a next-generation, BGP-based VPN technology providing scalable layer-2 and layer-3 VPN functionality. EVPN was initially designed to run over an MPLS data plane and was later extended to support numerous data-plane encapsulations, VXLAN being the most common one.

Design Requirements

Like any other BGP-based solution, EVPN uses BGP to transport endpoint reachability information (customer MAC and IP addresses and prefixes, flooding trees, and multi-attached segments), and relies on an underlying routing protocol to provide BGP next-hop reachability information.
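To make that split concrete, here is a schematic sketch of what a single EVPN type-2 (MAC/IP advertisement) route carries versus what the underlay has to provide. It is written as annotated YAML purely for readability; the addresses, VNI, and field names are invented for this illustration and represent neither a device configuration nor the on-the-wire BGP encoding.

evpn_type2_route:                 # carried by BGP (the overlay)
  route_distinguisher: "10.0.0.1:100"
  mac: "00:50:56:ab:cd:01"        # customer MAC learned on the ingress leaf/PE
  ip: "172.16.10.11"              # optional host IP, used for integrated routing and bridging
  vni: 10100                      # VXLAN VNI (an MPLS label when using the MPLS data plane)
  bgp_next_hop: "10.0.0.1"        # loopback/VTEP address of the advertising device

underlay_routing:                 # provided by the underlying routing protocol
  prefix: "10.0.0.1/32"           # the BGP next hop above must be reachable here,
  learned_via: "fabric links"     # otherwise the VXLAN tunnel (or LSP) cannot be built

In short, BGP distributes what is reachable, while the underlay routing protocol answers how to reach the device that advertised it.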

Read more ...

Towards a design philosophy for interoperable blockchain systems

Towards a design philosophy for interoperable blockchain systems Hardjono et al., arXiv 2018

Once upon a time there were networks and inter-networking, which let carefully managed groups of computers talk to each other. Then, with a capital “I”, came the Internet, with design principles that ultimately enabled devices all over the world to interoperate. Like many other people, I have often thought about the parallels between networks and blockchains, and between the Internet and something we might call ‘the Blockchain’ (capital ‘B’). In today’s paper choice, Hardjono et al. explore this relationship, seeing what we can learn from the design principles of the Internet, and what it might take to create an interoperable blockchain infrastructure. Some of these lessons are embodied in the MIT Tradecoin project.

We argue that if blockchain technology seeks to be a fundamental component of the future global distributed network of commerce and value, then its architecture must also satisfy the same fundamental goals of the Internet architecture.

The design philosophy of the Internet

This section of the paper is a précis of ‘The design philosophy of the DARPA Internet protocols’ from SIGCOMM 1988. The top three fundamental goals for the Internet as conceived... Continue reading

Nvidia aims to unify AI, HPC computing in HGX-2 server platform

Nvidia is refining its pitch for data-center performance and efficiency with a new server platform, the HGX-2, designed to harness the power of 16 Tesla V100 Tensor Core GPUs to satisfy requirements for both AI and high-performance computing (HPC) workloads. Data-center server makers Lenovo, Supermicro, Wiwynn and QCT said they would ship HGX-2 systems by the end of the year. Some of the biggest customers for HGX-2 systems are likely to be hyperscale providers, so it's no surprise that Foxconn, Inventec, Quanta and Wistron are also expected to manufacture servers that use the new platform for cloud data centers. The HGX-2 is built using two GPU baseboards that link the Tesla GPUs via NVSwitch interconnect fabric. The HGX-2 baseboards handle 8 processors each, for a total of 16 GPUs. The HGX-1, announced a year ago, handled only 8 GPUs. To read this article in full, please click here

Windows Updates and Ansible

Welcome to the fourth installment of our Windows-centric Getting Started Series!

One of the duties of most IT departments is keeping systems up to date. In this post we’re taking a quick look at using Ansible to manage updates on your Windows nodes. Starting with a small example of six Windows machines, we’ll show an example of a play against those hosts. We’ll share the full example at the end.

Updates, Updates, Updates...

Managing Windows updates is something that can be understood and customized quickly with Ansible. Below is a small-scale example of running updates on hosts, with some flexibility in what gets updated in the process. The example here assumes a domain exists and that the hosts are being passed domain credentials. If you’re looking to test this example, be sure to read Bianca’s earlier Getting Started post on connecting to a Windows host.

Because this example runs exclusively against Windows machines, the information needed to connect can be included in the inventory file:

[all:vars]
ansible_connection=winrm
ansible_user=administrator
ansible_password=This-Should-Be-a-Password!

For Example

The example hosts include three groups of servers, two in each group: terminal servers, application servers, and directory servers. For the purposes of... Continue reading
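The excerpt is cut short before the play itself, so here is a rough sketch of what such a play could look like. The host group, update categories, and reboot behavior are assumptions made for illustration rather than the original post's full example; category_names and reboot are standard options of the win_updates module, and installed_update_count is one of its return values.

---
# Illustrative sketch: the host group and categories are assumptions,
# not the complete example from the original post.
- name: Install Windows updates on the application servers
  hosts: application_servers
  tasks:
    - name: Install only security and critical updates
      win_updates:
        category_names:
          - SecurityUpdates
          - CriticalUpdates
        reboot: yes               # let Ansible reboot the host if an update requires it
      register: update_result

    - name: Report how many updates were installed
      debug:
        var: update_result.installed_update_count

Because the inventory above already carries the WinRM connection variables, the play only needs to name a host group and the categories of updates it should apply.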

BrandPost: The Next SDN Leap: Automation and Intent-Driven Networking

Without an agile, flexible, and secure network infrastructure, organizations are in danger of falling behind competitors. That’s why many organizations are seeking to transform their businesses with cloud computing and hybrid cloud environments that are more adaptive and flexible. Software-defined networking (SDN) and network functions virtualization (NFV) can ease that path, but they require automation along with intelligence that understands, and can even predict, what users and organizations want and need to do. Digital transformation has quickly moved beyond hype to become one of the top business imperatives. “Digital transformation is forcing companies to be agile and move with speed, and the network needs to be equally agile and fast,” writes industry analyst Zeus Kerravala. “The separation of control and data planes enables control to be abstracted away from the device and centralized so a network administrator can issue a change that is propagated instantly across the entire network.” To read this article in full, please click here

IDG Contributor Network: Internet testing results: why fixing the internet middle mile is essential for SD-WAN performance

It’s no secret that the public Internet is a quagmire of latency and packet loss problems. No wonder many clients are reluctant to trust Internet-based SD-WANs with VoIP and business-critical applications. After all, how can an SD-WAN running over the Internet provide a predictable user experience if the underlying transport is so unpredictable? To answer that question, SD-WAN Experts recently evaluated the performance and stability of long-distance Internet connections. Our goal: to determine the source of the Internet's performance problems by measuring variability and latency in the last and middle miles. What we found was that swapping out the Internet core for a managed middle mile makes an enormous difference. Case in point is Amazon: the latency and variation between our AWS workloads was significantly better across Amazon’s network than across the public Internet (see figure). Why that’s the case, and how we tested, is explained below and in greater depth in this post on our site. To read this article in full, please click here

The Virtual Cloud Network Demystified

Introduction

Welcome to Summer 2018! It’s been nearly one month since our CEO Pat Gelsinger announced the Virtual Cloud Network vision at Dell Technologies World in Las Vegas. Essentially, the reveal (in my personal opinion) was focused on raising awareness that VMware has now delivered to the market what many of you have long heard described as “the vision” for networking and security, as NSX has become an integral part of many areas of your business.

Enter stage left: the Virtual Cloud Network. VCN builds upon the fundamentals you’re already familiar with from NSX, including (but not limited to) integrated security, consistent connectivity, and inherent automation, but it really focuses on tying together an end-to-end architecture that allows our customers to deliver applications and services everywhere. Our customers have asked and we have listened: the demand for any infrastructure, any cloud, any transport, any device, and any application has drastically changed the landscape and the technologies associated with building and architecting a modern enterprise network.

We’ve been quite busy over the past month, with lots of interest coming from partners and customers wondering what this really means. Well, today the wait... Continue reading