Archive

Category Archives for "Networking"

Myth Busted: Who Says Software-Based Networking Performance Does Not Match Physical Networking?

100 Gbps Performance with NSX Data Center

NSX Data Center has shown for some time now (see the VMworld 2016 NSX Performance session, NET 8030) that it can drive upwards of 100 Gbps of throughput per node for typical data center workloads. In that VMworld session, we ran a live demo showing that throughput was limited by the actual physical ports on the host, which were 2 x 40 Gbps, and not by NSX Data Center.

Typically, in physical networking, performance is measured in raw packets per second to ensure that a variety of traffic at variable packet sizes can be forwarded between multiple physical ports. In a virtualized data center this is not the case, as hypervisor hosts only have to serve a few uplinks, typically no more than four physical links. In addition, most virtualized workloads use the TCP protocol. Here, the ESXi hypervisor forwards TCP data segments in a highly optimized way, so performance is not always a function of the number of packets transferred but of the amount of data forwarded in software. In typical data center workloads, TCP optimizations such as TSO, LRO, and RSS or Rx/Tx filters help drive sufficient throughput at hardly any CPU cost. TSO/LRO help move large amounts of Continue reading
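To see why raw packet rate matters less once TSO/LRO are in play, here is a minimal back-of-the-envelope sketch (not from the article; the segment sizes are illustrative assumptions) that computes the packet rate a host must sustain for a given throughput at different effective frame or segment sizes:

```python
# Back-of-the-envelope: packets/second needed to sustain a target
# throughput at different effective per-operation payload sizes.
# With TSO/LRO the hypervisor handles large (up to ~64 KB) segments,
# so far fewer per-packet operations are needed for the same Gbps.

TARGET_GBPS = 100  # illustrative target throughput

# effective bytes handled per forwarding operation (illustrative)
sizes = {
    "64 B minimum frame": 64,
    "1500 B MTU frame": 1500,
    "9000 B jumbo frame": 9000,
    "64 KB TSO/LRO segment": 64 * 1024,
}

bits_per_second = TARGET_GBPS * 10**9
for label, size in sizes.items():
    pps = bits_per_second / (size * 8)
    print(f"{label:>22}: {pps / 1e6:8.2f} Mpps")
```

At 64 KB segments the host performs roughly a thousand times fewer forwarding operations than at minimum frame size for the same throughput, which is the crux of the argument above.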

Check Out Our New Network Programmability Course

Course Title: The Full Stack Engineer’s Guide to Network Programmability with Python
Course Duration: 30 hrs 33 min

 

The Full Stack Engineer’s Guide to Network Programmability with Python will provide learners with an inductive and comprehensive introduction to the Python programming language, covering data types, control flow structures, functions, methods, classes and objects, reading and writing files, data storage using MySQL, and regular expressions. We will also cover on- and off-box Python automation and explore the guest shell in IOS-XE!
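As a taste of the off-box automation the course covers, here is a minimal, hypothetical sketch using the Netmiko library; the device details are placeholders, and the choice of library is an assumption for illustration, not a statement about the course material:

```python
# A minimal off-box automation sketch: connect to a network device
# over SSH and pull interface state. All device details below are
# placeholders for illustration.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_xe",   # IOS-XE, as featured in the course
    "host": "192.0.2.10",        # documentation/example address
    "username": "admin",
    "password": "example-password",
}

conn = ConnectHandler(**device)
output = conn.send_command("show ip interface brief")
print(output)
conn.disconnect()
```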

Red Hat reaches the Summit – a new top scientific supercomputer

Red Hat just announced its role in bringing a top scientific supercomputer into service in the U.S. Named “Summit” and housed at the Department of Energy’s Oak Ridge National Laboratory, this system with its 4,608 IBM compute servers is running — you guessed it — Red Hat Enterprise Linux.

The Summit collaborators

With IBM providing its POWER9 processors, Nvidia contributing its Volta V100 GPUs, Mellanox bringing its InfiniBand into play, and Red Hat supplying Red Hat Enterprise Linux, the level of inter-vendor collaboration has reached something of an all-time high and an amazing new supercomputer is now ready for business.

Summit: How IBM and Oak Ridge laboratory are changing supercomputing

The team designing Oak Ridge National Laboratory's new Summit supercomputer correctly predicted the rise of data-centric computing – but its builders couldn't forecast how bad weather would disrupt the delivery of key components.

Nevertheless, almost four years after IBM won the contract to build it, Summit is up and running on schedule. Jack Wells, Director of Science for the Oak Ridge Leadership Computing Facility (OLCF), expects the 200-petaflop machine to be fully operational by early next year. "It's the world's most powerful and largest supercomputer for science," he said.

Weekly 393 – Infrastructure Monitoring with Juniper AppFormix (Sponsored)

Juniper AppFormix is a multi-vendor, cross-layer telemetry platform with built-in machine learning and rich visualisation. It's designed to simplify operations and enable closed-loop automation. In the era of multi-cloud, we need tools that run on-prem or in the cloud and support OpenStack, K8s, VMware, Azure, Google, and Amazon networks, with integration into virtual machines, containers, overlay networks, and physical devices.

The ability to draw data from a wide range of sources creates a data flood that can overwhelm you. AppFormix has machine learning and a range of automation functions to simplify and organise this diverse data flood. Networks grow more complex as the edge expands in multiple dimensions – on and off premises, the virtual edge, overlay networks, as well as the physical devices – and all of it must operate in cahoots.

AppFormix automates this operational load so you aren't getting calls at 2am. That's a very fine thing.

Sumeet Singh, VP/GM for Juniper AppFormix, kicks off the discussion with a quick intro to AppFormix. We cover the key features and the approach of the product before moving into use cases and what customers are doing today. Surprisingly, this includes WAN operations in addition to DC/cloud.

IDG Contributor Network: Living on the edge: 5 reasons why edge services are critical to your resiliency strategy

When it comes to computing, living on the edge is currently all the rage. Why? Edge computing is a way to decentralize computing power and move processing closer to the end points where users and devices access the internet and data is generated. This allows for better control of the user experience and for data to be processed faster at the edge of the network – on devices such as smartphones and IoT devices.

As enterprise organizations look to extend their corporate digital channel strategies involving websites with rich media and personalized content, it is vital to have a strong resiliency strategy.

Deploying a combination of cloud and edge services can help by: reducing unplanned downtime; improving security and performance; extending the benefits of multi-cloud infrastructure; speeding application development and delivery; and improving user experience.

CEO Succession at Internet Society – Status update (June 2018)

This is a quick update on the CEO Succession process at the Internet Society (ISOC). For background, please check my previous notes to the community.

As you know, the application window for potential candidates for ISOC’s CEO position closed in early April. Let me update you on where we are in the process.

The process for selecting a new CEO for ISOC is progressing well and is on track. As anticipated, and as a consequence of the broad appeal of the role, the open call for applicants resulted in a significant amount of interest from all around the world. The Board received more than one hundred applications from candidates with a diverse set of backgrounds in business and the private sector, government, the technical community, the global NGO space, and the wider Internet community.

The strength and quality of the applications have been very high, and it has been an incredibly tough challenge to identify and evaluate the most suitable candidates for this role from such a large and qualified pool of talent and experience.

Nevertheless, given the importance that the CEO position holds for both ISOC and the Internet as a whole, the deliberation by the Board has been Continue reading