Welcome to Day Two Cloud, where the topic is visibility. Hybrid cloud visibility with a side of Kubernetes, to be specific. VMware has come alongside as today’s sponsor for a discussion about vRealize Operations Cloud to give you that visibility into applications and infrastructure running in complex, multi-cloud environments.
The post Day Two Cloud 123: Managing Multi-Cloud Applications And Infrastructure With vRealize Operations Cloud (Sponsored) appeared first on Packet Pushers.
Visit SC21 virtually AND enter to win $200 in AWS credits
You won’t be surprised that AWS serves up some of the most powerful HPC products and services on the planet. …
Want To Get Your Hands On AWS’ Latest HPC Services? Here’s How was written by David Gordon at The Next Platform.
The hyperscalers, cloud builders, HPC centers, and OEM server manufacturers of the world who build servers for everyone else all want, more than anything else, competition between component suppliers and a regular, predictable, almost boring cadence of new component introductions. …
AMD Deepens Its Already Broad Epyc Server Chip Roadmap was written by Timothy Prickett Morgan at The Next Platform.
VMworld 2021 – what a whirlwind. Thank you for attending and making the virtual event a success. With so many sessions and so little time, we thought it was important to point out one of the most notable networking sessions of this year: Automation is Modernizing Networks, delivered by Tom Gillis, SVP & General Manager, Networking and Advanced Security.
In case you missed it, we’re going to catch you up on essential insights, networking news, and more.
The vision behind VMware’s cloud networking is to centralize policy and networking infrastructure. Today, more than 23,000 customers use VMware’s virtual networking products, and 96 of the Fortune 100 have chosen VMware to virtualize their network infrastructure. VMware has replaced more than 12,000 power-hungry hardware load balancer appliances, and VMware now serves more than 450,000 branch sites globally, accelerating digital transformation for enterprises of all kinds.
Taking a step back, we can see clearly how all of these developments are enhancing digital operations for our various constituents. With two strokes of a key, our customers can send applications directly into production. This includes scanning for security/compliance violations, enforcing these security and compliance …
In a decade and a half, Nvidia has come a long way from its early days as a provider of graphics chips for personal computers and other consumer devices. …
Nvidia Declares That It Is A Full-Stack Platform was written by Jeffrey Burt at The Next Platform.
The two previous posts described what the script does and the modules it uses, as well as how the script leverages YAML.
This time, we will go through the function that generates the access-list name. The code for this is below:
import random  # needed for random.randint below

def generate_acl_name(interface_name: str) -> str:
    """Generate unique ACL name to avoid conflicts with any existing ACLs
    by appending a random number to the outside interface name"""
    # Create a random number between 1 and 999
    random_number = random.randint(1, 999)
    acl_name = f"{interface_name}_{random_number}"
    return acl_name
The goal with this code is to generate a new access-list with a unique name. Note that the script doesn’t check whether an access-list with this name already exists; that is something I will address in an improved version of the script. I wanted to start with something that works and take you through the process as I learn and improve on the existing code.
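As a rough sketch of what that future uniqueness check could look like (the helper below and the idea of pre-collecting the existing ACL names into a set are my assumptions, not part of the original script), the function could simply retry until it finds an unused name:

    import random

    def generate_unique_acl_name(interface_name: str, existing_acls: set) -> str:
        """Illustrative only: retry until the candidate name does not
        collide with an already-configured ACL name."""
        while True:
            candidate = f"{interface_name}_{random.randint(1, 999)}"
            if candidate not in existing_acls:
                return candidate

With the set of configured ACL names gathered first (for example by parsing the device configuration), a name collision becomes impossible rather than merely unlikely.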
The function takes an interface_name, which is a string. This is provided by the YAML data that we stored in the yaml_dict earlier. The function is then called like this:
acl_name = generate_acl_name(yaml_dict["outside_interface"])
The name is stored in the yaml_dict under the outside_interface mapping:
In [6]: yaml_dict
…
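To tie the pieces together, here is a minimal, self-contained sketch of the flow so far (assuming PyYAML for the YAML handling; the YAML content and the acl_name key used to store the result are invented for illustration):

    import random
    import yaml

    def generate_acl_name(interface_name: str) -> str:
        """Append a random number to the interface name."""
        return f"{interface_name}_{random.randint(1, 999)}"

    yaml_doc = "outside_interface: outside"
    yaml_dict = yaml.safe_load(yaml_doc)
    yaml_dict["acl_name"] = generate_acl_name(yaml_dict["outside_interface"])
    print(yaml_dict)
    # e.g. {'outside_interface': 'outside', 'acl_name': 'outside_123'}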
One of my readers sent me an age-old question:
I have my current guest network built on top of my production network. The separation between the guest and corporate networks is done using a VLAN – once you connect to the wireless guest network, you’re in the guest VLAN, which forwards your packets to a guest router and off toward the Internet.
Our security team claims that this design is not secure enough. They claim a user could somehow attach to the switch and jump between VLANs, and suggest that it would be better to run guest access over a separate physical network.
Decades ago, VLAN implementations were buggy, and it was possible (using a carefully crafted stack of VLAN tags) to insert packets from one VLAN to another (see also: VLAN hopping).
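For illustration only: the classic double-tagging trick behind those old VLAN hopping attacks can be sketched in a few lines of Scapy. The VLAN IDs and interface name below are invented, and the trick only ever worked when the attacker’s access VLAN was also the native VLAN of a trunk on a vulnerable switch:

    from scapy.all import Dot1Q, Ether, IP, sendp

    # Double-tagged frame: a vulnerable switch strips the outer tag
    # (the attacker's native VLAN) and forwards the frame with the
    # inner tag intact, dropping it into the victim VLAN.
    frame = (
        Ether(dst="ff:ff:ff:ff:ff:ff")
        / Dot1Q(vlan=10)   # outer tag: attacker's (native) VLAN -- assumed
        / Dot1Q(vlan=20)   # inner tag: target VLAN -- assumed
        / IP(dst="192.0.2.1")
    )
    sendp(frame, iface="eth0")  # interface name is a placeholder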
Rockport Networks has announced a switchless data center networking product that targets high-performance compute clusters running latency-sensitive workloads. Instead of switches in a leaf-spine or Clos fabric design, Rockport builds a multi-path mesh using network cards installed in the PCIe slots of servers and storage systems.
The post Rockport’s Switchless Networking – Don’t Call It A SmartNIC appeared first on Packet Pushers.
If you want to know how and why AMD motors have been chosen for so many of the pre-exascale and exascale HPC and AI systems, despite the dominance of Intel in CPUs and the dominance of Nvidia in GPUs, you need look no further for an answer than the new “Aldebaran” Instinct MI200 GPU accelerator from AMD and its Infinity Fabric 3.0 coherent interconnect that is also being added to selected Epyc CPUs from AMD. …
The AMD “Aldebaran” GPU That Won Exascale was written by Timothy Prickett Morgan at The Next Platform.