Microsoft has announced the public preview of its new Azure virtual machines powered by the Arm-based Ampere Altra server processors. The VM series includes the general-purpose Dpsv5 and memory-optimized Epsv5 virtual machines, which Microsoft claims can deliver up to 50% better price-performance than comparable x86-based VMs. The new VMs have been specifically engineered to efficiently run scale-out workloads, web servers, application servers, open-source databases, cloud-native and rich .NET applications, Java applications, gaming servers, and media servers.
Advanced Micro Devices took a big step toward competing in data-center networking with its announced agreement to acquire Pensando for approximately $1.9 billion. AMD wants the DPU-based architecture and technology Pensando is developing that includes intelligent, programmable software to support software-defined cloud, compute, networking, storage, and security services that could be rolled out quickly in edge, colocation, or service-provider networks.
“There are a wide range of use cases—such as 5G and IoT—that need to support lots of low latency traffic,” Soni Jiandani, Pensando co-founder and chief business officer, told Network World last November. “We’ve taken a ground-up approach to giving enterprise customers a fully programmable system with the ability to support multiple infrastructure services without dedicated CPUs.”
With the advent of the latest generation of IBM mainframes – the z16 – Big Blue is furthering one of the most successful technology systems the IT world has known. Here are a few key moments in the history of the Big Iron:

IBM 360
In 1964 IBM introduced what many would consider the first true series of mainframes, the IBM 360. At the time, IBM said its central processors included 19 combinations of graduated speed and memory capacity. The system also included more than 40 types of peripheral equipment for entering, storing, and retrieving information. Built-in communications capability made the System/360 available to remote terminals, regardless of distance.
NOC (pronounced “knock”) stands for network operations center, and if the term conjures up images of a NASA-like control room, you would not be too far off from reality – at least at some organizations.

While the role of a NOC can vary, the general idea is to create a room or centralized facility where information technology (IT) professionals can constantly monitor, maintain, and troubleshoot all aspects of a network. The NOC must also be equipped with all of the technology required to support those operations, including monitors, computers, telecommunications equipment, and a fast connection to network resources.

NOCs were created for two main reasons. The first was to give IT staffers a central location to work from, instead of having them run around trying to fix problems or perform preventive maintenance, such as patching systems, from different locations.
In this Linux tip, we’re going to look at the btrfsck command. It provides file system checking for btrfs file systems – sometimes referred to as the "better" file system, but actually named for its B-tree underpinnings.
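A minimal check, as a sketch (the device name here is just a placeholder; point the command at your own unmounted btrfs volume), looks like this:

$ sudo umount /dev/sdb1        # btrfsck should only be run on an unmounted file system
$ sudo btrfsck /dev/sdb1       # check the btrfs file system on that device
$ sudo btrfs check /dev/sdb1   # the same check using the newer btrfs subcommand syntax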
The conventional wisdom is that you should update your IT gear, namely the servers, every three to five years, which is usually when service warranties run out. However, some companies hold onto their gear longer than that for a variety of reasons: lack of funds, business uncertainty, on-premises versus the cloud, and so forth.

And for a while, the CPU guys were helping. New generations of processors were only incrementally faster than the old ones, making it hard to justify an upgrade. The result was longer lifecycles for server hardware. A 2020 survey by IDC found 20.3% of respondents holding on to servers for six years and 12.4% keeping servers for seven years or more.
The old saying “adversity makes for strange bedfellows” has been proven true, with Nvidia saying it is now willing to work with Intel’s foundry business to manufacture its chips.

Nvidia CEO Jen-Hsun Huang dropped the news on a press call when he was asked about diversifying the company’s supply chain, which relies on TSMC for its chip manufacturing; TSMC is both overloaded with orders and located in a politically unstable region of the world (Taiwan).

Huang said his company realized it needed more resilience going forward, and so over the last couple of years it has added to the number of process nodes it uses and is in more fabs than ever. “So we've expanded our supply chain, supply base, probably four-fold in the last two years,” Huang said.
The btrfsck command is a filesystem-check command like fsck, but it works with the btrfs file system. First, a little bit about btrfs. As the name implies, btrfs uses a B-tree data structure that is self-balancing and maintains sorted data, facilitating searches, sequential access, insertions, and deletions. It is also often referred to as the “better file system”. Oracle developed it and first used it about 15 years ago. By November 2013, it was declared adequately stable and began to be used by other distributions as well, and now its use is quite common.

Benefits of btrfs
The benefits of btrfs are impressive, although it’s still a work in progress and some concerns have kept it from playing a more dominant role on Linux systems. It keeps two copies of metadata on a volume, allowing for data recovery if and when the hard drive is damaged or suffers from bad sectors. It uses checksums and verifies them with each read. In addition, compared to ext4 volumes, btrfs does not require double the storage space to accommodate file versioning and history data.
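One easy way to put those checksums to work is the btrfs scrub subcommand, which reads every block on a mounted volume and verifies it against its stored checksum. A quick sketch (the mount point is just an example):

$ sudo btrfs scrub start /mnt/data    # read all data and metadata, verifying checksums
$ sudo btrfs scrub status /mnt/data   # report progress and any checksum errors found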
Public cloud providers are quickly becoming the biggest buyers of data center infrastructure equipment, as purchasing of hardware and software both rebounded sharply in 2021, according to a recent report by Synergy Research Group.

Overall spending grew by roughly 10% in year-on-year terms, reaching a total of $185 billion in 2021. The lion’s share of that spending was on hardware, according to Synergy, with 77% of the total spend going towards servers, storage, and networking gear. Software, including operating systems, cloud management, virtualization, and network security, made up the rest of the total.
At its GPU Technology Conference (GTC) last year, Nvidia announced it would come out with its own server chip, called Grace, based on the Arm Neoverse v9 server architecture. At the time, details were scant, but this week Nvidia revealed the details, and they are remarkable.

With Grace, customers have two options, both dubbed superchips by Nvidia. The first is the Grace Hopper Superchip that was formally introduced last year, but only broadly described. It consists of a 72-core CPU and a Hopper H100 GPU tightly connected by Nvidia’s new high-speed NVLink-C2C chip-to-chip interconnect, which has 900GB/s of transfer speed.
Nvidia is known for its GPUs, but it has introduced Spectrum-4, a combination of networking technologies that reinforces its commitment not only to graphics processors, but also to systems designed to handle the demanding network workloads of AI and high-performance computing.

The latest Nvidia Spectrum products rely on the new Spectrum-4 Ethernet-switch ASIC, which boasts 51.2Tb/s of switching and routing capacity. The chip underpins the latest members of the company’s Spectrum switches, which will be available later this year. The switches are part of a larger Spectrum-4 platform that integrates Nvidia’s ConnectX-7 SmartNIC, its new BlueField-3 DPU, and its DOCA software-development platform.
In this Linux tip, we’re going to look at using the date command to run tests. You can always use the date command to see what day it is, but you can also use it in scripts to test what time, day of the month, or month of the year it is (and a lot of other things, too).
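For instance, a short script along these lines (the tasks themselves are placeholders) runs one branch only on Mondays and another only on the first day of the month:

#!/bin/bash
if [ "$(date +%a)" = "Mon" ]; then     # %a prints the abbreviated day name (locale-dependent)
    echo "Running the weekly report"
fi
if [ "$(date +%d)" = "01" ]; then      # %d prints the day of the month, zero-padded
    echo "Running month-end cleanup"
fi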
Exacerbated by the pandemic, the chip shortage neared crisis proportions at the start of the year. Network vendors calculated the impact on their businesses in recent earnings reports: Cisco's current product backlog is at nearly $14 billion, Juniper reported a backlog of $1.8 billion, and Arista said that lead times on sales are 50 to 70 weeks.

Then Russia invaded Ukraine, putting even more stress on the global supply chain. Ukraine manufactures 70% of the world's neon gas, which is needed for the industrial lasers used in semiconductor manufacturing, according to research firm TrendForce.
While the rest of the computing industry struggles to get to one exaflop of computing, Nvidia is about to blow past everyone with an 18-exaflop supercomputer powered by a new GPU architecture.

The H100 GPU has 80 billion transistors (the previous generation, Ampere, had 54 billion), with nearly 5TB/s of external connectivity and support for PCIe Gen5, as well as High Bandwidth Memory 3 (HBM3), enabling 3TB/s of memory bandwidth, the company says. Due out in the third quarter, it’s the first in a new family of GPUs named Hopper, after Rear Admiral Grace Hopper, the computing pioneer who helped develop COBOL and popularized the term computer bug.
Shared object files streamline programs by providing information applications need to do their jobs, but that doesn't have to be part of the application itself. To find out which of these files a Linux command calls on, use the ldd command.

What is a shared object file?
Shared object files (designated as .so) are libraries that are automatically linked into a program when the program starts, yet exist as standalone files. They contain information that can be used by one or more programs to offload resources, so that any program calling a .so file doesn't itself have to provide all the needed tools. These files can be linked to any program and be loaded anywhere in memory.
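For example, running ldd against a common binary lists each shared object it needs and where the loader resolves it. The exact libraries and load addresses will differ from system to system; the output looks roughly like this:

$ ldd /bin/ls
        linux-vdso.so.1 (0x00007ffc...)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f...)
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)
        /lib64/ld-linux-x86-64.so.2 (0x00007f...)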
When you think about the metaverse and the enterprise, do you think about millions of workers buzzing about in a virtual world to do their work? Maybe employees picking Star Wars characters as avatars and fighting with light sabers? CEOs likely blanch at that image; to most, virtual workers implies virtual work, and it’s hard to say how that generates real sales and products. Fortunately, there’s an alternative that depends not on enterprises using the metaverse but on riding its coattails.

If you ask enterprises what they think the next frontier in cloud computing is, the responses are mixed between “the edge” and “IoT”, and of course the latter is really an example of an edge application. Well, that frontier may be delayed, because service providers would have to make a significant investment in infrastructure just to create an edge/IoT option for enterprises, and most enterprises aren’t willing to start planning for that next frontier until services are available. With buyers waiting for services and sellers wanting proven demand, we could be in for an era of false starts, edge-wise.
AMD is adding four new processor SKUs, formerly codenamed Milan-X, to its EPYC lineup of high-end chips, building additional L3 cache capability onto the existing EPYC series.

The key new feature of the new 7773X, 7573X, 7473X, and 7373X chips, which were initially announced in a roadmap made public late last year, is in their physical construction — AMD refers to the new technique as 3D V-Cache. Where most processors are constructed with a single piece of silicon inside, the new AMD chips mount a second microprocessor die atop the first one, which allows for a larger L3 cache.

IDC's research vice president for computing semiconductors, Shane Rau, said that this is an important feature for the very high-end applications that AMD is targeting with the EPYC series, which AMD groups under the rubric of "technical computing" — highly demanding enterprise workloads like modeling and visualization, as well as academic and scientific applications.
The fail2ban tool in Linux monitors system logs for signs of attacks, putting offending systems into what is called "jail" and modifying firewall settings. It shows what systems are in jail at any given time, and requires root access to configure and view findings. It's generally used on Linux servers.

fail2ban primarily focuses on SSH attacks, but can be configured to look for other kinds of attacks as well.

How to install fail2ban on Fedora 34
To prepare for installing fail2ban, it's a good idea to update the system first:

$ sudo dnf update && sudo dnf upgrade -y
Then install fail2ban and verify its presence on your system with commands like these:
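A typical sequence, as a sketch (the package and service names below come from the standard Fedora repositories), would look something like this:

$ sudo dnf install -y fail2ban           # install the package
$ sudo systemctl enable --now fail2ban   # start the service and enable it at boot
$ sudo fail2ban-client status            # confirm it's running and list the active jails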
This year, server vendors will begin shifting to a new form of memory, Double Data Rate version 5, or DDR5 for short. With its improved performance, it will be very appealing in certain use cases, like virtualization and artificial intelligence. We'll get to that in a minute.

The DDR spec has been developed by the Joint Electron Device Engineering Council (JEDEC) since 2001, and with each iteration the spec supports faster speeds and lower power draw. This holds true for DDR5.