Intel kicked off the Supercomputing 2023 conference with a series of high performance computing (HPC) announcements, including a new Xeon line and Gaudi AI processor.

Intel will ship its fifth-generation Xeon Scalable Processor, codenamed Emerald Rapids, to OEM partners on December 14. Emerald Rapids features a maximum core count of 64 cores, up slightly from the 56-core fourth-gen Xeon.

In addition to more cores, Emerald Rapids will feature higher frequencies, hardware acceleration for FP16, and support for 12 memory channels, including the new Intel-developed MCR memory that is considerably faster than standard DDR5 memory.

According to benchmarks that Intel provided, the top-of-the-line Emerald Rapids outperformed the top-of-the-line fourth-gen CPU with a 1.4x gain in AI speech recognition and a 1.2x gain in the FFMPEG media transcode workload. All in all, Intel claims a 2x to 3x improvement in AI workloads, a 2.8x boost in memory throughput, and a 2.9x improvement in the DeepMD+LAMMPS AI inference workload.
To get started as a Linux (or Unix) user, you need to have a good perspective on how Linux works and a handle on some of the most basic commands. This first post in a “getting started” series examines some of the first commands you need to be ready to use.

On logging in
When you first log into a Linux system and open a terminal window, or log into a Linux system from another system using a tool like PuTTY, you’ll find yourself sitting in your home directory. Some of the commands you will probably want to try first include these:

pwd -- shows you where you are in the file system right now (stands for “present working directory”)
whoami -- confirms the account you just logged into
date -- shows the current date and time
hostname -- displays the system’s name
Using the whoami command immediately after logging in might generate a “duh!” response, since you just entered your assigned username and password. But once you find yourself using more than one account, it’s always helpful to know a command that will remind you which one you’re using at the moment.
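For example, a first session might look something like the following (a sketch only; the username, hostname, and path shown here are placeholders, and your output will differ):

$ pwd
/home/jdoe
$ whoami
jdoe
$ date
Mon Nov 20 09:15:42 EST 2023
$ hostname
fedora.example.com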
Nvidia has announced a new AI computing platform called Nvidia HGX H200, a turbocharged version of the company’s Nvidia Hopper architecture powered by its latest GPU offering, the Nvidia H200 Tensor Core GPU.

The company also is teaming up with HPE to offer a supercomputing system, built on the Nvidia Grace Hopper GH200 Superchips, specifically designed for generative AI training.

A surge in enterprise interest in AI has fueled demand for Nvidia GPUs to handle generative AI and high-performance computing workloads. Its latest GPU, the Nvidia H200, is the first to offer HBM3e, high-bandwidth memory that is 50% faster than current HBM3, allowing for the delivery of 141GB of memory at 4.8 terabytes per second and providing double the capacity and 2.4 times more bandwidth than its predecessor, the Nvidia A100.
LiquidStack, one of the first major players in the immersion cooling business, has entered the single-phase liquid cooling market with an expansion of its DataTank product portfolio.

Immersion cooling is the process of dunking the motherboard in a nonconductive liquid to cool it. It's primarily centered around the CPU but, in this case, involves the entire motherboard, including the memory and other chips.

Immersion cooling has been around for a while but has been something of a fringe technology. With server technology growing hotter and denser, immersion has begun to creep into the mainstream.
Frontier maintained its top spot in the latest edition of the TOP500 for the fourth consecutive time and is still the only exascale machine on the list of the world's most powerful supercomputers. Newcomer Aurora debuted at No. 2 in the ranking, and it’s expected to surpass Frontier once the system is fully built.

Frontier, housed at the Oak Ridge National Laboratory (ORNL) in Tenn., landed the top spot with an HPL score of 1.194 quintillion floating point operations per second (FLOPS), which is the same score from earlier this year. A quintillion is 10^18, or one exaFLOPS (EFLOPS). The speed measurement used in evaluating the computers is the High Performance Linpack (HPL) benchmark, which measures how well systems solve a dense system of linear equations.
When evaluating the design of your backup systems, or designing a new backup and recovery system, there are arguably only two metrics that matter: how fast you can recover, and how much data you will lose when you recover. If you build your design around the agreed-upon numbers for these two metrics, and then repeatedly test that you are able to meet those metrics in a recovery, you’ll be in good shape.

The problem is that few people know what these metrics are for their organization. This isn’t a matter of ignorance, though. They don’t know what they are because no one ever created the metrics in the first place. And if you don’t have agreed-upon metrics (also known as service levels), every recovery will be a failure, because it will be judged against the unrealistic metrics in everyone’s heads. With the exception of those who are intimately familiar with the backup and disaster recovery system, most people have no idea how long recoveries actually take.
The UK government has revealed technical and funding details for what will be one of the world’s fastest AI supercomputers, to be housed at the University of Bristol — and one of three new supercomputers slated to go online in the country over the next few years.

Dubbed Isambard-AI, the new machine, first announced in September, will be built with HPE’s Cray EX supercomputers and powered by 5,448 NVIDIA GH200 Grace Hopper Superchips. The chips, which were launched by Nvidia earlier this year, provide three times as much memory as the chipmaker’s current edge AI GPU, the H100, and 21 exaflops of AI performance.
In this video transcript, Sandra Henry-Stocker discusses how to calculate factorials on a Linux system. She explains that a factorial is the product of a series of numbers, starting with a specified number and decreasing by one until reaching 1. To calculate factorials on Linux, you can use commands like "seq" and "bc." The "seq" command is used to generate a list of sequential numbers, and the "bc" command is used to perform the factorial calculations.
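One common way to combine the two commands (shown here as a minimal sketch of that approach, not an excerpt from the transcript): have seq join the numbers with multiplication signs and let bc evaluate the resulting expression:

$ seq -s '*' 5 | bc      # evaluates 1*2*3*4*5
120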
Linux’s compgen command is not actually a Linux command. In other words, it’s not implemented as an executable file, but is instead a bash builtin. That means that it’s part of the bash executable. So, if you were to type “which compgen”, your shell would run through all of the locations included in your $PATH variable, but it just wouldn’t find it.

$ which compgen
/usr/bin/which: no compgen in (.:/home/shs/.local/bin:/home/shs/bin:/usr/local/bin:
/usr/bin:/usr/local/sbin:/usr/sbin)
Obviously, the which command had no luck in finding it. If, on the other hand, you type “man compgen”, you’ll end up looking at the man page for the bash shell. From that man page, you can scroll down to this explanation if you’re patient enough to look for it.
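One quick way to confirm that compgen lives inside bash (a small illustration added here, not part of the original article) is to ask the shell itself with its type builtin:

$ type compgen
compgen is a shell builtin
$ compgen -b | head -4     # compgen can also list bash's own builtins (exact list varies by bash version)
.
:
[
alias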
In this episode, Sandra Henry-Stocker, author of the "Unix as a Second Language" blog on NetworkWorld, explores the use of the "nohup" (no hangup) command in Linux.
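As a quick illustration of the idea (the script name below is just a placeholder, not one used in the episode), nohup lets a long-running command keep going after you close the terminal or log out, with its output captured in nohup.out by default:

$ nohup ./long_backup.sh &     # keeps running after logout; job number and PID will differ
[1] 12345
nohup: ignoring input and appending output to 'nohup.out'
$ tail -f nohup.out            # check on its output later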
Cisco is teaming with Aviz Networks to deliver an enterprise-grade SONiC offering for large customers interested in deploying the open-source network operating system.

Under the partnership, Cisco’s 8000 series routers will be available with Aviz Networks’ SONiC management software and 24/7 support. The support aspect of the agreement may be the most significant portion of the partnership, as both companies attempt to assuage customers’ anxiety about supporting an open-source NOS.

While SONiC (Software for Open Networking in the Cloud) is starting to attract the attention of some large enterprises, deployments today are still mainly seen in the largest hyperscalers. With this announcement, Cisco and Aviz are making SONiC more viable for smaller cloud providers, service providers, and those very large enterprises that own and operate their own data centers, said Kevin Wollenweber, senior vice president and general manager with Cisco networking, data center and provider connectivity.
The rush to embrace artificial intelligence, particularly generative AI, is going to drive hyperscale data-center providers like Google and Amazon to nearly triple their capacity over the next six years.

That’s the conclusion from Synergy Research Group, which follows the data center market. In a new report, Synergy notes that while there are many exaggerated claims around AI, there is no doubt that generative AI is having a tremendous impact on IT markets.
In this episode, Sandra Henry-Stocker, the author of the "Unix as a Second Language" blog on NetworkWorld, introduces various ways to use the Linux date command. She demonstrates how to display the current day of the week, date, time, and time zone.
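For instance (one possible format string, not necessarily the one used in the episode), a single date call can print all four of those pieces:

$ date +"%A %D %T %Z"      # day of week, date, time, and time zone
Monday 11/20/23 09:15:42 EST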
A few months ago, Arm Holdings introduced the Neoverse Compute Subsystem (CSS), designed to accelerate development of Neoverse-based systems. Now it has launched Arm Total Design, a series of tools and services to help accelerate development of Neoverse CSS designs.

Partners within the Arm Total Design ecosystem gain preferential access to Neoverse CSS, which can enable them to reduce time to market and lower the costs associated with building custom silicon. This ecosystem covers all stages of silicon development. It aims to make specialized solutions based on Arm Neoverse widely available across various infrastructure domains, such as AI, cloud, networking, and edge computing.
Dell Technologies has issued a significant update to its PowerMax operating system, which runs its high-density storage for mission-critical workloads.

The PowerMaxOS 10.1 update is aimed at organizations that want to improve energy efficiency to cut operating costs and lower the environmental impact of their storage infrastructure. Gains in performance, efficiency and cybersecurity are also part of the upgrade.

On the energy-efficiency front, new features include real-time power and environmental monitoring and alerting based on usage. Power for all components in a rack is monitored for voltage, current, and frequency, for example, along with temperature and humidity of the rack.
Linux systems provide numerous ways to work with numbers on the command line – from doing calculations to using commands that generate a range of numbers. This post details some of the more helpful commands and how they work.

The expr command
One of the most commonly used commands for doing calculations on Linux is expr. This command lets you use your terminal window as a calculator and to write scripts that include calculations of various types. Here are some examples:

$ expr 10 + 11 + 12
33
$ expr 99 - 102
-3
$ expr 7 \* 21
147
Notice that the multiplication symbol * in the command above requires a backslash to ensure that the symbol isn’t interpreted as a wildcard.
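As a further illustration (these follow-on examples are added here and are not the article's own), expr also handles integer division and remainders in the same way:

$ expr 100 / 9
11
$ expr 100 % 9
1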
Palo Alto Networks has bolstered its cloud security software with features that help customers quickly spot suspicious behaviors and trace security issues to their source to better protect enterprise software-as-a-service (SaaS) applications.

The vendor has added a variety of new components, under the moniker Darwin, to its core cloud-security package, Prisma Cloud. The core platform already includes application-security features such as access control, advanced threat protection, user-behavior monitoring, and the ability to code security directly into SaaS applications. Managed through a single console, Prisma Cloud also includes firewall as a service, zero-trust network access (ZTNA), a cloud-access security broker (CASB), and a secure web gateway.
Chipmaker Nvidia and the world’s largest contract manufacturer, Foxconn, are partnering to start building AI factories globally, the two companies announced on Tuesday.

AI factories are data centers with infrastructure specially built for processing, refining, and transforming vast amounts of data into valuable AI models and tokens, Nvidia founder and CEO Jensen Huang and Foxconn Chairman and CEO Young Liu said during a fireside chat at Hon Hai Tech Day in Taipei.

“A new type of manufacturing has emerged — the production of intelligence. And the data centers that produce it are AI factories,” Huang said in a statement, adding that the data center infrastructure would include Nvidia’s accelerated computing platform — the latest GH200 Grace Hopper Superchip and Nvidia’s AI Enterprise software.
AI and intelligent application-development trends will impact the enterprise the most in 2024, says research firm Gartner, which unveiled its annual look at the top strategic technology trends that organizations need to prepare for in the coming year.

“A lot of the trends are around AI development, but also in protecting the investment that organizations have already made. For example, they’ve invested in machine learning, natural language. And there's a ramp up in software engineering right now where people are building more things because they have access to that data and the development tools are getting better,” said Chris Howard, distinguished vice president analyst and chief of research, during his presentation of this year's trends list at Gartner’s flagship IT Symposium/Xpo conference in Orlando, Florida.
It’s no secret that data centers are facing power shortage issues, especially in high-density areas. One colocation provider has come up with a unique solution: It’s building small nuclear power plants for itself.

Data center provider Standard Power specializes in high-performance computing, such as blockchain mining and AI workloads. These kinds of workloads demand a lot of compute power, which equals a very large electric bill.

The company was concerned about the ability of local electric providers to deliver the capacity needed for such demanding workloads. So, rather than rely on the local electrical grid, Standard is partnering with NuScale Power Corporation, a maker of small modular nuclear power plants, for its Ohio and Pennsylvania facilities.