HP Enterprise says it will deliver a series of servers powered by the Arm-based Altra and Altra Max processors from Ampere, the CPU startup run by former Intel executive Renee James.

Ampere, not to be confused with the Nvidia GPU architecture of the same name, has scored some wins with cloud providers, notably Microsoft Azure and Oracle Cloud Infrastructure, but it had yet to land an OEM partner. Until now.

Starting in Q3 2022, HPE says it will ship ProLiant RL300 Gen11 servers, available both for outright purchase and for leasing through HPE's GreenLake consumption model. HPE says this will be the first in a series of HPE ProLiant RL Gen11 servers using 80-core Altra and 128-core Altra Max processors.
Micron Technology is bucking the trend of moving to PCI Express-based storage, releasing a new SATA III-based SSD with ultradense memory storage and reads optimized for faster data access.

The SATA interface has been around since the beginning of the century, but it has progressed much more slowly than the PCIe interface, with nowhere near the same leaps in performance. Among gamers, who are as obsessed with performance as anyone training AI models, PCIe drives are standard issue, and SATA drives are at best used for bulk storage.

That's because SATA III has a throughput of about 550MB/s, while PCIe 4.0 has more than 10 times the throughput.
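To put that throughput gap in concrete terms, here is a back-of-the-envelope comparison of how long a large sequential transfer takes on each interface. The figures are the ones cited above plus an assumed ~7GB/s for a fast PCIe 4.0 x4 NVMe drive; they are illustrative ceilings, not benchmarks of any specific product.

```python
SATA3_MBPS = 550       # ~550 MB/s practical ceiling for SATA III
PCIE4_X4_MBPS = 7000   # assumed ~7 GB/s for a fast PCIe 4.0 x4 NVMe drive

def transfer_seconds(size_gb: float, throughput_mbps: float) -> float:
    """Seconds to move size_gb gigabytes at a sustained throughput in MB/s."""
    return size_gb * 1000 / throughput_mbps

game_gb = 100  # e.g., a large game install
print(f"SATA III: {transfer_seconds(game_gb, SATA3_MBPS):.0f} s")
print(f"PCIe 4.0: {transfer_seconds(game_gb, PCIE4_X4_MBPS):.0f} s")
```

At these rates a 100GB copy takes roughly three minutes over SATA III versus about 15 seconds over PCIe 4.0, which is why SATA drives now compete on density and cost rather than speed.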
An overwhelming majority of enterprises continue to move workloads from the cloud back to on-premises data centers, although the percentage is smaller than before, according to IDC research.

A survey found that 71% of respondents expect to move all or some of the workloads currently running in public clouds back to private IT environments over the next two years. Only 13% expect to run all their workloads in the cloud, according to the survey, which was sponsored by Supermicro.

In the past, the share of those expecting to move workloads back from the cloud was as high as 85%, according to Natalya Yezhkova, research vice president in IDC's enterprise infrastructure practice.
The latest PCI Express (PCIe) specification again doubles the data rate over the previous spec.

PCI Express 7.0 calls for a data rate of 128 gigatransfers per second (GT/s) and up to 512GB/s bi-directionally via an x16 slot (not every PCI Express slot in a PC or server uses 16 lanes), according to PCI-SIG, the industry group that maintains and develops the specification.
PCI Express 6.0, the slower previous spec, has yet to come to market; doubling the data rate with each version has become the norm.
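The 512GB/s headline number follows directly from the per-lane rate. A quick sanity check, assuming each transfer effectively carries one bit of payload (PCIe 6.0 and 7.0 use PAM4 signaling with flit-based encoding, whose overhead is ignored here):

```python
# Reproducing PCI-SIG's headline bandwidth figure for PCIe 7.0.
GT_PER_S = 128      # gigatransfers per second, per lane, per direction
LANES = 16          # an x16 slot
BITS_PER_TRANSFER = 1  # simplifying assumption; encoding overhead ignored

gbps_per_direction = GT_PER_S * LANES * BITS_PER_TRANSFER  # gigabits/s
gb_per_s_per_direction = gbps_per_direction / 8            # gigabytes/s
bidirectional = gb_per_s_per_direction * 2

print(f"{gb_per_s_per_direction:.0f} GB/s each way, "
      f"{bidirectional:.0f} GB/s bidirectional")
```

That works out to 256GB/s in each direction, or 512GB/s bidirectionally, matching the spec's quoted figure.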
StorONE has introduced what it claims is the first storage platform to enable connectivity between standard mechanical hard disk drives (HDD) and flash drives over NVMe over Fabrics (NVMe-oF) infrastructure, which it says can reduce the cost of an NVMe solution tenfold or more.

Storage arrays have traditionally been separated by drive type: you have all-flash arrays and all-hard-disk arrays, but not a mix of the two. Typical practice is to put "hot" data, or data that is frequently accessed, on the much faster SSDs, and less frequently accessed data on the slower HDDs. That approach requires two or more separate arrays, plus the connection between them.
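The hot/cold placement policy described above can be sketched in a few lines: data accessed more often than some threshold lands on the flash tier, and everything else stays on disk. The names and threshold here are illustrative, not StorONE's actual placement algorithm, which the company has not detailed.

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    name: str
    accesses_per_day: float

HOT_THRESHOLD = 10.0  # accesses/day; tuning this cutoff is the hard part

def place(f: FileStats) -> str:
    """Route frequently accessed data to flash, the rest to spinning disk."""
    return "ssd" if f.accesses_per_day >= HOT_THRESHOLD else "hdd"

workload = [FileStats("db.log", 500), FileStats("backup.tar", 0.1)]
for f in workload:
    print(f.name, "->", place(f))
```

What a unified platform changes is not this policy but where it runs: with both drive types behind one NVMe-oF target, the migration between tiers happens inside a single array rather than across two.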
IT vendors typically race to deliver incremental improvements to existing product lines, but occasionally a truly disruptive technology comes along. One such disruptive technology, which is beginning to find its way into enterprise data centers, is High-Bandwidth Memory (HBM).

HBM is significantly faster than incumbent memory chip technologies, uses less power, and takes up less space. It is becoming particularly popular for resource-intensive applications such as high-performance computing (HPC) and artificial intelligence (AI).
AMD is working on an accelerated processing unit (APU) that will outperform its current top APU, which powers the world's first exascale supercomputer.

At its recent analyst day, the company introduced a new high-end accelerator, the Instinct MI300, an APU that combines Zen 4 CPUs, the latest generation of GPU technology, plus AMD's Infinity Cache and Infinity architecture in one package. It will deliver eight times the AI performance of AMD's current high-end APU, the MI250, and will be available next year.

A pool of high-bandwidth memory on the APU is shared between the CPU and the GPU, allowing them to communicate freely without the performance or energy overhead of redundant memory copies.
Pure Storage announced updates to its AIRI//S line of AI-ready infrastructure, which it co-developed with Nvidia.

The two vendors launched AIRI in 2018, claiming it was the first AI-oriented reference architecture, one that simplified the process of building an AI infrastructure by connecting compute with storage. AIRI is essentially a combination of Pure's scale-out FlashBlade//S and Nvidia's DGX ultra-dense GPU box: Pure provides the storage, and Nvidia provides the compute.

This latest move, unveiled at the Pure//Accelerate techfest22 conference in Los Angeles, is a significant advancement, however. The new release of AIRI//S is powered by Nvidia DGX A100 systems, with end-to-end networking provided by Nvidia's Quantum InfiniBand and Spectrum networking. A DGX A100 system comes with eight Ampere-generation A100 GPUs and up to ten ConnectX-6 network adapters from Mellanox.
Intel has introduced a reference design it says can enable accelerator cards for security workloads including secure access service edge (SASE), IPsec, and SSL/TLS.The upside of the server cards would be offloading some application processing from CPUs, effectively increasing server performance without requiring additional server rack space, according to Intel.
The announcement was made at RSA Conference 2022, and details were published in a blog post by Bob Ghaffardi, Intel vice president and general manager of the Enterprise and Cloud Division.
Ampere Computing introduced the next generation of its Arm-based server processors and said it has begun sampling the chip to select customers.

Former Intel president Renee James launched Ampere in 2018, and the company so far has released two processors aimed at cloud data centers: the 80-core Ampere Altra and the 128-core Ampere Altra Max. Those processors used cores licensed from Arm Holdings. But now, with the new AmpereOne chip, Ampere has created customized versions of the Arm processor cores to better tailor them to customer needs.
Read more: The three-way race for GPU dominance in the data center