Category Archives for "The Next Platform"

Boosting Memory Capacity And Performance While Saving Megawatts

Antonio Peña, senior researcher at the Barcelona Supercomputing Center, and his team in Spain have demonstrated how large data centers can – without code modification – increase application performance while saving megawatts of power per day, and also run 100X to 10,000X larger AI inference jobs that can handle encrypted data.

Boosting Memory Capacity And Performance While Saving Megawatts was written by Rob Farber at The Next Platform.

One Way To Bring DPU Acceleration To Supercomputing

That is not a typo in the title. We did not mean to say GPU in the title above, nor were we making a joke that in hybrid CPU-GPU supercomputers these days, the CPU is more of a serial processing accelerator with a giant, slow DDR4 cache for the GPUs – which would make the CPU a kind of accelerator for the GPU.

One Way To Bring DPU Acceleration To Supercomputing was written by Timothy Prickett Morgan at The Next Platform.

Lenovo Spreads The AI Message Far And Wide

Artificial intelligence and machine learning are foundational to many of the modernization efforts that enterprises are embracing, from more quickly analyzing the mountains of data they are generating and automating operational processes to running the advanced applications – like natural language processing, speech and image recognition, and machine vision – needed by a broad array of industries, from financial services and agriculture to healthcare and automotive.

Lenovo Spreads The AI Message Far And Wide was written by Jeffrey Burt at The Next Platform.

Broadcom Widens And Smartens Switch Chip Lineup

Cisco Systems may still be the biggest supplier of switches and routers in general, but it has long since been surpassed by Broadcom when it comes to supplying the silicon that does the switching itself – and sometimes even a little bit of routing – in the datacenter in particular.

Broadcom Widens And Smartens Switch Chip Lineup was written by Timothy Prickett Morgan at The Next Platform.

The Redemption Of AMD In HPC

Many of the technologists at AMD who are driving the Epyc CPU and Instinct GPU roadmaps, as well as the $35 billion acquisition of FPGA maker Xilinx, have long and deep experience in the high performance computing market, which is characterized by the old-school definition of simulation and modeling workloads running on federated or clustered systems.

The Redemption Of AMD In HPC was written by Timothy Prickett Morgan at The Next Platform.

Injecting Machine Learning And Bayesian Optimization Into HPC

No matter what kind of traditional HPC simulation and modeling system you have, no matter what kind of fancy new machine learning AI system you have, IBM has an appliance that it wants to sell you to help make these systems work better – and work better together if you are mixing HPC and AI.

Injecting Machine Learning And Bayesian Optimization Into HPC was written by Timothy Prickett Morgan at The Next Platform.

Pure Expands Its As-A-Service Playbook

The push by established datacenter tech vendors to get into the as-a-service game has accelerated in recent months, fueled in part by the COVID-19 pandemic and the need for organizations to more quickly embrace cloud services to adapt to a suddenly shifted business model featuring a more widely distributed workforce – one that brings a truckload of security and management issues.

Pure Expands Its As-A-Service Playbook was written by Jeffrey Burt at The Next Platform.

Gordon Bell Prize Winners Leverage Machine Learning For Molecular Dynamics

For more than three decades, researchers have used a particular simulation method for molecular dynamics called ab initio molecular dynamics, or AIMD, which has proven itself to be the most accurate method for analyzing how atoms and molecules move and interact over a fixed time period.

Gordon Bell Prize Winners Leverage Machine Learning For Molecular Dynamics was written by Jeffrey Burt at The Next Platform.

Taking Kubernetes Up To The Next Level

From the time Kubernetes was created in the labs at Google by engineers Joe Beda, Brendan Burns, and Craig McLuckie and then contributed to the open source community, it has become the de facto orchestration platform for containers, enabling easier development, scaling, and movement of modern applications between on-premises datacenters and the cloud, and between the multiple clouds – public and private – that enterprises are embracing.

Taking Kubernetes Up To The Next Level was written by Jeffrey Burt at The Next Platform.

InfiniBand Is Still Setting The Network Pace For HPC And AI

If this is the middle of November, even during a global pandemic, this must be the SC20 supercomputing conference, and there must either be a speed bump being previewed for the InfiniBand interconnect commonly used for HPC and AI, or one actually shipping in systems.

InfiniBand Is Still Setting The Network Pace For HPC And AI was written by Timothy Prickett Morgan at The Next Platform.
