Almost two years after the acquisition by Intel, the deep learning chip architecture from startup Nervana Systems will finally move from its “Lake Crest” codename to an actual product.
In that time, Nvidia, which owns the deep learning training market by a long shot, has had time to firm up its commitment to this expanding (if not overhyped in terms of overall industry dollar figures) market with new deep learning-tuned GPUs and appliances on the horizon as well as software tweaks to make training at scale more robust. In other words, even with solid technology at a reasonable …
Intel, Nervana Shed Light on Deep Learning Chip Architecture was written by Nicole Hemsoth at The Next Platform.
The quantum computing competitive landscape continues to heat up in early 2018. But today’s quantum computing landscape looks a lot like the semiconductor landscape 50 years ago.
The silicon-based integrated circuit (IC) entered its “medium-scale” integration phase in 1968. Transistor counts ballooned from ten transistors on a chip to hundreds of transistors on a chip within a few short years. After a while, there were thousands of transistors on a chip, then tens of thousands, and now we have, fifty years later, tens of billions.
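To give a feel for that pace, here is a quick back-of-the-envelope calculation (the round numbers are assumptions for illustration, not figures from the article): going from ten transistors in 1968 to tens of billions in 2018 works out to roughly thirty doublings in fifty years, or one doubling about every twenty months.

```python
import math

# Assumed round numbers for illustration: ~10 transistors per chip in 1968,
# ~20 billion transistors on a big chip in 2018.
start_transistors = 10
end_transistors = 20e9
years = 2018 - 1968

doublings = math.log2(end_transistors / start_transistors)  # ~31 doublings
months_per_doubling = years * 12 / doublings                # ~19 months

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling roughly every {months_per_doubling:.1f} months")
```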
Quantum computing is a practical application of quantum physics using individual subatomic particles chilled to …
Quantum Computing Enters 2018 Like It Is 1968 was written by Timothy Prickett Morgan at The Next Platform.
Hyperscalers have billions of users who get access to their services for free, but the funny thing is that these users act like they are paying for it and expect these services to always be available, no excuses.
Organizations and consumers also rely on Facebook, Google, Microsoft, Amazon, Alibaba, Baidu, and Tencent for services that they pay for, too, and they reasonably expect that their data will always be immediately accessible and secure, the services always available, their search returns always popping up milliseconds after their queries are entered, and the recommendations that come to them …
Machine Learning Drives Changing Disaster Recovery At Facebook was written by Jeffrey Burt at The Next Platform.
The new year in the IT sector got off to a roaring start with the revelation of the Meltdown and Spectre security threats, the latter of which affects most of the processors used in consumer and commercial computing gear made in the last decade or so.
Much has been written about the nature of the Meltdown and Spectre threats, which leverage the speculative execution features of modern processors to give user-level applications access to operating system kernel memory, which is a very big problem. Chip suppliers and operating system and hypervisor makers have known about these exploits since last June, …
The Spectre And Meltdown Server Tax Bill was written by Timothy Prickett Morgan at The Next Platform.
Here at The Next Platform, we tend to keep a close eye on how the major hyperscalers evolve their infrastructure to support massive scale and evermore complex workloads.
Not so long ago the core services were relatively standard transactions and operations, but with the addition of training and inferencing against complex deep learning models—something that requires a two-handed approach to hardware—the hyperscale hardware stack has had to quicken its step to keep pace with the new performance and efficiency demands of machine learning at scale.
While not innovating on the custom hardware side quite the same way as Google, …
Facebook’s Expanding Machine Learning Infrastructure was written by Nicole Hemsoth at The Next Platform.
In its quest to meet the world’s ever-increasing demand for energy, the oil and gas industry has become one of the largest users—and leading innovators—of high-performance computing (HPC). As natural resources deplete, and the cost of accessing them increases, highly sophisticated computational modeling becomes an essential tool in energy exploration and development.
Advanced computational techniques provide a high-fidelity model of the subsurface, which gives oil and gas companies a greater understanding of the geophysics of the region they propose to explore. A clearer picture of the earth enables targeted drilling, reduced acquisition costs, and minimal environmental impact. In an industry …
HPC Optimizes Energy Exploration for Oil and Gas Startups was written by Nicole Hemsoth at The Next Platform.
Every IT organization wants a more scalable, programmable, and adaptable platform with real-time applications that can chew on ever-increasing amounts and types of data. And it would be nice if it could run in the cloud, too.
Because of this, companies no longer think in terms of databases, but rather are building or buying data platforms that combine industry-standard technologies and big data tools like NoSQL, unified in a single place. It is a trend that started gaining momentum around 2010 and will accelerate this year, according to Ravi Mayuram, senior vice president of engineering and chief technology officer at …
Everyone Wants A Data Platform, Not A Database was written by Jeffrey Burt at The Next Platform.
The future of IT is in the cloud, but it will be a hybrid cloud. And that means things will by necessity get complicated.
Public clouds from the likes of Amazon Web Services, Microsoft, Google and IBM offer enterprises the ability to access massive, elastic and highly scalable infrastructure environments for many of their workloads without having to pay the cost of bringing those capabilities into their on-premises environments, but there will always be applications that businesses will want to keep behind the firewall for security and efficiency reasons. That reality is driving the demand not only for the …
Microsoft Boosts Azure Storage With Flashy Avere was written by Jeffrey Burt at The Next Platform.
The differences between peak theoretical computing capacity of a system and the actual performance it delivers can be stark. This is the case with any symmetric or asymmetric processing complex, where the interconnect and the method of dispatching work across the computing elements is crucial, and in modern hybrid systems that might tightly couple CPUs, GPUs, FPGAs, and memory class storage on various interconnects, the links could end up being as important as the compute.
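To make the gap concrete, here is a small sketch with hypothetical numbers (node count, clock speed, flops per cycle, and efficiency are all assumptions for illustration, not figures from IBM's benchmarks) that computes a hybrid system's theoretical double-precision peak and then scales it by the kind of sustained-to-peak ratio a Linpack-style run might actually deliver.

```python
# Hypothetical hybrid node: two CPUs plus four GPU accelerators.
# All figures are assumptions for illustration, not measured numbers.
nodes = 100
cpu_peak_per_node = 2 * 22 * 3.0e9 * 8   # sockets x cores x clock (Hz) x FP64 flops/cycle
gpu_peak_per_node = 4 * 7.0e12           # accelerators x FP64 peak flops each

peak = nodes * (cpu_peak_per_node + gpu_peak_per_node)

# Delivered performance is peak scaled by an efficiency factor that depends
# heavily on the interconnect and how work is dispatched across the devices.
efficiency = 0.65                        # assumed sustained-to-peak ratio
delivered = peak * efficiency

print(f"theoretical peak: {peak / 1e15:.2f} petaflops")
print(f"delivered at {efficiency:.0%} efficiency: {delivered / 1e15:.2f} petaflops")
```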
As we have discussed previously, IBM’s new “Witherspoon” AC922 hybrid system, which was launched recently and which starts shipping next week, is designed from …
NVLink Shines On Power9 For AI And HPC Tests was written by Timothy Prickett Morgan at The Next Platform.
It’s a familiar story arc for open source efforts started by vendors or vendor-led industry consortiums. The initiatives are launched and expanded, but eventually they find their way into independent open source organizations such as the Linux Foundation, where vendor control is lessened, communities are able to grow, and similar projects can cross-pollinate in hopes of driving greater standardization in the industry and adoption within enterprises.
It happened with Xen, the virtualization technology that initially started with XenSource and was bought by Citrix Systems but now is under the auspices of the Linux Foundation. The Linux kernel lives there, too, …
Juniper Flips OpenContrail To The Linux Foundation was written by Jeffrey Burt at The Next Platform.
Artificial intelligence and machine learning, which found solid footing among the hyperscalers and are now expanding into the HPC community, are at the top of the list of new technologies that enterprises want to embrace for all kinds of reasons. But it all boils down to the same problem: Sorting through the increasing amounts of data coming into their environments and finding patterns that will help them to run their businesses more efficiently, to make better business decisions, and ultimately to make more money.
Enterprises are increasingly experimenting with the various frameworks and tools that are on the market …
Enterprises Challenged By The Many Guises Of AI was written by Jeffrey Burt at The Next Platform.
You can’t call them the Super 8 because the discount hotel chain already has that name. But that is what they are – they being Google, Amazon, Microsoft, and Facebook in the United States and Baidu, Alibaba, Tencent, and China Mobile in China. They are the biggest spenders, the hardest negotiators, and the most demanding customers in the IT sector.
Any component supplier that gets them buying their stuff gets kudos for their design wins and is assured, at least for a generation of products, a very steady and large demand, even if they might not bring …
Two Hyperscalers Down For AMD’s Epyc, Six To Go was written by Timothy Prickett Morgan at The Next Platform.
NVM-Express holds the promise of accelerating the performance and lowering the latency of flash and other non-volatile storage. Every server and storage vendor we can think of is working to bring NVM-Express into the picture to get the full benefits of flash, but even six years after the first specification for the technology was released, NVM-Express is still very much a work in progress, with capabilities like stretching it over a network still a couple of years away.
Pure Storage launched eight years ago with the idea of selling only all-flash arrays and saw NVM-Express coming many years ago, and …
A Purified Implementation Of NVM-Express Storage was written by Jeffrey Burt at The Next Platform.
The word has come down from the top: Your company is going blockchain, and you will be implementing it. You have heard the buzz and are aware there is a difference between blockchain – the distributed, peer-to-peer ledger system – and its digital currency cousin, Bitcoin, which has been in the headlines. But how do you build an enterprise-class blockchain?
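Before getting into the enterprise plumbing, it is worth remembering how small the core idea is. The Python sketch below is a minimal illustration of a hash-linked ledger, not any particular enterprise blockchain framework: each block records the hash of its predecessor, so altering any earlier entry breaks every link after it.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,  # the link that turns blocks into a chain
    }

def verify(chain: list) -> bool:
    """Check that every block points at the true hash of the block before it."""
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

# Build a tiny three-block ledger.
genesis = make_block([], prev_hash="0" * 64)
b1 = make_block([{"from": "alice", "to": "bob", "amount": 5}], block_hash(genesis))
b2 = make_block([{"from": "bob", "to": "carol", "amount": 2}], block_hash(b1))
chain = [genesis, b1, b2]

print(verify(chain))                   # True
b1["transactions"][0]["amount"] = 500  # tamper with history
print(verify(chain))                   # False: the chain no longer checks out
```

Everything that makes it an enterprise blockchain is what this sketch leaves out: replicating that chain across mutually distrustful parties, agreeing on which block comes next, and controlling who may read and write it.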
Let’s start with the basic premise, as that will inform the architectural and technical choices you make. Organizations are jumping on the blockchain bandwagon as a means of making transactions that span multiple parties simpler, more efficient and available at …
Building An Enterprise Blockchain was written by Timothy Prickett Morgan at The Next Platform.
Putting more and more cores on a single CPU and then having two CPUs in a standard workhorse server is something that yields the best price/performance for certain kinds of compute-hungry workloads, and these days that is particularly true for those who want top bin Xeon parts and for whom the cost of the processor is no object, because it saves on the total number of server nodes that have to be deployed.
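A back-of-the-envelope example with made-up core counts (purely to illustrate the trade-off, not actual SKUs) shows why some shops will pay up: fatter nodes mean fewer chassis, switch ports, and software licenses to buy and manage.

```python
import math

# Assumed figures for illustration only; real SKUs and core counts vary.
cores_needed = 10_000
top_bin_cores_per_node = 2 * 28  # two top bin sockets per node
mid_bin_cores_per_node = 2 * 16  # two cheaper mid bin sockets per node

top_bin_nodes = math.ceil(cores_needed / top_bin_cores_per_node)
mid_bin_nodes = math.ceil(cores_needed / mid_bin_cores_per_node)

print(f"top bin parts: {top_bin_nodes} nodes")  # 179 nodes
print(f"mid bin parts: {mid_bin_nodes} nodes")  # 313 nodes
```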
But this is not the only way to pack the most compute density into a rack. A case can be made for middle bin parts, particularly for workloads that scale well across many …
Battle For Datacenter Compute: Qualcomm Centriq Versus Intel Xeon was written by Timothy Prickett Morgan at The Next Platform.
One of the more significant efforts in Europe to address the challenges of the convergence of high performance computing (HPC), high performance data analytics (HPDA) and soon artificial intelligence (AI), and ensure that researchers are equipped and familiar with the latest technology, is happening in France at GENCI (Grand équipement national de calcul intensif).
Grand équipement national de calcul intensif (GENCI) is a “civil company” (société civile) under French law and 49% owned by the State, represented by the French Ministry of Higher Education, Research and Innovation (MESRI), 20% by the Commissariat à l’Energie Atomique et aux énergies alternatives ( …
GENCI: Advancing HPC in France and Across Europe was written by Nicole Hemsoth at The Next Platform.
Downtime has been plaguing companies for decades, and the problems have only been exacerbated during the internet era and with the rise of ecommerce and the cloud.
Systems crash, money is lost because no one is buying anything, and more money is spent on the engineers and the time they need to fix the problem and get things back online. In the meantime, enterprises have to deal with frustrated customers and risk losing many of them, who lose trust in the company and opt to move their business elsewhere. For much of that time, the response to system failures has …
Creating Chaos to Save the Datacenter was written by Nicole Hemsoth at The Next Platform.
Burst buffers are carving out a significant space for themselves in the HPC arena as a way to improve data checkpointing and application performance at a time when traditional storage technologies are struggling to keep up with increasingly large and complex workloads, including traditional simulation and modeling and newer things like data analytics.
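In practice the pattern amounts to write-behind staging: the application dumps its checkpoint onto a fast tier and gets back to computing, while the data drains out to the parallel file system at whatever rate that system can sustain. The toy Python sketch below illustrates the idea only; the directories, sizes, and single-node threading are assumptions, and real burst buffers are node-local flash or dedicated appliances with their own management software.

```python
import shutil
import threading
import queue
from pathlib import Path

# Stand-in directories: in a real system the "burst buffer" would be node-local
# NVM or an appliance, and the "parallel_fs" a Lustre or GPFS mount.
BURST_BUFFER = Path("/tmp/burst_buffer")
PARALLEL_FS = Path("/tmp/parallel_fs")
BURST_BUFFER.mkdir(parents=True, exist_ok=True)
PARALLEL_FS.mkdir(parents=True, exist_ok=True)

drain_queue = queue.Queue()

def checkpoint(step: int, data: bytes) -> None:
    """Absorb the bursty checkpoint write on the fast tier and return quickly."""
    path = BURST_BUFFER / f"checkpoint_{step:06d}.dat"
    path.write_bytes(data)
    drain_queue.put(path)  # hand the file off to the background drainer

def drainer() -> None:
    """Trickle staged checkpoints out to the slower shared tier at its own pace."""
    while True:
        path = drain_queue.get()
        shutil.move(str(path), str(PARALLEL_FS / path.name))
        drain_queue.task_done()

threading.Thread(target=drainer, daemon=True).start()

# The simulation keeps computing while earlier checkpoints drain in the background.
for step in range(3):
    checkpoint(step, b"x" * 1024 * 1024)  # stand-in for real checkpoint data

drain_queue.join()  # in a real run the drain overlaps with ongoing compute
```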
The fear has been that storage technologies such as parallel file systems could become the bottleneck that limits performance, and burst buffers have been designed to manage peak I/O situations so that organizations aren’t forced to scale their storage environments to be able to support …
Burst Buffers Blow Through I/O Bottlenecks was written by Jeffrey Burt at The Next Platform.
In his keynote at the recent AWS re:Invent conference, Amazon vice president and chief technology officer Werner Vogels said that the cloud had created an “egalitarian” computing environment where everyone has access to the same compute, storage, and analytics, and that the real differentiator for enterprises will be the data they generate and, more importantly, the value they derive from that data.
For Rob Thomas, general manager of IBM Analytics, data is the focus. The company is putting considerable muscle behind data analytics, machine learning, and what it calls more generally cognitive computing, much of it based …
Put Building Data Culture Ahead Of Buying Data Analytics was written by Jeffrey Burt at The Next Platform.
Kubernetes has quickly become a key technology in the emerging containerized application environment since it was first announced by Google engineers just more than three years ago, catching hold as the primary container orchestration tool used by hyperscalers, HPC organizations and enterprises and overshadowing similar tools like Docker Swarm, Mesos and OpenStack.
Born from earlier internal Google projects Borg and Omega, the open-source Kubernetes has been embraced by top cloud providers and growing numbers of enterprises, and support is growing among datacenter infrastructure software vendors.
Red Hat has built out its OpenShift cloud application platform based on both …
No Slowdown in Sight for Kubernetes was written by Nicole Hemsoth at The Next Platform.