There is no question that the memory hierarchy in systems is being busted wide open, and that new persistent memory technologies that can be byte addressable like DRAM or block addressable like storage are going to radically change the architecture of machines and the software that runs on them. Picking which memory might go mainstream is another story.
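To make that distinction concrete, here is a minimal Python sketch of the two access models, using an ordinary file as a stand-in for a persistent memory device (the file name and sizes are made up for illustration): byte-addressable memory is touched with load/store semantics through a mapping, while block-addressable storage moves whole blocks through the I/O stack even when only one byte changes.

```python
# Minimal sketch: byte-addressable vs. block-addressable access.
# An ordinary file stands in for a persistent memory device; a real
# byte-addressable part would typically be mapped via a DAX-enabled
# filesystem. POSIX-only (os.pread/os.pwrite).
import mmap
import os

PATH = "pmem_standin.bin"   # hypothetical backing file
SIZE = 4096

# Create and size the backing file.
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

# Byte-addressable, DRAM-like: map the device and touch individual
# bytes with load/store semantics, no I/O calls in the data path.
with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)
    buf[42] = 0xFF            # a single-byte store
    value = buf[42]           # a single-byte load
    buf.flush()               # force the update to the medium
    buf.close()

# Block-addressable, storage-like: read and write whole blocks through
# the I/O stack, even when only one byte is needed.
BLOCK = 512
fd = os.open(PATH, os.O_RDWR)
block = bytearray(os.pread(fd, BLOCK, 0))   # read a full block
block[42] = 0xFF                            # modify one byte in it
os.pwrite(fd, bytes(block), 0)              # write the full block back
os.close(fd)
```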
It has been decades since IBM made its own DRAM, but the company still has a keen interest in doing research and development on core processing and storage technologies and in integrating new devices with its Power-based systems.
To that end, IBM …
IBM Throws Weight Behind Phase Change Memory was written by Timothy Prickett Morgan at The Next Platform.
The ubiquity of the Xeon server has been a boon for datacenters and makers of IT products alike, creating an ever more powerful foundation on which to build compute, storage, and now networking, or a mix of the three, all in the same box. But that universal hardware substrate cuts both ways, and IT vendors have to be clever indeed if they hope to differentiate from their competitors.
So it is with the “Wolfcreek” storage platform from DataDirect Networks, which specializes in high-end storage arrays aimed at HPC, webscale, and high-end enterprise workloads. DDN started unveiling the Wolfcreek system last June …
Scaling All Flash Arrays Up And Out was written by Timothy Prickett Morgan at The Next Platform.
Over the last year, we have focused on the role burst buffer technology might play in bolstering the I/O capabilities of some of the world’s largest machines, exploring use cases ranging from the initial target to more application-centric goals.
As we have described in discussions with the initial creator of the concept, Los Alamos National Lab’s Gary Grider, the starting point for the technology was speeding up checkpoint and restart (detailed description of how this works here). However, as the concept developed over the years, some large supercomputing sites, including the National …
First Burst Buffer Use at Scale Bolsters Application Performance was written by Nicole Hemsoth at The Next Platform.
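For readers new to the concept, a minimal sketch of the checkpoint pattern the excerpt above describes may help: the application dumps its state to a fast flash tier and resumes computing while the checkpoint drains to the parallel file system in the background. The paths and state contents below are hypothetical placeholders, not any site's actual implementation.

```python
# Minimal sketch of two-tier checkpointing through a burst buffer:
# a fast blocking write to the flash tier, then an asynchronous drain
# to the parallel file system. Paths are made-up stand-ins.
import pickle
import shutil
import threading
from pathlib import Path

BURST_BUFFER = Path("/tmp/burst_buffer")   # stand-in for the flash tier
PARALLEL_FS = Path("/tmp/parallel_fs")     # stand-in for the disk tier
BURST_BUFFER.mkdir(parents=True, exist_ok=True)
PARALLEL_FS.mkdir(parents=True, exist_ok=True)

def checkpoint(step: int, state: dict) -> threading.Thread:
    """Write state to the fast tier, then drain to disk asynchronously."""
    fast_copy = BURST_BUFFER / f"ckpt_{step:06d}.pkl"
    with fast_copy.open("wb") as f:
        pickle.dump(state, f)              # blocking, but against flash

    def drain() -> None:
        shutil.copy2(fast_copy, PARALLEL_FS / fast_copy.name)
        fast_copy.unlink()                 # free burst buffer space

    t = threading.Thread(target=drain, daemon=True)
    t.start()                              # compute resumes immediately
    return t

# Usage: only the fast write sits on the application's critical path.
state = {"iteration": 0, "temperature": [300.0] * 1024}
drainer = checkpoint(0, state)
drainer.join()                             # in practice, overlap with compute
```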
If you are trying to figure out what impact the new “Pascal” family of GPUs is going to have on the business at Nvidia, just take a gander at the recent financial results for the datacenter division of the company. If Nvidia had not spent the better part of a decade building its Tesla compute business, it would be a little smaller and quite a bit less profitable.
In the company’s first quarter of fiscal 2017, which ended on May 1, Nvidia posted sales of $1.31 billion, up 13 percent from the year ago period, and net income hit $196 …
Tesla Pushes Nvidia Deeper Into The Datacenter was written by Timothy Prickett Morgan at The Next Platform.
The running joke is that when a headline begs a question, the answer is, quite simply, “No.” However, when the question is multi-layered, fraught with dependencies that stretch across an entire supply chain, user bases, and device range, and across companies in the throes of their own economic and production uncertainties, a much more nuanced answer is required.
Although Moore’s Law is not technically dead yet, organizations from the IEEE to individual device makers are already thinking their way out of a box that has held the semiconductor industry neatly for decades. However, it turns out that thought process is …
Can Open Source Hardware Crack Semiconductor Industry Economics? was written by Nicole Hemsoth at The Next Platform.
Over the past few years, IBM has been devoting a great deal of corporate energy to developing Watson, the company’s Jeopardy-beating supercomputing platform. Watson represents a larger focus at IBM that integrates machine learning and data analytics technologies to bring cognitive computing capabilities to its customers.
To find out how the company perceives its own invention, we asked Dr. Alessandro Curioni to characterize Watson and how it has evolved into new application domains. Curioni will be speaking on the subject at the upcoming ISC High Performance conference. He is an IBM Fellow, Vice President Europe and …
IBM Research Lead Charts Scope of Watson AI Effort was written by Nicole Hemsoth at The Next Platform.
Wheat has been an important part of the human diet for the past 9,000 years or so, and it can comprise 40 to 50 percent of the diet in certain regions today.
But there is a problem. Pathogens and a changing climate are adversely affecting wheat yields just as Earth’s population is growing, and The Genome Analysis Centre (TGAC) is front and center in sequencing and assembling the wheat genome, a multi-year effort that is going to be substantially accelerated by new shared memory hardware and updated software.
With the world’s population expected to hit 10 billion …
Shared Memory Pushes Wheat Genomics To Boost Crop Yields was written by Timothy Prickett Morgan at The Next Platform.
We have been convinced for many years that machine learning, the kind of artificial intelligence that actually works in practice, not in theory, would be a key element of the next platform. In fact, it might be the most important part of the stack. And therefore, those who control how we deploy machine learning will, to a large extent, control the nature of future applications and the systems that run them.
Machine learning is the killer app for the hyperscalers, just like modeling and simulation were for supercomputing centers decades ago, and we believe we are only seeing the tip …
Facebook Flow Is An AI Factory Of The Future was written by Timothy Prickett Morgan at The Next Platform.
The strong interest in deep learning lies in the ability of neural networks to solve complex pattern recognition tasks – sometimes better than humans. Once trained, these machine learning solutions can run very quickly – even in real time – and very efficiently on low-power mobile devices and in the datacenter.
However, training a machine learning algorithm to accurately solve complex problems requires large amounts of data, which greatly increases the computational workload. Scalable distributed parallel computing using a high-performance communications fabric is an essential part of what makes the training of deep learning on large complex datasets …
Intel Stretches Deep Learning on Scalable System Framework was written by Nicole Hemsoth at The Next Platform.
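To illustrate why the communications fabric matters, here is a minimal numpy sketch of data-parallel training, simulated in a single process: each worker computes a gradient on its own shard of the data, and an allreduce-style averaging step, the traffic the fabric must carry every iteration, keeps the model replicas in sync. The model, data, and worker count are toy assumptions; a real system would use MPI or a similar transport rather than a Python loop.

```python
# Single-process stand-in for data-parallel training across a cluster.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4096, 8))                    # toy dataset
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=4096)

N_WORKERS = 4
shards = np.array_split(np.arange(len(X)), N_WORKERS)
w = np.zeros(8)                                   # replicated model weights

for step in range(100):
    # Each worker computes a local gradient on its shard (in parallel on
    # a real cluster; sequentially here).
    local_grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        local_grads.append(X[idx].T @ err / len(idx))

    # Allreduce: average the gradients across workers over the fabric.
    grad = np.mean(local_grads, axis=0)

    # Every worker applies the identical update, keeping replicas in sync.
    w -= 0.1 * grad

print("final loss:", np.mean((X @ w - y) ** 2))
```

The averaging step is the part that scales with the fabric, not the compute: every iteration, each worker must exchange a gradient the size of the model, which is why interconnect bandwidth and latency gate training throughput as node counts grow.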
There are a lot of moving parts in a modern platform, and in this regard, they are no different from the platforms made a generation earlier. But a modern platform has a lot more automation and is handling more dynamic workloads that are popping into and out of existence on different parts of a cluster like quantum particles, and it takes a higher level of sophistication to monitor and manage the stack and the apps running on it.
Frustration with existing open source monitoring tools like Nagios and Ganglia is why the hyperscaler giants created their own tools – Google …
Google And Friends Add Prometheus To Kubernetes Platform was written by Timothy Prickett Morgan at The Next Platform.
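A minimal sketch of what makes Prometheus a good fit for such dynamic workloads: applications simply expose their metrics over HTTP, and the Prometheus server scrapes whatever targets service discovery (Kubernetes, for instance) currently reports, so instances can appear and vanish without reconfiguring a central collector. The example below uses the real prometheus_client Python library, but the metric names and port are made-up assumptions.

```python
# Minimal Prometheus pull-model example: the app exposes metrics over
# HTTP and the server scrapes them on its own schedule.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
IN_FLIGHT = Gauge("app_in_flight_requests", "Requests currently in flight")

if __name__ == "__main__":
    start_http_server(8000)        # metrics now served at :8000/metrics
    while True:                    # simulate a workload
        with IN_FLIGHT.track_inprogress():
            REQUESTS.inc()
            time.sleep(random.random() * 0.1)
```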
Although the future of exascale computing might be garnering the most headlines in high performance computing, one of the most important stories unfolding in the supercomputing space, at least from a system design angle, is the merging of compute-intensive and data-intensive machines.
In many ways, merging the compute horsepower of today’s top systems with data-intensive support in terms of data movement, storage, and software is directly at odds with current visions of exascale supercomputers. Hence two camps appear to be forming among the Top 500-class centers: one that argues strongly in favor of …
Next Generation Supercomputing Strikes Data, Compute Balance was written by Nicole Hemsoth at The Next Platform.
Several decades ago, Gordon Moore made it far simpler to create technology roadmaps along the lines of processor capabilities, but as his namesake law begins to slow, the IEEE is stepping in to create a new, albeit more diverse, roadmap for future systems.
The organization has launched a new effort to identify and trace the course of what follows Moore’s Law with the International Roadmap for Devices and Systems (IRDS), which will take a workload-focused view of the mixed landscape and the systems that will be required. In other words, instead of pegging a …
More than Moore: IEEE Set to Standardize on Uncertainty was written by Nicole Hemsoth at The Next Platform.
Storage giant EMC, soon to be part of the Dell Technologies conglomerate, declared that this would be the year of all flash for the company when it launched its DSSD D5 arrays back in February. It was not kidding, and as a surprise at this week’s EMC World 2016 conference, the company gave a sneak peek at a future all-flash version of its Isilon storage arrays, which are also aimed at high performance jobs but which are designed to scale capacity well beyond that of the DSSD.
The DSSD D5 is an impressive beast, packing 100 TB of usable …
EMC Shoots For Explosive Performance With Isilon Nitro was written by Timothy Prickett Morgan at The Next Platform.
IT managers at the world’s largest organizations have a lot of reasons to envy hyperscalers, including the fact that they seem to be flush with cash and it looks like they can buy or build just about anything their hearts desire.
While hyperscalers have to cope with scale issues, they do not have as much complexity, so they can pick a technology and run with it. Enterprises, on the other hand, are merging and acquiring all the time and have lots of silos of existing applications that cannot be thrown away.
The need to support existing as well as new …
Cloud Foundry Is Crossing The Chasm was written by Timothy Prickett Morgan at The Next Platform.
The personal computer has been the driver of innovation in the IT sector in a lot of ways for the past three and a half decades. But perhaps one of the most important aspects of the PC business is that it gave chip maker Intel a means of perfecting each successive manufacturing technology at high volume before moving it over to more complex server processors, which would otherwise have lower yields and be more costly if they were the only chips Intel made with each process.
That PC volume is what gave Intel datacenter prowess, in essence, and it is …
The Long Future Ahead For Intel Xeon Processors was written by Timothy Prickett Morgan at The Next Platform.
While innovators in the HPC and hyperscale arenas usually have the talent, and often the desire, to get into the code for the tools they use to create their infrastructure, most enterprises want their software with a bit more fit and finish. If they can get it so it is easy to operate and yet still in some ways open, they are willing to pay a decent amount of cash for commercial-grade support.
OpenStack has pretty much vanquished Eucalyptus, CloudStack, and a few other open source alternatives from the corporate datacenter, and it is giving …
Mashing Up OpenStack With Hyperconverged Storage was written by Timothy Prickett Morgan at The Next Platform.
For storage at scale, particularly for large scientific computing centers, burst buffers have become a hot topic for both checkpoint and application performance reasons. Major vendors in high performance computing have climbed on board and we will be seeing a new crop of big machines featuring burst buffers this year to complement the few that have already emerged.
The what, why, and how of burst buffers can be found in our interview with the inventor of the concept, Gary Grider at Los Alamos National Lab. But for centers that have already gotten the message and are looking for the …
Storage Performance Models Highlight Burst Buffers at Scale was written by Nicole Hemsoth at The Next Platform.
One of the breakthrough moments in computing, which was compelled by necessity, was the advent of symmetric multiprocessor, or SMP, clustering to make two or more processors look and act, as far as the operating system and applications were concerned, as a single, more capacious processor. With NVLink clustering for GPUs and for lashing GPUs to CPUs, Nvidia is bringing something as transformative as SMP was for CPUs to GPU accelerators.
The NVLink interconnect has been in development for years, and is one of the “five miracles” that Nvidia CEO and co-founder Jen-Hsun Huang said at the GPU Technology Conference …
NVLink Takes GPU Acceleration To The Next Level was written by Timothy Prickett Morgan at The Next Platform.
It is almost without question that search engine giant Google has the most sophisticated and scalable data analytics platform on the planet. The company has been on the leading edge of analytics and the infrastructure that supports it for a decade and a half and through its various services it has an enormous amount of data on which to chew and draw inferences to drive its businesses.
In the wake of the launch of Amazon Web Services a decade ago, Google came to the conclusion that what end users really needed was services to store and process data, not access …
Google Pits Dataflow Against Spark was written by Timothy Prickett Morgan at The Next Platform.
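For a sense of the programming model being pitted against Spark, here is a minimal sketch written against the Apache Beam Python SDK, the open source project that grew out of Google's Dataflow SDK: a single pipeline description of transforms that a runner can execute in batch or streaming mode. The input values and transform labels are made up for illustration.

```python
# Minimal Beam pipeline: one declarative graph of transforms that the
# chosen runner (local, Dataflow, etc.) decides how to execute.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.Create(["3 ms", "12 ms", "7 ms", "41 ms"])
        | "Parse" >> beam.Map(lambda s: int(s.split()[0]))
        | "KeepSlow" >> beam.Filter(lambda ms: ms > 5)
        | "Sum" >> beam.CombineGlobally(sum)
        | "Print" >> beam.Map(print)
    )
```

The design point worth noting is that the pipeline says nothing about machines or scheduling; the same graph runs on a laptop or on the Dataflow service, which is precisely the separation of model from execution that the article's comparison with Spark turns on.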
Letting go of infrastructure is hard, but once you do – or perhaps more precisely, once you can – picking it back up again is a whole lot harder.
This is perhaps going to be the long-term lesson that cloud computing teaches the information technology industry as it moves back in time to data processing, as it used to be called, and back towards a rental model that got IBM sued by the US government in the nascent days of computing and compelled changes in Big Blue’s behavior that made it possible for others to create and sell systems against …
How Big Is The Ecosystem Growing On Clouds? was written by Timothy Prickett Morgan at The Next Platform.