Object storage may not have been born in the cloud, but the major public cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform have been its biggest drivers.
The idea of object storage wasn’t new; it had been around for about two decades. But as the cloud service providers began building out their datacenters and platforms more than a decade ago, they were faced with the need to find a storage architecture that could scale to meet the demands brought on by the massive amounts of data being created, as well as the …
Bridging Object Storage And NAS In The Enterprise was written by Jeffrey Burt at The Next Platform.
One of the most interesting and strategically located datacenters in the world has taken a shine to HPC, and not just because it is a great business opportunity. Rather, Verne Global is firing up an HPC system rental service in its Icelandic datacenter because its commercial customers are looking for supercomputer-style systems that they can rent rather than buy to augment their existing HPC capacity.
Verne Global, which took over a former NATO airbase and an Allied strategic forces command center outside of Keflavik, Iceland back in 2012 and converted it into a super-secure datacenter, is this week taking the …
Renting The Cleanest HPC On Earth was written by Timothy Prickett Morgan at The Next Platform.
Qualcomm launched its Centriq server system-on-chip (SoC) a few weeks ago. The event filled in Centriq’s tech specs and pricing, and disclosed a wide range of ecosystem partners and customers. I wrote about Samsung’s process and customer testimonials for Centriq elsewhere.
Although Qualcomm was launching its Centriq 2400 processor, instead of focusing on a bunch of reference-design-driven hardware partners, it chose to center the launch event on ecosystem development, with a strong emphasis on software workloads and partnerships. Because so much of today’s cloud workload mix is based on runtime environments – using containers, interpretive languages, …
Deep Dive Into Qualcomm’s Centriq Arm Server Ecosystem was written by Timothy Prickett Morgan at The Next Platform.
VMware jumped into the burgeoning software-defined networking (SDN) field in a big way four years ago when it bought startup Nicira for $1.26 billion, a deal that led to the launch of VMware’s NSX offering a year later. NSX put the company on a collision course with other networking vendors, particularly Cisco Systems, all of whom were trying to plot their strategies to deal with the rapid changes in what had been a relatively staid part of the industry.
Many of these vendors had made their billions over the years selling expensive appliance-style boxes filled with proprietary technologies, and now faced …
VMware Tweaks NSX Virtual Networks For Containers, Microservices was written by Jeffrey Burt at The Next Platform.
The server race is really afoot now that IBM has finally gotten off the starting blocks with its first Power9 system, based on its “Nimbus” variant of that processor and turbocharged with the latest “Volta” Tesla GPU accelerators from Nvidia and EDR InfiniBand networks from Mellanox Technologies.
The machine launched today, known variously by the code-names “Witherspoon” or “Newell,” is the building block of the CORAL systems being deployed by the US Department of Energy – “Summit” at Oak Ridge National Laboratory and “Sierra” at Lawrence Livermore National Laboratory. But more importantly, the Witherspoon system represents a new …
Power9 To The People was written by Timothy Prickett Morgan at The Next Platform.
Moving large-scale enterprise operations into the cloud is not a decision to be made lightly. There are engineering and financial considerations, and the process of determining the costs, pros, and cons of such a move is significantly more complex than simply comparing the expense of running a workload on-premises or in a public cloud.
Still, the trend is toward businesses making the move to one degree or another, driven by the ability to easily scale up or down depending on the workload and to pay only for the infrastructure resources they use, without having to put up the capital expense to …
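To see why the simple comparison is such a poor proxy for the real decision, consider a back-of-the-envelope sketch of that naive calculation. Every figure, function name, and cost category below is a hypothetical illustration, not a number from FICO or any cloud provider:

```python
# A minimal, hypothetical sketch of the naive on-premises versus cloud
# comparison; all figures here are illustrative assumptions.

def on_prem_monthly_cost(server_capex=250_000, useful_life_months=48,
                         monthly_power_cooling=1_500, monthly_admin=4_000):
    # Amortize the capital expense and add recurring operating costs.
    return server_capex / useful_life_months + monthly_power_cooling + monthly_admin

def cloud_monthly_cost(hourly_instance_rate=3.06, instances=8, hours_per_month=730):
    # Pay-as-you-go cost for a fleet of instances running around the clock.
    return hourly_instance_rate * instances * hours_per_month

print(f"On-premises: ${on_prem_monthly_cost():,.0f} per month")
print(f"Cloud:       ${cloud_monthly_cost():,.0f} per month")
```

The gap between those two numbers says little on its own: it ignores elasticity when the workload is idle, data egress, migration engineering effort, and staffing changes – exactly the factors that make the real decision more complex.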
FICO CIO on the Costs, Concerns of Cloud Transition was written by Nicole Hemsoth at The Next Platform.
Striking acceptable training times for GPU-accelerated machine learning on very large datasets has long been a challenge, in part because constrained on-board GPU memory leaves limited options.
For those training against massive volumes (in the many millions to billions of examples) using cloud infrastructure, the impetus is greater than ever to pare down training time, given the per-hour instance costs, and to opt for cloud-based GPU acceleration on hardware with more memory (the more expensive Nvidia P100 with 16 GB of memory over a more standard 8 GB GPU instance). Since hardware limitations are not …
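One common workaround when the batch will not fit in on-board GPU memory – not necessarily the approach discussed in the article – is gradient accumulation, which keeps the effective batch size large while holding only a small slice of it on the device at a time. A minimal PyTorch-style sketch, with the model, data loader, and step count as placeholders:

```python
import torch  # PyTorch-style sketch; model, loader, and loss_fn are supplied by the caller

def train_epoch(model, loader, optimizer, loss_fn, accumulation_steps=4):
    # Accumulate gradients over several small batches so the optimizer step
    # behaves as if it had seen one large batch, without holding that large
    # batch in GPU memory all at once.
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        inputs, targets = inputs.cuda(), targets.cuda()
        loss = loss_fn(model(inputs), targets) / accumulation_steps  # scale the loss
        loss.backward()                       # gradients add up across small batches
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()                  # apply the accumulated gradient
            optimizer.zero_grad()             # reset for the next effective batch
```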
Faster Machine Learning in a World with Limited Memory was written by Nicole Hemsoth at The Next Platform.
It has been a long time coming, but hyperconverged storage pioneer Nutanix is finally letting go of hardware, shifting from being a server-storage hybrid appliance maker to a company that sells software that provides hyperconverged functionality on whatever hardware large enterprises typically buy.
The move away from selling appliances was something that The Next Platform has been encouraging Nutanix to do to broaden its market appeal, but until the company reached a certain level of demand from customers, Nutanix had to restrict its hardware support matrix so it could affordably put a server-storage stack in the field and not …
Disaggregated Or Hyperconverged, What Storage Will Win The Enterprise? was written by Timothy Prickett Morgan at The Next Platform.
We’ve all grown up in a world of digital filing cabinets. POSIX I/O has enabled code portability and extraordinary advances in computation, but it is limited by its design and the way it mirrors the paper offices that it has replaced.
The POSIX API and its implementation assume that we know roughly where our data is, that accessing it is reasonably quick, and that all versions of the data are the same. As we move to exascale, we need to let go of this model and embrace a sea of data and a very different way of handling it.
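As a concrete illustration of that model, here is a minimal sketch of the access pattern the POSIX API encodes – name an exact file, seek to a known byte offset, and read synchronously – with the path and offsets as hypothetical placeholders:

```python
import os

# The caller names an exact location and assumes the bytes are close at hand.
fd = os.open("/data/simulation/output.dat", os.O_RDONLY)   # hypothetical path
try:
    os.lseek(fd, 4096, os.SEEK_SET)   # "we know roughly where our data is"
    chunk = os.read(fd, 65536)        # blocking read: access is assumed to be quick
finally:
    os.close(fd)
```

Each of those assumptions – one authoritative location, fast synchronous access, a single consistent version of the bytes – is exactly what becomes hard to honor at exascale.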
In …
When POSIX I/O Meets Exascale, Do the Old Rules Apply? was written by Nicole Hemsoth at The Next Platform.
In many ways, public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform can be the great equalizers, giving enterprises access to computing and storage resources that they may not have the money to bring into their on-premises environments. Given the new compute-intensive workloads like data analytics and machine learning, and the benefits they can bring to modern businesses, this access to cloud-based platforms is increasingly critical to large enterprises.
Cloudera for several years has been pushing its software offerings – such as Data Science Workbench, Analytic DB, Operational DB, and Enterprise Data Hub – …
Cloudera Puffs Up Analytics Database For Clouds was written by Jeffrey Burt at The Next Platform.
The slowdown in server sales ahead of Intel’s July launch of the “Skylake” Xeon SP was real, and if the figures from the third quarter of this year are any guide, then it looks like that slump is over. Plenty of customers wanted the shiny new Skylake gear, and we think a fair number of them also wanted to buy older-generation “Broadwell” Xeons and the “Grantley” server platform given the premium that Intel is charging for Skylake processors and their “Purley” platform.
Server makers with older Broadwell machinery in the barn were no doubt happy to oblige customers and clear …
The Booming Server Market In The Wake Of Skylake was written by Timothy Prickett Morgan at The Next Platform.
Hyperconverged infrastructure is a relatively small but fast-growing part of the datacenter market, driven in large part by enterprises looking to simplify and streamline their environments as they tackle increasingly complex workloads.
Like converged infrastructure, hyperconverged offerings are modular in nature, converging compute, storage, networking, virtualization, and management software into a tightly integrated single solution that drives greater datacenter densities, smaller footprints, rapid deployment, and lower costs. They are pre-built and pre-validated before shipping from the factory, eliminating the need for the user to do the necessary and time-consuming integration. Hyperconverged merges the compute and storage into a single unit, and …
Getting Hyper And Converged About Storage was written by Jeffrey Burt at The Next Platform.
It’d be difficult to downplay the impact Amazon Web Services has had on the computing industry over the past decade. Since launching in 2006, Amazon’s cloud computing division has set the pace in the public cloud market, rapidly growing out its capabilities from the first service it rolled out – Simple Storage Service (S3) – to now offering thousands of services that touch on everything from compute instances to databases, storage, application development and emerging technologies like machine learning and data analytics.
The company has become dominant by offering organizations of all sizes a way of simply accessing …
The Systems of the Future Will Be Conversational was written by Nicole Hemsoth at The Next Platform.
For several years, GPU acceleration matched with Intel Xeon processors was the dominant hardware news at the annual Supercomputing Conference. However, this year that trend shifted in earnest, with a major coming-out party for ARM servers in HPC and more attention than ever paid to FPGAs as potential accelerators for future exascale systems.
The conference held two days of lightning-round presentations on the state of FPGAs for future supercomputers, with insight from academia, vendors, and end users at scale, including Microsoft. To say Microsoft is an FPGA user is a bit of an understatement, however, since …
Reinventing the FPGA Programming Wheel was written by Nicole Hemsoth at The Next Platform.
Generally speaking, the world’s largest chip makers have been pretty secretive about the giant supercomputers they use to design and test their devices, although occasionally, Intel and AMD have provided some insight into their clusters.
We have no idea what kind of resources Nvidia has for its EDA systems – we are trying to get some insight into that – but we do know that it has just upgraded a very powerful supercomputer that advances the state of the art in artificial intelligence and is also doing double duty on some aspects of its chip design business.
As part of …
Inside Nvidia’s Next-Gen Saturn V AI Cluster was written by Timothy Prickett Morgan at The Next Platform.
If thinking of NFS v4 puts a bad taste in your mouth, you are not alone. Or wrong. NFS v4.0 and v4.1 have had some valid, well-documented growing pains that include limited bandwidth and scalability. These problems were the result of a failure to properly address performance in the v4.0 release.
File systems are the framework upon which the entire house is built, so these performance issues were not trivial problems for us in IT. Thanks to the dedication of the NFS developer community, NFS v4.2 solves the problems of v4.0 and v4.1 and also introduces a host of …
The Redemption Of NFS was written by Timothy Prickett Morgan at The Next Platform.
Medical imaging is one area where hospitals have invested significantly in on-premises infrastructure to support diagnostic analysis.
These investments have been stepped up in recent years with ever-more complex frameworks for analyzing scans, but as cloud continues to mature, the build versus buy hardware question gets more complicated. This is especially true with the addition of deep learning for medical images in more hospital settings – something that adds more hardware and software heft to an already top-heavy stack.
Earlier this week, we talked about the medical imaging revolution that is being driven forward by GPU accelerated deep learning, but as it …
Hospitals Untangling Infrastructure from Deep Learning Projects was written by Nicole Hemsoth at The Next Platform.
Tech vendors often like to boast about being first movers in a particular market, saying that leading the charge puts them at a great advantage over their competitors. It doesn’t always work that way, but sometimes it does.
A case in point is Amazon Web Services (AWS), which officially launched in 2006 with the release of the Simple Storage Service (S3) after several years of development and with it kicked off what is now the fast-growing and increasingly crowded public cloud space. Eleven years later, AWS owns just over 44 percent of the market, according to CEO Andy Jassy, pointing …
AWS Flexes Cloud Muscles with Host of New Additions was written by Nicole Hemsoth at The Next Platform.
When it comes to HPC, compute is like the singers in a rock band, making all of the noise and soaking up most of the attention. But the network that lashes the compute together is the beat of the drums and the thump of the bass that keeps everything in sync and allows the harmonies of the singers to come together at all.
In this analogy, it is not clear what HPC storage is. It might be the van that moves the instruments from town to town, plus the roadies who live in the van that set up …
The Battle Of The InfiniBands was written by Timothy Prickett Morgan at The Next Platform.
The very first Superdome Flex shared memory system has gone out the door at Hewlett Packard Enterprise, and it is going to an HPC center that has big memory needs as it tries to understand the universe that we inhabit.
Earlier this month, Hewlett Packard Enterprise unveiled the newest addition to its NUMA iron, the Superdome Flex, which is an upgrade from the SGI UV 300 platform that HPE sought when it bought SGI for $275 million last year. As we outlined in The Next Platform at the time, the system can scale from four to 32 sockets, is powered by …
Cramming The Cosmos Into A Shared Memory Supercomputer was written by Jeffrey Burt at The Next Platform.