Author Archives: Jeffrey Burt
It is a renaissance for companies that sell GPU-dense systems and low-power clusters suited to AI inference workloads, especially as they look to the healthcare market, one that for a while was moving toward putting more compute on the medical devices themselves.
The growth of production deep learning in medical imaging and diagnostics has spurred investments in hospitals and research centers, pushing high performance systems for medicine back to the forefront.
We have written quite a bit about some of the emerging use cases for deep learning in medicine with an eye on the systems angle in particular, and while these …
Deep Learning is the Next Platform for Pathology was written by Jeffrey Burt at The Next Platform.
Networking has always been the laggard in the enterprise datacenter. As servers and then storage appliances became increasingly virtualized and disaggregated over the past 15 years or so, the network stubbornly stuck with the appliance model, closed and proprietary. As other datacenter resources became faster, more agile and easier to manage, many of those efficiencies were hobbled by the network, which could take months to reconfigure and could require new hardware before any significant changes could be made.
However slowly, and thanks largely to the hyperscalers and now telcos and other communications service providers, that has begun to change. The rise of …
Networking With Intent was written by Jeffrey Burt at The Next Platform.
Here at The Next Platform, we’ve touched on the convergence of machine learning, HPC, and enterprise requirements, looking at ways that vendors are trying to reduce the barriers so that enterprises can leverage AI and machine learning to better address the rapid changes brought about by such emerging trends as the cloud, edge computing and mobility.
At the SC17 show in November 2017, Dell EMC unveiled efforts underway to bring AI, machine learning and deep learning into the mainstream, similar to how the company and other vendors in recent years have been working to make it easier for enterprises …
Google’s Vision for Mainstreaming Machine Learning was written by Jeffrey Burt at The Next Platform.
NVM-Express is the latest hot thing in storage, with server and storage array vendors big and small making a mad dash to bring the protocol into their products and get an advantage in what promises to be a fast-growing market.
With the rapid rise in the amount of data being generated and processed, and the growth of such technologies as artificial intelligence and machine learning in managing and processing the data, demand for faster speeds and lower latency in flash and other non-volatile memory will continue to increase in the coming years, and established companies like Dell EMC, NetApp …
A New Architecture For NVM-Express was written by Jeffrey Burt at The Next Platform.
The memory market can be a volatile one, swinging from tight availability and high prices one year to plenty of inventory and falling prices a couple of years later. The fortunes of vendors can similarly swing with the market changes, with Samsung recently displacing Intel at the top of the semiconductor space as a shortage in the market drove up prices and, with them, the company’s revenues.
High performance and high-speed memory is only going to grow in demand in the HPC and supercomputing arena with the rise of technologies like artificial intelligence (AI), machine learning and graphics processing, and …
Samsung Puts the Crunch on Emerging HBM2 Market was written by Jeffrey Burt at The Next Platform.
File systems have never been the flashiest segment of the IT space, which might explain why big shakeups and new entrants into the market don’t draw the attention they could.
Established vendors have rolled out offerings that primarily are based on GPFS or Lustre, and enterprises and HPC organizations have embraced those products. However, changes in the IT landscape in recent years have convinced some companies and vendors to rethink file systems. Such changes as the rise of large-scale analytics and machine learning, the expansion of HPC into more mainstream enterprises and the growth of cloud storage all have brought …
Bringing a New HPC File System to Bear was written by Jeffrey Burt at The Next Platform.
Hyperscalers have billions of users who get access to their services for free, but the funny thing is that these users act like they are paying for it and expect these services to be always available, no excuses.
Organizations and consumers also rely on Facebook, Google, Microsoft, Amazon, Alibaba, Baidu, and Tencent for services that they pay for, too, and they reasonably expect that their data will always be immediately accessible and secure, the services always available, their search returns always popping up milliseconds after their queries are entered, and the recommendations that come to them …
Machine Learning Drives Changing Disaster Recovery At Facebook was written by Jeffrey Burt at The Next Platform.
Every IT organization wants a more scalable, programmable, and adaptable platform with real-time applications that can chew on ever-increasing amounts and types of data. And it would be nice if it could run in the cloud, too.
Because of this, companies no longer think about databases, but rather are building or buying data platforms based on industry-standard technologies and big data tools like NoSQL, unified in a single place. It is a trend that started gaining momentum around 2010 and will accelerate this year, according to Ravi Mayuram, senior vice president of engineering and chief technology officer at …
Everyone Wants A Data Platform, Not A Database was written by Jeffrey Burt at The Next Platform.
The future of IT is in the cloud, but it will be a hybrid cloud. And that means things will by necessity get complicated.
Public clouds from the likes of Amazon Web Services, Microsoft, Google and IBM offer enterprises the ability to access massive, elastic and highly scalable infrastructure environments for many of their workloads without having to pay the cost of bringing those capabilities into their on-premises environments, but there will always be applications that businesses will want to keep behind the firewall for security and efficiency reasons. That reality is driving the demand not only for the …
Microsoft Boosts Azure Storage With Flashy Avere was written by Jeffrey Burt at The Next Platform.
It’s a familiar story arc for open source efforts started by vendors or vendor-led industry consortiums. The initiatives are launched and expanded, but eventually they find their way into independent open source organizations such as the Linux Foundation, where vendor control is lessened, communities are able to grow, and similar projects can cross-pollinate in hopes of driving greater standardization in the industry and adoption within enterprises.
It happened with Xen, the virtualization technology that initially started with XenSource and was bought by Citrix Systems but now is under the auspices of the Linux Foundation. The Linux kernel lives there, too, …
Juniper Flips OpenContrail To The Linux Foundation was written by Jeffrey Burt at The Next Platform.
Artificial intelligence and machine learning, which found solid footing among the hyperscalers and are now expanding into the HPC community, are at the top of the list of new technologies that enterprises want to embrace for all kinds of reasons. But it all boils down to the same problem: sorting through the increasing amounts of data coming into their environments and finding patterns that will help them to run their businesses more efficiently, to make better business decisions, and ultimately to make more money.
Enterprises are increasingly experimenting with the various frameworks and tools that are on the market …
Enterprises Challenged By The Many Guises Of AI was written by Jeffrey Burt at The Next Platform.
NVM-Express holds the promise of accelerating the performance and lowering the latency of flash and other non-volatile storage. Every server and storage vendor we can think of is working to bring NVM-Express into the picture to get the full benefits of flash, but even six years after the first specification for the technology was released, NVM-Express is still very much a work in progress, with capabilities like stretching it over a network still a couple of years away.
Pure Storage launched eight years ago with the idea of selling only all-flash arrays and saw NVM-Express coming many years ago, and …
A Purified Implementation Of NVM-Express Storage was written by Jeffrey Burt at The Next Platform.
Burst buffers are carving out a significant space for themselves in the HPC arena as a way to improve data checkpointing and application performance at a time when traditional storage technologies are struggling to keep up with increasingly large and complex workloads, including traditional simulation and modeling as well as newer ones like data analytics.
The fear has been that storage technologies such as parallel file systems could become the bottleneck that limits performance, and burst buffers have been designed to manage peak I/O situations so that organizations aren’t forced to scale their storage environments to be able to support …
Burst Buffers Blow Through I/O Bottlenecks was written by Jeffrey Burt at The Next Platform.
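The mechanics of the pattern described above are simple enough to sketch: checkpoints land on a fast, node-local tier first, then drain asynchronously to the parallel file system so the application never stalls waiting on the slow tier. Below is a minimal Python sketch of that idea; the paths, the checkpoint helper, and the thread-based drain are all hypothetical stand-ins for illustration, not any vendor’s actual burst-buffer implementation.

```python
# Minimal sketch of the burst-buffer pattern: checkpoints are written to a
# fast tier, then drained to the parallel file system in the background so
# the simulation does not stall on slow storage. Paths are hypothetical.
import shutil
import threading
from pathlib import Path

BURST_BUFFER = Path("/tmp/burst_buffer")   # stand-in for a node-local NVM tier
PARALLEL_FS = Path("/tmp/parallel_fs")     # stand-in for Lustre/GPFS

def checkpoint(step: int, state: bytes) -> Path:
    """Write a checkpoint to the fast tier; returns as soon as it lands."""
    BURST_BUFFER.mkdir(parents=True, exist_ok=True)
    path = BURST_BUFFER / f"ckpt_{step:06d}.bin"
    path.write_bytes(state)
    return path

def drain(path: Path) -> None:
    """Copy a checkpoint down to the parallel file system in the background."""
    PARALLEL_FS.mkdir(parents=True, exist_ok=True)
    shutil.copy2(path, PARALLEL_FS / path.name)
    path.unlink()  # free burst-buffer capacity for the next I/O peak

threads = []
for step in range(3):
    state = bytes(1024)             # placeholder for real simulation state
    ckpt = checkpoint(step, state)  # fast, synchronous write to the buffer
    t = threading.Thread(target=drain, args=(ckpt,))
    t.start()                       # drain overlaps with further compute
    threads.append(t)

for t in threads:
    t.join()
```

In a real deployment the drain would be handled by dedicated burst-buffer software or hardware rather than application threads, but the essential point is the same: compute and slow-tier I/O overlap, so the parallel file system only ever sees a smoothed-out write stream.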
In his keynote at the recent AWS re:Invent conference, Amazon vice president and chief technology officer Werner Vogels said that the cloud had created an “egalitarian” computing environment where everyone has access to the same compute, storage, and analytics, and that the real differentiator for enterprises will be the data they generate, and more importantly, the value the enterprises derive from that data.
For Rob Thomas, general manager of IBM Analytics, data is the focus. The company is putting considerable muscle behind data analytics, machine learning, and what it calls more generally cognitive computing, much of it based …
Put Building Data Culture Ahead Of Buying Data Analytics was written by Jeffrey Burt at The Next Platform.
Object storage may not have been born in the cloud, but it was the major public cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform that have been its biggest drivers.
The idea of object storage wasn’t new; it had been around for about two decades. But as the cloud service providers began building out their datacenters and platforms more than a decade ago, they were faced with the need to find a storage architecture that could scale to meet the demands brought on by the massive amounts of data being created, as well as the …
Bridging Object Storage And NAS In The Enterprise was written by Jeffrey Burt at The Next Platform.
VMware jumped into the burgeoning software-defined networking (SDN) field in a big way four years ago when it bought startup Nicira for $1.26 billion, a deal that led to the launch of VMware’s NSX offering a year later. NSX put the company on a crash course with other networking vendors, particularly Cisco Systems, all of whom were trying to plot their strategies to deal with the rapid changes in what had been a relatively staid part of the industry.
Many of these vendors had made their billions over the years selling expensive appliance-style boxes filled with proprietary technologies, and now faced …
VMware Tweaks NSX Virtual Networks For Containers, Microservices was written by Jeffrey Burt at The Next Platform.
In many ways, public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform can be the great equalizers, giving enterprises access to computing and storage resources that they may not have the money to bring into their on-premises environments. Given the new compute-intensive workloads like data analytics and machine learning, and the benefits they can bring to modern businesses, this access to cloud-based platforms is increasingly critical to large enterprises.
Cloudera for several years has been pushing its software offerings – such as Data Science Workbench, Analytic DB, Operational DB, and Enterprise Data Hub – …
Cloudera Puffs Up Analytics Database For Clouds was written by Jeffrey Burt at The Next Platform.
Hyperconverged infrastructure is a relatively small but fast-growing part of the datacenter market, driven in large part by enterprises looking to simplify and streamline their environments as they tackle increasingly complex workloads.
Like converged infrastructure, hyperconverged offerings are modular in nature, converging compute, storage, networking, virtualization and management software into a tightly integrated single solution that drives greater datacenter densities, smaller footprints, rapid deployment and lower costs. They are pre-built and pre-validated before shipping from the factory, eliminating the need for users to do the time-consuming integration work themselves. Hyperconverged merges the compute and storage into a single unit, and …
Getting Hyper And Converged About Storage was written by Jeffrey Burt at The Next Platform.
The very first Superdome Flex shared memory system has gone out the door at Hewlett Packard Enterprise, and it is going to an HPC center that has big memory needs as it tries to understand the universe that we inhabit.
Earlier this month, Hewlett Packard Enterprise unveiled the newest addition to its NUMA iron, the Superdome Flex, which is an upgrade from the SGI UV 300 platform that HPE acquired when it bought SGI for $275 million last year. As we outlined in The Next Platform at the time, the system can scale from four to 32 sockets, is powered by …
Cramming The Cosmos Into A Shared Memory Supercomputer was written by Jeffrey Burt at The Next Platform.
Building the first exascale systems continues to be a high-profile endeavor, with efforts underway worldwide in the United States, the European Union, and Asia – notably China and Japan – that focus on competition between regional powers, the technologies that are going into the architectures, and the promises that these supercomputers hold for everything from research and government to business and commerce.
The Chinese government is pouring money and resources into its roadmaps for both pre-exascale and exascale systems, Japan is moving forward with Fujitsu’s Post-K system that will use processors based on the Arm architecture rather than the …
Debating The Role Of Commodity Chips In Exascale was written by Jeffrey Burt at The Next Platform.