Author Archives: Jeffrey Burt

Samsung Puts the Crunch on Emerging HBM2 Market

The memory market can be a volatile one, swinging from tight availability and high prices one year to plenty of inventory and falling prices a couple of years later. The fortunes of vendors can similarly swing with the market changes, with Samsung recently displacing Intel at the top of the semiconductor space as a shortage in the market drove up prices and, with it, the company’s revenues.

Demand for high-performance, high-speed memory is only going to grow in the HPC and supercomputing arena with the rise of technologies like artificial intelligence (AI), machine learning, and graphics processing, and

Samsung Puts the Crunch on Emerging HBM2 Market was written by Jeffrey Burt at The Next Platform.

Bringing a New HPC File System to Bear

File systems have never been the flashiest segment of the IT space, which might explain why big shakeups and new entrants into the market don’t draw the attention they could.

Established vendors have rolled out offerings that primarily are based on GPFS or Lustre, and enterprises and HPC organizations have embraced those products. However, changes in the IT landscape in recent years have convinced some companies and vendors to rethink file systems. Such changes as the rise of large-scale analytics and machine learning, the expansion of HPC into more mainstream enterprises, and the growth of cloud storage all have brought

Bringing a New HPC File System to Bear was written by Jeffrey Burt at The Next Platform.

Machine Learning Drives Changing Disaster Recovery At Facebook

Hyperscalers have billions of users who get access to their services for free, but the funny thing is that these users act like they are paying for it and expect these services to always be available, no excuses.

Organizations and consumers also rely on Facebook, Google, Microsoft, Amazon, Alibaba, Baidu, and Tencent for services that they pay for, too, and they reasonably expect that their data will always be immediately accessible and secure, the services always available, their search returns always popping up milliseconds after their queries are entered, and the recommendations that come to them

Machine Learning Drives Changing Disaster Recovery At Facebook was written by Jeffrey Burt at The Next Platform.

Everyone Wants A Data Platform, Not A Database

Every IT organization wants a more scalable, programmable, and adaptable platform with real-time applications that can chew on ever-increasing amounts and types of data. And it would be nice if it could run in the cloud, too.

Because of this, companies no longer think in terms of databases, but rather are building or buying data platforms that are based on industry-standard technologies and big data tools like NoSQL, unified in a single place. It is a trend that started gaining momentum around 2010 and will accelerate this year, according to Ravi Mayuram, senior vice president of engineering and chief technology officer at

Everyone Wants A Data Platform, Not A Database was written by Jeffrey Burt at The Next Platform.

Microsoft Boosts Azure Storage With Flashy Avere

The future of IT is in the cloud, but it will be a hybrid cloud. And that means things will by necessity get complicated.

Public clouds from the likes of Amazon Web Services, Microsoft, Google and IBM offer enterprises the ability to access massive, elastic and highly scalable infrastructure environments for many of their workloads without having to pay the cost of bringing those capabilities into their on-premises environments, but there will always be applications that businesses will want to keep behind the firewall for security and efficiency reasons. That reality is driving the demand not only for the

Microsoft Boosts Azure Storage With Flashy Avere was written by Jeffrey Burt at The Next Platform.

Juniper Flips OpenContrail To The Linux Foundation

It’s a familiar story arc for open source efforts started by vendors or vendor-led industry consortiums. The initiatives are launched and expanded, but eventually they find their way into independent open source organizations such as the Linux Foundation, where vendor control is lessened, communities are able to grow, and similar projects can cross-pollinate in hopes of driving greater standardization in the industry and adoption within enterprises.

It happened with Xen, the virtualization technology that initially started with XenSource and was bought by Citrix Systems but now is under the auspices of the Linux Foundation. The Linux kernel lives there, too,

Juniper Flips OpenContrail To The Linux Foundation was written by Jeffrey Burt at The Next Platform.

Enterprises Challenged By The Many Guises Of AI

Artificial intelligence and machine learning, which found solid footing among the hyperscalers and are now expanding into the HPC community, are at the top of the list of new technologies that enterprises want to embrace for all kinds of reasons. But it all boils down to the same problem: sorting through the increasing amounts of data coming into their environments and finding patterns that will help them run their businesses more efficiently, make better business decisions, and ultimately make more money.

Enterprises are increasingly experimenting with the various frameworks and tools that are on the market

Enterprises Challenged By The Many Guises Of AI was written by Jeffrey Burt at The Next Platform.

A Purified Implementation Of NVM-Express Storage

NVM-Express holds the promise of accelerating the performance and lowering the latency of flash and other non-volatile storage. Every server and storage vendor we can think of is working to bring NVM-Express into the picture to get the full benefits of flash, but even six years after the first specification for the technology was released, NVM-Express is still very much a work in progress, with capabilities like stretching it over a network still a couple of years away.

Pure Storage launched eight years ago with the idea of selling only all-flash arrays and saw NVM-Express coming many years ago, and

A Purified Implementation Of NVM-Express Storage was written by Jeffrey Burt at The Next Platform.

Burst Buffers Blow Through I/O Bottlenecks

Burst buffers are carving out a significant space for themselves in the HPC arena as a way to improve data checkpointing and application performance at a time when traditional storage technologies are struggling to keep up with increasingly large and complex workloads, including traditional simulation and modeling and newer ones like data analytics.

The fear has been that storage technologies such as parallel file systems could become the bottleneck that limits performance, and burst buffers have been designed to manage peak I/O situations so that organizations aren’t forced to scale their storage environments to be able to support

Burst Buffers Blow Through I/O Bottlenecks was written by Jeffrey Burt at The Next Platform.

Put Building Data Culture Ahead Of Buying Data Analytics

In his keynote at the recent AWS re:Invent conference, Amazon vice president and chief technology officer Werner Vogels said that the cloud had created an “egalitarian” computing environment where everyone has access to the same compute, storage, and analytics, and that the real differentiator for enterprises will be the data they generate and, more importantly, the value they derive from that data.

For Rob Thomas, general manager of IBM Analytics, data is the focus. The company is putting considerable muscle behind data analytics, machine learning, and what it calls more generally cognitive computing, much of it based

Put Building Data Culture Ahead Of Buying Data Analytics was written by Jeffrey Burt at The Next Platform.

Bridging Object Storage And NAS In The Enterprise

Object storage may not have been born in the cloud, but it was the major public cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform that have been its biggest drivers.

The idea of object storage wasn’t new; it had been around for about two decades. But as the cloud service providers began building out their datacenters and platforms more than a decade ago, they were faced with the need to find a storage architecture that could scale to meet the demands brought on by the massive amounts of data being created, as well as the

Bridging Object Storage And NAS In The Enterprise was written by Jeffrey Burt at The Next Platform.

VMware Tweaks NSX Virtual Networks For Containers, Microservices

VMware jumped into the burgeoning software-defined networking (SDN) field in a big way four years ago when it bought startup Nicira for $1.26 billion, a deal that led to the launch of VMware’s NSX offering a year later. NSX put the company on a collision course with other networking vendors, particularly Cisco Systems, all of whom were trying to plot their strategies to deal with the rapid changes in what had been a relatively staid part of the industry.

Many of these vendors had made their billions over the years selling expensive appliance-style boxes filled with proprietary technologies, and now faced

VMware Tweaks NSX Virtual Networks For Containers, Microservices was written by Jeffrey Burt at The Next Platform.

Cloudera Puffs Up Analytics Database For Clouds

In many ways, public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform can be the great equalizers, giving enterprises access to computing and storage resources that they may not have the money to bring into their on-premises environments. Given new compute-intensive workloads like data analytics and machine learning, and the benefits they can bring to modern businesses, this access to cloud-based platforms is increasingly critical to large enterprises.

Cloudera for several years has been pushing its software offerings – such as Data Science Workbench, Analytic DB, Operational DB, and Enterprise Data Hub –

Cloudera Puffs Up Analytics Database For Clouds was written by Jeffrey Burt at The Next Platform.

Getting Hyper And Converged About Storage

Hyperconverged infrastructure is a relatively small but fast-growing part of the datacenter market, driven in large part by enterprises looking to simplify and streamline their environments as they tackle increasingly complex workloads.

Like converged infrastructure, hyperconverged offerings are modular in nature, converging compute, storage, networking, virtualization, and management software into a tightly integrated single solution that drives greater datacenter density, smaller footprints, rapid deployment, and lower costs. They are pre-built and pre-validated before shipping from the factory, eliminating the need for users to do that necessary but time-consuming integration themselves. Hyperconverged infrastructure merges compute and storage into a single unit, and

Getting Hyper And Converged About Storage was written by Jeffrey Burt at The Next Platform.

Cramming The Cosmos Into A Shared Memory Supercomputer

The very first Superdome Flex shared memory system has gone out the door at Hewlett Packard Enterprise, and it is going to an HPC center that has big memory needs as it tries to understand the universe that we inhabit.

Earlier this month, Hewlett Packard Enterprise unveiled the newest addition to its NUMA iron, the Superdome Flex, which is an upgrade from the SGI UV 300 platform that HPE picked up when it bought SGI for $275 million last year. As we outlined in The Next Platform at the time, the system can scale from four to 32 sockets, is powered by

Cramming The Cosmos Into A Shared Memory Supercomputer was written by Jeffrey Burt at The Next Platform.

Debating The Role Of Commodity Chips In Exascale

Building the first exascale systems continues to be a high-profile endeavor, with efforts underway worldwide in the United States, the European Union, and Asia – notably China and Japan – that focus on competition between regional powers, the technologies that are going into the architectures, and the promises that these supercomputers hold for everything from research and government to business and commerce.

The Chinese government is pouring money and resources into its roadmaps for both pre-exascale and exascale systems, Japan is moving forward with Fujitsu’s Post-K system that will use processors based on the Arm architecture rather than the

Debating The Role Of Commodity Chips In Exascale was written by Jeffrey Burt at The Next Platform.

Assessing The Tradeoffs Of NVM-Express Storage At Scale

NVM-Express isn’t new. Development on the interface, which provides lean and mean access to non-volatile memory, first came to light a decade ago, with technical work starting two years later through a work group that comprised more than 90 tech vendors. The first NVM-Express specification came out in 2011, and now the technology is going mainstream.

How quickly and pervasively remains to be seen. NVM-Express promises significant boosts in performance to SSDs while driving down the latency, which would be a boon to HPC organizations and the wider world of enterprises as prices for SSDs continue to fall and adoption

Assessing The Tradeoffs Of NVM-Express Storage At Scale was written by Jeffrey Burt at The Next Platform.

Green500 Drives Power Efficiency For Exascale

At the ISC and SC supercomputing conferences each year, a central focus tends to be the release of the Top500 list of the world’s most powerful supercomputers. As we’ve noted in The Next Platform, the 25-year-old list may have some issues, but it still captures the imagination, with lineups of ever-more powerful systems that reflect the trend toward heterogeneity and accelerators and illustrate the growing competition between the United States and China for dominance in the HPC field, the continued strength of Japan’s supercomputing industry, and the desire of European Union countries to

Green500 Drives Power Efficiency For Exascale was written by Jeffrey Burt at The Next Platform.

The Symmetry Of Putting Fluid Dynamics In The Cloud

There has been a lot of talk about taking HPC technologies mainstream, taking them out of the realm of research, education, and government institutions and making them available to enterprises that are being challenged by the need to manage and process the huge amounts of data being generated by compute- and storage-intensive workloads such as analytics, artificial intelligence, and machine learning.

At The Next Platform, we have written about the efforts by system OEMs like IBM, Dell EMC, and Hewlett Packard Enterprise and software makers like Microsoft and SAP to develop offerings that are cost-efficient and

The Symmetry Of Putting Fluid Dynamics In The Cloud was written by Jeffrey Burt at The Next Platform.

HPE Aims HPC Servers, Storage At The Enterprise

Hewlett Packard Enterprise has been busy this year in the HPC space. The company in June unveiled three highly scalable systems optimized for parallel processing tasks and artificial intelligence workloads, including the first system developed from the vendor’s $275 million acquisition of supercomputer maker SGI last year. The liquid-cooled petascale HPE SGI 8600 system is based on SGI’s ICE XA architecture and is aimed at complex scientific and engineering applications. The system scales to more than 10,000 nodes and uses Nvidia’s Tesla GPU accelerators and high-speed NVLink interconnect technology.

At the same time, HPE introduced the Apollo 6000 Gen10,

HPE Aims HPC Servers, Storage At The Enterprise was written by Jeffrey Burt at The Next Platform.