Author Archives: Jeffrey Burt

Everyone Wants A Data Platform, Not A Database

Every IT organization wants a more scalable, programmable, and adaptable platform with real-time applications that can chew on ever-increasing amounts and types of data. And it would be nice if it could run in the cloud, too.

Because of this, companies no longer think in terms of databases; instead, they are building or buying data platforms that are based on industry-standard technologies and big data tools like NoSQL, and unified in a single place. It is a trend that started gaining momentum around 2010 and will accelerate this year, according to Ravi Mayuram, senior vice president of engineering and chief technology officer at

Everyone Wants A Data Platform, Not A Database was written by Jeffrey Burt at The Next Platform.
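To make the platform idea concrete: where a traditional database stores rows in fixed schemas, a NoSQL-style platform stores self-describing JSON documents and queries them directly. A minimal sketch of that data model in Python (the DocumentStore class below is a hypothetical illustration, not any vendor's API):

```python
import json

# A toy document store: flexible-schema JSON documents keyed by ID,
# standing in for the NoSQL layer of a data platform. Hypothetical
# illustration only, not any particular vendor's API.
class DocumentStore:
    def __init__(self):
        self._docs = {}

    def upsert(self, doc_id, doc):
        # Documents carry their own structure; no table schema to migrate.
        self._docs[doc_id] = doc

    def query(self, predicate):
        # Real platforms do this server-side with indexes; a linear
        # scan is enough to show the programming model.
        return [d for d in self._docs.values() if predicate(d)]

store = DocumentStore()
store.upsert("u1", {"name": "Ada", "city": "London", "orders": 12})
store.upsert("u2", {"name": "Grace", "city": "New York"})  # no 'orders' field: fine

frequent = store.query(lambda d: d.get("orders", 0) > 10)
print(json.dumps(frequent, indent=2))
```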

Microsoft Boosts Azure Storage With Flashy Avere

The future of IT is in the cloud, but it will be a hybrid cloud. And that means things will by necessity get complicated.

Public clouds from the likes of Amazon Web Services, Microsoft, Google and IBM offer enterprises the ability to access massive, elastic and highly scalable infrastructure environments for many of their workloads without having to pay the cost of bringing those capabilities into their on-premises environments, but there will always be applications that businesses will want to keep behind the firewall for security and efficiency reasons. That reality is driving the demand not only for the

Microsoft Boosts Azure Storage With Flashy Avere was written by Jeffrey Burt at The Next Platform.

Juniper Flips OpenContrail To The Linux Foundation

It’s a familiar story arc for open source efforts started by vendors or vendor-led industry consortiums. The initiatives are launched and expanded, but eventually they find their way into independent open source organizations such as the Linux Foundation, where vendor control is lessened, communities are able to grow, and similar projects can cross-pollinate in hopes of driving greater standardization in the industry and adoption within enterprises.

It happened with Xen, the virtualization technology that initially started with XenSource and was bought by Citrix Systems but now is under the auspices of the Linux Foundation. The Linux kernel lives there, too,

Juniper Flips OpenContrail To The Linux Foundation was written by Jeffrey Burt at The Next Platform.

Enterprises Challenged By The Many Guises Of AI

Artificial intelligence and machine learning, which found solid footing among the hyperscalers and are now expanding into the HPC community, are at the top of the list of new technologies that enterprises want to embrace for all kinds of reasons. But it all boils down to the same problem: sorting through the increasing amounts of data coming into their environments and finding patterns that will help them to run their businesses more efficiently, to make better business decisions, and ultimately to make more money.
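That pattern-finding task often starts with something as simple as unsupervised clustering. A minimal sketch using scikit-learn on synthetic data (the customer segments below are invented purely for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "customer" records: [monthly_spend, visits_per_month].
# Three loose groups are baked in so the clusters are visible.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal([20, 2], 5, size=(50, 2)),     # occasional, low spend
    rng.normal([80, 10], 8, size=(50, 2)),    # regular customers
    rng.normal([200, 25], 15, size=(50, 2)),  # heavy users
])

# KMeans recovers the groups without being told which row is which.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
for center in model.cluster_centers_:
    print(f"segment center: spend={center[0]:.0f}, visits={center[1]:.1f}")
```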

Enterprises are increasingly experimenting with the various frameworks and tools that are on the market

Enterprises Challenged By The Many Guises Of AI was written by Jeffrey Burt at The Next Platform.

A Purified Implementation Of NVM-Express Storage

NVM-Express holds the promise of accelerating the performance and lowering the latency of flash and other non-volatile storage. Every server and storage vendor we can think of is working to bring NVM-Express into the picture to get the full benefits of flash, but even six years after the first specification for the technology was released, NVM-Express is still very much a work in progress, with capabilities like stretching it over a network still a couple of years away.
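The latency argument can be made concrete with a rough timing loop. The sketch below times small random reads against a placeholder file; a real benchmark would use O_DIRECT or a tool like fio to take the page cache out of the picture:

```python
import os, random, time

PATH = "/tmp/testfile"   # placeholder: point at a file on the device under test
BLOCK = 4096             # typical 4 KiB I/O
N = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
offsets = [random.randrange(0, max(size - BLOCK, 1)) for _ in range(N)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)   # one small random read
elapsed = time.perf_counter() - start
os.close(fd)

# Page-cache hits will make this look faster than the media really is;
# NVM-Express's short command path shows up once the cache is bypassed.
print(f"avg read latency: {elapsed / N * 1e6:.1f} us")
```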

Pure Storage launched eight years ago with the idea of selling only all-flash arrays and saw NVM-Express coming many years ago, and

A Purified Implementation Of NVM-Express Storage was written by Jeffrey Burt at The Next Platform.

Burst Buffers Blow Through I/O Bottlenecks

Burst buffers are carving out a significant space for themselves in the HPC arena as a way to improve data checkpointing and application performance at a time when traditional storage technologies are struggling to keep up with increasingly large and complex workloads, including traditional simulation and modeling as well as newer things like data analytics.

The fear has been that storage technologies such as parallel file systems could become the bottleneck that limits performance, and burst buffers have been designed to manage peak I/O situations so that organizations aren’t forced to scale their storage environments to be able to support

Burst Buffers Blow Through I/O Bottlenecks was written by Jeffrey Burt at The Next Platform.
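The case for burst buffers comes down to simple arithmetic: the application stalls only as long as it takes the checkpoint to land in the fast tier, while the drain to the parallel file system overlaps with resumed compute. A back-of-the-envelope sketch, with made-up round numbers for bandwidths and checkpoint size:

```python
# Hypothetical round numbers, for illustration only.
checkpoint_bytes = 500e12   # 500 TB checkpoint from a large job
pfs_bw = 500e9              # parallel file system: 500 GB/s aggregate
bb_bw = 5e12                # burst buffer (fast NVM tier): 5 TB/s aggregate

stall_direct = checkpoint_bytes / pfs_bw    # app blocked writing straight to PFS
stall_buffered = checkpoint_bytes / bb_bw   # app blocked only for the fast-tier write
drain_time = checkpoint_bytes / pfs_bw      # drain overlaps with resumed compute

print(f"direct to PFS: app stalls {stall_direct:.0f} s")
print(f"via burst buffer: app stalls {stall_buffered:.0f} s "
      f"(drain finishes {drain_time:.0f} s later, in the background)")
```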

Put Building Data Culture Ahead Of Buying Data Analytics

In his keynote at the recent AWS re:Invent conference, Amazon vice president and chief technology officer Werner Vogels said that the cloud had created a “egalitarian” computing environment where everyone has access to the same compute, storage, and analytics, and that the real differentiator for enterprises will be the data they generate, and more importantly, the value the enterprises derive from that data.

For Rob Thomas, general manager of IBM Analytics, data is the focus. The company is putting considerable muscle behind data analytics, machine learning, and what it calls more generally cognitive computing, much of it based

Put Building Data Culture Ahead Of Buying Data Analytics was written by Jeffrey Burt at The Next Platform.

Bridging Object Storage And NAS In The Enterprise

Object storage may not have been born in the cloud, but it is the major public cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform that have been its biggest drivers.

The idea of object storage wasn’t new; it had been around for about two decades. But as the cloud service providers began building out their datacenters and platforms more than a decade ago, they were faced with the need to find a storage architecture that could scale to meet the demands brought on by the massive amounts of data being created, as well as the

Bridging Object Storage And NAS In The Enterprise was written by Jeffrey Burt at The Next Platform.
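The gap being bridged shows up in the interfaces themselves: NAS exposes a hierarchy of mutable files, while object storage exposes a flat namespace of whole objects addressed by key. A minimal side-by-side sketch (the ObjectStore class is a stand-in, not a real client library):

```python
from pathlib import Path

# NAS model: hierarchical paths, in-place updates.
p = Path("/tmp/projects/reports/q3.txt")
p.parent.mkdir(parents=True, exist_ok=True)
p.write_text("draft")
p.write_text("final")   # overwrite in place

# Object model: flat key namespace, whole-object put/get.
class ObjectStore:      # stand-in for an S3-style client, not a real library
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data   # objects are replaced, never edited in place
    def get(self, key):
        return self._objects[key]
    def list(self, prefix=""):
        # "Directories" are just a key-naming convention.
        return [k for k in self._objects if k.startswith(prefix)]

store = ObjectStore()
store.put("projects/reports/q3.txt", b"final")
print(store.list(prefix="projects/"))
```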

VMware Tweaks NSX Virtual Networks For Containers, Microservices

VMware jumped into the burgeoning software-defined networking (SDN) field in a big way four years ago when it bought startup Nicira for $1.26 billion, a deal that led to the launch of VMware’s NSX offering a year later. NSX put the company on a crash course with other networking vendors, particularly Cisco Systems, all of whom were trying to plot their strategies to deal with the rapid changes in what had been a relatively staid part of the industry.

Many of these vendors had made their billions over the years selling expensive appliance-style boxes filled with proprietary technologies, and now faced

VMware Tweaks NSX Virtual Networks For Containers, Microservices was written by Jeffrey Burt at The Next Platform.

Cloudera Puffs Up Analytics Database For Clouds

In many ways, public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform can be the great equalizers, giving enterprises access to computing and storage resources that they may not have the money to bring into their on-premises environments. Given the new compute-intensive workloads like data analytics and machine learning, and the benefits they can bring to modern businesses, this access to cloud-based platforms is increasingly critical to large enterprises.

Cloudera for several years has been pushing its software offerings – such as Data Science Workbench, Analytic DB, Operational DB, and Enterprise Data Hub –

Cloudera Puffs Up Analytics Database For Clouds was written by Jeffrey Burt at The Next Platform.

Getting Hyper And Converged About Storage

Hyperconverged infrastructure is a relatively small but fast-growing part of the datacenter market, driven in large part by enterprises looking to simplify and streamline their environments as they tackle increasingly complex workloads.

Like converged infrastructure, hyperconverged offerings are modular in nature, converging compute, storage, networking, virtualization, and management software into a tightly integrated single solution that drives greater datacenter densities, smaller footprints, rapid deployment, and lower costs. They are pre-built and pre-validated before shipping from the factory, eliminating the need for the user to do that necessary but time-consuming integration work. Hyperconverged infrastructure merges compute and storage into a single unit, and

Getting Hyper And Converged About Storage was written by Jeffrey Burt at The Next Platform.
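One way to picture the storage side of that convergence: every node contributes its local drives to a single cluster-wide pool, and usable capacity is what remains after replication for resilience. A toy sketch, where node counts, drive sizes, and the replication factor of 2 are all illustrative assumptions:

```python
# Hypothetical hyperconverged cluster: every node brings compute AND storage.
nodes = [
    {"name": f"node{i}", "cores": 32, "local_ssds_tb": 4 * 1.92}
    for i in range(1, 5)
]

replication_factor = 2   # each block kept on two nodes for resilience

raw_tb = sum(n["local_ssds_tb"] for n in nodes)
usable_tb = raw_tb / replication_factor

print(f"{len(nodes)} nodes, {sum(n['cores'] for n in nodes)} cores")
print(f"raw pool: {raw_tb:.1f} TB, "
      f"usable after {replication_factor}x replication: {usable_tb:.1f} TB")
```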

Cramming The Cosmos Into A Shared Memory Supercomputer

The very first Superdome Flex shared memory system has gone out the door at Hewlett Packard Enterprise, and it is going to an HPC center that has big memory needs as it tries to understand the universe that we inhabit.

Earlier this month, Hewlett Packard Enterprise unveiled the newest addition to its NUMA iron, the Superdome Flex, which is an upgrade from the SGI UV 300 platform that HPE got when it bought SGI for $275 million last year. As we outlined in The Next Platform at the time, the system can scale from four to 32 sockets, is powered by

Cramming The Cosmos Into A Shared Memory Supercomputer was written by Jeffrey Burt at The Next Platform.
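The appeal of big shared-memory iron is the programming model: every core sees one address space, so nothing has to be sharded or passed as messages. Python's multiprocessing.shared_memory offers a small-scale taste of that model (array size and worker count below are arbitrary):

```python
import numpy as np
from multiprocessing import Process, shared_memory

def scale_slice(shm_name, shape, start, stop, factor):
    # Each worker attaches to the SAME memory: no copies, no messages.
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=np.float64, buffer=shm.buf)
    arr[start:stop] *= factor
    shm.close()

if __name__ == "__main__":
    n = 1_000_000
    shm = shared_memory.SharedMemory(create=True, size=n * 8)
    data = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
    data[:] = 1.0

    # Two workers operate in place on disjoint halves of one shared array.
    ps = [Process(target=scale_slice,
                  args=(shm.name, (n,), i * n // 2, (i + 1) * n // 2, 3.0))
          for i in range(2)]
    for p in ps: p.start()
    for p in ps: p.join()

    print(data.sum())   # 3000000.0: both halves scaled in shared memory
    shm.close(); shm.unlink()
```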

Debating The Role Of Commodity Chips In Exascale

Building the first exascale systems continues to be a high-profile endeavor, with efforts underway worldwide in the United States, the European Union, and Asia – notably China and Japan – that focus on competition between regional powers, the technologies that are going into the architectures, and the promises that these supercomputers hold for everything from research and government to business and commerce.

The Chinese government is pouring money and resources into its roadmaps for both pre-exascale and exascale systems, Japan is moving forward with Fujitsu’s Post-K system that will use processors based on the Arm architecture rather than the

Debating The Role Of Commodity Chips In Exascale was written by Jeffrey Burt at The Next Platform.

Assessing The Tradeoffs Of NVM-Express Storage At Scale

NVM-Express isn’t new. The interface, which provides lean and mean access to non-volatile memory, first came to light a decade ago, with technical work starting two years later through a working group comprising more than 90 tech vendors. The first NVM-Express specification came out in 2011, and now the technology is going mainstream.

How quickly and pervasively remains to be seen. NVM-Express promises significant boosts in performance to SSDs while driving down the latency, which would be a boon to HPC organizations and the wider world of enterprises as prices for SSDs continue to fall and adoption

Assessing The Tradeoffs Of NVM-Express Storage At Scale was written by Jeffrey Burt at The Next Platform.

Green500 Drives Power Efficiency For Exascale

At the ISC and SC supercomputing shows each year, a central focus tends to be the release of the Top500 list of the world’s most powerful supercomputers. As we’ve noted in The Next Platform, the 25-year-old list may have its issues, but it still captures the imagination, with lineups of ever-more powerful systems that reflect the trend toward heterogeneity and accelerators and illustrate the growing competition between the United States and China for dominance in the HPC field, the continued strength of Japan’s supercomputing industry, and the desire of European Union countries to

Green500 Drives Power Efficiency For Exascale was written by Jeffrey Burt at The Next Platform.
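The Green500 metric itself is simple division: sustained Linpack performance over power draw, expressed in gigaflops per watt. A quick sketch with invented systems (these are not actual list entries):

```python
# Invented example systems, not real Top500/Green500 entries.
systems = [
    {"name": "SystemA", "rmax_tflops": 19_000, "power_kw": 1_350},
    {"name": "SystemB", "rmax_tflops": 93_000, "power_kw": 15_400},
    {"name": "SystemC", "rmax_tflops": 8_600,  "power_kw": 890},
]

for s in systems:
    # TFlops / kW works out numerically to gigaflops per watt.
    s["gflops_per_watt"] = s["rmax_tflops"] / s["power_kw"]

# Green500 ordering: most efficient first.
for s in sorted(systems, key=lambda s: s["gflops_per_watt"], reverse=True):
    print(f"{s['name']}: {s['gflops_per_watt']:.2f} GF/W")
```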

The Symmetry Of Putting Fluid Dynamics In The Cloud

There has been a lot of talk about taking HPC technologies mainstream, taking them out of the realm of research, education, and government institutions and making them available to enterprises that are being challenged by the need to manage and process the huge amounts of data generated by such compute- and storage-intensive workloads as analytics, artificial intelligence, and machine learning.

At The Next Platform, we have written about the efforts by system OEMs like IBM, Dell EMC, and Hewlett Packard Enterprise and software makers like Microsoft and SAP to develop offerings that are cost-efficient and

The Symmetry Of Putting Fluid Dynamics In The Cloud was written by Jeffrey Burt at The Next Platform.
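The computational heart of fluid dynamics codes is the stencil update, applied over enormous grids; that is what cloud-based HPC offerings scale out. A toy 1D heat-diffusion kernel shows the shape of it (grid size, diffusivity, and time step are chosen only to satisfy the explicit scheme's stability limit):

```python
import numpy as np

# Toy explicit finite-difference solver for 1D heat diffusion:
# du/dt = alpha * d2u/dx2. Production CFD codes apply the same
# stencil idea to vastly larger 3D grids across thousands of cores.
nx, alpha = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx * dx / alpha   # below the stability limit dt <= dx^2 / (2 * alpha)

u = np.zeros(nx)
u[nx // 2] = 1.0             # a spike of heat in the middle

for _ in range(500):
    # Each cell is updated from its two neighbors: a classic stencil.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak temperature after diffusion: {u.max():.4f}")
```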

HPE Aims HPC Servers, Storage At The Enterprise

Hewlett Packard Enterprise has been busy this year in the HPC space. The company in June unveiled three highly scalable systems optimized for parallel processing tasks and artificial intelligence workloads, including the first system developed from the vendor’s $275 million acquisition of supercomputer maker SGI last year. The liquid-cooled petascale HPE SGI 8600 system is based on SGI’s ICE XA architecture and is aimed at complex scientific and engineering applications. The system scales to more than 10,000 nodes and uses Nvidia’s Tesla GPU accelerators and high-speed NVLink interconnect technology.

At the same time, HPE introduced the Apollo 6000 Gen10,

HPE Aims HPC Servers, Storage At The Enterprise was written by Jeffrey Burt at The Next Platform.

MapR Gives Single View Of Big Data

The enormous amount of data being generated will do companies little good if they can’t more easily gather it from multiple sources, store it, analyze it and gain important insights into it that will help them drive better business decisions. There are myriad challenges to all this, starting with the sheer amount of data that is being created. The data also is coming from many different sources, is at rest and in motion, is created on-premises, in the cloud, and at the network edge, and is ruled by different data governance policies.

For the past several years, MapR Technologies has

MapR Gives Single View Of Big Data was written by Jeffrey Burt at The Next Platform.

Breaking Memory Free Of Compute With Gen-Z

Servers have become increasingly powerful in recent years, with more processing cores and accelerators like GPUs and field-programmable gate arrays (FPGAs) being added, and the amount of data that can be processed is growing rapidly.

However, a key problem has been getting interconnect technologies to keep pace with server evolution. It is a challenge that last year spawned the Gen-Z Consortium, a group founded by a dozen top-tier tech vendors, including Hewlett Packard Enterprise, IBM, Dell EMC, AMD, Arm, and Cray, that wanted to create the next-generation interconnect that can leverage existing tech while paving the way

Breaking Memory Free Of Compute With Gen-Z was written by Jeffrey Burt at The Next Platform.
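Gen-Z's central idea is memory-semantic access: devices on the fabric are read and written with byte-granularity loads and stores rather than block I/O. A loose software analogy is mmap, which turns a file you would otherwise read in blocks into directly addressable bytes (an analogy sketch, not Gen-Z code):

```python
import mmap, os

path = "/tmp/genz_analogy.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

fd = os.open(path, os.O_RDWR)
mem = mmap.mmap(fd, 4096)   # map the file into the process address space

# Block-I/O thinking: read, modify, and write back a whole buffer.
# Memory-semantic thinking: touch exactly the bytes you need.
mem[128] = 0x42             # a single-byte "store"
value = mem[128]            # a single-byte "load"
print(hex(value))

mem.close(); os.close(fd)
```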

Expanded Oracle Cloud Rains Down GPUs, Skylake Xeons

Oracle was late to the cloud game, but in recent years has moved aggressively to catch up. While still behind the top companies like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, Oracle is seeing gains in revenue and customers for its cloud environment, thanks in large part to the hundreds of thousands of enterprise customers that use its various operating system, middleware, database, and application software.

The cloud revenue jump at Oracle is pretty steep. In a conference call discussing the most recent quarterly financial numbers, Oracle co-CEO Safra Catz noted that cloud revenue for the quarter

Expanded Oracle Cloud Rains Down GPUs, Skylake Xeons was written by Jeffrey Burt at The Next Platform.