Author Archives: Jeffrey Burt

HPE Looks Ahead To Composable Infrastructure, Persistent Memory

Over the past several years, the server market has been roiled by the rise of cloud environments that run the applications companies create, and by services offered by hyperscalers that augment or replace those applications. This is a tougher and lumpier market, to be sure.

The top tier cloud providers like Amazon, Microsoft, and Google not only have become key drivers in server sales but also have turned to original design manufacturers (ODMs) from Taiwan and China for lower cost systems to help populate their massive datacenters. Overall, global server shipments have slowed, and top-tier OEMs are working to

HPE Looks Ahead To Composable Infrastructure, Persistent Memory was written by Jeffrey Burt at The Next Platform.

Unifying Oil and Gas Data at Scale

The oil and gas industry has been among the most aggressive in pursuing internet of things (IoT), cloud, and big data technologies to collect, store, sort, and analyze massive amounts of data in both the drilling and refining sectors to improve efficiencies and decision-making capabilities. Systems are increasingly automated, sensors placed throughout processes send back data on the various systems, and software has been put in place to crunch that data into useful information.

According to a group of researchers from Turkey, the oil and gas industry is well suited to embrace all the new

Unifying Oil and Gas Data at Scale was written by Jeffrey Burt at The Next Platform.

Logistics in Application Path of Neural Networks

Accurately forecasting resource demand within the supply chain has never been easy, particularly given the constantly changing nature of the data over time.

What may have been true in measurements around demand or logistics one minute might be entirely different an hour, day or week later, which can throw off a short-term load forecast (STLF) and lead to costly over- or under-estimations, which in turn can lead to too much or too little supply.

To improve such forecasts, there are multiple efforts underway to create new models that can more accurately predict load needs, and while they have

Logistics in Application Path of Neural Networks was written by Jeffrey Burt at The Next Platform.

A Health Check For Code And Infrastructure In The Cloud

As businesses continue their migration to the cloud, monitoring the performance and health of their applications gets more challenging as they try to track them across on-premises environments and both private and public clouds. At the same time, as they become more cloud-based, they have to keep an eye on the entire stack, from the customer-facing applications to the underlying infrastructure they run on.

Since its founding eight years ago, New Relic has steadily built upon its first product, a cloud-based application performance management (APM) tool that is designed to assess how well the

A Health Check For Code And Infrastructure In The Cloud was written by Jeffrey Burt at The Next Platform.

Dell EMC Upgrades Flash in High-End Storage While Eyeing NVMe

When Dell acquired EMC in its massive $60 billion-plus deal last year, it boasted that Dell was inheriting a boatload of new technologies that would help propel forward its capabilities and ambitions with larger enterprises.

That included offerings ranging from VMware’s NSX software-defined networking (SDN) platform to Virtustream and its cloud technologies for running mission-critical applications from the likes of Oracle, SAP, and Microsoft off-premises. In particular, Dell was acquiring EMC’s broad and highly popular storage portfolio, notably the high-end VMAX, XtremIO, and newer ScaleIO lineups as well as its Isilon storage arrays for high-performance workloads.

Dell

Dell EMC Upgrades Flash in High-End Storage While Eyeing NVMe was written by Jeffrey Burt at The Next Platform.

Red Hat Is The Gatekeeper For ARM In The Datacenter

If any new hardware technology is going to get traction in the datacenter, it has to have the software behind it. And as the dominant supplier of commercial Linux, Red Hat’s support of ARM-based servers gives the upstart chip makers like Applied Micro, Cavium, and Qualcomm the leverage to help pry the glasshouse doors open and get a slice of the server and storage business that is so utterly dominated by Intel’s Xeon processors today.

It is now or never for ARM in the datacenter, and that means Red Hat has to go all the way and not just support

Red Hat Is The Gatekeeper For ARM In The Datacenter was written by Jeffrey Burt at The Next Platform.

Red Hat Gears Up OpenShift For Developers

During the five years that Red Hat has been building out its OpenShift cloud applications platform, much of the focus has been on making it easier to use by customers looking to adapt to an increasingly cloud-centric world for both new and legacy applications. Just as it did with the Linux operating system through Red Hat Enterprise Linux and related middleware and tools, the vendor has worked to make it easier for enterprises to embrace OpenShift.

That has included a major reworking of the platform with the release of version 3.0 last year, which ditched Red Hat’s in-house technologies for

Red Hat Gears Up OpenShift For Developers was written by Jeffrey Burt at The Next Platform.

Swiss Army Knife File System Cuts Through Petabytes

Petabytes are in the future of every company, and luckily, the IT ecosystem is always inventing that future to handle them.

Those wrestling with tens to hundreds of petabytes of data today are constantly challenged to find the best ways to store, search, and manage it all. Qumulo was founded in 2012 and came out of the chute two years ago with the idea of a software-based file system with built-in analytics that enable the system to increase capacity as the amount of data grows. QSFS, now called Qumulo Core, also does it all: fast with big

Swiss Army Knife File System Cuts Through Petabytes was written by Jeffrey Burt at The Next Platform.

Machine Learning Storms Into Climate Research

The fields where machine learning and neural networks can have positive impacts seem almost limitless. From healthcare and genomics to pharmaceutical development, oil and gas exploration, retail, smart cities and autonomous vehicles, the ability to rapidly and automatically find patterns in massive amounts of data promises to help solve increasingly complex problems and speed up discoveries that will improve lives, create a healthier world and make businesses more efficient.

Climate science is one of those fields that will see significant benefits from machine learning, and scientists in the field are pushing hard to see how the technology can help them

Machine Learning Storms Into Climate Research was written by Jeffrey Burt at The Next Platform.

Red Hat Tunes Up OpenShift For Legacy Code In Kubernetes

When Red Hat began building out its OpenShift cloud application platform more than five years ago, the open source software vendor found itself in a similar situation as others in the growing platform-as-a-service (PaaS) space: they were all using technologies developed in-house because there were no real standards in the industry that could be used to guide them.

That changed about three years ago, when Google officials decided to open source the technology – called Borg – they were using internally to manage the search giant’s clusters and make it available to the wider community. Thus was born Kubernetes,

Red Hat Tunes Up OpenShift For Legacy Code In Kubernetes was written by Jeffrey Burt at The Next Platform.

ARM Pioneer Sophie Wilson Also Thinks Moore’s Law Coming to an End

Intel might have its own thoughts about the trajectory of Moore’s Law, but many leaders in the industry have views that vary slightly from the tick-tock we keep hearing about.

Sophie Wilson, designer of the original Acorn Micro-Computer in the 1970s and later developer of the instruction set for ARM’s low-power processors that have come to dominate the mobile device world, has such thoughts. And when Wilson talks about processors and the processor industry, people listen.

Wilson’s message is essentially that Moore’s Law, which has been the driving force behind chip development in particular and the computer industry

ARM Pioneer Sophie Wilson Also Thinks Moore’s Law Coming to an End was written by Jeffrey Burt at The Next Platform.

Memory And Logic In A Post Moore’s Law World

The future of Moore’s Law has become a topic of hot debate in recent years, as the challenge of continually shrinking transistors and other components has grown.

Intel, AMD, IBM, and others continue to drive the development of smaller electronic components as a way of ensuring advancements in compute performance while driving down the cost of that compute. Processors from Intel and others are moving now from 14 nanometer processes down to 10 nanometers, with plans to continue on to 7 nanometers and smaller.

For more than a decade, Intel had relied on a tick-tock manufacturing schedule to keep up with

Memory And Logic In A Post Moore’s Law World was written by Jeffrey Burt at The Next Platform.

Keeping The Blue Waters Supercomputer Busy For Three Years

After years of planning and delays after a massive architectural change, the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois finally went into production in 2013, giving scientists, engineers and researchers across the country a powerful tool to run and solve the most complex and challenging applications in a broad range of scientific areas, from astrophysics and neuroscience to biophysics and molecular research.

Users of the petascale system have been able to simulate the evolution of space, determine the chemical structure of diseases, model weather, and trace how virus infections propagate via air

Keeping The Blue Waters Supercomputer Busy For Three Years was written by Jeffrey Burt at The Next Platform.

Tuning Up Knights Landing For Gene Sequencing

The Smith-Waterman algorithm has become a linchpin in the rapidly expanding world of bioinformatics, the go-to computational model for DNA sequencing and local sequence alignments. With the growth in recent years in genome research, there has been a sharp increase in the amount of data around genes and proteins that needs to be collected and analyzed, and the 36-year-old Smith-Waterman algorithm is a primary way of aligning the data.

The key to the algorithm is that rather than examining an entire DNA or protein sequence, Smith-Waterman uses a technique called dynamic programming in which the algorithm looks at segments of
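The dynamic programming at the heart of Smith-Waterman can be sketched in a few lines. The score matrix below uses illustrative match, mismatch, and gap values chosen for the example, not parameters from any particular sequencing pipeline; the defining feature of the local (as opposed to global) alignment is the floor of zero in the recurrence, which lets high-scoring segments emerge anywhere in the two sequences.

```python
# Minimal sketch of Smith-Waterman local alignment scoring.
# The scoring parameters (match=2, mismatch=-1, gap=-2) are
# illustrative assumptions, not values from the article.

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # The max with 0 is what makes the alignment local:
            # a bad prefix never drags down a later good segment.
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(smith_waterman_score("ACGT", "ACGT"))  # perfect match of four bases: 4 * 2 = 8
print(smith_waterman_score("AAACGTAAA", "CGT"))  # "CGT" aligns locally: 3 * 2 = 6
```

A real implementation would also keep traceback pointers to recover the aligned segments themselves, which is where the quadratic memory and compute costs that motivate hardware acceleration come from.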

Tuning Up Knights Landing For Gene Sequencing was written by Jeffrey Burt at The Next Platform.

3D Stacking Could Boost GPU Machine Learning

Nvidia has staked its growth in the datacenter on machine learning. Over the past few years, the company has rolled out features in its GPUs aimed at neural networks and related processing, notably with the “Pascal” generation GPUs, which have features explicitly designed for the space, such as 16-bit half-precision math.

The company is preparing its upcoming “Volta” GPU architecture, which promises to offer significant gains in capabilities. More details on the Volta chip are expected at Nvidia’s annual conference in May. CEO Jen-Hsun Huang late last year spoke to The Next Platform about what he called the upcoming “hyper-Moore’s Law”

3D Stacking Could Boost GPU Machine Learning was written by Jeffrey Burt at The Next Platform.

Google Courts Enterprise For Cloud Platform

Google has always been a company that thinks big. After all, its mission since Day One has been to organize and make accessible all of the world’s information.

The company is going to have to take that same expansive and aggressive approach as it looks to grow in a highly competitive public cloud market that includes a dominant player (Amazon Web Services) and a host of other vendors, including Microsoft, IBM, and Oracle. That’s going to mean expanding its customer base beyond smaller businesses and startups and convincing larger enterprises to store their data and run their workloads on its ever-growing

Google Courts Enterprise For Cloud Platform was written by Jeffrey Burt at The Next Platform.

Applied Micro Renews ARM Assault On Intel Servers

The lineup of ARM server chip makers has been a somewhat fluid one over the years.

There have been some that have come and gone (pioneer Calxeda was among the first to the party but folded in 2013 after running out of money), some that apparently have looked at the battlefield and chosen not to fight (Samsung and Broadcom, after its $37 billion merger with Avago), and others that have made the move into the space only to pull back a bit (AMD a year ago released its ARM-based Opteron A1100 systems-on-a-chip, or SoCs, but has since shifted most of

Applied Micro Renews ARM Assault On Intel Servers was written by Jeffrey Burt at The Next Platform.

Google Expands Enterprise Cloud With Machine Learning

Google’s Cloud Platform is the relative newcomer on the public cloud block, and has a way to go before it is in the same competitive sphere as Amazon Web Services and Microsoft Azure, both of which deliver a broader and deeper range of offerings and larger infrastructures.

Over the past year, Google has promised to rapidly grow the platform’s capabilities and datacenters and has hired a number of executives in hopes of enticing enterprises to bring more of their corporate workloads and data to the cloud.

One area Google is hoping to leverage is the decade-plus of work and

Google Expands Enterprise Cloud With Machine Learning was written by Jeffrey Burt at The Next Platform.

Microsoft, Stanford Researchers Tweak Cloud Economics Framework

Cloud computing makes a lot of sense for a rapidly growing number of larger enterprises and other organizations, and for any number of reasons. The increased application flexibility and agility that come from creating a pool of shared infrastructure resources, along with the scalability and cost efficiencies, are all key drivers in an era of ever-growing data.

With public and hybrid cloud environments, companies can offload the integration, deployment and management of the infrastructure to a third party, taking the pressure off their own IT staffs, and in private and hybrid cloud environments, they can keep their most business-critical data securely behind

Microsoft, Stanford Researchers Tweak Cloud Economics Framework was written by Jeffrey Burt at The Next Platform.

Making Remote NVM-Express Flash Look Local And Fast

Large enterprises are embracing NVM-Express flash as the storage technology of choice for their data intensive and often highly unpredictable workloads. NVM-Express devices bring with them high performance – up to 1 million I/O operations per second – and low latency – less than 100 microseconds. And flash storage now has high capacity, too, making it a natural fit for such datacenter applications.

As we have discussed here before, all-flash arrays are quickly becoming mainstream, particularly within larger enterprises, as an alternative to disk drives in environments where tens or hundreds of petabytes of data – rather than the

Making Remote NVM-Express Flash Look Local And Fast was written by Jeffrey Burt at The Next Platform.