
Author Archives: James Cuff

Deep Learning In R: Documentation Drives Algorithms

Hard to believe, but the R programming language has been with us since 1993.

A quarter century has now passed since Ross Ihaka and Robert Gentleman originally conceived the R platform as an implementation of the S programming language.

Continuous global software development has taken the original concepts, drawn from John Chambers’ S and from the lexically scoped Scheme of 1975, and extended them to include parallel computing, bioinformatics, social science, and more recently complex AI and deep learning methods. Layers have been built on top of layers, and today’s R looks nothing like 1990s R.
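
On the deep learning front specifically, R’s keras and tensorflow packages are thin wrappers over the same libraries the Python world uses, so a model reads almost identically in either language. As a minimal sketch (the toy data and model shape here are illustrative, not taken from any particular package vignette), this is the kind of model those wrappers drive:

```python
# A minimal sketch of the kind of model R's keras package drives.
# R's layer_dense()/compile()/fit() calls mirror this Python API
# almost one for one, since both wrap the same backend. Data is toy.
import numpy as np
from tensorflow import keras

# Synthetic data: 1,000 samples, 20 features, binary labels.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
```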

So where are we at, especially with the emerging opportunities…

Deep Learning In R: Documentation Drives Algorithms was written by James Cuff at The Next Platform.

New Approaches to Optimizing Workflow Automation

Workflow automation was born of necessity, and it has evolved an increasingly sophisticated set of tools to manage the growing complexity of the automation itself.
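
At the core of nearly all of these tools sits the same structure: a dependency graph of tasks executed in topological order. A minimal sketch in Python follows (the task names are hypothetical; a real engine adds retries, scheduling, and state tracking on top of this skeleton):

```python
# Minimal sketch of the heart of a workflow engine: tasks plus a
# dependency graph, run in topological order. Task names are made up;
# production tools layer retries, scheduling, and state on top.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def extract():   print("pulling raw events")
def transform(): print("normalizing records")
def load():      print("writing to the warehouse")
def report():    print("publishing the dashboard")

# Map each task to the set of tasks that must finish before it runs.
dag = {
    transform: {extract},
    load: {transform},
    report: {load, transform},
}

# static_order() yields every task with its prerequisites first.
for task in TopologicalSorter(dag).static_order():
    task()
```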

The same theme keeps emerging across the broader spectrum of enterprise and research IT. For instance, we spoke recently about the need to profile software and algorithms when billions of events per iteration are generated by modern GPU systems. This is a similar challenge, and fortunately not all traditional or physical business processes fall into this scale bucket. Many are much less data intensive, but can have such a critical impact on “time to…

New Approaches to Optimizing Workflow Automation was written by James Cuff at The Next Platform.

Getting to the Heart of HPC and AI at the Edge in Healthcare

For more than a decade, GE has partnered with Nvidia to support its healthcare devices. Increasing demand for high-quality medical imaging and mobile diagnostics alone has built a $4 billion segment of the $19 billion total life sciences business within GE Healthcare.

This year at the GPU Technology Conference (GTC18), The Next Platform sat in as Keith Bigelow, GM & SVP of Analytics, and Erik Steen, Chief Engineer at GE Healthcare, discussed the challenges of deploying AI, with a focus on cardiovascular ultrasound imaging.

There is a wide range of GPU-accelerated medical devices, as well as those that…

Getting to the Heart of HPC and AI at the Edge in Healthcare was written by James Cuff at The Next Platform.

Mounting Complexity Pushes New GPU Profiling Tools

The more things change, the more they remain the same, and so it is with the two most critical steps for successful software execution: first you remove the bugs, then you profile. And while debugging and profiling are not new, they are needed now more than ever, albeit in a modernized form.

The first performance analysis tools appeared on IBM platforms in the early 1970s. These profilers were based on timer interrupts that recorded “status words” at predetermined intervals in an attempt to detect “hot spots” inside running code.
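
That timer-interrupt idea survives essentially unchanged in today’s sampling profilers. As a hedged, deliberately simplified sketch (Unix-only), the fragment below arms a profiling timer, records the interrupted function and line on every tick, and reports the most frequently sampled locations, which are the hot spots:

```python
# Sketch of the 1970s timer-interrupt technique in modern form: a
# signal fires at fixed intervals of CPU time and the handler records
# where execution was interrupted. Unix-only; simplified on purpose.
import collections
import math
import signal

samples = collections.Counter()

def record_sample(signum, frame):
    # The interrupted frame plays the role of the old "status word":
    # which function, and which line, the CPU was executing.
    samples[(frame.f_code.co_name, frame.f_lineno)] += 1

signal.signal(signal.SIGPROF, record_sample)
signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)  # tick every ~1ms of CPU

def hot_loop():
    total = 0.0
    for i in range(1, 2_000_000):
        total += math.sqrt(i)  # this line should dominate the samples
    return total

hot_loop()
signal.setitimer(signal.ITIMER_PROF, 0)  # disarm the timer

for (func, line), count in samples.most_common(5):
    print(f"{func}:{line} sampled {count} times")
```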

Profiling is even more critical today,…

Mounting Complexity Pushes New GPU Profiling Tools was written by James Cuff at The Next Platform.

There is No Such Thing as Easy AI — But We’re Getting Closer

The dark and mysterious art of artificial intelligence and machine learning is neither straightforward nor easy. AI systems have been termed “black boxes” for this reason for decades now. We desperately continue to present ever larger, more unwieldy datasets to increasingly sophisticated “mystery algorithms” in our attempts to rapidly infer and garner new knowledge.

How can we try to make all of this just a little easier?

Hyperscalers with multi-million-dollar analytics teams have access to vast, effectively unlimited compute and storage of all shapes and sizes. Huge teams of analysts, systems managers, and resilience and reliability experts are standing up…

There is No Such Thing as Easy AI — But We’re Getting Closer was written by James Cuff at The Next Platform.

Practical Computational Balance: Contending with Unplanned Data

In part one of our series on reaching computational balance, we described how computational complexity is increasing exponentially. Unfortunately, data and storage follow an identical trend.

The challenge of balancing compute and data at scale remains constant. Because providers and consumers don’t have access to “the crystal ball of demand prediction,” the appropriate computational response to vast, unpredictable amounts of highly variable, complex data ends up, unintentionally, unplanned.

We must address computational balance in a world barraged by vast and unplanned data.

Before starting any discussion of data balance, it is important to first remind ourselves of scale. Small…

Practical Computational Balance: Contending with Unplanned Data was written by James Cuff at The Next Platform.

Spinning the Bottleneck for Data, AI, Analytics and Cloud

High performance computing experts came together recently at Stanford for their annual HPC Advisory Council meeting to share strategies in what has been an interesting year for supercomputing thus far.

As always, there was a vast amount of material covering everything from interconnects to containerized compute. In the midst of it all, The Next Platform noted an obvious and critical thread running through the two days: how best to map infrastructure to software in order to reduce the “computational back pressure” associated with new “data heavy” AI workloads.
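
As a minimal sketch of that back pressure in miniature (the stage names and timings are illustrative), a bounded queue sits between a fast producer and a slow consumer; once the queue fills, the producer blocks and is throttled down to the consumer’s pace:

```python
# Back pressure in miniature: a bounded queue between a fast producer
# and a slow consumer. When the queue is full, put() blocks, throttling
# the producer to the consumer's pace; the bottleneck pushes back.
import queue
import threading
import time

buffer = queue.Queue(maxsize=4)  # only four items allowed in flight

def producer():
    for i in range(12):
        buffer.put(i)  # blocks here whenever the consumer falls behind
        print(f"produced {i} (queue depth {buffer.qsize()})")
    buffer.put(None)   # sentinel: no more work

def consumer():
    while (item := buffer.get()) is not None:
        time.sleep(0.05)  # simulate a slow, data-heavy stage
        print(f"consumed {item}")

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```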

In the “real world,” back pressure results from a bottleneck, as opposed to desired…

Spinning the Bottleneck for Data, AI, Analytics and Cloud was written by James Cuff at The Next Platform.

Hardware as a Service: The New Missing Middle?

Computing used to be far away.

It was accessed via remote command terminals, through time-sliced services. It was a pretty miserable experience. During the personal computing revolution, computing once again became local. It would fit under your desk, or in a small, dedicated “computer room”. You could touch it. It was, once more, a happy and contented time for computer users. The computer was personal again. There was a clue in the name.

However, as complexity grew and networks improved, computing was effectively taken away again, placed once more in cold, dark rooms far, far away for…

Hardware as a Service: The New Missing Middle? was written by James Cuff at The Next Platform.

Striking Practical Computational Balance

The phenomenal complexity of computing is not decreasing. Charts of growth, investment, and scale continue to follow an exponential curve.

But how is computational balance to be maintained with any level of objectivity under such extreme circumstances? How do we plan for this known, and yet highly unknown, challenge of building balanced systems to operate at scale? The ever more bewildering set of options (e.g., price lists now have APIs) may, if not managed with utmost care, result in chaos and confusion.
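
On the “price lists now have APIs” point, one concrete public example is AWS’s bulk price-list index, a plain, unauthenticated JSON document. The endpoint and field names below match AWS’s public documentation as we understand it, but treat the sketch as an assumption rather than gospel:

```python
# "Price lists now have APIs": fetch AWS's public bulk price-list
# index and show where each service's full price file lives. Endpoint
# and field names are assumptions based on AWS's published docs.
import json
import urllib.request

INDEX = "https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json"

with urllib.request.urlopen(INDEX) as resp:
    offers = json.load(resp)["offers"]

# Each sellable service is an "offer" pointing at its own price file.
for code, offer in sorted(offers.items())[:5]:
    print(code, "->", offer["currentVersionUrl"])
```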

This first in a series of articles will set some background and perspective on the…

Striking Practical Computational Balance was written by James Cuff at The Next Platform.