James Cuff

Author Archives: James Cuff

The Inevitability Of Death, Taxes, And Clouds

“Death and taxes” is a phrase usually attributed to Benjamin Franklin, from a 1789 letter: “In this world nothing can be said to be certain, except death and taxes.” Public cloud computing providers didn’t exist in Franklin’s day, but if they had, they would no doubt have made the list. Here’s why: public clouds for large data analysis, just like death and taxes, are clearly inevitable because of two things. One is simple and by now a rather worn-out cliché: scale. The other is slightly more subtle: data.

Nation states are racing

The Inevitability Of Death, Taxes, And Clouds was written by James Cuff at The Next Platform.

A Revival in Custom Hardware For Accelerated Genomics

Building custom processors and systems to annotate chunks of DNA is not a new phenomenon, but given the increasing complexity of genomics as well as the explosion in demand, the trend is being revived.

Those who have been around this area over the last couple of decades will recall that back in 2000, the then Celera Genomics acquired Paracel Genomics (an accelerator and software company), which at the time had annual sales of $14.2 million, for $250 million. Paracel had a system called GeneMatcher, which fit 7,000 processors into a box that could compete with over

A Revival in Custom Hardware For Accelerated Genomics was written by James Cuff at The Next Platform.

Blending An Elixir Of Quantum And AI For Better Healthcare

Chocolate and peanut butter, tea and scones, gin and tonic: they’re all great combinations, and today we have a new binary mixture, quantum and AI. Do they actually mix well together? Quadrant, a new spin-out from D-Wave Systems, certainly seems to think so.

D-Wave has been in the quantum computing business since 1999, raising in excess of $200 million from Goldman Sachs, Bezos Expeditions, and others. It now lists the likes of Google, NASA, Los Alamos National Laboratory, and Volkswagen as signature customers. Quadrant is basically the new AI play from the

Blending An Elixir Of Quantum And AI For Better Healthcare was written by James Cuff at The Next Platform.

HPC Container Security: Fact, Myth, Rumor, And Kernels

It is fair to say that containers in HPC are a big deal. Nothing more clearly shows the critical nature of any technology than watching the community reaction when a new security issue is discovered and released.

In a recent announcement, the team over at Sylabs stated that multiple container systems running on kernels that do not support PR_SET_NO_NEW_PRIVS were vulnerable. This was big news, and it spread like a proverbial wildfire through the HPC community, with many voicing their upset that the initial announcement came out at the start of a long holiday weekend

HPC Container Security: Fact, Myth, Rumor, And Kernels was written by James Cuff at The Next Platform.

Intel Saffron AI: Faster Answers With Just A Hint Of Spice

To give hungry customers a high quality, gourmet AI experience, new and exotic recipes are being constructed in a race to dream up ever more exciting and tasty concoctions from traditional software and hardware staples.

Over at Intel, AI is clearly the highlight of its current tasting menu. Intel has announced a new set of AI offerings that use associative memory learning and reasoning based on products from Saffron Technology, which Intel carried home from the market back in 2015 for an undisclosed sum.

Saffron adds an integrated software stack to the expanding portfolio of AI hardware, from traditional

Intel Saffron AI: Faster Answers With Just A Hint Of Spice was written by James Cuff at The Next Platform.

AI Software Writing AI Software For Healthcare?

At the World Medical Innovation Forum this week, participants were polled with a loaded question: “Do you think healthcare will become better or worse from the use of AI?”

Across the respondents, 98 percent said it would be either “Better” or “Much Better” and not a single one thought it would become “Much Worse.” This is an interesting statistic, and the results were not entirely surprising, especially given that artificial intelligence was the theme for the meeting.

This continual stream of adoption of new technologies in both clinical and post-clinical settings is remarkable. Today, healthcare is a technology operation.

AI Software Writing AI Software For Healthcare? was written by James Cuff at The Next Platform.

Is Open Source The AI Nirvana for Intel?

Intel has been making some interesting moves in the community space recently: free licenses for its compiler suite can now be had by educators and open source contributors, as can rotating 90-day licenses for its full System Studio environment for anyone who takes the time to sign up.

In the AI space, Intel recently announced that its nGraph code for managing AI graph APIs has also been opened to the community. Since opening it up last month, Intel has followed up the initial work on MXNet with further improvements to TensorFlow.


Is Open Source The AI Nirvana for Intel? was written by James Cuff at The Next Platform.

Containing the Complexity of the Long Tail

HPC software evolves continuously. Those now finding themselves on the frontlines of HPC support are having to invent and build new technologies just to keep up with the deluge of layers upon layers of software on top of software, and software is only part of the bigger picture.

We have talked in the past about computational balance and the challenges of unplanned data; these are both real and tangible issues. However, now in addition to all of that, those in support roles living at the sharp end of supporting research are also faced with what is increasingly turning

Containing the Complexity of the Long Tail was written by James Cuff at The Next Platform.

GPUs Mine Astronomical Datasets For Golden Insight Nuggets

As humankind continues to stare into the dark abyss of deep space in an eternal quest to understand our origins, new computational tools and technologies are needed at unprecedented scales. Gigantic datasets from advanced high resolution telescopes and huge scientific instrumentation installations are overwhelming classical computational and storage techniques.

This is the key issue with exploring the Universe: it is very, very large. Combining advances in machine learning and high speed data storage is starting to provide levels of insight that were previously in the realm of pure science fiction. Using computer systems to infer knowledge

GPUs Mine Astronomical Datasets For Golden Insight Nuggets was written by James Cuff at The Next Platform.

Riding the AI Cycle Instead of Building It

We all remember learning to ride a bike. Those early wobbly moments with “experts” holding on to your seat while you furiously pedaled and tugged away at the handlebars, trying to find your own balance.

Training wheels were the obvious hardware choice for those unattended and slightly dangerous practice sessions. Training wheel hardware was often installed by your then “expert” in an attempt to avoid your almost inevitable trip to the ER. Eventually, one day, often without planning, you no longer needed the support and could make it all happen on your own.

Today, AI and ML needs this

Riding the AI Cycle Instead of Building It was written by James Cuff at The Next Platform.

Deep Learning In R: Documentation Drives Algorithms

Hard to believe, but the R programming language has been with us since 1993.

A quarter century has now passed since the authors Gentleman and Ihaka originally conceived the R platform as an implementation of the S programming language.

Continuous global software development has taken the concepts originally inspired by John Chambers’ S in 1975 and extended them to include parallel computing, bioinformatics, social science, and more recently complex AI and deep learning methods. Layers have been built on top of layers, and today’s R looks nothing like 1990s R.

So where are we at, especially with the emerging opportunities

Deep Learning In R: Documentation Drives Algorithms was written by James Cuff at The Next Platform.

New Approaches to Optimizing Workflow Automation

Workflow automation has been born of necessity and has evolved an increasingly sophisticated set of tools to manage the growing complexity of the automation itself.

The same theme keeps emerging across the broader spectrum of enterprise and research IT. For instance, we spoke recently about the need to profile software and algorithms when billions of events per iteration are generated from modern GPU systems. This is a similar challenge, and fortunately not all traditional or physical business processes fall into this scale bucket. Many are much less data intensive, but can have such a critical impact in “time to

New Approaches to Optimizing Workflow Automation was written by James Cuff at The Next Platform.

Getting to the Heart of HPC and AI at the Edge in Healthcare

For more than a decade, GE has partnered with Nvidia to support its healthcare devices. Increasing demand for high quality medical imaging and mobile diagnostics alone has built a $4 billion segment of the $19 billion total life sciences budget within GE Healthcare.

This year at the GPU Technology Conference (GTC18), The Next Platform sat in as Keith Bigelow, GM & SVP of Analytics, and Erik Steen, Chief Engineer at GE Healthcare, discussed the challenges of deploying AI, focusing on cardiovascular ultrasound imaging.

There are a wide range of GPU accelerated medical devices as well as those that

Getting to the Heart of HPC and AI at the Edge in Healthcare was written by James Cuff at The Next Platform.

Mounting Complexity Pushes New GPU Profiling Tools

The more things change, the more they remain the same — as do the two most critical issues for successful software execution. First, you remove the bugs, then you profile. And while debugging and profiling are not new, they are needed now more than ever, albeit in a modernized form.

The first performance analysis tools appeared on IBM platforms in the early 1970s. These profilers were based on timer interrupts that recorded “status words” at predetermined intervals in an attempt to detect “hot spots” inside running code.

Profiling is even more critical today,

Mounting Complexity Pushes New GPU Profiling Tools was written by James Cuff at The Next Platform.

There is No Such Thing as Easy AI — But We’re Getting Closer

The dark and mysterious art of artificial intelligence and machine learning is neither straightforward nor easy. AI systems have been termed “black boxes” for this reason for decades. We desperately continue to present ever larger, more unwieldy datasets to increasingly sophisticated “mystery algorithms” in our attempts to rapidly infer and garner new knowledge.

How can we try to make all of this just a little easier?

Hyperscalers with multi-million dollar analytics teams have access to vast, effectively unlimited compute and storage of all shapes and sizes. Huge teams of analysts, systems managers, and resilience and reliability experts are standing up

There is No Such Thing as Easy AI — But We’re Getting Closer was written by James Cuff at The Next Platform.

Practical Computational Balance: Contending with Unplanned Data

In part one of our series on reaching computational balance, we described how computational complexity is increasing exponentially. Unfortunately, data and storage follow an identical trend.

The challenge of balancing compute and data at scale remains constant. Because providers and consumers don’t have access to “the crystal ball of demand prediction”, the appropriate computational response to vast, unpredictable amounts of highly variable complex data becomes unintentionally unplanned.

We must address computational balance in a world barraged by vast and unplanned data.

Before starting any discussion of data balance, it is important to first remind ourselves of scale.  Small

Practical Computational Balance: Contending with Unplanned Data was written by James Cuff at The Next Platform.

Spinning the Bottleneck for Data, AI, Analytics and Cloud

High performance computing experts came together recently at Stanford for their annual HPC Advisory Council Meeting to share strategies after what has been an interesting year in supercomputing thus far. 

As always, there was a vast amount of material covering everything from interconnects to containerized compute. In the midst of this, The Next Platform noted an obvious and critical thread over the two days: how best to map infrastructure to software in order to reduce the “computational back pressure” associated with new “data heavy” AI workloads.

In the “real world” back pressure results from a bottleneck as opposed to desired

Spinning the Bottleneck for Data, AI, Analytics and Cloud was written by James Cuff at The Next Platform.

Hardware as a Service: The New Missing Middle?

Computing used to be far away.

It was accessed via remote command terminals, through time-sliced services. It was a pretty miserable experience. During the personal computing revolution, computing once again became local. It would fit under your desk, or in a small dedicated “computer room”. You could touch it. It was, once more, a happy and contented time for computer users. The computer was personal again. There was a clue in the name.

However, as complexity grew and networks improved, computing was effectively taken away again and placed once more in cold, dark rooms far, far away for

Hardware as a Service: The New Missing Middle? was written by James Cuff at The Next Platform.

Striking Practical Computational Balance

The phenomenal complexity of computing is not decreasing. Charts of growth, investment, and scale continue to climb exponentially.

But how is computational balance to be maintained with any level of objectivity under such extreme circumstances? How do we plan for this known, and yet highly unknown challenge of building balanced systems to operate at scale? The ever more bewildering set of options (e.g. price lists now have APIs) may, if not managed with utmost care, result in chaos and confusion.

This first in a series of articles will set some background and perspective on the

Striking Practical Computational Balance was written by James Cuff at The Next Platform.