Category Archives for "The Next Platform"

Weather Forecasting Gets A Big Lift In Japan

It has been a long time since the Japan Meteorological Agency deployed the kind of supercomputing oomph for weather forecasting that the island nation would seem to need to improve its forecasts. But JMA, like its peers in the United States, Europe, and India, is investing heavily in new supercomputers to catch up, and specifically has just done a deal with Cray for a pair of XC50 systems that will have 18.2 petaflops of aggregate performance.

This is a lot more compute capacity than JMA has had available to do generic weather forecasting as well as do

Weather Forecasting Gets A Big Lift In Japan was written by Timothy Prickett Morgan at The Next Platform.

The Inevitability Of Death, Taxes, And Clouds

“Death and taxes” is a phrase usually attributed to Benjamin Franklin, from a 1789 letter: “In this world nothing can be said to be certain, except death and taxes.” Public cloud computing providers didn’t exist back in Franklin’s day, but if they had, they would no doubt have made the list. Here’s why. Public clouds for large data analysis, just like death and taxes, are clearly inevitable because of two things: one is a simple and now rather worn-out cliché, which is scale, and the other is slightly more subtle, which is data.

Nation states are racing

The Inevitability Of Death, Taxes, And Clouds was written by James Cuff at The Next Platform.

It’s Called Distributed Computing, Even When It Shouldn’t Be

Success can be its own kind of punishment in this world.

Since the dawn of modern computing 130 years ago with tabulating machines derived from looms, there have always been issues of scale when it comes to compute and storage. While all modern businesses worry about their IT infrastructure and how dependent they are on it, there are special classes of systems at organizations with intense computing and storage demands – and usually severe networking requirements, too – and these organizations of necessity push the boundaries of what can be done, simply because things need to be done.

They have

It’s Called Distributed Computing, Even When It Shouldn’t Be was written by Timothy Prickett Morgan at The Next Platform.

A Hard Rain’s A-Gonna Fall In Public Cloud

Way back in the early days of the commercial Internet, when we all logged into what seemed to be a new service but was actually a quite old one used by academic institutions and government agencies, riding on the backbones of the telecommunications network, there were many, many thousands of Internet service providers that supplied the interface between our computers and the network capacity that was the onramp to the information superhighway.

Most of these ISPs are gone today, and have been replaced by a few major telco, cable, and wireless network operators who provide us with our Internet service.

A Hard Rain’s A-Gonna Fall In Public Cloud was written by Timothy Prickett Morgan at The Next Platform.

Forging Composable Infrastructure For Memory-Centric Systems

For years, enterprises have wanted to pool and then carve up the myriad resources of the datacenter to enable them to more efficiently run their workloads, reduce power consumption, and improve utilization rates. It takes what seems like an endless series of technology advances to move toward this goal. But, ever so slowly, we are getting there.

Virtualization that started in the servers flowed into the storage realm and eventually into the network, and converged systems mashing up virtual compute and virtual networking soon begat hyperconverged infrastructure, which added in virtual storage – one of the fastest growing segments

Forging Composable Infrastructure For Memory-Centric Systems was written by Jeffrey Burt at The Next Platform.

Getting Logical About Cavium ThunderX2 Versus Intel Skylake

Any processor that hopes to displace the Xeon as the engine of choice for general purpose compute has to do one of two things, and we would argue both: It has to be a relatively seamless replacement for a Xeon processor inside of existing systems, much as the Opteron was back in the early 2000s, and it has to offer compelling advantages that yield better performance per dollar per watt per unit of space in a rack.
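
To make that composite metric concrete, here is a minimal Python sketch of how one might score systems on performance per dollar per watt per rack unit. The node specs below are hypothetical placeholders of our own invention, not measured ThunderX2 or Skylake figures.

```python
# Illustrative only: scoring servers on a composite metric of
# performance per dollar per watt per rack unit. The specs below are
# hypothetical placeholders, not ThunderX2 or Skylake data.

def value_metric(gflops: float, price_usd: float, watts: float, rack_units: int) -> float:
    """Higher is better: gigaflops per dollar per watt per rack unit."""
    return gflops / (price_usd * watts * rack_units)

nodes = {
    "hypothetical Arm node":  dict(gflops=1100.0, price_usd=8_000.0,  watts=400.0, rack_units=1),
    "hypothetical Xeon node": dict(gflops=1200.0, price_usd=12_000.0, watts=450.0, rack_units=1),
}

for name, spec in nodes.items():
    print(f"{name}: {value_metric(**spec):.2e} GF/$/W/RU")
```

Whichever chip delivers the higher score under this kind of normalization wins the argument, which is why sticker price and power draw matter as much as raw throughput.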

The “Vulcan” ThunderX2 chips, at least based on the initial information that is available in the wake of their launch, appear to do

Getting Logical About Cavium ThunderX2 Versus Intel Skylake was written by Timothy Prickett Morgan at The Next Platform.

A Revival in Custom Hardware For Accelerated Genomics

Building custom processors and systems to annotate chunks of DNA is not a new phenomenon, but given the increasing complexity of genomics as well as the explosion in demand, the trend is being revived.
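
For context on the workload this class of hardware accelerates, here is a minimal Python sketch of Smith-Waterman local alignment scoring, the classic dynamic programming kernel for sequence comparison that custom genomics accelerators have traditionally parallelized in silicon. The scoring parameters here are arbitrary illustrations, not any vendor's defaults.

```python
def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Best local alignment score between sequences a and b.

    Classic O(len(a) * len(b)) dynamic program; accelerators speed it
    up by evaluating many cells, or many sequence pairs, in parallel.
    """
    cols = len(b) + 1
    prev = [0] * cols   # scores for the previous row of the DP table
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

print(smith_waterman_score("TGTTACGG", "GGTTGACTA"))  # small demo
```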

Those who have been around this area over the last couple of decades will recall that back in 2000, the then Celera Genomics acquired Paracel Genomics (an accelerator and software company that at the time had annual sales of $14.2 million) for $250 million. Paracel had a system called GeneMatcher that was able to fit 7,000 processors into a box that could compete with over

A Revival in Custom Hardware For Accelerated Genomics was written by James Cuff at The Next Platform.

HPE Buys Its Way Into Virtual Networking With Plexxi

It is safe to say that companies that have traditionally built server, storage, and switch hardware have had a tough time finding their place in a world that is increasingly allergic to appliances and wants everything to come as software that customers have more control over. Even those vendors that are innovating at the hardware level have a heavy software hook, and no hardware vendor can leave itself in the position of just shifting boxes if it hopes to have a profitable business.

Hence the recent acquisitions by both Dell and Hewlett Packard Enterprise. Dell, of course, shelled out a

HPE Buys Its Way Into Virtual Networking With Plexxi was written by Jeffrey Burt at The Next Platform.

CoreOS Is New Linux, Not A RHEL Classic Killer

One of the most important lessons in marketing is that you don’t change something that is working, but that you also have to be able to carefully and cautiously innovate to protect against changing tastes or practices that might also spell doom for the business.

More than three decades ago, Coca-Cola famously made a mistake in trying to push New Coke, a different formula that was sweeter and more like Pepsi, as a replacement for the original Coke, which had to be brought back as Coke Classic and which, eventually, killed off New Coke completely. Coca-Cola was not just changing things for

CoreOS Is New Linux, Not A RHEL Classic Killer was written by Timothy Prickett Morgan at The Next Platform.

Chewing A Billion By Billion Matrix Crammed Into Gigabytes Of Memory

Sometimes, to appreciate a new technology or technique, we have to get into the weeds a bit. As such, this article is somewhat more technical than usual. But the key message is that new libraries called ExaFMM and HiCMA give researchers the ability to operate on billion-by-billion matrices using machines containing only gigabytes of memory, which gives scientists a rather extraordinary new ability to take on really big data problems.
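
To see why that is remarkable, consider that a dense billion-by-billion matrix in double precision would occupy roughly 8 exabytes; the only way to work with such an operator in gigabytes is to exploit its data sparsity. Below is a minimal NumPy sketch of the low-rank block compression idea that this class of library builds on – an illustration of the general technique, not the actual ExaFMM or HiCMA API.

```python
import numpy as np

n, k = 2048, 16                      # block size and target rank
rng = np.random.default_rng(0)

# Build a numerically low-rank block; off-diagonal blocks of smooth
# kernel matrices (gravity, electrostatics, covariance) behave this way.
block = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

# Compress via truncated SVD, keeping only k singular triplets.
U, s, Vt = np.linalg.svd(block, full_matrices=False)
Uk = U[:, :k] * s[:k]                # n x k factor
Vk = Vt[:k, :].T                     # n x k factor

print(f"dense block:   {block.nbytes / 2**20:6.1f} MiB")
print(f"low-rank pair: {(Uk.nbytes + Vk.nbytes) / 2**20:6.1f} MiB")

# Operate on the factors without ever re-forming the dense block.
x = rng.standard_normal(n)
err = np.linalg.norm(block @ x - Uk @ (Vk.T @ x)) / np.linalg.norm(block @ x)
print(f"relative matvec error: {err:.1e}")
```

Applied blockwise across a hierarchical partitioning of the whole matrix, this is how gigabytes of memory can stand in for what would otherwise be exabytes.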

King Abdullah University of Science and Technology (KAUST) has been enhancing the ecosystem of numerical tools for multi-core and many-core processors. The effort, which is a collaboration between KAUST,

Chewing A Billion By Billion Matrix Crammed Into Gigabytes Of Memory was written by Timothy Prickett Morgan at The Next Platform.

Nvidia Hitting On All GPU Cylinders

Even if Nvidia had not pursued a GPU compute strategy in the datacenter a decade and a half ago, the company would have turned in one of the best periods in its history as the first quarter of fiscal 2019 came to a close on April 29.

As it turns out, though, the company has a fast-growing HPC, AI, and cryptocurrency compute business that runs alongside its core gaming GPU, visualization, and professional graphics businesses, and Nvidia is booming. That is a six-cylinder engine of commerce, unless you break AI into training and inference (which is sensible), and

Nvidia Hitting On All GPU Cylinders was written by Timothy Prickett Morgan at The Next Platform.

Tearing Apart Google’s TPU 3.0 AI Coprocessor

Google did its best to impress this week at its annual I/O conference. While Google rolled out a bunch of benchmarks that were run on its current Cloud TPU instances, based on TPUv2 chips, the company divulged only a few skimpy details about its next generation TPU chip and its systems architecture. The company changed from version notation (TPUv2) to revision notation (TPU 3.0) with the update, but ironically the details we have assembled suggest that the step from TPUv2 to what we will call TPUv3 probably isn’t that big; it should probably be called TPU v2r5 or something like that.

Tearing Apart Google’s TPU 3.0 AI Coprocessor was written by Timothy Prickett Morgan at The Next Platform.

What Qualcomm’s Exit From Arm Server Chips Means

Broadcom may not have wanted to be in the Arm server chip business any more, but its machinations since it was acquired by Avago Technologies two years ago have certainly sent ripples through that nascent market. The company sent one set of ripples through that market in the wake of the Broadcom acquisition, and now it looks like it is doing so again with Qualcomm.

Before it shelled out a stunning $37 billion to buy Broadcom, best known for its datacenter switch ASICs but also an Arm server chip wannabe at the time, Avago was a conglomerate that made chips for optical networking, server networking, and storage controllers

What Qualcomm’s Exit From Arm Server Chips Means was written by Timothy Prickett Morgan at The Next Platform.

Hitachi Pulls Itself Together In The Datacenter

Hitachi is a massive multinational conglomerate with more than 300,000 employees and 950 subsidiaries and a reach that extends into a wide array of industries, from aircraft and automotive systems to telecommunications, construction, defense, and financial services. It is also among the world’s largest IT companies, nestled in there among the likes of Apple, Amazon, Microsoft, Google, and Samsung. Hitachi’s sprawling technology capabilities range from compute and storage appliances in its well-known Hitachi Data Systems (HDS) unit to datacenter management software, data management and business intelligence, and the Internet of Things.

For the past several years, the company

Hitachi Pulls Itself Together In The Datacenter was written by Jeffrey Burt at The Next Platform.

IBM Rounds Out Power9 Systems For HPC, Analytics

Back in the early 1990s, when IBM was having its near-death experience as the mainframe business faltered, Unix systems were making huge inroads into the datacenter, and client/server computing was pulling work off central systems and onto PCs, the company was on the ropes and probably close to bankruptcy. At the time, the Wall Street Journal ran a central A1 column story in which a bunch of CIOs who were unhappy with Big Blue were brutally honest about how they felt.

One of them – and we have never been able to forget this quote – who had moved to other

IBM Rounds Out Power9 Systems For HPC, Analytics was written by Timothy Prickett Morgan at The Next Platform.

The Pantheon Of Services In Nutanix Acropolis

While there is a battle of sorts going on between hyperconverged architectures and disaggregated ones, it is probably safe to assume that, at the scale most enterprises run, they couldn’t care less which one they choose so long as the architecture does what they need to support their applications. Enterprises will find uses for hyperconverged platforms that merge virtual compute and virtual storage for many years to come, but we would also bet that over the long haul, compute and storage will be disaggregated and connected by fast and vast networks because, frankly, that is how Google

The Pantheon Of Services In Nutanix Acropolis was written by Timothy Prickett Morgan at The Next Platform.

AI Redefines Performance Requirements At The Edge

In a broad sense, the history of computing is the constant search for the ideal system architecture. Over the last few decades, system architects have continually shifted back and forth between centralized configurations, where computational resources are located far from the user, and distributed architectures, where processing resources are located closer to the individual user.

Early systems used a highly centralized model to deliver increased computational power and storage capabilities to users spread across the enterprise. During the 1980s and 1990s, those centralized architectures gave way to the rise of low cost PCs and the emergence of LANs and then

AI Redefines Performance Requirements At The Edge was written by Timothy Prickett Morgan at The Next Platform.

Blending An Elixir Of Quantum And AI For Better Healthcare

Chocolate and peanut butter, tea and scones, gin and tonic: they’re all great combinations, and today we have a new binary mixture, quantum and AI. Do they actually mix well together? Quadrant, a new spinout from D-Wave Systems, certainly seems to think so.

D-Wave has been in the quantum computing business since 1999, raising in excess of $200 million from Goldman Sachs, Bezos Expeditions, and others, and it now lists the likes of Google, NASA, Los Alamos National Laboratory, and Volkswagen as examples of its signature customers. Quadrant is basically the new AI play from the

Blending An Elixir Of Quantum And AI For Better Healthcare was written by James Cuff at The Next Platform.

AI Frameworks And Hardware: Who Is Using What?

The world of AI software is quickly evolving. New applications are coming on the scene on almost a daily basis, and now is a good time to try to get a handle on what people are really doing with machine learning and other AI techniques and where they might be headed.

In our first two articles trying to assess what is happening out there in the enterprise when it comes to AI – Lagging In AI? Don’t Worry, It’s Still Early and New AI Being Mostly Used To Solve Old Problems – we discussed how real-world users are approaching AI

AI Frameworks And Hardware: Who Is Using What? was written by Timothy Prickett Morgan at The Next Platform.

ThunderX2 Arms Hyperscale And HPC Compute

In the long run, networking chip giant and one-time server chip wannabe Broadcom might regret selling off its “Vulcan” 64-bit Arm chip business to Cavium, soon to be part of Marvell. The ThunderX2 processors based on the Vulcan designs have been tweaked by Cavium and have been enthusiastically tire-kicked by hyperscalers and HPC centers alike, and are looking like the front runner as a competitor to the X86 architecture for these customers.

The 32-core Vulcan variants of the ThunderX2, which we detailed last November, are getting their own coming out party in San Francisco now that they

ThunderX2 Arms Hyperscale And HPC Compute was written by Timothy Prickett Morgan at The Next Platform.