Author Archives: Timothy Prickett Morgan

Getting Logical About Cavium ThunderX2 Versus Intel Skylake

Any processor that hopes to displace the Xeon as the engine of choice for general purpose compute has to do one of two things, and we would argue both: It has to be a relatively seamless replacement for a Xeon processor inside of existing systems, much as the Opteron was back in the early 2000s, and it has to offer compelling advantages that yield better performance per dollar per watt per unit of space in a rack.
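To make that composite metric concrete, here is a minimal sketch of how one might score two chips on performance per dollar per watt per rack unit; all of the figures are made-up placeholders, not actual Xeon or ThunderX2 numbers:

```python
# Hypothetical comparison of two server chips on a composite
# performance-per-dollar-per-watt-per-rack-unit metric.
# All figures below are illustrative placeholders, not vendor specs.

def perf_per_dollar_watt_ru(perf, price_usd, watts, rack_units):
    """Composite figure of merit: higher is better."""
    return perf / (price_usd * watts * rack_units)

incumbent  = perf_per_dollar_watt_ru(perf=100.0, price_usd=8000.0, watts=200.0, rack_units=1.0)
challenger = perf_per_dollar_watt_ru(perf=95.0,  price_usd=5000.0, watts=180.0, rack_units=1.0)

print(f"challenger advantage: {challenger / incumbent:.2f}x")
```

The point of folding the terms together is that a challenger can trail slightly on raw performance and still come out ahead once price, power, and density are taken into account.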

The “Vulcan” ThunderX2 chips, at least based on the initial information that is available in the wake of their launch, appear to do

Getting Logical About Cavium ThunderX2 Versus Intel Skylake was written by Timothy Prickett Morgan at The Next Platform.

CoreOS Is New Linux, Not A RHEL Classic Killer

One of the most important lessons in marketing is that you don’t change something that is working, but also that you have to be able to innovate carefully and cautiously to protect against changing tastes or practices that could otherwise spell doom for the business.

More than three decades ago, Coca-Cola famously made a mistake in trying to push New Coke, a different formula that was sweeter and more like Pepsi, as a replacement for the original Coke, which had to be brought back as Coke Classic and which, eventually, killed off New Coke completely. Coca-Cola was not just changing things for

CoreOS Is New Linux, Not A RHEL Classic Killer was written by Timothy Prickett Morgan at The Next Platform.

Chewing A Billion By Billion Matrix Crammed Into Gigabytes Of Memory

Sometimes, to appreciate a new technology or technique, we have to get into the weeds a bit. As such, this article is somewhat more technical than usual. But the key message is that new libraries called ExaFMM and HiCMA give researchers the ability to operate on billion by billion matrices using machines containing only gigabytes of memory, which gives scientists a rather extraordinary new capability to attack really big data problems.
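The trick behind libraries of this kind is hierarchical low-rank compression: off-diagonal blocks of the matrix are stored as slim factor pairs rather than as dense tiles. Here is a minimal Python sketch of that idea using NumPy’s SVD rather than the actual ExaFMM or HiCMA APIs; the block size, rank cap, and tolerance are illustrative assumptions:

```python
# Illustrative only: compress one off-diagonal matrix block to low rank.
# This mimics the idea behind tile low-rank solvers, not any specific API.
import numpy as np

n, rank_cap, tol = 1024, 64, 1e-6    # assumed block size, rank cap, tolerance

# A smooth kernel evaluated between two well-separated point clusters
# produces a block whose numerical rank is far below its dimension.
x = np.linspace(0.0, 1.0, n)
y = np.linspace(10.0, 11.0, n)
block = 1.0 / (np.abs(x[:, None] - y[None, :]) + 1.0)

u, s, vt = np.linalg.svd(block, full_matrices=False)
k = min(rank_cap, int(np.sum(s > tol * s[0])))   # numerical rank at the tolerance

u_k = u[:, :k] * s[:k]                           # n x k factor
v_k = vt[:k, :]                                  # k x n factor

dense_bytes = block.nbytes
lowrank_bytes = u_k.nbytes + v_k.nbytes
print(f"rank {k}: {dense_bytes / 1e6:.1f} MB dense -> {lowrank_bytes / 1e6:.2f} MB compressed")
print("max reconstruction error:", np.max(np.abs(block - u_k @ v_k)))
```

Applied tile by tile across a hierarchy of such blocks, this is roughly how a nominally billion-by-billion operator can be squeezed into gigabytes of memory instead of exabytes.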

King Abdullah University of Science and Technology (KAUST) has been enhancing the ecosystem of numerical tools for multi-core and many-core processors. The effort, which is a collaboration between KAUST,

Chewing A Billion By Billion Matrix Crammed Into Gigabytes Of Memory was written by Timothy Prickett Morgan at The Next Platform.

Nvidia Hitting On All GPU Cylinders

Even if Nvidia had not pursued a GPU compute strategy in the datacenter a decade and a half ago, the company would have turned in one of the best periods in its history as the first quarter of fiscal 2019 came to a close on April 29.

As it turns out, though, the company has a fast-growing HPC, AI, and cryptocurrency compute business that runs alongside its core gaming GPU, visualization, and professional graphics businesses, and Nvidia is booming. That is a six cylinder engine of commerce, unless you break AI into training and inference (which is sensible), and

Nvidia Hitting On All GPU Cylinders was written by Timothy Prickett Morgan at The Next Platform.

Tearing Apart Google’s TPU 3.0 AI Coprocessor

Google did its best to impress this week at its annual IO conference. While Google rolled out a bunch of benchmarks that were run on its current Cloud TPU instances, based on TPUv2 chips, the company divulged only a few skimpy details about its next generation TPU chip and its systems architecture. The company changed from version notation (TPUv2) to revision notation (TPU 3.0) with the update, but ironically the details we have assembled show that the step from TPUv2 to what we will call TPUv3 probably isn’t that big; it should probably be called TPU v2r5 or something like that.

Tearing Apart Google’s TPU 3.0 AI Coprocessor was written by Timothy Prickett Morgan at The Next Platform.

What Qualcomm’s Exit From Arm Server Chips Means

Broadcom may not have wanted to be in the Arm server chip business any more, but its machinations since it was acquired by Avago Technologies two years ago have certainly sent ripples through that nascent market. Avago sent those ripples in the wake of buying Broadcom, and now it looks like it is doing so again with Qualcomm.

Before it shelled out a stunning $37 billion to buy Broadcom, best known for its datacenter switch ASICs but also an Arm server chip wannabe at the time, Avago was a conglomerate that made chips for optical networking, server networking, and storage controllers

What Qualcomm’s Exit From Arm Server Chips Means was written by Timothy Prickett Morgan at The Next Platform.

IBM Rounds Out Power9 Systems For HPC, Analytics

Back in the early 1990s, when IBM was having its near-death experience as the mainframe business faltered, Unix systems were making huge inroads into the datacenter, and client/server computing was pulling work off central systems and onto PCs, the company was on the ropes and probably close to bankruptcy. At the time, the Wall Street Journal ran a central A1 column story in which a bunch of CIOs who were unhappy with Big Blue were brutally honest about how they felt.

One of them – and we have never been able to forget this quote – who had moved to other

IBM Rounds Out Power9 Systems For HPC, Analytics was written by Timothy Prickett Morgan at The Next Platform.

The Pantheon Of Services In Nutanix Acropolis

While there is a battle of sorts going on between hyperconverged architectures and disaggregated ones, it is probably safe to assume that, at the scale most enterprises run, they couldn’t care less which one they choose so long as either architecture does what they need to support applications. Enterprises will find some use for hyperconverged platforms that merge virtual compute and virtual storage for many years to come, but we would also bet that over the long haul, compute and storage will be disaggregated and connected by fast and vast networks because, frankly, that is how Google

The Pantheon Of Services In Nutanix Acropolis was written by Timothy Prickett Morgan at The Next Platform.

AI Redefines Performance Requirements At The Edge

In a broad sense, the history of computing is the constant search for the ideal system architecture. Over the last few decades, system architects have continually shifted back and forth between centralized configurations, where computational resources are located far from the user, and distributed architectures, where processing resources are located closer to the individual user.

Early systems used a highly centralized model to deliver increased computational power and storage capabilities to users spread across the enterprise. During the 1980s and 1990s, those centralized architectures gave way to the rise of low cost PCs and the emergence of LANs and then

AI Redefines Performance Requirements At The Edge was written by Timothy Prickett Morgan at The Next Platform.

AI Frameworks And Hardware: Who Is Using What?

The world of AI software is quickly evolving. New applications are coming on the scene on almost a daily basis, and now is a good time to try to get a handle on what people are really doing with machine learning and other AI techniques and where they might be headed.

In our first two articles trying to assess what is happening out there in the enterprise when it comes to AI – Lagging In AI? Don’t Worry, It’s Still Early and New AI Being Mostly Used To Solve Old Problems – we discussed how real-world users are approaching AI

AI Frameworks And Hardware: Who Is Using What? was written by Timothy Prickett Morgan at The Next Platform.

ThunderX2 Arms Hyperscale And HPC Compute

In the long run, networking chip giant and one-time server chip wannabe Broadcom might regret selling off its “Vulcan” 64-bit Arm chip business to Cavium, soon to be part of Marvell. The ThunderX2 processors based on the Vulcan designs have been tweaked by Cavium and have been enthusiastically tire-kicked by hyperscalers and HPC centers alike, and are looking like the front runner as a competitor to the X86 architecture for these customers.

The 32-core Vulcan variants of the ThunderX2, which we detailed last November, are getting their own coming out party in San Francisco now that they

ThunderX2 Arms Hyperscale And HPC Compute was written by Timothy Prickett Morgan at The Next Platform.

Successful Machine Learning With A Global Data Fabric

One of the most common misconceptions about machine learning is that success is solely due to its dynamic algorithms. In reality, the learning potential of those algorithms and their models is driven by data preparation, staging, and delivery. When suitably fed, machine learning algorithms work wonders. Their success, however, is ultimately rooted in the data logistics.

Data logistics are integral to how sufficient training data is accessed. They determine how easily new models are deployed. They specify how changes in data content can be isolated to compare models. And, they facilitate how multiple models are effectively used as part

Successful Machine Learning With A Global Data Fabric was written by Timothy Prickett Morgan at The Next Platform.

Intel Teaches Quantum Computing 101

A team at Intel, in collaboration with QuTech in the Netherlands, is researching the possibilities of quantum computing to better understand how practical quantum computers can be programmed to impact our lives. Given the research nature and current limitations of quantum computers, particularly in terms of I/O, researchers are focusing on specific types of algorithms.

As you might expect, Intel Labs is focused on applications such as material science and quantum chemistry. Other possible algorithms include parameterized simulations and various combinatorial optimization problems that have a global optimum. It is also worth noting that this research may never come to
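For a sense of what a combinatorial optimization problem with a global optimum looks like, here is a tiny, purely classical sketch using a hypothetical five-node MaxCut instance; it shows only the shape of the problem class, not anything about Intel’s or QuTech’s quantum implementations, and brute force like this is exactly what stops working once the instance gets large:

```python
# Illustrative classical brute force over a tiny MaxCut instance.
# This only demonstrates a combinatorial problem with a global optimum;
# it is not a quantum algorithm or any vendor's workload.
from itertools import product

# Hypothetical weighted edges of a five-node graph: (node_a, node_b, weight)
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (3, 4, 1.0), (4, 0, 2.5), (1, 3, 0.5)]

def cut_value(assignment):
    """Total weight of edges crossing the two-way partition."""
    return sum(w for a, b, w in edges if assignment[a] != assignment[b])

# Enumerate all 2^5 partitions and keep the best one (the global optimum).
best = max(product((0, 1), repeat=5), key=cut_value)
print("best partition:", best, "cut value:", cut_value(best))
```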

Intel Teaches Quantum Computing 101 was written by Timothy Prickett Morgan at The Next Platform.

Capitalizing On Hybrid Cloud In HPC

Cloud computing has become an essential infrastructure strategy for nearly every business. Last year Gartner predicted that demand for infrastructure as a service would increase by 36.8 percent. A 2018 McAfee survey found that 97 percent of organizations are using public cloud services, private cloud services, or both. Similarly, RightScale’s 2018 cloud survey showed that 95 percent of enterprises have a cloud strategy, including 51 percent with a hybrid cloud strategy.

Yet, despite the cloud’s ubiquity, and the fact that HPC in the cloud has been possible for more than a decade – Univa commissioned the very first HPC cluster in AWS

Capitalizing On Hybrid Cloud In HPC was written by Timothy Prickett Morgan at The Next Platform.

New AI Being Mostly Used To Solve Old Problems

In the first article outlining some of the results from our AI survey, we discussed how most customers are just beginning their journey into AI and that very few have actual AI applications in production. In this article, we are going to talk about the whats and whys behind AI. In other words, why customers are looking into AI, what problems they are trying to solve, what they expect to get out of it, and what sort of data they are analyzing.

One of the more interesting aspects of the survey is that it shows how real-world customers are

New AI Being Mostly Used To Solve Old Problems was written by Timothy Prickett Morgan at The Next Platform.

Feeding The Insatiable Bandwidth Beast

Breaking into any part of the IT stack against incumbents with vast installed bases is no easy task. Cutting edge technology is table stakes, and targeting precise customers with specific needs is the only way to get a toehold. It also takes money. Lots of money. Innovium, the upstart Ethernet switch chip maker, has all three and is set to make some inroads among the hyperscalers and cloud builders.

We told you all about Innovium back in March last year, when the company, founded by former networking executives and engineers from Intel and Broadcom, dropped out of stealth and

Feeding The Insatiable Bandwidth Beast was written by Timothy Prickett Morgan at The Next Platform.

Sluggish Moore’s Law Doesn’t Impede Intel One Bit

The demand for compute is so strong among the hyperscalers and cloud builders that nothing seems to be slowing down Intel’s datacenter business. Not the delays in processor rollouts due to the difficulties in ramping 14 nanometer and 10 nanometer processes as the pace of Moore’s Law, with its increasing transistor density and falling chip costs, slows. Not the pretty substantial price increase that accompanies the core scale out and feature expansion in the “Skylake” Xeon SP processors. Not the credible competition from IBM, AMD, Cavium, and Qualcomm.

The sun is shining on the datacenter, and

Sluggish Moore’s Law Doesn’t Impede Intel One Bit was written by Timothy Prickett Morgan at The Next Platform.

The Slow But Sure Return Of AMD In The Datacenter

It has been more than a decade since AMD was a force in computing in the datacenter. For that reason, we have not wasted a lot of time going over the ins and outs of its quarterly financials. But now that the Epyc CPUs and Radeon Instinct GPU accelerators are getting traction among hyperscalers, cloud builders, and selected enterprises, it is time to start keeping an eye on how AMD is doing financially.

With most of the financial analysis that we do here at The Next Platform, we use the middle of the Great Recession, in the first quarter

The Slow But Sure Return Of AMD In The Datacenter was written by Timothy Prickett Morgan at The Next Platform.

Recruiting The Puppet Masters Of Infrastructure

All kinds of convergence are going on in infrastructure these days, with the mashing up of servers and storage or servers and networking, or sometimes all three at once. This convergence is not just occurring at the system hardware or basic system software level. It is also happening up and down the software stack, with a lot of codebases branching out from various starting points and building platforms of one kind or another.

Some platforms stay down at the server hardware level – think of the Cisco Systems UCS blade server, which mashes up servers and networking – while others

Recruiting The Puppet Masters Of Infrastructure was written by Timothy Prickett Morgan at The Next Platform.

Red Hat Gets Serious About Selling Open Source Storage

If there is one consistent complaint about open source software over the past three decades that it has been on the rise, it is that it is too difficult to integrate various components to solve a particular problem because the software is not really enterprise grade stuff. Well, that is two complaints, and really, there are three because even if you can get the stuff integrated and running well, that doesn’t mean you can keep it in that state as you patch and update it. So now we are up to three complaints.

Eventually, all software needs to be packaged

Red Hat Gets Serious About Selling Open Source Storage was written by Timothy Prickett Morgan at The Next Platform.
