Author Archives: Timothy Prickett Morgan

The Interplay Of HPC Interconnects And CPU Utilization

Choosing the right interconnect for high-performance compute and storage platforms is critical for achieving the highest possible system performance and overall return on investment.

Over time, interconnect technologies have become more sophisticated and include more intelligent capabilities (offload engines), which enable the interconnect to do more than just transfer data. An intelligent interconnect can increase system efficiency; an interconnect with offload engines (an offload interconnect) dramatically reduces CPU overhead, allowing more CPU cycles to be dedicated to applications and therefore enabling higher application performance and user productivity.
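To make the offload point concrete, here is a minimal MPI sketch, not taken from the article, of the communication/computation overlap that an offload interconnect lets the host exploit: the non-blocking ring exchange below progresses in the adapter while the CPU keeps computing, whereas an onload design would spend those same cycles driving the network. The buffer size, ring pattern, and placeholder compute kernel are assumptions for illustration only.

```c
/* Minimal sketch (not from the article): overlapping communication and
 * computation with non-blocking MPI. On an offload-capable interconnect
 * the adapter progresses the transfer while the host CPU stays in the
 * compute loop below; on an onload design those cycles go to driving
 * the network instead. Buffer size, ring pattern, and the compute
 * kernel are placeholders. Build with: mpicc overlap.c -o overlap */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;                       /* 1M doubles per exchange */
    double *sendbuf = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));
    double *work    = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) { sendbuf[i] = rank; work[i] = i; }

    int next = (rank + 1) % size;                /* simple ring exchange */
    int prev = (rank - 1 + size) % size;
    MPI_Request reqs[2];

    /* Post the exchange, then compute while the data is in flight. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[1]);

    for (int i = 0; i < N; i++)                  /* placeholder compute kernel */
        work[i] = work[i] * 2.0 + 1.0;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf); free(recvbuf); free(work);
    MPI_Finalize();
    return 0;
}
```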

Today, interconnect technology has become even more critical than ever before, due to a number

The Interplay Of HPC Interconnects And CPU Utilization was written by Timothy Prickett Morgan at The Next Platform.

What Is SMP Without Shared Memory?

This is the second in the series on the essentials of multiprocessor programming. This time around we are going to look at some of the seldom considered effects of having memory shared by a lot of processors and by the work concurrently executing there.
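As a small taste of those effects, here is a minimal sketch, not taken from the article, of false sharing: two threads bumping counters that happen to sit in the same cache line contend for that line, while the same threads bumping padded counters on separate lines do not. The 64-byte line size and the iteration count are assumptions; on most multicore machines the padded run finishes noticeably faster.

```c
/* Minimal sketch (not from the article): false sharing, one of the
 * quietly expensive effects of shared memory. Two threads increment
 * logically independent counters; when the counters share a cache line
 * the cores ping-pong that line between them. The 64-byte line size
 * and iteration count are assumptions. Build with: cc -O2 -pthread */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000L

/* Two layouts: packed counters share a cache line; padded ones do not. */
struct packed { volatile long a; volatile long b; } packed_ctr;
struct padded { volatile long a; char pad[64]; volatile long b; } padded_ctr;

static void *bump_packed_a(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) packed_ctr.a++; return NULL; }
static void *bump_packed_b(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) packed_ctr.b++; return NULL; }
static void *bump_padded_a(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) padded_ctr.a++; return NULL; }
static void *bump_padded_b(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) padded_ctr.b++; return NULL; }

static void run(const char *label, void *(*fa)(void *), void *(*fb)(void *))
{
    pthread_t t1, t2;
    struct timespec start, finish;
    clock_gettime(CLOCK_MONOTONIC, &start);
    pthread_create(&t1, NULL, fa, NULL);
    pthread_create(&t2, NULL, fb, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    clock_gettime(CLOCK_MONOTONIC, &finish);
    double secs = (finish.tv_sec - start.tv_sec) + (finish.tv_nsec - start.tv_nsec) / 1e9;
    printf("%s: %.2f seconds\n", label, secs);
}

int main(void)
{
    run("counters sharing a cache line", bump_packed_a, bump_packed_b);
    run("counters on separate lines   ", bump_padded_a, bump_padded_b);
    return 0;
}
```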

We start, though, by observing that there has been quite a market for parallelizing applications even when they do not share data. There has been remarkable growth of applications capable of using multiple distributed memory systems for parallelism, and interestingly that very nicely demonstrates the opportunity that exists for using massive compute capacity

What Is SMP Without Shared Memory? was written by Timothy Prickett Morgan at The Next Platform.

Why Google Is Driving Compute Diversity

In the ideal hyperscaler and cloud world, there would be one processor type with one server configuration and it would run any workload that could be thrown at it. Earth is not an ideal world, though, and it takes different machines to run different kinds of workloads.

In fact, if Google is any measure – and we believe that it is – then the number of different types of compute that need to be deployed in the datacenter to run an increasingly diverse application stack is growing, not shrinking. It is the end of the General Purpose Era, which began

Why Google Is Driving Compute Diversity was written by Timothy Prickett Morgan at The Next Platform.

HPE Powers Up The Machine Architecture

Hewlett Packard Enterprise is not just a manufacturer that takes components from Intel and assembles them into systems. The company also has a heritage of innovating, and it was showing off its new datacenter architecture research and development testbed, dubbed The Machine, as 2016 came to a close.

While The Machine had originally attracted considerable attention as a vehicle for HPE to commercialize memristors, it is a much broader architectural testbed. This first generation of hardware can use any standard DDR4 DIMM-based memory, volatile or non-volatile. And while large, non-volatile memory pools are interesting research targets, HPE realizes that it

HPE Powers Up The Machine Architecture was written by Timothy Prickett Morgan at The Next Platform.

The Essentials Of Multiprocessor Programming

One near constant that you have been seeing in the pages of The Next Platform is that the downside of a slowing rate of increase in the speed of new processors is offset by the upside of having a lot more processing elements in a device. While the performance of programs running on individual processors might not be speeding up as much as we might like, we instead get huge capacity increases from today’s systems by having tens through many thousands of processors.

You have also seen in these pages that our usual vanilla view of a processor or

The Essentials Of Multiprocessor Programming was written by Timothy Prickett Morgan at The Next Platform.

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition

Processing for server compute has gotten more general purpose over the past two decades, but it is now seeing a resurgence of built-for-purpose chips. Network equipment makers have made their own specialized chips as well as bought merchant chips of varying kinds to meet very specific switching and routing needs.

Of the chip upstarts that are competing against industry juggernaut Cisco Systems, Arista Networks stands out as the company that decided from its founding in 2009 to rely only on merchant silicon for switches and to differentiate on speed to market and software functionality and commonality across many different switch ASICs with its

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition was written by Timothy Prickett Morgan at The Next Platform.

All Flash No Longer Has To Compete With Disk

All-flash arrays are still new enough to be somewhat exotic but are – finally – becoming mainstream. As we have been saying for years, there will come a point where enterprises, which have much more complex and unpredictable workloads than hyperscalers and do not need the exabytes of capacity of cloud builders, just throw in the towel with disk drives and move to all flash and be done with it.

To be sure, disk drives will persist for many years to come, but perhaps not for as long as many disk drive makers and flash naysayers had expected –

All Flash No Longer Has To Compete With Disk was written by Timothy Prickett Morgan at The Next Platform.

Buffers Sometimes Beat Bandwidth For Networks

Just as every kind of compute job cannot be handled by a single type of microprocessor, the diversity of networking tasks in the datacenters of the world requires a variety of different switch and router ASICs to best manage those tasks.

As the volume leader in the switching arena, Broadcom comes under intense competitive pressure and has to keep on its toes to provide enough variety in its switch chips to keep its rivals at bay. One way that Broadcom does this is by having two distinct switch ASIC lines.

The StrataXGS line of chips has the famous and ubiquitous

Buffers Sometimes Beat Bandwidth For Networks was written by Timothy Prickett Morgan at The Next Platform.

Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns

Depending on how you want to look at it, the half dozen companies that have aspired to bring ARM architecture to the datacenter through chips designed specifically to run server workloads are either very late to market or very early. The opportunity to take on Intel was arguably many years ago, when the world’s largest chip maker was weaker, and yet despite all of the excitement and hype, no one could get an ARM chip into the field that clearly and cleanly competed against Intel’s Xeons and did so publicly with design wins that generated real volumes that took a

Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns was written by Timothy Prickett Morgan at The Next Platform.

Is This A Server Slowdown, Or Increasing Efficiency?

If you happen to believe that spending on core IT infrastructure is a leading indicator of the robustness of national economies, and of the global economy that is stitched together from them, somewhat piecemeal like a patchwork quilt, then the third quarter sales and shipments of servers are probably sounding a note of caution for you.

It certainly does for us here at The Next Platform. But it is important, particularly if we have in fact hit the peak of the X86 server market as we mused about three months ago, to not get carried away. A slowdown in spending

Is This A Server Slowdown, Or Increasing Efficiency? was written by Timothy Prickett Morgan at The Next Platform.

BSC Keeps Its HPC Options Open With MareNostrum 4

When it comes to supercomputing, you don’t only have to strike while the iron is hot, you have to spend while the money is available. And that fact is what often determines the technologies that HPC centers deploy as they expand the processing and storage capacity of their systems.

A good case in point is the MareNostrum 4 hybrid cluster that the Barcelona Supercomputing Center, one of the flagship research and computing institutions in Europe, has just commissioned IBM to build with the help of partners Lenovo and Fujitsu. The system balances the pressing need for more general purpose computing

BSC Keeps Its HPC Options Open With MareNostrum 4 was written by Timothy Prickett Morgan at The Next Platform.

Lenovo Drives HPC From The Middle Ground

While we all spend a lot of time talking about the massive supercomputers that cultivate new architectures, and precisely for that reason, it is the more modest cluster that makes use of these technologies many years hence that actually cultivates a healthy and vibrant HPC market.

Lenovo picked up a substantial HPC business when it acquired IBM’s System x server division two years ago and also licensed key software, such as the Platform Computing stack and the GPFS file system, to drive its own HPC agenda. The Sino-American system maker has been buoyed by higher volume manufacturing thanks to the

Lenovo Drives HPC From The Middle Ground was written by Timothy Prickett Morgan at The Next Platform.

HPE Takes On The High End With SGI Expertise

SGI has always had scalable technology that should have been deployed more broadly in the academic, government, and enterprise datacenters of the world. But fighting for those budget dollars at the high end of the market always came down to needing more feet on the street, a larger global footprint for service and support, and broader certification of software stacks to exploit that SGI iron.

Now that Hewlett Packard Enterprise owns SGI – or more precisely, owns its operations in the United States and will finish off its acquisition, announced in August, probably sometime in the middle of next

HPE Takes On The High End With SGI Expertise was written by Timothy Prickett Morgan at The Next Platform.

OpenHPC Pedal Put To The Compute Metal

The ultimate success of any platform depends on the seamless integration of diverse components into a synergistic whole – well, as much as is possible in the real world – while at the same time being flexible enough to allow for components to be swapped out and replaced by others to suit personal preferences.

Is OpenHPC, the open source software stack aimed at simulation and modeling workloads that was spearheaded by Intel a year ago, going to be the dominant and unifying platform for high performance computing? Will OpenHPC be analogous to the Linux distributions that grew up around

OpenHPC Pedal Put To The Compute Metal was written by Timothy Prickett Morgan at The Next Platform.

As The Server Goes, So Goes The World

At this point in the 21st Century, a surprisingly large portion of the manufacturing, warehousing, distribution, marketing, and retailing of every good and service known to humankind is dependent on a piece of circuit board with two Xeon processors welded to it, wrapped in some bent sheet metal with a few blinky lights peeking out of the darkness.

Back in 1885, as the United States was beginning its rise to power, Reverend Josiah Strong declared in his populist book, Our Country: “As America goes, so goes the world.” National borders and national interests still exist, but networks cross boundaries

As The Server Goes, So Goes The World was written by Timothy Prickett Morgan at The Next Platform.

Details Emerge On “Summit” Power Tesla AI Supercomputer

The future “Summit” pre-exascale supercomputer that is being built out in late 2017 and early 2018 for the US Department of Energy for its Oak Ridge National Laboratory looks like a giant cluster of systems that might be used for training neural networks. And that is an extremely convenient development.

More than once during the SC16 supercomputing conference this week in Salt Lake City, the Summit system and its companion “Sierra” system, which will be deployed at Lawrence Livermore National Laboratory, were referred to as “AI supercomputers.” This is a reflection of the fact that the national labs around the

Details Emerge On “Summit” Power Tesla AI Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Networks Drive HPC Harder Than Compute

It is hard to tell which part of the HPC market is more competitive: compute, networking, or storage. From where we sit, there is an increasing level of competition on all three fronts and the pressure to perform, both financially and technically, has never been higher. This is great for customers, of course, who are being presented with lots of technology to choose from. But HPC customers tend to pick architectures for several generations, so there is also pressure on them to make the right choices – whatever that means.

In a sense, enterprises and hyperscalers and cloud builders, who

Networks Drive HPC Harder Than Compute was written by Timothy Prickett Morgan at The Next Platform.

Stacking Up Software To Drive FPGAs Into The Datacenter

Every new hardware device that offers some kind of benefit compared to legacy devices faces the task of overcoming the immense inertia that is imparted to a platform by the software that runs upon it. While FPGAs have so many benefits compared to general purpose CPUs and even GPUs because of their malleability, the inertia seems even heavier.

Any substantial change requires a big payoff to be implemented because change is inherently risky. It doesn’t help that the methods of programming FPGAs efficiently using VHDL and Verilog are so alien to Java and C programmers, and that the tools for

Stacking Up Software To Drive FPGAs Into The Datacenter was written by Timothy Prickett Morgan at The Next Platform.

The Business Of HPC Is Evolving

With most of the year finished and a new one coming up fast, and a slew of new compute and networking technologies ramping for the past year and more on the horizon for a very exciting 2017, now is the natural time to take stock of what has happened in the HPC business and what is expected to happen in the coming years.

The theme of the SC16 supercomputing conference this year is that HPC matters, and of course, we have all known this since the first such machines were distinct from enterprise-class electronic computers back in the 1960s. HPC

The Business Of HPC Is Evolving was written by Timothy Prickett Morgan at The Next Platform.

Intel Sets Up Skylake Xeon For HPC, Knights Mill Xeon Phi For AI

With the “Skylake” Xeon E5 v5 processors not slated until the middle of next year and the “Knights Landing” Xeon Phi processors and Omni-Path interconnect still ramping after entering the HPC space a year ago, there are no blockbuster announcements coming out of Intel this year at the SC16 supercomputing conference in Salt Lake City. But there are some goodies for HPC shops that were unveiled at the event and the chip giant also set the stage for big changes in the coming year in both traditional HPC and its younger and fast-growing sibling, machine learning.

Speaking ahead of the

Intel Sets Up Skylake Xeon For HPC, Knights Mill Xeon Phi For AI was written by Timothy Prickett Morgan at The Next Platform.
