Author Archives: Timothy Prickett Morgan
Every successive processor generation presents its own challenges to chip makers, and the ramp of the 14 nanometer processes that will be used in the future “Skylake” Xeon processors, due in the second half of this year, cut into the operating profits of Intel’s Data Center Group in the final quarter of 2016. Intel also apparently had an issue with one of its chip lines – it did not say if it was a Xeon or Xeon Phi, or detail what that issue was – that needed to be fixed, and that hurt Data Center Group’s middle line, too.
Still, …
Skylake Xeon Ramp Cuts Into Intel’s Datacenter Profits was written by Timothy Prickett Morgan at The Next Platform.
It is the first month of a new year, and this is the time that IBM traditionally reorganizes its business lines and plays musical chairs with its executives to reconfigure itself for the coming year. And just like clockwork, late last week the top brass at Big Blue made internal announcements explaining the changes it is making to transform its wares into a platform better suited to the times.
The first big change, and one that may have precipitated all of the others that have been set in place, involves Robert LeBlanc, who is the senior vice president …
IBM Reorg Forges Cognitive Systems, Merges Cloud And Analytics was written by Timothy Prickett Morgan at The Next Platform.
For a long time now, researchers have been working on automating the process of breaking up otherwise single-threaded code to run on multiple processors by way of multiple threads. The results, although occasionally successful, have eluded anything approaching a unified theory of everything.
Still, there appears to be some interesting success via OpenMP. The good thing about OpenMP is that its developers realized that what is really necessary is for the C or Fortran programmer to provide just enough hints to the compiler that say “Hey, this otherwise single-threaded loop, this sequence of code, might benefit from being split amongst multiple …
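To make that concrete, here is a minimal sketch in C of the kind of hint being described; the loop, array sizes, and names are our own illustration, not code from the article. The single pragma is all the programmer adds, and the compiler and runtime decide how to split the iterations across threads.

#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    /* Fill the inputs serially. */
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 2.0;
    }

    /* The hint: this otherwise single-threaded loop may be split
       across however many threads the OpenMP runtime provides. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f using up to %d threads\n", c[N - 1], omp_get_max_threads());
    return 0;
}

Built with an OpenMP-aware compiler (for example, gcc -fopenmp), the loop runs in parallel; built without the flag, the pragma is ignored and the same code simply runs single-threaded, which is part of OpenMP’s appeal.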
Multi-Threaded Programming By Hand Versus OpenMP was written by Timothy Prickett Morgan at The Next Platform.
The HPC industry has been waiting a long time for the ARM ecosystem to mature enough to yield real-world clusters, with hundreds or thousands of nodes and running a full software stack, as a credible alternative to clusters based on X86 processors. But the wait is almost over, particularly if the Mont-Blanc 3 system that will be installed by the Barcelona Supercomputing Center is any indication.
BSC has never been shy about trying new architectures in its clusters, and the original Mare Nostrum super that was installed a decade ago and that ranked fifth on the Top 500 list when it …
BSC’s Mont Blanc 3 Puts ARM Inside Bull Sequana Supers was written by Timothy Prickett Morgan at The Next Platform.
It takes an incredible amount of resilience for any company to make it decades, much less more than a century, in any industry. IBM has taken big risks to create new markets, first with time clocks and meat slicers and tabulating machines early in the last century, and some decades later it created the modern computer industry with the System/360 mainframe. It survived a near-death experience in the middle 1990s when the IT industry was changing faster than it was, and now it is trying to find its footing in cognitive computing and public and private clouds as its legacy …
The New IBM Glass Is Almost Half Full was written by Timothy Prickett Morgan at The Next Platform.
Rumors have been running around for months that Hewlett Packard Enterprise was shopping around for a way to be a bigger player in the hyperconverged storage arena, and the recent scuttlebutt was that HPE was considering paying close to $4 billion for one of the larger players in server-storage hybrids. This turns out to not be true. HPE is paying only $650 million to snap up what was, until now, thought to be one of Silicon Valley’s dozen or so unicorns with over a $1 billion valuation.
It is refreshing to see that HPE is not overpaying for an …
HPE Gets Serious About Hyperconverged Storage With SimpliVity Buy was written by Timothy Prickett Morgan at The Next Platform.
In the prior two articles in this series, we have gone through the theory behind programming multi-threaded applications, with the management of shared memory being accessed by multiple threads, and of even creating those threads in the first place. Now, we need to put one such multi-threaded application together and see how it works. You will find that the pieces fall together remarkably easily.
If we wanted to build a parallel application using multiple threads, we would likely first think of one where we split up a loop amongst the threads. We will be looking at such later in a …
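As a taste of what splitting a loop amongst threads by hand looks like, here is a minimal sketch using POSIX threads in C; it is our own illustration under assumed sizes and thread counts, not the code developed later in the article. Each thread sums its own contiguous slice of an array into a private slot, so nothing needs a lock until the main thread combines the partial results.

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double data[N];
static double partial[NTHREADS];

/* Each thread sums one contiguous slice of the array into its own slot. */
static void *sum_slice(void *arg)
{
    long t     = (long)arg;
    long begin = t * (N / NTHREADS);
    long end   = (t == NTHREADS - 1) ? N : begin + (N / NTHREADS);
    double sum = 0.0;

    for (long i = begin; i < end; i++)
        sum += data[i];
    partial[t] = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    double total = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, sum_slice, (void *)t);
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];
    }

    printf("total = %f\n", total);   /* Expect 1000000.0 */
    return 0;
}

Compiled with -pthread, the program always prints a total of 1000000 because each element contributes exactly once, no matter how the operating system schedules the slices.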
The Essence Of Multi-Threaded Applications was written by Timothy Prickett Morgan at The Next Platform.
Object storage is not a new concept, but this type of storage architecture is beginning to garner more attention from large organisations as they grapple with the difficulties of managing increasingly large volumes of unstructured data gathered from applications, social media, and myriad other sources.
The properties of object-based storage systems mean that they can scale easily to handle hundreds or even thousands of petabytes of capacity if required. Throw in the fact that object storage can be less costly in terms of management overhead (somewhere around 20 percent so that means needing to buy 20 percent less capacity …
On Premises Object Storage Mimics Big Public Clouds was written by Timothy Prickett Morgan at The Next Platform.
Choosing the right interconnect for high-performance compute and storage platforms is critical for achieving the highest possible system performance and overall return on investment.
Over time, interconnect technologies have become more sophisticated and include more intelligent capabilities (offload engines), which enable the interconnect to do more than just transfer data. An intelligent interconnect can increase system efficiency; an interconnect with offload engines (an offload interconnect) dramatically reduces CPU overhead, allowing more CPU cycles to be dedicated to applications and therefore enabling higher application performance and user productivity.
Today, interconnect technology has become more critical than ever before, due to a number …
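One way to picture what offload buys is the classic overlap of communication and computation. The sketch below is our own illustration in C with MPI, not taken from the article: it posts a non-blocking ring exchange and keeps the CPU busy on local work while the transfer is in flight, which is exactly the window in which an offload engine can move the data without stealing host cycles.

#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    static double sendbuf[N], recvbuf[N], local[N];
    MPI_Request reqs[2];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;            /* Send to the next rank in a ring.  */
    int prev = (rank - 1 + size) % size;     /* Receive from the previous rank.   */

    for (int i = 0; i < N; i++)
        sendbuf[i] = rank;

    /* Post the exchange and return immediately; with an offload-capable
       interconnect the transfer proceeds without burning host CPU cycles. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Useful local computation overlaps the in-flight communication. */
    for (int i = 0; i < N; i++)
        local[i] = sendbuf[i] * 2.0;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    if (rank == 0)
        printf("recvbuf[0] = %f, local[0] = %f\n", recvbuf[0], local[0]);

    MPI_Finalize();
    return 0;
}

The more of that exchange the network adapter can handle on its own, the more of the loop in the middle the CPU actually gets done while the messages move.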
The Interplay Of HPC Interconnects And CPU Utilization was written by Timothy Prickett Morgan at The Next Platform.
This is the second article in the series on the essentials of multiprocessor programming. This time around, we are going to look at some of the normally little-considered effects of having memory shared by a lot of processors and by the work concurrently executing there.
We start, though, by observing that there has been quite a market for parallelizing applications even when they do not share data. There has been remarkable growth of applications capable of using multiple distributed memory systems for parallelism, and interestingly that very nicely demonstrates the opportunity that exists for using massive compute capacity …
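To give one small, hypothetical example of the kind of shared-memory effect this installment is concerned with (ours, not the article’s): when several threads update the same counter, plain increments can silently lose updates, while atomic increments force the hardware to serialize the read-modify-write.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

static long        racy_count   = 0;   /* Plain shared variable         */
static atomic_long atomic_count = 0;   /* Atomically updated variable   */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        racy_count++;                          /* Read-modify-write race */
        atomic_fetch_add(&atomic_count, 1);    /* Serialized by hardware */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, bump, NULL);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);

    /* The atomic counter always reads 4000000; the racy one usually does not. */
    printf("racy = %ld, atomic = %ld\n", racy_count, atomic_load(&atomic_count));
    return 0;
}

Run on a machine with several cores, the racy counter typically comes up short of the expected 4,000,000 while the atomic one never does; that gap is the price and the point of sharing memory among processors.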
What Is SMP Without Shared Memory? was written by Timothy Prickett Morgan at The Next Platform.
In the ideal hyperscaler and cloud world, there would be one processor type with one server configuration and it would run any workload that could be thrown at it. Earth is not an ideal world, though, and it takes different machines to run different kinds of workloads.
In fact, if Google is any measure – and we believe that it is – then the number of different types of compute that needs to be deployed in the datacenter to run an increasingly diverse application stack is growing, not shrinking. It is the end of the General Purpose Era, which began …
Why Google Is Driving Compute Diversity was written by Timothy Prickett Morgan at The Next Platform.
Hewlett Packard Enterprise is not just a manufacturer that takes components from Intel and assembles them into systems. The company also has a heritage of innovating, and it was showing off its new datacenter architecture research and development testbed, dubbed The Machine, as 2016 came to a close.
While The Machine had originally attracted considerable attention as a vehicle for HPE to commercialize memristors, it is a much broader architectural testbed. This first generation of hardware can use any standard DDR4 DIMM-based memory, volatile or non-volatile. And while large, non-volatile memory pools are interesting research targets, HPE realizes that it …
HPE Powers Up The Machine Architecture was written by Timothy Prickett Morgan at The Next Platform.
One near constant that you have been seeing in the pages of The Next Platform is that the downside of a slowing rate of increase in the speed of new processors is offset by the upside of having a lot more processing elements in a device. While the performance of programs running on individual processors might not be speeding up as much as we might like, today’s systems instead deliver huge capacity increases by having tens through many thousands of processors.
You have also seen in these pages that our usual vanilla view of a processor or …
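As a trivial, Linux-specific sketch of how a program can see those processing elements (our illustration, not the article’s), the operating system will happily report how many logical processors it exposes:

#define _GNU_SOURCE   /* _SC_NPROCESSORS_* are glibc extensions, not strict POSIX */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Logical processors configured in, and currently online for, this system. */
    long configured = sysconf(_SC_NPROCESSORS_CONF);
    long online     = sysconf(_SC_NPROCESSORS_ONLN);

    printf("logical processors: %ld configured, %ld online\n", configured, online);
    return 0;
}

On a modern two-socket server that count easily runs into the dozens, which is the per-node piece of the tens through many thousands of processors mentioned above.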
The Essentials Of Multiprocessor Programming was written by Timothy Prickett Morgan at The Next Platform.
Processing for server compute got more general purpose over the past two decades, but it is now seeing a resurgence of built-for-purpose chips. Network equipment makers have made their own specialized chips as well as buying merchant chips of varying kinds to meet very specific switching and routing needs.
Of the chip upstarts that are competing against industry juggernaut Cisco Systems, Arista Networks stands out as the company that decided from its founding in 2009 to rely only on merchant silicon for switches and to differentiate on speed to market and software functionality and commonality across many different switch ASICs with its …
Arista Gives Tomahawk 25G Ethernet Some XPliant Competition was written by Timothy Prickett Morgan at The Next Platform.
All-flash arrays are still new enough to be somewhat exotic but are – finally – becoming mainstream. As we have been saying for years, there will come a point where enterprises, which have much more complex and unpredictable workloads than hyperscalers and do not need the exabytes of capacity of cloud builders, will just throw in the towel on disk drives, move to all flash, and be done with it.
To be sure, disk drives will persist for many years to come, but perhaps not for as long as many disk drive makers and flash naysayers had expected – …
All Flash No Longer Has To Compete With Disk was written by Timothy Prickett Morgan at The Next Platform.
Just as no single type of microprocessor can handle every kind of compute job, the diversity of networking tasks in the datacenters of the world requires a variety of different switch and router ASICs to best manage those tasks.
As the volume leader in the switching arena, Broadcom comes under intense competitive pressure and has to keep on its toes to provide enough variety in its switch chips to keep its rivals at bay. One way that Broadcom does this is by having two distinct switch ASIC lines.
The StrataXGS line of chips has the famous and ubiquitous …
Buffers Sometimes Beat Bandwidth For Networks was written by Timothy Prickett Morgan at The Next Platform.
Depending on how you want to look at it, the half dozen companies that have aspired to bring ARM architecture to the datacenter through chips designed specifically to run server workloads are either very late to market or very early. The opportunity to take on Intel was arguably many years ago, when the world’s largest chip maker was weaker, and yet despite all of the excitement and hype, no one could get an ARM chip into the field that clearly and cleanly competed against Intel’s Xeons and did so publicly with design wins that generated real volumes that took a …
Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns was written by Timothy Prickett Morgan at The Next Platform.
If you happen to believe that spending on core IT infrastructure is a leading indicator of the robustness of national economies and of the global economy that is stitched together from them, somewhat piecemeal like a patchwork quilt, then the third quarter sales and shipments of servers are probably sounding a note of caution for you.
It certainly does for us here at The Next Platform. But it is important, particularly if we have in fact hit the peak of the X86 server market as we mused about three months ago, to not get carried away. A slowdown in spending …
Is This A Server Slowdown, Or Increasing Efficiency? was written by Timothy Prickett Morgan at The Next Platform.
When it comes to supercomputing, you not only have to strike while the iron is hot, you also have to spend while the money is available. And that fact is what often determines the technologies that HPC centers deploy as they expand the processing and storage capacity of their systems.
A good case in point is the MareNostrum 4 hybrid cluster that the Barcelona Supercomputing Center, one of the flagship research and computing institutions in Europe, has just commissioned IBM to build with the help of partners Lenovo and Fujitsu. The system balances the pressing need for more general purpose computing …
BSC Keeps Its HPC Options Open With MareNostrum 4 was written by Timothy Prickett Morgan at The Next Platform.
We all spend a lot of time talking about the massive supercomputers that cultivate new architectures, and precisely for that reason. But it is the more modest clusters that make use of these technologies many years hence that actually cultivate a healthy and vibrant HPC market.
Lenovo picked up a substantial HPC business when it acquired IBM’s System x server division two years ago and also licensed key software, such as the Platform Computing stack and the GPFS file system, to drive its own HPC agenda. The Sino-American system maker has been buoyed by higher volume manufacturing thanks to the …
Lenovo Drives HPC From The Middle Ground was written by Timothy Prickett Morgan at The Next Platform.