Author Archives: Timothy Prickett Morgan
An increasing amount of the world’s data is encapsulated in images and video, and by its very nature this data is difficult and extremely compute intensive to index and search, compared to the relative ease with which we can do so with the textual information that has heretofore dominated both our corporate and consumer lives.
Initially, we had to index images by hand and it is with these datasets that the hyperscalers pushed the envelope with their image recognition algorithms, evolving neural network software on CPUs and radically improving it with a jump to …
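As a rough sketch of the kind of GPU-accelerated image search described here, the snippet below uses Faiss, the open source similarity search library from Facebook, to find nearest neighbors among dense vectors such as image embeddings; the dimensionality and the random placeholder data are assumptions for illustration, not details from the article.

```python
# A minimal sketch, assuming a GPU build of Faiss and synthetic data
# standing in for image embeddings; nothing here is taken from the
# article itself.
import numpy as np
import faiss

d = 128                                                # embedding dimensionality (assumed)
xb = np.random.random((100000, d)).astype("float32")   # "database" vectors to be indexed
xq = np.random.random((5, d)).astype("float32")        # query vectors

index = faiss.IndexFlatL2(d)                           # brute-force L2 index built on the CPU
gpu_res = faiss.StandardGpuResources()                 # scratch memory and streams for one GPU
gpu_index = faiss.index_cpu_to_gpu(gpu_res, 0, index)  # move the index to GPU 0

gpu_index.add(xb)                                      # index the database vectors
distances, neighbors = gpu_index.search(xq, 5)         # five nearest neighbors per query
print(neighbors)
```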
Facebook Pushes The Search Envelope With GPUs was written by Timothy Prickett Morgan at The Next Platform.
Increasing parallelism is the only way to get more work out of a system. Architecting for that parallelism requires a lot of rethinking of each and every component in a system to make everything hum along as efficiently as possible.
There are lots of ways to skin the parallelism cat and squeeze more performance out of the system while using less energy, and for DRAM memory, just stacking things up helps. But according to some research done at Stanford University, the University of Texas, and GPU maker Nvidia, there is another way to boost performance and lower energy consumption. The …
Squeezing The Joules Out Of DRAM, Possibly Without Stacking was written by Timothy Prickett Morgan at The Next Platform.
Ethernet switching has its own kinds of Moore’s Law barriers. The transition from 10 Gb/sec to 100 Gb/sec devices over the past decade has been anything but smooth, and a lot of compromises had to be made to even get to the interim – and unusual – 40 Gb/sec stepping stone towards the 100 Gb/sec devices that are ramping today in the datacenter.
While 10 Gb/sec Ethernet switching is fine for a certain class of enterprise applications that are not bandwidth hungry, for the hyperscalers and cloud builders, 100 Gb/sec is nowhere near enough bandwidth, and 200 Gb/sec, which is …
Upstart Switch Chip Maker Tears Up The Ethernet Roadmap was written by Timothy Prickett Morgan at The Next Platform.
ARM has rearchitected its multi-core chips so they can better compete in a world where computing needs are becoming more specialized.
The new DynamIQ architecture will provide flexible compute, with up to eight different cores in a single cluster on a system on a chip. Each core can run at a different clock speed so a company making an ARM SoC can tailor the silicon to handle multiple workloads at varying power efficiencies. The DynamIQ architecture also adds faster access to accelerators for artificial intelligence or networking jobs, and a resiliency that allows it to be used in robotics, autonomous …
New ARM Architecture Offers A DynamIQ Response To Compute was written by Timothy Prickett Morgan at The Next Platform.
In the datacenter, flash memory took off first as a caching layer between the processors, with their cache memories and main memory, and the ridiculously slow disk drives that hang off the PCI-Express bus on the systems. It wasn’t until the price of flash came way down and the capacities of flash cards and drives went up that companies could think about going completely to flash for some, much less all, of their workloads.
So it will be with Intel’s Optane 3D XPoint non-volatile memory, which Intel is starting to roll out in its initial datacenter-class SSDs and will eventually deliver …
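To make the caching-tier idea concrete, here is a minimal, generic sketch of a read-through cache sitting between an application and a slow backing store; it illustrates the tiering described above in the abstract and is not specific to flash or to Intel’s Optane SSDs.

```python
# A minimal sketch of a read-through cache in front of a slow backing
# store; the store and capacity are made up for illustration.
from collections import OrderedDict

class ReadThroughCache:
    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store   # the slow tier (think disk)
        self.capacity = capacity             # how much fits in the fast tier
        self.cache = OrderedDict()           # the fast tier (think flash or 3D XPoint)

    def get(self, key):
        if key in self.cache:                # hit: serve from the fast tier
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backing_store[key]      # miss: fetch from the slow tier
        self.cache[key] = value
        if len(self.cache) > self.capacity:  # evict the least recently used entry
            self.cache.popitem(last=False)
        return value

# Usage: a plain dict stands in for the slow tier.
disk = {"block-%d" % i: "data-%d" % i for i in range(10)}
cache = ReadThroughCache(disk, capacity=4)
print(cache.get("block-3"))   # first read comes from the slow tier
print(cache.get("block-3"))   # second read is served from the cache
```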
Like Flash, 3D XPoint Enters The Datacenter As Cache was written by Timothy Prickett Morgan at The Next Platform.
The hyperscalers of the world are increasingly dependent on machine learning algorithms for providing a significant part of the user experience and operations of their massive applications, so it is not much of a surprise that they are also pushing the envelope on machine learning frameworks and systems that are used to deploy those frameworks. Facebook and Microsoft were showing off their latest hybrid CPU-GPU designs at the Open Compute Summit, and they provide some insight into how to best leverage Nvidia’s latest “Pascal” Tesla accelerators.
Not coincidentally, the specialized systems that have been created for supporting machine learning workloads …
Open Hardware Pushes GPU Computing Envelope was written by Timothy Prickett Morgan at The Next Platform.
With over 700 employees, Cineca is Italy’s largest and most advanced high performance computing (HPC) center, channeling its systems expertise to benefit organizations across the nation. Made up of six Italian research institutions, 70 Italian universities, and the Italian Ministry of Education, Cineca is a privately held, non-profit consortium.
The team at Cineca dedicates itself to tackling the greatest computational challenges faced by public and private companies and research institutions. With so many organizations depending on Italy’s HPC centers, Cineca relies on Intel® technologies to reliably and efficiently further the country’s innovations in scientific computing, web and networking-based services, big data …
Cineca’s HPC Systems Tackle Italy’s Biggest Computing Challenges was written by Timothy Prickett Morgan at The Next Platform.
The HPC community is trying to solve the critical compute challenges of next generation high performance computing, and ARM considers itself well-positioned to act as a catalyst in this regard. Applications like machine learning and scientific computing are driving demands for orders of magnitude improvements in capacity, capability, and efficiency to achieve exascale computing for next generation deployments.
ARM has been taking a co-design approach with the ecosystem from silicon to system design to application development to provide innovative solutions that address this challenge. The recent Allinea acquisition is one example of ARM’s commitment to HPC, but ARM has worked …
ARM Antes Up For An HPC Software Stack was written by Timothy Prickett Morgan at The Next Platform.
Having a proliferation of server makes and models over a span of years in the datacenter is not a huge deal for most enterprises. They cope with the diversity because they support a diversity of applications and can kind of keep things isolated; moreover, IT may be integral to their product or service, but it is usually not the actual product or service that they sell.
Not so with hyperscalers and cloud builders. For them, the IT is the product, and keeping things as monolithic and consistent as possible lowers the cost of goods purchased through higher volumes and …
A Peek Inside Facebook’s Server Fleet Upgrade was written by Timothy Prickett Morgan at The Next Platform.
The rumors have been running around for years, and they turned out to be true. Microsoft, the world’s largest operating system supplier and still the dominant seller of systems software for the datacenter, has indeed been working for years on a port of its Windows Server 2016 operating system to the ARM server chip architecture.
The rumors about Windows Server on ARM started in earnest back in October 2014, which was just before Qualcomm threw its hat into the ARM server ring and when Cavium and Applied Micro were in the market and starting to plan the generation of chips …
Windows Server Comes To ARM Chips, But Only For Azure was written by Timothy Prickett Morgan at The Next Platform.
If you want real competition among vendors who supply stuff to you, then sometimes you have to make it happen by yourself. The hyperscalers and big cloud builders of the world can do that, and increasingly they are taking the initiative and fostering such competition for compute.
With its first generation of Open Cloud Servers, which were conceptualized in 2012, put into production for its Azure public cloud in early 2013, and open sourced through the Open Compute Project in January 2014, Microsoft decided to leverage the power of the open source hardware community to make its own server …
ARM And AMD X86 Server Chips Get Mainstream Lift From Microsoft was written by Timothy Prickett Morgan at The Next Platform.
Ever so slowly, and not so fast as to give competitor Intel too much information about what it is up to, but just fast enough to build interest in the years of engineering smarts that have gone into its forthcoming “Naples” X86 server processor, AMD is lifting the veil on the product that will bring it back into the datacenter and that will bring direct competition to the Xeon platform that dominates modern computing infrastructure.
It has been a bit of a rolling thunder revelation of information about the Zen core used in the “Naples” server chip, the brand of …
How AMD’s Naples X86 Server Chip Stacks Up To Intel’s Xeons was written by Timothy Prickett Morgan at The Next Platform.
There are not a lot of second chances in the IT racket. AMD wants one and, we think, has earned one.
Such second chances are hard to come by, and we can rattle off a few of them because they are so rare. Intel pivoted from a memory maker to a processor maker in the mid-1980s, and has come to dominate compute in everything but handheld devices. In the mid-1990s, IBM failed to understand the RISC/Unix and X86 server waves swamping the datacenter, nearly went bankrupt, and salvaged itself as a software and services provider to glass houses. A decade …
Naples Opterons Give AMD A Second Chance In Servers was written by Timothy Prickett Morgan at The Next Platform.
In the server virtualization era, there were a handful of virtual machine formats and hypervisors to match them, and despite the desire for a common VM format, the virtual server stacks got siloed into ESXi, KVM, Xen, and Hyper-V stacks with a sprinkling of PowerVM, Solaris containers and LDOMs, and VM/ESA partitions on top.
With containers, the consensus has been largely to support the Docker format that was inspired by the foundational Linux container work done by Google, and Docker, the company, was the early and enthusiastic proponent of the Docker way of doing containers.
Now, Docker …
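As a small illustration of what the Docker way of doing containers looks like from code, here is a sketch using the Docker SDK for Python; it assumes a local Docker daemon is available, and the image and command are arbitrary examples rather than anything from the article.

```python
# A minimal sketch using the Docker SDK for Python (docker-py); a
# running local Docker daemon is assumed.
import docker

client = docker.from_env()                       # talk to the local Docker daemon

# Run a throwaway container from a small image; with detach=False (the
# default) the container's output is returned as bytes.
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,                                 # clean the container up afterwards
)
print(output.decode().strip())
```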
Docker Reaches The Enterprise Milestone was written by Timothy Prickett Morgan at The Next Platform.
So, who was the biggest revenue generator, and who showed the largest growth in sales, for servers in the final quarter of 2016? Was it Hewlett Packard Enterprise? Was it Dell Technologies? Was it IBM or Cisco Systems or one of the ODMs? Nope. It was the Others category, comprised of dozens of vendors that sit outside of the top tier OEMs we know by name and the collective ODMs of the world that some of us know by name.
This is a sign that the server ecosystem is getting more diverse under pressure as the technical and economic climate changes …
Server Makers Try To Adapt To A Harsher Climate was written by Timothy Prickett Morgan at The Next Platform.
There is an adage, not quite yet old, suggesting that compute is free but storage is not. Perhaps a more accurate and, as far as public clouds are concerned, apt adaptation of this saying might be that computing and storage are free, and so is inbound networking within a region, but moving data across zones in a public cloud is brutally expensive, and it is even more costly spanning regions.
So much so that, at a certain scale, it makes sense to build your own datacenter and create your own infrastructure hardware and software stack that mimics the salient characteristics …
Bouncing Back To Private Clouds With OpenStack was written by Timothy Prickett Morgan at The Next Platform.
With a new generation of Xeon processors coming out later this year from Intel, and AMD trying to get back in the game with its own X86 server chips – they probably will not be called Opterons – it is not a surprise to us that server makers are having a bit of trouble making their numbers in recent months. But we are beginning to wonder if something more might be going on here than the usual pause before a big set of processor announcements.
In many ways, server spending is a leading indicator because when companies are willing to …
Mixed Signals From Server Land was written by Timothy Prickett Morgan at The Next Platform.
There is an old joke that in the post-apocalyptic world that comes about because of plague or nuclear war, only two things will be left alive: cockroaches and Keith Richards, the guitarist for the Rolling Stones. As it hails from New York City, you can understand why Cockroach Labs, the upstart software company that is cloning Google’s Spanner distributed relational database, chose that particular bug to epitomize a system that will stay alive no matter what. But, they could have just as easily called it RichardsDB.
When discussing Google’s cloud implementation of Spanner, which launched in beta earlier this …
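For a sense of how CockroachDB presents itself to applications, here is a minimal sketch in Python; because CockroachDB speaks the PostgreSQL wire protocol, a standard driver such as psycopg2 works, and the host, port, user, database, and table here are assumptions for illustration.

```python
# A minimal sketch of talking to a local CockroachDB node through an
# ordinary PostgreSQL driver; the "bank" database is assumed to exist,
# and the node is assumed to be running in insecure mode on localhost.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=26257,              # CockroachDB's default SQL port
    user="root",
    dbname="bank",           # assumed database for this example
)
conn.set_session(autocommit=True)

cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
cur.execute("UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
cur.execute("SELECT id, balance FROM accounts ORDER BY id")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```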
Google Spanner Inspires CockroachDB To Outrun It was written by Timothy Prickett Morgan at The Next Platform.
The Global Scientific Information and Computing Center at the Tokyo Institute of Technology has been at the forefront of accelerated computing since well before GPUs came along and made acceleration not only cool but affordable and normal. But with its latest system, Tsubame 3.0, being installed later this year, the Japanese supercomputing center is going to lay the hardware foundation for a new kind of HPC application that brings together simulation and modeling and machine learning workloads.
The hot new idea in HPC circles is not just being able to run machine learning workloads side by side with simulations, but to …
Japan Keeps Accelerating With Tsubame 3.0 AI Supercomputer was written by Timothy Prickett Morgan at The Next Platform.
As the world’s dominant supplier of switches and routers into the datacenter and one of the big providers of servers (with a hope of transforming part of that server business into a sizeable hyperconverged storage business), Cisco Systems provides a kind of lens into the glass houses of the world. You can see what companies are doing – and what they are not doing – and watch how Cisco reacts to try to give them what they need while trying to extract the maximum profit out of its customers.
Say what you will, but Cisco has spent the last …
What Bellwether Cisco Reveals About Datacenter Spending was written by Timothy Prickett Morgan at The Next Platform.