Author Archives: Timothy Prickett Morgan

Fujitsu Takes On IBM Power9 With Sparc64-XII

While a lot of the applications in the world run on clusters of systems with a relatively modest amount of compute and memory compared to NUMA shared memory systems, big iron persists and large enterprises want to buy it. That is why IBM, Fujitsu, Oracle, Hewlett Packard Enterprise, Inspur, NEC, Unisys, and a few others are still in the big iron racket.

Fujitsu and its reseller partner – server maker, database giant, and application powerhouse Oracle – have made a big splash at the high end of the systems space with a very high performance processor, the Sparc64-XII, and a

Fujitsu Takes On IBM Power9 With Sparc64-XII was written by Timothy Prickett Morgan at The Next Platform.

Intel “Kaby Lake” Xeon E3 Sets The Server Cadence

The tick-tock-clock three step dance that Intel will be using to progress its Core client and Xeon server processors in the coming years is on full display now that the Xeon E3-1200 v6 processors based on the “Kaby Lake” design have been unveiled.

The Kaby Lake chips are Intel’s third generation of Xeon processors that are based on its 14 nanometer technologies, and as our naming convention for Intel’s new way of rolling out chips suggests, it is a refinement of both the architecture and the manufacturing process that, by and large, enables Intel to ramp up the clock speed on

Weaving Together Flash For Nearly Unlimited Scale

It is almost a foregone conclusion that when it comes to infrastructure, the industry will follow the lead of the big hyperscalers and cloud builders, building a foundation of standardized hardware for serving, storing, and switching, and implementing as much functionality and intelligence as possible in the software on top of that, allowing the stack to scale up and costs to come down as it does.

The reason this works is that these companies have complete control of their environments, from the processors and memory in the supply chain to the Linux kernel and software stack maintained by hundreds to

Intel Vigorously Defends Chip Innovation Progress

With absolute dominance in datacenter and desktop compute, considerable sway in datacenter storage, a growing presence in networking, and profit margins that are the envy of the manufacturing and tech sectors alike, it is not a surprise that companies are gunning for Intel. They all talk about how Moore’s Law is dead and how that removes a significant advantage for the world’s largest – and most profitable – chip maker.

After years of this, the top brass in Intel’s Technology and Manufacturing Group as well as its former chief financial officer, who is now in charge of its manufacturing, operations,

Use Optane Memory Like A Hyperscaler

The ramp for Intel’s Optane 3D XPoint memory, which sits between DDR4 main memory and flash or disk storage, or beside main memory, in the storage hierarchy, is going to shake up the server market. And maybe not in the ways that Intel and its partner, Micron Technology, anticipate.

Last week, Intel unveiled its first Optane 3D XPoint solid state cards and drives, which are now being previewed by selected hyperscalers and which will be rolling out in various capacities and form factors in the coming quarters. As we anticipated, and as Intel previewed last fall, the company is

Facebook Pushes The Search Envelope With GPUs

An increasing amount of the world’s data is encapsulated in images and video, and by its very nature this data is difficult and extremely compute intensive to index and search, compared to the relative ease with which we can do so with the textual information that has heretofore dominated both our corporate and consumer lives.

Initially, we had to index images by hand and it is with these datasets that the hyperscalers pushed the envelope with their image recognition algorithms, evolving neural network software on CPUs and radically improving it with a jump to

Squeezing The Joules Out Of DRAM, Possibly Without Stacking

Increasing parallelism is the only way to get more work out of a system. Architecting for that parallelism requires a lot of rethinking of each and every component in a system to make everything hum along as efficiently as possible.

There are lots of ways to skin the parallelism cat and squeeze more performance and less energy out of the system, and for DRAM memory, just stacking things up helps, but according to some research done at Stanford University, the University of Texas, and GPU maker Nvidia, there is another way to boost performance and lower energy consumption. The

Upstart Switch Chip Maker Tears Up The Ethernet Roadmap

Ethernet switching has its own kinds of Moore’s Law barriers. The transition from 10 Gb/sec to 100 Gb/sec devices over the past decade has been anything but smooth, and a lot of compromises had to be made to even get to the interim – and unusual – 40 Gb/sec stepping stone towards the 100 Gb/sec devices that are ramping today in the datacenter.

While 10 Gb/sec Ethernet switching is fine for a certain class of enterprise applications that are not bandwidth hungry, for the hyperscalers and cloud builders, 100 Gb/sec is nowhere near enough bandwidth, and 200 Gb/sec, which is

New ARM Architecture Offers A DynamIQ Response To Compute

ARM has rearchitected its multi-core chips so they can better compete in a world where computing needs are becoming more specialized.

The new DynamIQ architecture will provide flexible compute, with up to eight different cores in a single cluster on a system on a chip. Each core can run at a different clock speed so a company making an ARM SoC can tailor the silicon to handle multiple workloads at varying power efficiencies. The DynamIQ architecture also adds faster access to accelerators for artificial intelligence or networking jobs, and a resiliency that allows it to be used in robotics, autonomous

Like Flash, 3D XPoint Enters The Datacenter As Cache

In the datacenter, flash memory took off first as a caching layer sitting between processors – with their cache memories and main memory – and the ridiculously slow disk drives in systems, with the flash hanging off the PCI-Express bus. It wasn’t until the price of flash came way down and the capacities of flash cards and drives went up that companies could think about going completely to flash for some, much less all, of their workloads.

So it will be with Intel’s Optane 3D XPoint non-volatile memory, which Intel is starting to roll out in its initial datacenter-class SSDs and will eventually deliver

Open Hardware Pushes GPU Computing Envelope

The hyperscalers of the world are increasingly dependent on machine learning algorithms for providing a significant part of the user experience and operations of their massive applications, so it is not much of a surprise that they are also pushing the envelope on machine learning frameworks and systems that are used to deploy those frameworks. Facebook and Microsoft were showing off their latest hybrid CPU-GPU designs at the Open Compute Summit, and they provide some insight into how to best leverage Nvidia’s latest “Pascal” Tesla accelerators.

Not coincidentally, the specialized systems that have been created for supporting machine learning workloads

Cineca’s HPC Systems Tackle Italy’s Biggest Computing Challenges

With over 700 employees, Cineca is Italy’s largest and most advanced high performance computing (HPC) center, channeling its systems expertise to benefit organizations across the nation. Composed of six Italian research institutions, 70 Italian universities, and the Italian Ministry of Education, Cineca is a privately held, non-profit consortium.

The team at Cineca dedicates itself to tackling the greatest computational challenges faced by public and private companies and research institutions. With so many organizations depending on Italy’s HPC centers, Cineca relies on Intel® technologies to reliably and efficiently further the country’s innovations in scientific computing, web and networking-based services, big data

ARM Antes Up For An HPC Software Stack

The HPC community is trying to solve the critical compute challenges of next generation high performance computing, and ARM considers itself well-positioned to act as a catalyst in this regard. Applications like machine learning and scientific computing are driving demands for orders of magnitude improvements in capacity, capability, and efficiency to achieve exascale computing for next generation deployments.

ARM has been taking a co-design approach with the ecosystem from silicon to system design to application development to provide innovative solutions that address this challenge. The recent Allinea acquisition is one example of ARM’s commitment to HPC, but ARM has worked

A Peek Inside Facebook’s Server Fleet Upgrade

Having a proliferation of server makes and models over a span of years in the datacenter is not a huge deal for most enterprises. They cope with the diversity because they support a diversity of applications and can kind of keep things isolated; moreover, IT may be integral to their product or service, but it is usually not the actual product or service that they sell.

Not so with hyperscalers and cloud builders. For them, the IT is the product, and keeping things as monolithic and consistent as possible lowers the cost of goods purchased through higher volumes and

Windows Server Comes To ARM Chips, But Only For Azure

The rumors have been running around for years, and they turned out to be true. Microsoft, the world’s largest operating system supplier and still the dominant seller of systems software for the datacenter, has indeed been working for years on a port of its Windows Server 2016 operating system to the ARM server chip architecture.

The rumors about Windows Server on ARM started in earnest back in October 2014, which was just before Qualcomm threw its hat into the ARM server ring and when Cavium and Applied Micro were in the market and starting to plan the generation of chips

ARM And AMD X86 Server Chips Get Mainstream Lift From Microsoft

If you want real competition among vendors who supply stuff to you, then sometimes you have to make it happen by yourself. The hyperscalers and big cloud builders of the world can do that, and increasingly they are taking the initiative and fostering such competition for compute.

With its first generation of Open Cloud Servers, which were conceptualized in 2012, put into production for its Azure public cloud in early 2013, and open sourced through the Open Compute Project in January 2014, Microsoft decided to leverage the power of the open source hardware community to make its own server

How AMD’s Naples X86 Server Chip Stacks Up To Intel’s Xeons

Ever so slowly, and not so fast as to give competitor Intel too much information about what it is up to, but just fast enough to build interest in the years of engineering smarts that have gone into its forthcoming “Naples” X86 server processor, AMD is lifting the veil on the product that will bring it back into the datacenter and that will bring direct competition to the Xeon platform that dominates modern computing infrastructure.

It has been a bit of a rolling thunder revelation of information about the Zen core used in the “Naples” server chip, the brand of

Naples Opterons Give AMD A Second Chance In Servers

There are not a lot of second chances in the IT racket. AMD wants one and, we think, has earned one.

Such second chances are hard to come by, and we can rattle off a few of them because they are so rare. Intel pivoted from a memory maker to a processor maker in the mid-1980s, and has come to dominate compute in everything but handheld devices. In the mid-1990s, IBM failed to understand the RISC/Unix and X86 server waves swamping the datacenter, nearly went bankrupt, and salvaged itself as a software and services provider to glass houses. A decade

Docker Reaches The Enterprise Milestone

In the server virtualization era, there were a couple of virtual machine formats and hypervisors to match them, and despite the desire for a common VM format, the virtual server stacks got siloed into ESXi, KVM, Xen, and Hyper-V stacks, with a sprinkling of PowerVM, Solaris containers and LDOMs, and VM/ESA partitions on top.

With containers, the consensus has been largely to support the Docker format that was inspired by the foundational Linux container work done by Google, and Docker, the company, was the early and enthusiastic proponent of the Docker way of doing containers.

Now, Docker

Server Makers Try To Adapt To A Harsher Climate

So, who was the biggest revenue generator, and showing the largest growth in sales, for servers in the final quarter of 2016? Was it Hewlett Packard Enterprise? Was it Dell Technologies? Was it IBM or Cisco Systems or one of the ODMs? Nope. It was the Others category, composed of dozens of vendors that sit outside of the top tier OEMs we know by name and the collective ODMs of the world that some of us know by name.

This is a sign that the server ecosystem is getting more diverse under pressure as the technical and economic climate changes
