
Category Archives for "IT Industry"

FICO CIO on the Costs, Concerns of Cloud Transition

Moving large-scale enterprise operations into the cloud is not a decision to be made lightly. There are engineering and financial considerations, and the process of weighing the costs, pros, and cons of such a move is significantly more complex than simply comparing the expense of running a workload on-premises with running it in a public cloud.

Still, the trend is toward businesses making the move to one degree or another, driven by the ability to scale up or down easily depending on the workload and to pay only for the infrastructure resources they use, without having to put up the capital expense to

FICO CIO on the Costs, Concerns of Cloud Transition was written by Nicole Hemsoth at The Next Platform.

Faster Machine Learning in a World with Limited Memory

Achieving acceptable training times for GPU-accelerated machine learning on very large datasets has long been a challenge, in part because options are limited by constrained on-board GPU memory.

For those training against massive volumes of data (many millions to billions of examples) using cloud infrastructure, the impetus to pare down training time is greater than ever, given per-hour instance costs and the premium for cloud-based GPU acceleration on hardware with more memory (the pricier Nvidia P100 with 16 GB of memory versus a more standard 8 GB GPU instance). Since hardware limitations are not

Faster Machine Learning in a World with Limited Memory was written by Nicole Hemsoth at The Next Platform.
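
As a rough, hypothetical illustration of the tradeoff described in the excerpt above: total spend is simply instance-hours multiplied by the hourly rate, so a pricier, larger-memory GPU instance can still come out cheaper if it cuts training time enough. The rates and run times in this sketch are placeholders, not figures from the article.

    # Hypothetical back-of-envelope comparison of training cost on two cloud
    # GPU instance types; all prices and run times are illustrative only.
    def total_cost(hourly_rate_usd: float, training_hours: float) -> float:
        """Total spend for one training run billed per instance-hour."""
        return hourly_rate_usd * training_hours

    # Assume the 16 GB P100 instance costs more per hour but finishes the same
    # job sooner because larger batches and models fit in GPU memory.
    cost_8gb_gpu = total_cost(hourly_rate_usd=0.90, training_hours=30)    # $27.00
    cost_p100_16gb = total_cost(hourly_rate_usd=1.50, training_hours=14)  # $21.00

    print(f"8 GB GPU instance:   ${cost_8gb_gpu:.2f}")
    print(f"16 GB P100 instance: ${cost_p100_16gb:.2f}")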

Disaggregated Or Hyperconverged, What Storage Will Win The Enterprise?

It has been a long time coming, but hyperconverged storage pioneer Nutanix is finally letting go of hardware, shifting from being a server-storage hybrid appliance maker to a company that sells software providing hyperconverged functionality on whatever hardware large enterprises typically buy.

The move away from selling appliances is something The Next Platform has been encouraging Nutanix to do to broaden its market appeal, but until the company reached a certain level of customer demand, Nutanix had to restrict its hardware support matrix so it could affordably put a server-storage stack in the field and not

Disaggregated Or Hyperconverged, What Storage Will Win The Enterprise? was written by Timothy Prickett Morgan at The Next Platform.

When POSIX I/O Meets Exascale, Do the Old Rules Apply?

We’ve all grown up in a world of digital filing cabinets. POSIX I/O has enabled code portability and extraordinary advances in computation, but it is limited by its design and the way it mirrors the paper offices that it has replaced.

The POSIX API and its implementations assume that we know roughly where our data is, that accessing it is reasonably quick, and that all versions of the data are the same. As we move to exascale, we need to let go of this model and embrace a sea of data and a very different way of handling it.
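
To make those assumptions concrete, here is a minimal sketch, in Python against a throwaway temporary file, of the access pattern the POSIX model takes for granted: data lives at a known path, byte-offset access is assumed to be cheap, and every reader sees the same bytes once a write completes.

    import os
    import tempfile

    # A throwaway file standing in for "we know roughly where our data is."
    path = os.path.join(tempfile.mkdtemp(), "output.dat")

    fd = os.open(path, os.O_CREAT | os.O_WRONLY)
    os.write(fd, b"x" * 8192)           # the writer persists data at a known path
    os.close(fd)

    fd = os.open(path, os.O_RDONLY)     # reopen by path
    os.lseek(fd, 4096, os.SEEK_SET)     # byte-addressable access, assumed cheap
    chunk = os.read(fd, 1024)           # every reader sees the same version
    os.close(fd)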

In

When POSIX I/O Meets Exascale, Do the Old Rules Apply? was written by Nicole Hemsoth at The Next Platform.

Cloudera Puffs Up Analytics Database For Clouds

In many ways, public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform can be the great equalizers, giving enterprises access to computing and storage resources that they may not be able to afford to bring into their on-premises environments. Given the new compute-intensive workloads like data analytics and machine learning, and the benefits they can bring to modern businesses, this access to cloud-based platforms is increasingly critical to large enterprises.

Cloudera for several years has been pushing its software offerings – such as Data Science Workbench, Analytic DB, Operational DB, and Enterprise Data Hub –

Cloudera Puffs Up Analytics Database For Clouds was written by Jeffrey Burt at The Next Platform.

The Booming Server Market In The Wake Of Skylake

The slowdown in server sales ahead of Intel’s July launch of the “Skylake” Xeon SP was real, and if the figures from the third quarter of this year are any guide, then it looks like that slump is over. Plenty of customers wanted the shiny new Skylake gear, and we think a fair number of them also wanted to buy older-generation “Broadwell” Xeons and the “Grantley” server platform given the premium that Intel is charging for Skylake processors and their “Purley” platform.

Server makers with older Broadwell machinery in the barn were no doubt happy to oblige customers and clear

The Booming Server Market In The Wake Of Skylake was written by Timothy Prickett Morgan at The Next Platform.

Getting Hyper And Converged About Storage

Hyperconverged infrastructure is a relatively small but fast-growing part of the datacenter market, driven in large part by enterprises looking to simplify and streamline their environments as they tackle increasingly complex workloads.

Like converged infrastructure, hyperconverged offerings are modular in nature, combining compute, storage, networking, virtualization, and management software into a tightly integrated single solution that drives greater datacenter density, smaller footprints, rapid deployment, and lower costs. They are pre-built and pre-validated before shipping from the factory, eliminating the need for the user to do the time-consuming integration work. Hyperconverged merges the compute and storage into a single unit, and

Getting Hyper And Converged About Storage was written by Jeffrey Burt at The Next Platform.

The Systems of the Future Will Be Conversational

It’d be difficult to downplay the impact Amazon Web Services has had on the computing industry over the past decade. Since launching in 2006, Amazon’s cloud computing division has set the pace in the public cloud market, rapidly growing its capabilities from its first service – Simple Storage Service (S3) – to thousands of services that touch on everything from compute instances to databases, storage, application development, and emerging technologies like machine learning and data analytics.

The company has become dominant by offering organizations of all sizes a way of simply accessing

The Systems of the Future Will Be Conversational was written by Nicole Hemsoth at The Next Platform.

Reinventing the FPGA Programming Wheel

For several years, GPU acceleration matched with Intel Xeon processors dominated the hardware news at the annual Supercomputing Conference. This year, however, that trend shifted in earnest, with a major coming-out party for Arm servers in HPC and more attention than ever paid to FPGAs as potential accelerators for future exascale systems.

The SC series held two days of lightning-round presentations on the state of FPGAs for future supercomputers, with insight from academia, vendors, and end users at scale, including Microsoft. To say Microsoft is an FPGA user is a bit of an understatement, however, since

Reinventing the FPGA Programming Wheel was written by Nicole Hemsoth at The Next Platform.

Inside Nvidia’s Next-Gen Saturn V AI Cluster

Generally speaking, the world’s largest chip makers have been pretty secretive about the giant supercomputers they use to design and test their devices, although occasionally, Intel and AMD have provided some insight into their clusters.

We have no idea what kind of resources Nvidia has for its EDA systems – we are trying to get some insight into that – but we do know that it has just upgraded a very powerful supercomputer that advances the state of the art in artificial intelligence and is also doing double duty on some aspects of its chip design business.

As part of

Inside Nvidia’s Next-Gen Saturn V AI Cluster was written by Timothy Prickett Morgan at The Next Platform.

The Redemption Of NFS

If thinking of NFS v4 leaves a bad taste in your mouth, you are not alone. Or wrong. NFS v4.0 and v4.1 have had some valid, well-documented growing pains, including limited bandwidth and scalability. These issues were the result of a failure to properly address performance in the v4.0 release.

File systems are the framework upon which the entire house is built, so these performance issues were not trivial problems for us in IT. Thanks to the dedication of the NFS developer community, NFS v4.2 solves the problems of v4.0 and v4.1 and also introduces a host of

The Redemption Of NFS was written by Timothy Prickett Morgan at The Next Platform.

Hospitals Untangling Infrastructure from Deep Learning Projects

Medical imaging is one area where hospitals have invested significantly in on-premises infrastructure to support diagnostic analysis.

These investments have been stepped up in recent years with ever more complex frameworks for analyzing scans, but as the cloud continues to mature, the build-versus-buy hardware question gets more complicated. This is especially true with the addition of deep learning for medical imaging in more hospital settings—something that adds more hardware and software heft to an already top-heavy stack.

Earlier this week, we talked about the medical imaging revolution that is being driven forward by GPU accelerated deep learning, but as it

Hospitals Untangling Infrastructure from Deep Learning Projects was written by Nicole Hemsoth at The Next Platform.

AWS Flexes Cloud Muscles with Host of New Additions

Tech vendors often like to boast about being first movers in a particular market, saying that leading the charge puts them at a great advantage over their competitors. It doesn’t always work that way, but sometimes it does.

A case in point is Amazon Web Services (AWS), which officially launched in 2006 with the release of the Simple Storage Service (S3) after several years of development, and with it kicked off what is now the fast-growing and increasingly crowded public cloud space. Eleven years later, AWS owns just over 44 percent of the market, according to CEO Andy Jassy, pointing

AWS Flexes Cloud Muscles with Host of New Additions was written by Nicole Hemsoth at The Next Platform.

The Battle Of The InfiniBands

When it comes to HPC, compute is like the singers in a rock band, making all of the noise and soaking up most of the attention. But the network that lashes the compute together is the beat of the drums and the thump of the bass that keeps everything in sync and allows the harmonies of the singers to come together at all.

In this analogy, it is not clear what HPC storage is. It might be the van that moves the instruments from town to town, plus the roadies who live in the van that set up

The Battle Of The InfiniBands was written by Timothy Prickett Morgan at The Next Platform.

Cramming The Cosmos Into A Shared Memory Supercomputer

The very first Superdome Flex shared memory system has gone out the door at Hewlett Packard Enterprise, and it is going to an HPC center that has big memory needs as it tries to understand the universe that we inhabit.

Earlier this month, Hewlett Packard Enterprise unveiled the newest addition to its NUMA iron, the Superdome Flex, which is an upgrade from the SGI UV 300 platform that HPE sought when it bought SGI for $275 million last year. As we outlined in The Next Platform at the time, the system can scale from four to 32 sockets, is powered by

Cramming The Cosmos Into A Shared Memory Supercomputer was written by Jeffrey Burt at The Next Platform.

The Ecosystem Expands For AMD Epyc Servers

The “Naples” Epyc server processors do not exactly present a new architecture from a new processor maker, but given the difficulties that AMD had at the tail end of the Opteron line a decade ago and its long dormancy in the server space, it is almost like AMD had to start back at the beginning to gain the trust of potential server buyers.

Luckily for AMD, and its Arm server competitors Qualcomm and Cavium, there is intense pressure from all aspects of high-end computing – internal applications and external ones at hyperscalers and some cloud builders as well as enterprises

The Ecosystem Expands For AMD Epyc Servers was written by Timothy Prickett Morgan at The Next Platform.

Debating The Role Of Commodity Chips In Exascale

Building the first exascale systems continues to be a high-profile endeavor, with efforts underway worldwide in the United States, the European Union, and Asia – notably China and Japan – that focus on competition between regional powers, the technologies that are going into the architectures, and the promises that these supercomputers hold for everything from research and government to business and commerce.

The Chinese government is pouring money and resources into its roadmaps for both pre-exascale and exascale systems, Japan is moving forward with Fujitsu’s Post-K system that will use processors based on the Arm architecture rather than the

Debating The Role Of Commodity Chips In Exascale was written by Jeffrey Burt at The Next Platform.

Julia Language Delivers Petascale HPC Performance


The Celeste project—written in the productivity language Julia, and aiming to catalogue all of the telescope data for the stars and galaxies in the visible universe—produced the first Julia application to exceed 1 PF/s of double-precision floating-point performance (specifically, 1.54 PF/s).

The project took advantage of all 9,300 Intel Xeon Phi nodes of the Cori Phase II supercomputer at NERSC (the National Energy Research Scientific Computing Center).
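
A rough bit of arithmetic, using only the numbers quoted here, puts that sustained rate in per-node terms:

    # Per-node throughput implied by the figures above (a rough sketch using
    # only the numbers quoted in this excerpt).
    total_pflops = 1.54       # sustained double-precision PF/s
    nodes = 9300              # Xeon Phi nodes used on Cori

    per_node_gflops = total_pflops * 1.0e6 / nodes    # PF/s -> GF/s, per node
    print(f"~{per_node_gflops:.0f} GF/s per node")    # about 166 GF/s per node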

Even in HPC terms, the Celeste project is big, as it created the first comprehensive catalog of visible objects in our universe by processing 178 terabytes of SDSS (Sloan Digital

Julia Language Delivers Petascale HPC Performance was written by Nicole Hemsoth at The Next Platform.

Intel Stacks Up Xeons Against AMD Epyc Systems

Apparently, it’s Rivalry Week in the compute section of the datacenter here at The Next Platform. There are a slew of vendors ramping up their processors and their ecosystems to do battle for 2018 budget dollars, and many of them are talking up the performance and bang for the buck of their architectures.

We have just discussed the “Vulcan” variant of the ThunderX2 processor from Cavium and how that company thinks it ranks against the new “Skylake” Xeon SP processors from Intel, which made their debut in July. AMD was talking up its Epyc processors at the recent SC17

Intel Stacks Up Xeons Against AMD Epyc Systems was written by Timothy Prickett Morgan at The Next Platform.

Assessing The Tradeoffs Of NVM-Express Storage At Scale

NVM-Express isn’t new. The interface, which provides lean and mean access to non-volatile memory, first came to light a decade ago, with technical work starting two years later through a work group comprising more than 90 tech vendors. The first NVM-Express specification came out in 2011, and now the technology is going mainstream.

How quickly and pervasively remains to be seen. NVM-Express promises significant boosts in performance to SSDs while driving down the latency, which would be a boon to HPC organizations and the wider world of enterprises as prices for SSDs continue to fall and adoption

Assessing The Tradeoffs Of NVM-Express Storage At Scale was written by Jeffrey Burt at The Next Platform.