
Category Archives for "IT Industry"

A Snapshot Of Big Blue’s Systems Business

It is the job of the chief financial officer and the rest of the top brass of every public company in the world to present the financial results of their firms in the best possible light every thirteen weeks, when the numbers are compiled and presented to Wall Street for grading. Money is how we all keep score, and how we decide how we will invest and, therefore, how we will live in our old age, hopefully with a certain amount of peace.

Starting this year, IBM has been presenting its financial results in a new format, which helps it emphasize its cognitive computing

A Snapshot Of Big Blue’s Systems Business was written by Timothy Prickett Morgan at The Next Platform.

AWS And VMware Acquaint As Strange Cloudfellows

What happens when the world’s largest public cloud and the biggest peddler of server virtualization in the enterprise team up to create a hybrid cloud?

A few things. First, the many VMware partners who have built clouds based on the ESXi hypervisor get nervous. And second, VMware very delicately and carefully prices its software low enough that it can have a scalable public cloud play but not so low that Amazon Web Services ends up having the pricing leverage that its parent company had with the book business a decade ago. And third, AWS uses the might of a

AWS And VMware Acquaint As Strange Cloudfellows was written by Timothy Prickett Morgan at The Next Platform.

Opening Up The Server Bus For Coherent Acceleration

When IBM started to use the word “open” in conjunction with its Power architecture more than three years ago with the formation of the OpenPower Foundation, Big Blue was not confused about what that term meant. If the Power architecture was to survive, it would do so by having open specifications that would allow third parties not only to make peripherals, but also to license the technology and make clones of Power8 or Power9 processors.

One of the key technologies that IBM wove into the Power8 chip that differentiates it from Xeon, Opteron, ARM, and Sparc processors is

Opening Up The Server Bus For Coherent Acceleration was written by Timothy Prickett Morgan at The Next Platform.

Adapteva Joins The Kilocore Club With Epiphany-V

The computing industry is facing a number of challenges as Moore’s Law improvements in circuitry slow down, and they don’t all have to do with transistor counts and memory bandwidth and such. Another problem is that it has gotten progressively more costly to design chips at the same time that mass customization seems to be the way to provide processing capability tailored to specific workloads.

In recent decades, the US Defense Advanced Research Projects Agency pumped huge sums of money into designing and manufacturing gigascale, terascale, and petascale systems, but in recent years this development arm of the US

Adapteva Joins The Kilocore Club With Epiphany-V was written by Timothy Prickett Morgan at The Next Platform.

IEEE Reboots, Scans for Future Architectures

If there is any organization on the planet that has had a closer view of the coming demise of Moore’s Law, it is the Institute of Electrical and Electronics Engineers (IEEE). Since its inception in the 1960s, its wide range of industry professionals has been able to trace a steady trajectory for semiconductors, but given the limitations ahead, it is time to look to a new path—or several forks, to be more accurate.

This realization about the state of computing for the next decade and beyond has spurred action from a subgroup, led by Georgia Tech professor Tom Conte and

IEEE Reboots, Scans for Future Architectures was written by Nicole Hemsoth at The Next Platform.

Raising The Standard For Storage Memory Fabrics

People tend to obsess about processing when it comes to system design, but ultimately an application and its data lives in memory and anything that can improve the capacity, throughput, and latency of memory will make all the processing you throw at it result in useful work rather than wasted clock cycles.
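A quick way to see why that is: under the roofline model, attainable performance is the minimum of peak compute and arithmetic intensity times memory bandwidth, so a kernel that does little math per byte leaves most of the processor idle. A minimal sketch in Python (the peak numbers are illustrative assumptions, not measurements of any real system):

```python
# Roofline model: attainable FLOP/s is capped either by peak compute or by
# how fast memory can feed the cores. Illustrative constants, not measured.
PEAK_FLOPS = 1.0e12   # assumed peak compute, FLOP/s
PEAK_BW = 100.0e9     # assumed memory bandwidth, bytes/s

def attainable_flops(intensity):
    """intensity: FLOPs performed per byte moved to or from memory."""
    return min(PEAK_FLOPS, intensity * PEAK_BW)

for ai in (0.25, 1.0, 4.0, 16.0):
    frac = attainable_flops(ai) / PEAK_FLOPS
    print(f"intensity {ai:5.2f} FLOP/byte -> {frac:6.1%} of peak compute usable")
```

With these assumed peaks, a kernel at 0.25 FLOP/byte can use only 2.5 percent of the compute on hand, which is exactly the gap a faster memory fabric narrows.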

This is why flash has been such a boon for systems. But we can do better, and the Gen-Z consortium announced this week is going to create a new memory fabric standard that it hopes will break down the barriers between main memory and other storage-class memories on

Raising The Standard For Storage Memory Fabrics was written by Timothy Prickett Morgan at The Next Platform.

Disruptive Technologies on the Post Exascale Horizon

Although the timeline for reaching exascale class computing continues to stretch farther into the future, research teams are keeping an eye on the technologies that will shape the machines of the post-exascale era, which spans roughly 2022 to 2030.

While many of the technologies described in a comprehensive report about post-exascale supercomputers are already in process, albeit in various stages of development and adoption, there is little consensus about which mode of computing will lead us into an era of unprecedented data and simulation potential. Still, the effort from the EuroLab-4-HPC program is notable in its divisions between where

Disruptive Technologies on the Post Exascale Horizon was written by Nicole Hemsoth at The Next Platform.

Ganging up Accelerators to Beat Scale Limits

It is not news that offloading work from CPUs to GPUs can grant radical speedups, but what can come as a surprise is that the scaling behavior of these workloads doesn’t change just because they run faster. Moving beyond a single node means encountering a performance wall, unless something can glue everything together so the workload can scale at will.
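The wall is easy to model. In a toy strong-scaling sketch (my own illustration, not from the article), per-node compute time shrinks with node count while inter-node communication time grows, so speedup flattens and then reverses:

```python
# Toy strong-scaling model: compute time divides across n nodes, but
# synchronization adds a per-node cost. Illustrative constants only.
def speedup(n, comm_cost=0.02):
    t1 = 1.0                           # single-node runtime (normalized)
    tn = t1 / n + comm_cost * (n - 1)  # compute shrinks, communication grows
    return t1 / tn

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} nodes -> speedup {speedup(n):5.2f}x")
```

Past a certain node count the communication term dominates and adding hardware makes the job slower, which is the limit any glue technology has to push out.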

There are already technologies that can take multiple units of compute and have them share work, in supercomputing and other areas (consider ScaleMP, for instance), but there are limitations to these approaches, and thus far they haven’t extended to meet the

Ganging up Accelerators to Beat Scale Limits was written by Nicole Hemsoth at The Next Platform.

Igneous Melds ARM Chips And Disks For Private S3 Storage

The only companies that want – and expect – all compute and storage to move to the public cloud are those public clouds that do not have a compelling private cloud story to tell. But the fact remains that for many enterprises, their most sensitive data and workloads cannot – and will not – move to the public cloud.
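Part of the appeal of an S3-compatible private store is that client code does not change; only the endpoint does. A hedged sketch using boto3 (the endpoint URL and credentials are placeholders, and the actual Igneous interface may differ):

```python
import boto3

# Point the standard AWS SDK at an on-premises, S3-compatible endpoint.
# The URL and credentials are placeholders, not real Igneous values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.internal.example.com",
    region_name="us-east-1",  # many S3-compatible stores ignore the region
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

s3.create_bucket(Bucket="sensitive-data")
s3.put_object(Bucket="sensitive-data", Key="report.csv", Body=b"col1,col2\n1,2\n")
print(s3.get_object(Bucket="sensitive-data", Key="report.csv")["Body"].read())
```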

This almost demands, as we have discussed before, the creation of private versions of public cloud infrastructure, which, interestingly enough, is not as easy as it might seem. Scaling infrastructure down so it is still cost effective and usable by

Igneous Melds ARM Chips And Disks For Private S3 Storage was written by Timothy Prickett Morgan at The Next Platform.

Memory is the Next Platform

A new crop of applications is driving the market along some unexpected routes, in some cases bypassing the processor as the landmark for performance and efficiency. While there is no end in sight for the CPU’s dominant role, at least not until Moore’s Law has been buried along the roadside, there is another path—this time, down memory lane.
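How strained? A STREAM-style triad is a quick way to feel the memory wall on any machine, because it does almost no arithmetic per byte and so runs at the speed of memory rather than of the cores. A rough numpy sketch (results vary widely by machine):

```python
import time
import numpy as np

# STREAM-style "triad" (a = b + scalar * c): so little math per byte moved
# that runtime is set by memory bandwidth, not by the CPU's FLOP rate.
N = 20_000_000
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

start = time.perf_counter()
a = b + scalar * c
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8  # read b, read c, write a (8 bytes per float64)
print(f"effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```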

Just as machine learning-oriented applications represent the next development platform, memory appears to be the next platform for compute. While this won’t extend to all application areas, given the thrust of machine learning and of memory bandwidth- and capacity-strained applications, the more

Memory is the Next Platform was written by Nicole Hemsoth at The Next Platform.

The Emergence Of Data-Centric Computing

As data grows, a shift in computing paradigm is underway. I started my professional career in the 1990s, during the massive shift from mainframe computing to the heyday of client/server computing and enterprise applications such as ERP, CRM, and human resources software. Relational databases like Oracle, DB2, SQL Server, and Informix offered improvements to managing data, and the technique of combining a new class of midrange servers from Sun Microsystems, Digital Equipment Corporation, IBM, and Hewlett-Packard with storage tiers from EMC and IBM reduced costs and complexity over traditional mainframes.

However, what remained was that these new applications continued to operate

The Emergence Of Data-Centric Computing was written by Timothy Prickett Morgan at The Next Platform.

Accelerating Slow Databases That Wear People Down

Todd Mostak, the creator of the MapD GPU-accelerated database and visualization system, built that database because he was a frustrated user of other database technologies, and as a user, he is adamant that accelerating databases and visualizing queried data is about more than just being a speed freak.
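The "creative exercise" argument is really a latency argument: if a scan comes back in milliseconds rather than minutes, the analyst keeps iterating. A small sketch of why vectorized columnar scans (the style MapD pushes onto GPUs) beat row-at-a-time processing, here shown on the CPU with numpy:

```python
import time
import numpy as np

# One numeric column with 10 million rows.
prices = np.random.rand(10_000_000) * 100

# Row-at-a-time aggregation: the slow path an interpreter would take.
start = time.perf_counter()
total = 0.0
for p in prices:
    if p > 50.0:
        total += p
slow = time.perf_counter() - start

# Vectorized columnar scan: one pass over contiguous memory.
start = time.perf_counter()
fast_total = prices[prices > 50.0].sum()
fast = time.perf_counter() - start

print(f"row loop: {slow:.2f}s, columnar scan: {fast:.3f}s, {slow / fast:.0f}x apart")
```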

“Analytics is ultimately a creative exercise,” Mostak tells The Next Platform during a conversation that was supposed to be about benchmark results but that, as often happens here, wandered far and wide. “Analysts start from some place, and where they go is a function of the resources that are

Accelerating Slow Databases That Wear People Down was written by Timothy Prickett Morgan at The Next Platform.

Applied Micro Finds ARM Server Footing, Reaches Higher

One of the frustrating facts about peddling any new technology is that the early adopters that discover a strategic advantage in that technology want to keep that secret all to themselves. Word of mouth and real-world use cases are big factors in the adoption of any new technology, and anything that hampers them causes adoption to move more slowly than it otherwise might.

But eventually, despite all of the secrecy, there comes a time when the critical mass is reached and adoption proceeds apace. We have been waiting for that moment for a long time now for 64-bit ARM

Applied Micro Finds ARM Server Footing, Reaches Higher was written by Timothy Prickett Morgan at The Next Platform.

Making The Case For Containers

Linux container technology is IT’s shiny new thing. Containers promise to ease application development and deployment, a necessity in a business environment where getting ahead of application demand can mean the difference between staying in business and not. Containers offer many benefits, but they are not a panacea, and it’s important to understand why, where, and when to use them.

Most IT pros recognize that application containers can provide a technological edge, one that translates into a clear business advantage. Containers unify and streamline application components – including the libraries and binaries upon which individual applications depend. Combining isolation with

Making The Case For Containers was written by Timothy Prickett Morgan at The Next Platform.

Server Encryption With An FPGA Offload Boost

Everyone talks about securing infrastructure, but security comes at a heavy cost. While datacenters have been securing their perimeters with firewalls for decades, that is far from sufficient for modern applications.
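The cost is easy to demonstrate. Encrypting every message between services in software burns CPU cycles, which is precisely the work an FPGA offload reclaims; a rough sketch with the Python cryptography package (throughput is machine-dependent, and the 4 KB message size is an arbitrary assumption):

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Encrypt 4 KB messages with AES-256-GCM in software and measure throughput.
# This is the per-message CPU work an FPGA offload would take off the cores.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
message = os.urandom(4096)

count = 10_000
start = time.perf_counter()
for _ in range(count):
    nonce = os.urandom(12)  # GCM requires a unique nonce per message
    aead.encrypt(nonce, message, None)
elapsed = time.perf_counter() - start

print(f"{count * len(message) / elapsed / 1e6:.0f} MB/s of AES-GCM in software")
```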

Back in the early days of the Internet, all traffic was from the client in through the web and application servers to the back-end database that fed the applications – what is known as north-south traffic in the datacenter lingo. But these days, an application is a collection of multiple services that are assembled on the fly from all over the datacenter, across untold server nodes, in what

Server Encryption With An FPGA Offload Boost was written by Timothy Prickett Morgan at The Next Platform.

Exascale Code Performance and Portability in the Tune of C

Among the many challenges ahead for programming in the exascale era is the portability and performance of codes on heterogeneous machines.

Since future architectures will include new memory and accelerator capabilities, along with advances in general-purpose cores, developing on a solid base that offers flexibility and support for many hardware architectures is a priority. Some contend that the best place to start is with C++, which has been gathering steam in HPC in recent years.

As our own Douglas Eadline noted back in January, choosing a programming language for HPC used to be an easy task. Select

Exascale Code Performance and Portability in the Tune of C was written by Nicole Hemsoth at The Next Platform.

Amazon Gets Serious About GPU Compute On Clouds

In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. So when a big public cloud like Amazon Web Services invests in a non-standard technology, that means something. In the case of Nvidia’s Tesla accelerators, it means that GPU compute has gone mainstream.

It may not be obvious, but AWS tends to hang back on some of the Intel Xeon compute on its cloud infrastructure, at least compared to the largest supercomputer centers and hyperscalers like

Amazon Gets Serious About GPU Compute On Clouds was written by Timothy Prickett Morgan at The Next Platform.

Windows Server 2016: End Of One Era, Start Of Another

What constitutes an operating system changes with the work a system performs and the architecture that defines how that work is done. All operating systems tend to expand out from their initial core functionality, embedding more and more functions. And then, every once in a while, there is a break, a shift in technology that marks a fundamental change in how computing gets done.

It is fair to say that Windows Server 2016, which made its formal debut at Microsoft’s Ignite conference today and which starts shipping on October 1, is at the fulcrum of a profound change where an

Windows Server 2016: End Of One Era, Start Of Another was written by Timothy Prickett Morgan at The Next Platform.

Exascale Capabilities Underpin Future of Energy Sector

Oil and natural resource discovery and production is an incredibly risky endeavor, with the cost of simply finding a new barrel of oil having tripled over the last ten years. Discovery teams want to ensure they are drilling only in the most lucrative locations, which these days means looking to increasingly inaccessible (for a bevy of reasons) sources of hydrocarbons.

Even with renewable resources like wind, there are still major financial risks. Accurately predicting shifting output and siting expensive turbines are two early-stage challenges, and maintaining, monitoring, and optimizing those turbines is an ongoing pressure.

The common thread

Exascale Capabilities Underpin Future of Energy Sector was written by Nicole Hemsoth at The Next Platform.

Baidu’s New Yardstick for Deep Learning Hardware Makers

When it comes to deep learning innovation on the hardware front, few other research centers have been as forthcoming with their results as Baidu. Specifically, the company’s Silicon Valley AI Lab (SVAIL) has been the center of some noteworthy work on GPU-based deep learning as well as exploratory efforts using novel architectures specifically for ultra-fast training and inference.
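The heart of that work is kernel-level measurement: timing the handful of operations, dense matrix multiplies above all, that dominate training. A miniature, hedged sketch of the idea in numpy (shapes are made up for illustration and are not drawn from any Baidu suite):

```python
import time
import numpy as np

# Time dense matrix multiplies (GEMMs), the kernel that dominates neural
# network training. Shapes here are illustrative only.
def time_gemm(m, n, k, repeats=10):
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    elapsed = (time.perf_counter() - start) / repeats
    return 2.0 * m * n * k / elapsed / 1e9  # GFLOP/s, two ops per multiply-add

for shape in ((1024, 1024, 1024), (2048, 2048, 2048)):
    print(f"GEMM {shape}: {time_gemm(*shape):.1f} GFLOP/s")
```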

It stands to reason that teams at SVAIL don’t simply throw hardware at the wall to see what sticks, even though they seem to have more to toss around than most. Over the last couple of years, they have broken down

Baidu’s New Yardstick for Deep Learning Hardware Makers was written by Nicole Hemsoth at The Next Platform.