Category Archives for "IT Industry"

Flare Gives Spark SQL a Performance Boost

Spark has grown rapidly over the past several years to become a significant tool in the big data world. Since emerging from the AMPLab at the University of California at Berkeley, Spark adoption has increased quickly as the open-source in-memory processing platform has become a key framework for handling workloads for machine learning, graph processing and other emerging technologies.

Developers continue to add more capabilities to Spark, including a front-end for SQL processing and APIs for relational query optimization that build on the basic Spark RDD API. The addition of the Spark SQL module promises greater performance and opens

Flare Gives Spark SQL a Performance Boost was written by Nicole Hemsoth at The Next Platform.

Machine Learning, Analytics Play Growing Role in US Exascale Efforts

Exascale computing promises to bring significant changes to both the high-performance computing space and eventually enterprise datacenter infrastructures.

The systems, which are being developed in multiple countries around the globe, promise 50 times the performance of the 20-petaflops systems that are now among the fastest in the world (roughly 1,000 petaflops, or one exaflops), along with corresponding improvements in such areas as energy efficiency and physical footprint. The systems need to be powerful enough to run the increasingly complex applications being used by engineers and scientists, but they can't be so expensive to acquire or run that only a handful of organizations can use them.

At

Machine Learning, Analytics Play Growing Role in US Exascale Efforts was written by Nicole Hemsoth at The Next Platform.

Intel “Kaby Lake” Xeon E3 Sets The Server Cadence

The tick-tock-clock three-step dance that Intel will be using to progress its Core client and Xeon server processors in the coming years is on full display now that the Xeon E3-1200 v6 processors based on the “Kaby Lake” architecture have been unveiled.

The Kaby Lake chips are Intel’s third generation of Xeon processors that are based on its 14 nanometer technologies, and as our naming convention for Intel’s new way of rolling out chips suggests, it is a refinement of both the architecture and the manufacturing process that, by and large, enables Intel to ramp up the clock speed on

Intel “Kaby Lake” Xeon E3 Sets The Server Cadence was written by Timothy Prickett Morgan at The Next Platform.

Google Researchers Measure the Future in Microseconds

The IT industry has gotten good at developing computer systems that can easily work at the nanosecond and millisecond scales.

Chip makers have developed multiple techniques that have helped drive the creation of nanosecond-scale devices, while primarily software-based solutions have been rolled out for slower millisecond-scale devices. For a long time, that has been enough to address the various needs of high-performance computing environments, where performance is a key metric and issues such as the simplicity of the code and the level of programmer productivity are not as great a concern. Given that, programming at the microsecond level has not

Google Researchers Measure the Future in Microseconds was written by Nicole Hemsoth at The Next Platform.

Argonne National Lab Lead Details Exascale Balancing Act

It’s easy when talking about the ongoing push toward exascale computing to focus on the hardware architecture that will form the foundation of the upcoming supercomputers. Big systems packed with the latest chips and server nodes and storage units still hold a lot of fascination, and the names of those vendors involved – like Intel, IBM and Nvidia – still resonate broadly across the population. And that interest will continue to hold as exascale systems move from being objects of discussion now to deployed machines over the next several years.

However, the development and planning of these systems is a

Argonne National Lab Lead Details Exascale Balancing Act was written by Nicole Hemsoth at The Next Platform.

An Opaque Alternative to Oblivious Cloud Analytics

Data security has always been a key concern as organizations look to leverage the operational and cost efficiencies that come with cloud computing. Huge volumes of critical and sensitive data often are in transit and distributed among multiple systems, and increasingly are being collected and analyzed in cloud-based big data platforms, putting them at higher risk of being hacked and compromised.

Even as encryption methods and security procedures have improved, the data is still at risk of being attacked through such vulnerabilities as access pattern leakage through memory or the network. It’s the threat of an attack via access pattern

An Opaque Alternative to Oblivious Cloud Analytics was written by Nicole Hemsoth at The Next Platform.

Weaving Together Flash For Nearly Unlimited Scale

It is almost a foregone conclusion that when it comes to infrastructure, the industry will follow the lead of the big hyperscalers and cloud builders, building a foundation of standardized hardware for serving, storing, and switching and implementing as much functionality and intelligence as possible in the software on top of that to allow it to scale up and have costs come down as it does.

The reason this works is that these companies have complete control of their environments, from the processors and memory in the supply chain to the Linux kernel and software stack maintained by hundreds to

Weaving Together Flash For Nearly Unlimited Scale was written by Timothy Prickett Morgan at The Next Platform.

Neuromorphic, Quantum, Supercomputing Mesh for Deep Learning

It is difficult to shed a tear for Moore’s Law when there are so many interesting architectural distractions on the systems horizon.

While the steady tick-tock of the tried and true is still audible, the last two years have ushered in a fresh wave of new architectures targeting deep learning and other specialized workloads, as well as a bevy of forthcoming hybrids with FPGAs, zippier GPUs, and swiftly emerging open architectures. None of this has been lost on system architects at the bleeding edge, where the rush is on to build systems that can efficiently chew through ever-growing datasets with

Neuromorphic, Quantum, Supercomputing Mesh for Deep Learning was written by Nicole Hemsoth at The Next Platform.

Intel Vigorously Defends Chip Innovation Progress

With absolute dominance in datacenter and desktop compute, considerable sway in datacenter storage, a growing presence in networking, and profit margins that are the envy of the manufacturing and tech sectors alike, it is not a surprise that companies are gunning for Intel. They all talk about how Moore’s Law is dead and how that removes a significant advantage for the world’s largest – and most profitable – chip maker.

After years of this, the top brass in Intel’s Technology and Manufacturing Group as well as its former chief financial officer, who is now in charge of its manufacturing, operations,

Intel Vigorously Defends Chip Innovation Progress was written by Timothy Prickett Morgan at The Next Platform.

Scaling Deep Learning on an 18,000 GPU Supercomputer

It is one thing to scale a neural network on a single GPU or even a single system with four or eight GPUs. But it is another thing entirely to push it across thousands of nodes. Most centers doing deep learning have relatively small GPU clusters for training and certainly nothing on the order of the Titan supercomputer at Oak Ridge National Laboratory.

In the past, the emphasis on machine learning scalability has often focused on node counts for single-model runs. This is useful for some applications, but as neural networks become more integrated into existing workflows, including those

Scaling Deep Learning on an 18,000 GPU Supercomputer was written by Nicole Hemsoth at The Next Platform.

Use Optane Memory Like A Hyperscaler

The ramp for Intel’s Optane 3D XPoint memory, which sits between DDR4 main memory and flash or disk storage, or beside main memory, in the storage hierarchy, is going to shake up the server market. And maybe not in the ways that Intel and its partner, Micron Technology, anticipate.

Last week, Intel unveiled its first Optane 3D XPoint solid state cards and drives, which are now being previewed by selected hyperscalers and which will be rolling out in various capacities and form factors in the coming quarters. As we anticipated, and as Intel previewed last fall, the company is

Use Optane Memory Like A Hyperscaler was written by Timothy Prickett Morgan at The Next Platform.

Stanford Brainstorm Chip Hints at Neuromorphic Computing Future

If the name Kwabena Boahen sounds familiar, you might remember silicon that emerged in the late 1990s that emulated the human retina.

This retinomorphic vision system, which Boahen developed while at Caltech under VLSI and neuromorphic computing pioneer Carver Mead, introduced ideas that are just now, in the last couple of years, coming back into full view: computer vision, artificial intelligence, and of course, brain-inspired architectures that route for efficiency and performance. The rest of his career has been focused on bringing bioinspired engineering to a computing industry that will hit a major wall in the coming years, and at a time

Stanford Brainstorm Chip Hints at Neuromorphic Computing Future was written by Nicole Hemsoth at The Next Platform.

Facebook Pushes The Search Envelope With GPUs

An increasing amount of the world's data is encapsulated in images and video, and by its very nature this data is difficult and extremely compute-intensive to index and search, compared to the relative ease with which we can do so with the textual information that has heretofore dominated both our corporate and consumer lives.

Initially, we had to index images by hand and it is with these datasets that the hyperscalers pushed the envelope with their image recognition algorithms, evolving neural network software on CPUs and radically improving it with a jump to

Facebook Pushes The Search Envelope With GPUs was written by Timothy Prickett Morgan at The Next Platform.

China Making Swift, Competitive Quantum Computing Gains

Chinese officials have made no secret out of their desire to become the world’s dominant player in the technology industry. As we’ve written about before at The Next Platform, China has accelerated its investments in IT R&D over the past several years, spending tens of billions of dollars to rapidly expand the capabilities of its own technology companies to better compete with their American counterparts, while at the same time forcing U.S. tech vendors to clear various hurdles in their efforts to access the fast-growing China market.

This is being driven by a combination of China’s desire to increase

China Making Swift, Competitive Quantum Computing Gains was written by Nicole Hemsoth at The Next Platform.

Rapid GPU Evolution at Chinese Web Giant Tencent

Like other major hyperscale web companies, China’s Tencent, which operates a massive network of ad, social, business, and media platforms, is increasingly reliant on two trends to keep pace.

The first is not surprising—efficient, scalable cloud computing to serve internal and user demand. The second is more recent and includes a wide breadth of deep learning applications, including the company’s own internally developed Mariana platform, which powers many user-facing services.

When the company introduced its deep learning platform back in 2014 (at a time when companies like Baidu, Google, and others were expanding their GPU counts for speech and

Rapid GPU Evolution at Chinese Web Giant Tencent was written by Nicole Hemsoth at The Next Platform.

Squeezing The Joules Out Of DRAM, Possibly Without Stacking

Increasing parallelism is the only way to get more work out of a system. Architecting for that parallelism requires a lot of rethinking of each and every component in a system to make everything hum along as efficiently as possible.

There are lots of ways to skin the parallelism cat and squeeze more performance out of the system while using less energy, and for DRAM memory, just stacking things up helps, but according to some research done at Stanford University, the University of Texas, and GPU maker Nvidia, there is another way to boost performance and lower energy consumption. The

Squeezing The Joules Out Of DRAM, Possibly Without Stacking was written by Timothy Prickett Morgan at The Next Platform.

Fujitsu Looks to 3D ICs, Silicon Photonics to Drive Future Systems

The rise of public and private clouds, the growth of the Internet of Things, the proliferation of mobile devices, and the massive amounts of data generated by these fast-growing trends that need to be collected, stored, moved, and analyzed all promise to drive significant changes in both software and hardware development in the coming years.

Depending on who you’re talking to, there could be anywhere from 10 billion to 25 billion connected devices worldwide, self-driving cars are expected to rapidly grow in use in the next decade and corporate data is no longer housed primarily in stationary

Fujitsu Looks to 3D ICs, Silicon Photonics to Drive Future Systems was written by Nicole Hemsoth at The Next Platform.

KAUST Hackathon Shows OpenACC Global Appeal

OpenACC’s global attraction can be seen in the recent February 2017 OpenACC mini-hackathon and GPU conference at KAUST (King Abdullah University of Science & Technology) in Saudi Arabia. OpenACC was created so programmers can insert pragmas to provide information to the compiler about parallelization opportunities and data movement operations to and from accelerators. Programmers use pragmas to work in concert with the compiler to create, tune and optimize parallel codes to achieve high performance.

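As a concrete illustration of the pragma model described above, here is a minimal sketch in C (illustrative only, not code from the hackathon): the OpenACC directives tell the compiler which loop can be parallelized on the accelerator and which arrays need to be moved to and from device memory.

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];
        const float a = 2.0f;

        /* Initialize the input arrays on the host. */
        for (int i = 0; i < N; i++) {
            x[i] = (float)i;
            y[i] = 1.0f;
        }

        /* The pragma is the hint to the compiler: parallelize this loop on
           the accelerator, copy x in, and copy y both in and back out. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++) {
            y[i] = a * x[i] + y[i];
        }

        printf("y[42] = %f\n", y[42]);
        return 0;
    }

Built with an OpenACC-capable compiler (for example, pgcc -acc), the same source still compiles and runs serially when the pragma is ignored, which is what lets programmers tune and optimize incrementally in the way the hackathon teams do.
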
Demand was so high to attend this mini-hackathon that the organizers had to scramble to find space for ten teams, even though the hackathon was originally

KAUST Hackathon Shows OpenACC Global Appeal was written by Nicole Hemsoth at The Next Platform.

Roadblocks, Fast Lanes for China’s Enterprise IT Spending

The 13th Five Year Plan and other programs to bolster cloud adoption among Chinese businesses, such as the Internet Plus effort, have lit a fire under China's tech and industrial sectors to modernize IT operations.

However, the growth of China's cloud and overall enterprise IT market is far slower than in other nations. While there is a robust hardware business in the country, the traditional view of enterprise-class software is still sinking in, leaving a gap between hardware and software spending. Further, the areas that truly drive tech spending, including CPUs and enterprise software and services, are the key areas

Roadblocks, Fast Lanes for China’s Enterprise IT Spending was written by Nicole Hemsoth at The Next Platform.

Memory And Logic In A Post Moore’s Law World

The future of Moore’s Law has become a topic of hot debate in recent years, as the challenge of continually shrinking transistors and other components has grown.

Intel, AMD, IBM, and others continue to drive the development of smaller electronic components as a way of ensuring advancements in compute performance while driving down the cost of that compute. Processors from Intel and others are moving now from 14 nanometer processes down to 10 nanometers, with plans to continue on to 7 nanometers and smaller.

For more than a decade, Intel had relied on a tick-tock manufacturing schedule to keep up with

Memory And Logic In A Post Moore’s Law World was written by Jeffrey Burt at The Next Platform.