Author Archives: Jeffrey Burt

Keeping The Blue Waters Supercomputer Busy For Three Years

After years of planning, and delays brought on by a massive architectural change, the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois finally went into production in 2013, giving scientists, engineers, and researchers across the country a powerful tool for running the most complex and challenging applications in a broad range of scientific areas, from astrophysics and neuroscience to biophysics and molecular research.

Users of the petascale system have been able to simulate the evolution of space, determine the chemical structure of diseases, model weather, and trace how virus infections propagate via air

Keeping The Blue Waters Supercomputer Busy For Three Years was written by Jeffrey Burt at The Next Platform.

Tuning Up Knights Landing For Gene Sequencing

The Smith-Waterman algorithm has become a linchpin in the rapidly expanding world of bioinformatics, the go-to computational model for DNA sequencing and local sequence alignments. With the growth in recent years in genome research, there has been a sharp increase in the amount of data around genes and proteins that needs to be collected and analyzed, and the 36-year-old Smith-Waterman algorithm is a primary way of sequencing the data.

The key to the algorithm is that rather than examining an entire DNA or protein sequence, Smith-Waterman uses a technique called dynamic programming in which the algorithm looks at segments of
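That dynamic-programming recurrence is compact enough to sketch directly. The following is a minimal, illustrative implementation, not the tuned vectorized code the article discusses; the match, mismatch, and gap scores are arbitrary example values:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score of sequences a and b (Smith-Waterman).

    H[i][j] holds the best score of any local alignment ending at
    a[i-1] and b[j-1]; negative running scores are clamped to zero,
    which is what lets a new local alignment start anywhere.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0,                     # restart the alignment
                          diag,                  # extend via match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

For example, `smith_waterman("ACGT", "TTACGTTT")` finds the exact local match `ACGT` and scores it 8 with the default weights. Production sequencers vectorize the inner loop, which is exactly where wide-SIMD chips like Knights Landing come in.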

3D Stacking Could Boost GPU Machine Learning

Nvidia has staked its growth in the datacenter on machine learning. Over the past few years, the company has rolled out features in its GPUs aimed at neural networks and related processing, notably with the “Pascal” generation of GPUs, which includes features explicitly designed for the space, such as 16-bit half-precision math.
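Half precision trades range and accuracy for throughput and memory savings, and the rounding is easy to see from Python, whose struct module can pack IEEE 754 half-precision values with the 'e' format code. A small sketch (illustrative only, not Nvidia's implementation):

```python
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# fp16 has an 11-bit significand, so integers are exact only up to 2048;
# above that, the gap between representable values widens to 2:
print(to_fp16(2048.0))  # 2048.0 -- exactly representable
print(to_fp16(2049.0))  # rounds to 2048.0
```

That loss of precision is acceptable for neural network training and inference, which is why GPU vendors can double effective math throughput by halving the operand width.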

The company is preparing its upcoming “Volta” GPU architecture, which promises to offer significant gains in capabilities. More details on the Volta chip are expected at Nvidia’s annual conference in May. CEO Jen-Hsun Huang late last year spoke to The Next Platform about what he called the upcoming “hyper-Moore’s Law”

Google Courts Enterprise For Cloud Platform

Google has always been a company that thinks big. After all, its mission since Day One was to organize and make accessible all of the world’s information.

The company is going to have to take that same expansive and aggressive approach as it looks to grow in a highly competitive public cloud market that includes a dominant player (Amazon Web Services) and a host of other vendors, including Microsoft, IBM, and Oracle. That’s going to mean expanding its customer base beyond smaller businesses and startups and convincing larger enterprises to store their data and run their workloads on its ever-growing

Applied Micro Renews ARM Assault On Intel Servers

The lineup of ARM server chip makers has been a somewhat fluid one over the years.

There have been some that have come and gone (pioneer Calxeda was among the first to the party but folded in 2013 after running out of money), some that apparently have looked at the battlefield and chosen not to fight (Samsung and Broadcom, after the latter’s $37 billion merger with Avago), and others that have made the move into the space only to pull back a bit (AMD a year ago released its ARM-based Opteron A1100 systems-on-a-chip, or SoCs, but has since shifted most of

Google Expands Enterprise Cloud With Machine Learning

Google’s Cloud Platform is the relative newcomer on the public cloud block, and has a way to go before it is in the same competitive sphere as Amazon Web Services and Microsoft Azure, both of which deliver a broader and deeper range of offerings and larger infrastructures.

Over the past year, Google has promised to rapidly grow the platform’s capabilities and datacenters and has hired a number of executives in hopes of enticing enterprises to bring more of their corporate workloads and data to the cloud.

One area Google is hoping to leverage is the decade-plus of work and

Microsoft, Stanford Researchers Tweak Cloud Economics Framework

Cloud computing makes a lot of sense for a rapidly growing number of larger enterprises and other organizations, and for any number of reasons. The increased application flexibility and agility engendered by pooling shared infrastructure resources, along with the scalability and the cost efficiencies, are all key drivers in an era of ever-embiggening data.

With public and hybrid cloud environments, companies can offload the integration, deployment and management of the infrastructure to a third party, taking the pressure off their own IT staffs, and in private and hybrid cloud environments, they can keep their most business-critical data securely behind

Making Remote NVM-Express Flash Look Local And Fast

Large enterprises are embracing NVM-Express flash as the storage technology of choice for their data-intensive and often highly unpredictable workloads. NVM-Express devices bring with them high performance – up to 1 million I/O operations per second – and low latency – less than 100 microseconds. And flash storage now has high capacity, too, making it a natural fit for such datacenter applications.
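Those headline figures imply substantial bandwidth as well. As a back-of-the-envelope sketch (the 4 KB transfer size is an assumption for illustration, not a figure from any particular device):

```python
# Back-of-the-envelope: what 1 million IOPS implies for bandwidth.
iops = 1_000_000          # I/O operations per second (headline figure)
block_bytes = 4 * 1024    # assumed transfer size of 4 KB per operation

bandwidth_gb_s = iops * block_bytes / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # roughly 4.1 GB/s at 4 KB per I/O
```

Numbers like these are why the network hop, not the flash itself, becomes the bottleneck once the device is accessed remotely.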

As we have discussed here before, all-flash arrays are quickly becoming mainstream, particularly within larger enterprises, as an alternative to disk drives in environments where tens or hundreds of petabytes of data – rather than the

AMD Researchers Eye APUs For Exascale

Exascale computing, which has been long talked about, is now – if everything remains on track – only a few years away. Billions of dollars are being spent worldwide to develop systems capable of an exaflops of computation, which is 50 times the performance of the most capacious systems on the current Top500 supercomputer rankings and will usher in the next generation of HPC workloads.

As we have talked about at The Next Platform, China is pushing ahead with three projects aimed at delivering exascale systems to the market, with a prototype – dubbed the Tianhe-3 – being prepped for

Financial Institutions Weigh Risks, Benefits of Cloud Migration

Cloud computing in its various forms is often pitched as a panacea of sorts for organizations that are looking to increase the flexibility of their data and to drive down costs associated with their IT infrastructures. And for many, the benefits are real.

By offloading many of their IT tasks – from processing increasingly large amounts of data to storing all that data – to cloud providers, companies can take the money normally spent in building out and managing their internal IT infrastructures and put it toward other important business efforts. In addition, by having their data in an easily

A Glimmer of Light Against Dark Silicon

Moore’s Law has been the driving force behind computer evolution for more than five decades, fueling the relentless innovation that led to more transistors being added to increasingly smaller processors that rapidly increased the performance of computing while at the same time driving down the cost.

Fifty-plus years later, as the die continues to shrink, there are signs that Moore’s Law is getting more difficult to keep up with. For example, Intel – the keeper of the Moore’s Law flame – has pushed back the transition from 14 nanometers to 10 nanometers by more than a year as it worked through issues

Promises, Challenges Ahead for Near-Memory, In-Memory Processing

The idea of bringing compute and memory functions in computers closer together physically within the systems to accelerate the processing of data is not a new one.

Some two decades ago, vendors and researchers began to explore the idea of processing-in-memory (PIM) – the concept of placing compute units like CPUs and GPUs closer to the memory that feeds them to help reduce the latency and cost inherent in transferring data – and built prototypes with names like EXECUBE, IRAM, DIVA, and FlexRAM. For HPC environments that relied on data-intensive applications, the idea made a lot of sense. Reduce the distance between where data was

Wrapping Kubernetes Around Applications Old And New

Kubernetes, the software container management system born out of Google, has seen its popularity in the datacenter soar in recent years as datacenter admins look to gain greater control of highly distributed computing environments and to capitalize on the advantages that virtualization, containers, and other technologies offer.

Open sourced by Google three years ago, Kubernetes is derived from the Borg and Omega controllers that the search engine giant created for its own clusters and has become an important part of the management tool ecosystem that includes OpenStack, Mesos, and Docker Swarm. These all try to bring order to what

Large-Scale Quantum Computing Prototype on Horizon

What supercomputers will look like in the future, post-Moore’s Law, is still a bit hazy. As exascale computing comes into focus over the next several years, system vendors, universities, and government agencies are all trying to get a gauge on what will come after that. Moore’s Law, which has driven the development of computing systems for more than five decades, is coming to an end as making smaller chips loaded with more and more features becomes increasingly difficult.

While the rise of accelerators, like GPUs, FPGAs and customized ASICs, silicon photonics and faster interconnects

Memristor Research Highlights Neuromorphic Device Future

Much of the talk around artificial intelligence these days focuses on software efforts – various algorithms and neural networks – and such hardware devices as custom ASICs for those neural networks and chips like GPUs and FPGAs that can help the development of reprogrammable systems. A vast array of well-known names in the industry – from Google and Facebook to Nvidia, Intel, IBM, and Qualcomm – is pushing hard in this direction, and those and other organizations are making significant gains thanks to new AI methods such as deep learning.

All of this development is happening at a time when the

Juggling Applications On Intel Knights Landing Xeon Phi Chips

Intel’s many-core “Knights Landing” Xeon Phi processor is just a glimpse of what can be expected of supercomputers in the not-so-distant future of high performance computing. As the industry continues its march to exascale computing, systems will become more complex, an evolution that will include processors that not only sport a rapidly increasing number of cores but also a broad array of on-chip resources ranging from memory to I/O. Workloads ranging from simulation and modeling applications to data analytics and deep learning algorithms are all expected to benefit from what these new systems will offer in terms of processing capabilities.

ARM Gains Stronger Foothold In China With AI And IoT

China represents a huge opportunity for chip designer ARM as it looks to extend its low-power system-on-a-chip (SoC) architecture beyond the mobile and embedded device spaces and into new areas, such as the datacenter and emerging markets like autonomous vehicles, drones, and the Internet of Things. China is a massive, fast-growing market with tech companies – including such giants as Baidu, Alibaba, and Tencent – looking to leverage technologies such as artificial intelligence to help expand their businesses deeper into the global market and turning to vendors like ARM that can help them fuel that growth.

ARM Holdings, which designs

Top Chinese Supercomputer Blazes Real-World Application Trail

China’s massive Sunway TaihuLight supercomputer sent ripples through the computing world last year when it debuted in the number-one spot on the Top500 list of the world’s fastest supercomputers. Delivering 93,000 teraflops of sustained performance – and a peak of more than 125,000 teraflops – the system is nearly three times faster than the second supercomputer on the list (the Tianhe-2, also a Chinese system) and dwarfs the Titan system at Oak Ridge National Laboratory, a Cray-based machine that is the world’s third-fastest system, and the fastest in the United States.

However, it wasn’t only the system’s performance that garnered a lot

Getting Down To Bare Metal On The Cloud

When you think of the public cloud, the tendency is to focus on the big ones, like Amazon Web Services, Microsoft Azure, or Google Cloud Platform. They’re massive, dominating the public cloud skyline with huge datacenters filled with thousands of highly virtualized servers, not to mention virtualized storage and networking. Capacity is divvied up among corporate customers that are increasingly looking to run and store their workloads on someone else’s infrastructure, hardware that they don’t have to set up, deploy, manage or maintain themselves.

But as we’ve talked about before here at The Next Platform, not all workloads run
