After years of shrinking sales, the server market is suddenly hot, very hot. According to the latest figures from IDC, worldwide server shipments increased 20.7% year over year to 2.7 million units in the first quarter of 2018, and revenue rose 38.6%.

This is the third consecutive quarter of double-digit growth, and it is being driven by a number of factors: a marketwide enterprise refresh cycle, strong demand from cloud service providers, increased use of servers as the core building blocks of software-defined infrastructure, broad demand for newer CPUs, and growing deployments of next-generation workloads.

Average selling prices (ASPs) increased during the quarter due to richer configurations and higher component costs, and those higher ASPs also contributed to revenue growth. Volume server revenue grew 40.9% to $15.9 billion, midrange server revenue grew 34% to $1.7 billion, and high-end systems grew 20.1% to $1.2 billion.
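The shipment and revenue figures also hint at how much of the growth came from pricing. A back-of-the-envelope calculation, assuming revenue is simply shipments times a blended average selling price, puts the implied ASP increase at roughly 15%:

```python
# Rough, illustrative arithmetic from the IDC Q1 2018 figures quoted above.
# Assumes worldwide revenue ~= shipments x blended average selling price (ASP).

shipment_growth = 0.207   # year-over-year growth in server shipments
revenue_growth = 0.386    # year-over-year growth in server revenue

# If revenue = units x ASP, then (1 + revenue_growth) = (1 + shipment_growth) x (1 + asp_growth)
implied_asp_growth = (1 + revenue_growth) / (1 + shipment_growth) - 1
print(f"Implied blended ASP growth: {implied_asp_growth:.1%}")   # ~14.8%
```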
Cisco this week broadened its server family with a high-density box aimed at compute-intensive data-center workloads and distributed edge-computing environments.

The Cisco C-Series C4200 multinode rack server is a 2RU box comprising the C4200 chassis and C125 server nodes, which Cisco says deliver up to 128% higher processor-core density and 33% more memory than its existing two-socket UCS M5 rack servers. The C4200 chassis can house up to four server nodes.
“As computing demand shifts from large, traditional data centers to include smaller, more distributed environments at the edge, the ability to mix form factors seamlessly in ‘micro data centers,’ and to manage and automate operations from the cloud becomes vitally important,” wrote Kaustubh Das, Cisco vice president of strategy and product development for storage in the company’s Computing Systems Product Group, in a blog post about the new server.
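As a rough sanity check on the density claim, the sketch below compares cores per rack unit, assuming each C125 node carries two 32-core AMD EPYC processors and the baseline is a 1RU, two-socket UCS M5 with 28-core Xeons; those per-socket core counts are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope check of the "128% higher core density" claim.
# Assumes 2 x 32-core EPYC per C125 node and a 1RU two-socket M5 with
# 28-core Xeons -- these per-socket counts are assumptions, not from the article.

c4200_cores = 4 * 2 * 32               # 4 nodes x 2 sockets x 32 cores = 256 cores
c4200_cores_per_ru = c4200_cores / 2   # chassis is 2RU -> 128 cores per RU

m5_cores_per_ru = 2 * 28 / 1           # 1RU two-socket M5 -> 56 cores per RU

increase = c4200_cores_per_ru / m5_cores_per_ru - 1
print(f"Core density increase: {increase:.0%}")   # ~129%, in line with Cisco's ~128% figure
```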
It’s a good thing AMD had the sense not to rub Intel’s nose in the Meltdown/Spectre vulnerability, because it would be getting it right back for this one: Researchers from the Fraunhofer Institute for Applied and Integrated Security in Germany have published a paper detailing how to compromise a virtual machine encrypted by AMD's Secure Encrypted Virtualization (SEV).

The news is a bit of a downer for AMD, since it just added Cisco to its list of customers for the EPYC processor. Cisco announced plans today to use EPYC in its density-optimized Cisco UCS C4200 Series Rack Server Chassis and the Cisco UCS C125 M5 Rack Server Node.
GPU market leader Nvidia holds several GPU Technology Conferences (GTC) around the globe each year, and nearly every one brings a major announcement in which the company pushes the limits of GPU computing and creates more options for customers. At GTC San Jose, for example, the company announced its NVSwitch architecture, which connects up to 16 GPUs over a single fabric, creating one massive virtual GPU. This week at GTC Taiwan, it announced its HGX-2 server platform, a reference architecture that enables other server manufacturers to build their own systems. The DGX-2 server announced at GTC San Jose is built on the HGX-2 architecture.
Nvidia is refining its pitch for data-center performance and efficiency with a new server platform, the HGX-2, designed to harness the power of 16 Tesla V100 Tensor Core GPUs to satisfy the requirements of both AI and high-performance computing (HPC) workloads.

Data-center server makers Lenovo, Supermicro, Wiwynn and QCT said they would ship HGX-2 systems by the end of the year. Some of the biggest customers for HGX-2 systems are likely to be hyperscale providers, so it's no surprise that Foxconn, Inventec, Quanta and Wistron are also expected to manufacture servers that use the new platform for cloud data centers.

The HGX-2 is built from two GPU baseboards that link the Tesla GPUs via the NVSwitch interconnect fabric. Each baseboard carries eight GPUs, for a total of 16. The HGX-1, announced a year ago, handled only eight GPUs.
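To put the 16-GPU configuration in perspective, the quick sketch below tallies the platform's aggregate memory and tensor throughput. The per-GPU figures (32 GB of HBM2 and roughly 125 teraflops of FP16 tensor performance per Tesla V100) are Nvidia's published specifications rather than numbers from this article, and the arithmetic is only illustrative.

```python
# Illustrative tally of HGX-2 aggregate resources, assuming Nvidia's published
# per-GPU figures for the 32 GB Tesla V100 (not taken from the article itself).

gpus_per_baseboard = 8
baseboards = 2
gpu_count = gpus_per_baseboard * baseboards    # 16 GPUs linked by NVSwitch

hbm2_per_gpu_gb = 32                           # 32 GB HBM2 per V100
tensor_tflops_per_gpu = 125                    # ~125 TFLOPS FP16 tensor ops per V100

print(f"GPUs: {gpu_count}")
print(f"Aggregate GPU memory: {gpu_count * hbm2_per_gpu_gb} GB")                        # 512 GB
print(f"Aggregate tensor throughput: ~{gpu_count * tensor_tflops_per_gpu / 1000} PFLOPS")  # ~2 PFLOPS
```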
Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers.

Today’s hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments, and enterprises are finding that a traditional approach to managing data centers isn’t optimal. By using artificial intelligence, in the form of machine learning, there is enormous potential to streamline the management of complex computing facilities.
AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.
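To make that concrete, here is a minimal sketch of the kind of telemetry-driven anomaly detection such tools perform, flagging readings that look unlike normal operation. The feature set, library choice (scikit-learn), and thresholds are illustrative assumptions, not details from the article.

```python
# Minimal sketch of facility-telemetry anomaly detection with machine learning.
# Features, thresholds, and library choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend telemetry: [inlet_temp_C, rack_power_kW, fan_speed_pct] sampled every minute
normal = rng.normal(loc=[24.0, 6.5, 55.0], scale=[0.8, 0.4, 5.0], size=(10_000, 3))
failing = rng.normal(loc=[31.0, 8.5, 95.0], scale=[0.5, 0.3, 2.0], size=(20, 3))  # overheating rack
readings = np.vstack([normal, failing])

model = IsolationForest(contamination=0.005, random_state=0).fit(normal)
flags = model.predict(readings)            # +1 = normal, -1 = anomalous

print(f"Flagged {np.sum(flags == -1)} of {len(readings)} samples for operator review")
```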
Data-center downtime is crippling and costly for enterprises. It’s easy to see the appeal of tools that can provide visibility into data-center assets, interdependencies, performance and capacity, and turn that visibility into actionable knowledge that anticipates equipment failures or capacity shortfalls.

Data center infrastructure management (DCIM) tools are designed to monitor the utilization and energy consumption of both IT and building components, from servers and storage to power distribution units and cooling gear.
DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. With DCIM software, enterprises can simplify capacity planning and resource allocation as well as ensure that power, equipment and floor space are used as efficiently as possible.
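As a toy illustration of one such DCIM function, the snippet below checks measured rack power draw against rated capacity and flags racks above a utilization threshold. The rack names, readings, and 90% threshold are hypothetical.

```python
# Toy sketch of one DCIM-style check: flag racks whose measured power draw
# exceeds a utilization threshold of their rated capacity. All values are hypothetical.

RACKS = {
    # name: (measured_kw, rated_kw)
    "row1-rack01": (4.2, 6.0),
    "row1-rack02": (5.7, 6.0),
    "row2-rack07": (8.9, 10.0),
}

UTILIZATION_ALERT = 0.90  # alert when a rack passes 90% of rated power

for name, (measured, rated) in RACKS.items():
    utilization = measured / rated
    status = "ALERT" if utilization >= UTILIZATION_ALERT else "ok"
    print(f"{name}: {measured:.1f}/{rated:.1f} kW ({utilization:.0%}) {status}")
```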
There are two main camps in quantum computing development, says Ashish Nadkarni, program vice president of computing platforms, worldwide infrastructure at IDC. In the first camp are entrenched players from the world of classical computing; in the second are quantum computing startups.

“It’s a highly fragmented landscape,” Nadkarni says. “Each company has its own approach to building a universal quantum computer and delivering it as a service.”
Classical-computing vendors pioneer quantum computing
Along with IBM, a number of other classical-computing companies are staking a claim in the emerging field of quantum computing.
The first thing to know about quantum computing is that it won’t displace traditional, or ‘classical,’ computing. The second thing to know: Quantum computing is still a nascent technology that probably won’t be ready for prime time for several more years.

And the third thing you should know? The time to start protecting your data’s security from quantum computers is now.
Here’s an overview of what you should know about quantum computing.
In 2018, for the first time, cloud and software-defined data-center concerns have become the primary focus of enterprise network teams, bumping server virtualization from the top spot, according to an Enterprise Management Associates (EMA) report based on a survey of 251 North American and European enterprise network managers.

This is the first shift in their priorities in more than a decade. Since 2008, EMA has been asking network managers to identify the broad IT initiatives that drive their priorities, and server virtualization has dominated their responses year after year. Cloud and software-defined data center (SDDC) architectures have always been secondary or tertiary drivers.
The tech industry got a jolt last week worse than the magnitude-3.5 quake that hit Oakland, California, on Monday. A report by Bloomberg, citing the usual anonymous sources, said that after a whole lot of R&D and hype, Qualcomm was looking to shut down or sell its Centriq line of ARM-based data-center processors.

Qualcomm launched the 48-core Centriq 2400 last November. At the time, potential customers such as Microsoft, Alibaba and HPE took to the stage to voice their support and interest.
HPE today took a step toward bolstering its data-center technology with plans to acquire Plexxi and its hyperconverged networking fabric.

HPE said it expects the deal to close in its third quarter, which ends July 31, 2018, but did not release other financial details. Plexxi was founded in 2010 and targeted the nascent software-defined networking (SDN) market.
“Plexxi’s technology will extend HPE’s market-leading software-defined compute and storage capabilities into the high-growth, software-defined networking market, expanding our addressable market and strengthening our offerings for customers and partners,” HPE said in announcing the deal.
At Nutanix's .NEXT user conference last week, the company flexed its software muscles with a cornucopia of new products and a roadmap to becoming the next big enterprise platform vendor. To achieve this status, Nutanix has shifted to selling software and letting its customers run its stack on their preferred hardware platform.

Nutanix currently has a wide range of hardware partners, including Lenovo, IBM and HPE. However, the vendor that has arguably done the best job of providing the widest range of options for Nutanix customers is Dell EMC.
The Justice Department investigation into Huawei recalls a similar probe into whether Shenzhen rival ZTE broke U.S. sanctions by exporting devices containing American components to Iran. ZTE was found guilty last year not only of breaking the sanctions, which resulted in an $892 million fine, but of breaking the settlement deal’s terms by failing to punish those involved.
It’s rare to see a processor find great success outside the area it was intended for, but that’s exactly what has happened to the graphics processing unit (GPU). A chip originally intended to speed up gaming graphics and nothing more now powers everything from Adobe Premiere and databases to high-performance computing (HPC) and artificial intelligence (AI).

GPUs are now offered in servers from every major OEM, plus off-brand vendors like Supermicro, but they aren’t doing graphics acceleration. That’s because the GPU is in essence a giant math co-processor, now being used to perform computation-intensive work ranging from 3D simulations to medical imaging to financial modeling.
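The "giant math co-processor" point is easy to see in code: the same dense matrix multiply that underpins 3D simulation, imaging and AI workloads can be pushed to the GPU with a drop-in array library. The snippet below uses CuPy as one example of that offload; the library choice and matrix size are illustrative, and an Nvidia GPU with a CUDA-enabled CuPy install is assumed.

```python
# Illustrative GPU offload of a dense matrix multiply using CuPy (assumes an
# Nvidia GPU and a CUDA-enabled CuPy install; library choice is an example).
import time
import numpy as np
import cupy as cp

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

# CPU baseline
t0 = time.time()
c_cpu = a_cpu @ b_cpu
cpu_s = time.time() - t0

# Same operation on the GPU
a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)
t0 = time.time()
c_gpu = a_gpu @ b_gpu
cp.cuda.Device().synchronize()   # wait for the GPU kernel to finish before timing
gpu_s = time.time() - t0

diff = float(cp.max(cp.abs(cp.asarray(c_cpu) - c_gpu)))
print(f"CPU: {cpu_s:.2f}s  GPU: {gpu_s:.3f}s  max abs diff: {diff:.3e}")
```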
10 competitors Cisco just can't kill off

Creating a short list of key Cisco competitors is no easy task, as the company now competes in multiple markets. In this case, we tried to pick companies that have been around awhile or firms that have developed key technologies that directly impacted the networking giant. Cisco is now pushing heavily into software and security, a move that will open it up to myriad new competitors as well. Take a look.
New solar installations are contributing the same amount of electricity as building one new coal-fueled power station annually in Australia, according to the head of the Australian Energy Market Operator (AEMO).

“We are essentially seeing the [equivalent] of a new power plant being built every season,” AEMO chief Audrey Zibelman told the Sydney Morning Herald.

One reason rooftop adoption in Australia is exploding, the paper reported, is government subsidies. But there is another financial driver of alternative power globally: the full lifecycle cost of building and operating solar is now lower.
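The "full lifecycle cost" argument is usually expressed as the levelized cost of energy (LCOE), which divides lifetime capital and operating costs by lifetime energy output, both discounted to present value. The sketch below shows that standard formula with made-up inputs; none of the numbers come from the article.

```python
# Illustrative levelized-cost-of-energy (LCOE) calculation with hypothetical inputs.
# LCOE = sum_t (capex_t + opex_t) / (1+r)^t  divided by  sum_t energy_t / (1+r)^t

def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    """Return LCOE in $/MWh for a plant with up-front capex and flat yearly figures."""
    cost_pv = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    energy_pv = sum(annual_mwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return cost_pv / energy_pv

# Hypothetical utility-scale solar farm: $80M capex, $1M/yr O&M,
# 200,000 MWh/yr output, 25-year life, 6% discount rate
print(f"${lcoe(80e6, 1e6, 200_000, 25, 0.06):,.0f} per MWh")
```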