Once again, IDC has thrown cold water on the notion that enterprises are looking to shut down their data centers; instead, they are looking to grow them, and a new form of IT spending is taking shape. The latest worldwide market study by International Data Corporation (IDC) found that revenue from sales of IT infrastructure equipment for cloud environments grew 48.4 percent year over year in the second quarter of 2018, reaching $15.4 billion. Quarterly spending on public cloud IT infrastructure was $10.9 billion in the second quarter of 2018, up 58.9 percent year over year, while private cloud spending reached $4.6 billion, an increase of 28.2 percent year over year.
By the end of the year, IDC projects that public cloud spending will account for 68.2 percent of total cloud IT equipment spending, growing at an annual rate of 36.9 percent. That’s not surprising, given that Amazon, Microsoft, Google, and other hyperscalers buy servers in lots of tens of thousands of units.
The once-stagnant server market has heated up over the last few years, due in no small part to “white box” server vendors grabbing an increasing share of the cloud business. Enterprises have been reluctant to follow the lead of hyperscale data center operators to these off-brand server competitors, largely because of a lack of enterprise-grade service and maintenance options. But the economics are compelling.
“White box” originally referred to the off-brand PCs built by independent PC shops that once dotted the landscape: a plain beige tower with no vendor label on the box. In the server market, “white box” refers to vendors outside the big three of Dell EMC, HP Enterprise, and Lenovo.
If you are a career networking professional – be it a network architect, engineer, or manager – do you need to advance your skill set to include programming and other DevOps skills to better serve yourself and your company?
Three years after Intel acquired FPGA maker Altera for $16.7 billion, its strategy and positioning are coming into focus with the disclosure of its plans for Stratix 10 hardware and an accompanying application development and acceleration stack. Altera made two FPGA cards, chips that can be reprogrammed to perform different functions: the Arria 10, the low-end card, and the Stratix 10, the high-performance card. The two are aimed at different target markets and use cases. “Each has its own tier, its own sweet spot for features and form factor,” said Sabrina Gomez, director of product marketing at Intel’s Programmable Solutions Group. “Arria is smaller, fits in 1U form factors. Stratix is dual PCI card. The power draw for Arria is 75 watts, while it’s 225 watts for Stratix.”
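To make the tiering concrete, here is a minimal, purely illustrative Python sketch that picks between the two cards using the form-factor and power figures quoted above. The card names and numbers come from the article, but the selection logic and function names are hypothetical, not an Intel tool.

```python
# Illustrative only: choose an FPGA accelerator card from the figures quoted
# in the article (Arria 10: 75 W, fits 1U; Stratix 10: dual-slot card, 225 W).
CARDS = {
    "Arria 10":   {"power_watts": 75,  "slots": 1, "tier": "low-end"},
    "Stratix 10": {"power_watts": 225, "slots": 2, "tier": "high-performance"},
}

def pick_card(power_budget_watts: int, free_slots: int) -> str:
    """Return the highest-power card that fits the chassis constraints."""
    candidates = [
        name for name, c in CARDS.items()
        if c["power_watts"] <= power_budget_watts and c["slots"] <= free_slots
    ]
    # Prefer the bigger card when both fit.
    return max(candidates, key=lambda n: CARDS[n]["power_watts"], default="none")

if __name__ == "__main__":
    print(pick_card(power_budget_watts=100, free_slots=1))  # -> Arria 10
    print(pick_card(power_budget_watts=300, free_slots=2))  # -> Stratix 10
```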
It is nice to see the return of competition in the CPU space. For too long it was a one-horse race, with Intel on its own and AMD willing to settle for good enough. Revitalized by the Zen architecture, AMD is taking it to Intel once again, and you are the winner. Both sides are proclaiming massive performance records, although in both cases the claims come with an asterisk next to them.

Intel's announcement
In Intel’s case, the company announced 95 new performance world records for its Xeon Scalable processors, set with up-to-date benchmarks on hardware from major OEMs, including Dell, HPE, ASUS, and Super Micro. The tests, which ranged from single-socket systems up to eight-socket systems, covered the SPECint and SPECfp benchmarks as well as SAP HANA.
Cisco today said it has teamed with SAP to make it easier for customers to manage high volumes of data from multi-cloud and distributed data center resources. The companies announced that Cisco’s Container Platform will work with SAP’s Data Hub to integrate large data sets that may reside in public clouds such as Amazon Web Services, Microsoft or Google, or in environments such as Hadoop, and tie them to private cloud or enterprise apps such as SAP S/4HANA.
Cisco introduced its Kubernetes-based Container Platform in January and said it allows for self-service deployment and management of container clusters. SAP rolled out the Data Hub about a year ago, saying it provides visibility, orchestration and access to a broad range of data systems and assets while enabling the fast creation of powerful, organization-spanning data pipelines.
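The Data Hub's pitch is organization-spanning data pipelines. As a rough illustration of that idea (plain Python with boto3, not SAP's or Cisco's actual APIs), the sketch below pulls a dataset from a public-cloud object store and hands it to an on-premises step; the bucket, key, and paths are hypothetical placeholders.

```python
# Illustrative pipeline stage: copy a dataset from a public-cloud object store
# (AWS S3 here, via boto3) so an on-premises job can consume it.
import boto3

def stage_dataset(bucket: str, key: str, local_path: str) -> str:
    """Download one object from S3 to local (on-prem) storage."""
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, local_path)
    return local_path

def on_prem_step(path: str) -> None:
    """Stand-in for the private-cloud/enterprise-app side of the pipeline."""
    with open(path, "rb") as f:
        print(f"processing {len(f.read())} bytes on premises")

if __name__ == "__main__":
    staged = stage_dataset("example-public-bucket", "sales/2018-q2.csv",
                           "/data/incoming/sales-2018-q2.csv")
    on_prem_step(staged)
```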
Who would have thought the chief competitors to HP Enterprise and Dell EMC would wind up being some of their biggest customers? But giant data center operators are, in a sense, becoming just that — competitors to the hardware companies they once bought, and to some degree still buy, hardware from. The needs of hyperscale data centers have driven this phenomenon. HPE and Dell design servers for maximum, broad appeal so they don’t have to carry many SKUs, but hyperscale data center operators want different configurations and find it cheaper to buy the parts and build the servers themselves. Most of them — Google chief among them — don’t sell their designs; they are strictly for internal use. In the case of LinkedIn, however, the company is offering to “open source” the hardware designs it created to lower costs and speed up its data center deployment.
Hitachi Vantara launched a wide range of new hyperconverged infrastructure (HCI) systems, management software, and automation tools at its Hitachi Next 2018 conference in San Diego. The move is meant to be a convergence of products, just as Hitachi Vantara itself is the result of a convergence. The U.S. subsidiary of the Japanese tech giant was formed last year by combining three business units: Hitachi Data Systems, the systems and storage infrastructure business; the Hitachi Insight Group IoT business; and the Pentaho big data business.
High-speed Ethernet is taking center stage this week at the European Conference on Optical Communication in Rome, Italy, where a number of vendors, including Arista, Cisco, and Huawei, are showing off gear that will power large-enterprise and hyperscale networks. The key demos come from the Ethernet Alliance and the 100G Lambda multi-source agreement (MSA) group, which are pushing the technology advances needed to support 400G Ethernet, including four-level pulse amplitude modulation (PAM4) for electrical and optical interfaces, high-bandwidth switching silicon, and a new high-density pluggable connector system known as QSFP-DD.
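For context on how those pieces add up to 400G, here is a short back-of-the-envelope calculation; the baud rates are the published QSFP-DD and 100G Lambda figures, not numbers taken from the article.

```python
# Back-of-the-envelope lane math for 400G Ethernet (standard spec figures).
# PAM4 carries 2 bits per symbol, so a lane signalling at 26.5625 GBd moves
# 53.125 Gb/s raw (~50 Gb/s after forward error correction).
bits_per_symbol = 2              # PAM4 = 4 levels = 2 bits/symbol
electrical_baud = 26.5625e9      # QSFP-DD electrical lane, baud
optical_baud = 53.125e9          # 100G Lambda optical lane, baud

electrical_lane = bits_per_symbol * electrical_baud   # ~53.125 Gb/s raw
optical_lane = bits_per_symbol * optical_baud         # ~106.25 Gb/s raw

print(f"8 electrical lanes: {8 * electrical_lane / 1e9:.1f} Gb/s raw")  # ~425
print(f"4 optical lambdas:  {4 * optical_lane / 1e9:.1f} Gb/s raw")     # ~425
# Both land at ~400 Gb/s of payload once FEC overhead is removed -- which is
# why QSFP-DD exposes 8 x 50G electrical lanes while 100G Lambda optics need
# only 4 wavelengths.
```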
Intel is revamping its data center strategy by going beyond the Xeon chip and into silicon photonics transceivers. The company announced Monday at the European Conference on Optical Communication (ECOC) that samples of its silicon photonics transceivers targeting 5G wireless infrastructure and data centers are available now, with production set to start in the first quarter of 2019. The company notes that global data center IP traffic is increasing significantly: it was 6.8 zettabytes in 2016 and is expected to triple by 2021, driven by all the data generated by humans and the Internet of Things (IoT). The choke point becomes copper wire, the standard medium for Ethernet connectivity. Copper can only effectively carry signals about eight to 10 meters, said Eoin McConnell, director of marketing for the connectivity group in Intel’s data center group, while fiber optics can reach as far as 10 kilometers.
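Working from the figures Intel cites, a quick sketch of the implied growth rate; the 2016 baseline and the tripling are from the article, while the compound-growth arithmetic is added here.

```python
# Implied growth from the article's figures: 6.8 ZB of data center IP
# traffic in 2016, tripling by 2021.
traffic_2016_zb = 6.8
years = 2021 - 2016            # 5 years
traffic_2021_zb = traffic_2016_zb * 3

cagr = (traffic_2021_zb / traffic_2016_zb) ** (1 / years) - 1
print(f"2021 traffic: ~{traffic_2021_zb:.1f} ZB")      # ~20.4 ZB
print(f"implied CAGR: ~{cagr * 100:.1f}% per year")    # ~24.6%
```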
Former Intel executive Renee James, who could have been the company's CEO following the ouster of Brian Krzanich last June, has instead launched a broadside against her former employer in the form of Ampere Computing, a startup that develops ARM-based chips for the data center. Sound familiar? It’s what Cavium has been doing for some time — and gaining a good bit of momentum. However, the field is still very green, and Ampere has more than enough room to grow.
Ampere is based in Silicon Valley but has an office in Portland, Oregon, not far from Intel’s primary development facility in Hillsboro, and it has apparently been picking up Intel employees left and right.
Smelling blood in the water, a revitalized AMD is preparing for a big push against Intel in the data center, hoping to win back the market share it gained and then lost a decade ago. AMD is promoting its Epyc processors, with 16 or 32 cores, as a lower-TCO, higher-performance option than Intel’s Xeon. It argues that a single-socket 32-core server is cheaper, both up front and in the long run, than a dual-socket setup, which is Intel’s bread and butter. “We’re not saying single socket is for everyone, but at the heart of the market is where 50 percent to 80 percent are 32 cores per server and down, and our top single socket can do it more efficiently with lower costs and licensing. But in some cases some people will want to stay at two-socket,” said Glen Keels, director of product and segment marketing for data center products at AMD.
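AMD's single-socket argument is easiest to see with numbers. The sketch below compares one 32-core single-socket box against a dual-socket 2 x 16-core box; every price in it is a hypothetical placeholder, not AMD or Intel list pricing, and per-socket licensing is assumed purely for illustration.

```python
# Hypothetical TCO comparison of the single-socket pitch. All prices are
# made-up placeholders; the point is the shape of the math, not the values.
def tco(server_cost, sockets, per_socket_license, years=3):
    """Hardware plus per-socket software licensing over the period."""
    return server_cost + sockets * per_socket_license * years

single_socket_32c = tco(server_cost=9_000,  sockets=1, per_socket_license=2_500)
dual_socket_2x16c = tco(server_cost=11_000, sockets=2, per_socket_license=2_500)

print(f"single-socket 32-core: ${single_socket_32c:,}")   # $16,500
print(f"dual-socket 2x16-core: ${dual_socket_2x16c:,}")   # $26,000
# With the same total core count, the second socket mainly adds hardware and
# licensing cost -- which is the gap AMD is pointing at.
```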
Hyperconverged infrastructure (HCI) vendor Scale Computing and power-management specialist APC (formerly American Power Conversion, now owned by Schneider Electric) have partnered to offer a range of turnkey micro data centers for the North American market. The platform combines Scale’s hyperconverged software, HC3 HyperCore, running on its own hardware and built on APC’s ready-to-deploy racks for a micro data center, and it will be sold as a single SKU. The pre-packaged platform is entirely turnkey, with automated virtualization, power-management resources, and built-in redundancy. That makes it well suited for remote edge locations, such as cell phone towers, where staff are not immediately available to maintain the equipment.
Lenovo and NetApp's storage alliance, joint venture in China, and new series of all-flash and hybrid-flash products announced at Lenovo's Transform event put them both in a much stronger position in the data center against rivals Dell EMC and HPE. The storage offerings include two families, each subdivided into all-flash and hybrid-flash products, jointly developed by Lenovo and NetApp and available now worldwide. Several of the products support NVMe (non-volatile memory express), the extremely fast communications protocol and controller interface that moves data to and from SSDs over the PCIe bus. NVMe SSDs are designed to provide as much as two orders of magnitude of speed improvement over prior SSDs.
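To put the NVMe claim in perspective, here is a quick comparison of the standard interface figures; these are general NVMe/AHCI and SATA/PCIe spec numbers, not figures from the article.

```python
# Why NVMe over PCIe outruns SATA/AHCI SSDs -- standard spec figures.
interfaces = {
    # name: queues, commands per queue, approximate link bandwidth (MB/s)
    "SATA SSD (AHCI)":        {"queues": 1,      "cmds_per_queue": 32,     "bw_mbps": 600},
    "NVMe SSD (PCIe 3.0 x4)": {"queues": 65_535, "cmds_per_queue": 65_536, "bw_mbps": 3_940},
}

for name, spec in interfaces.items():
    outstanding = spec["queues"] * spec["cmds_per_queue"]
    print(f"{name}: ~{spec['bw_mbps']} MB/s link, "
          f"{outstanding:,} outstanding commands max")
# The raw bandwidth gap is roughly 6-7x, but the queue-depth gap is what lets
# NVMe keep many-core servers and highly parallel workloads busy.
```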
Consumer demand for instant, 24-hour access to personal bank data has taken the financial world in a new direction in less than one generation. Not only do bank IT departments now rival those of software development companies, but banking networks and infrastructure are at least as complex as a tech firm’s. Personal financial information has become one of the most protected and heavily regulated types of data in the world, and security measures and compliance programs consume the largest share of a financial institution’s IT budget. Knowing all this, it’s no wonder the “rip and replace” fad of the early 2000s never materialized in the banking world. With everyone assuming the turn of the millennium meant “out with the old and in with the new,” companies were ready to rip the mainframes out of their infrastructure to prepare for whatever was next. But what came next never really materialized — or continued to prove inferior to the sheer processing power of the mainframe, which remains the only real choice for high-demand business computing.
Nvidia is raising its game in data centers, extending its reach across different types of AI workloads with the Tesla T4 GPU. Based on the new Turing architecture and paired with related software, the T4 is designed for blazing acceleration of applications for images, speech, translation, and recommendation systems. The T4, a small-form-factor accelerator card, is the essential component in Nvidia's new TensorRT Hyperscale Inference Platform and is expected to ship in data-center systems from major server makers in the fourth quarter. The T4 features Turing Tensor Cores, which support different levels of compute precision for different AI applications, as well as the major software frameworks – including TensorFlow, PyTorch, MXNet, Chainer, and Caffe2 – for so-called deep learning, machine learning involving multi-layered neural networks.
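The Tensor Cores' selling point is running inference at reduced precision. As a generic illustration of that idea (plain PyTorch, not Nvidia's TensorRT stack), the sketch below casts a small stand-in model to FP16 when a GPU is available.

```python
# Generic reduced-precision inference sketch in PyTorch -- illustrative only,
# not the TensorRT Hyperscale Inference Platform itself.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# FP16 is the interesting case on GPUs with Tensor Cores; fall back to FP32 on CPU.
dtype = torch.float16 if device == "cuda" else torch.float32

model = nn.Sequential(                 # stand-in for a real trained network
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
).to(device=device, dtype=dtype).eval()

x = torch.randn(8, 512, device=device, dtype=dtype)   # a dummy batch
with torch.no_grad():                  # inference only: no gradients needed
    scores = model(x)
print(scores.shape, scores.dtype)      # torch.Size([8, 10]), float16 on GPU
```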
Cloud adoption is undoubtedly the cornerstone of digital transformation, and for many it is the foundation for rapid, scalable application development and delivery. Companies of all sizes and across all industries are racing to achieve the many benefits afforded by public, private, or hybrid cloud infrastructure. According to a recent study, 20 percent of enterprises plan to more than double public cloud spending in 2018, and 71 percent will grow public cloud spending by more than 20 percent. Enterprises moving to the cloud are often seeking to improve employee collaboration, ensure redundancy, boost security, and increase agility in application development. One of the top advantages afforded by the cloud is the ability to auto-scale in response to demand — a feature that has transformed what was once capacity planning into a continuous cycle of capacity and resource management.
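Auto-scaling usually boils down to a proportional rule like the one Kubernetes' Horizontal Pod Autoscaler uses: scale the replica count by the ratio of observed to target utilization. Below is a minimal sketch of that rule; the thresholds, limits, and function name are illustrative and not tied to any particular cloud's API.

```python
# Minimal auto-scaling decision rule, modeled on the proportional formula
# the Kubernetes HPA uses: desired = ceil(current * observed / target).
import math

def desired_replicas(current: int, observed_util: float, target_util: float,
                     min_replicas: int = 2, max_replicas: int = 50) -> int:
    desired = math.ceil(current * observed_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# CPU is at 90% against a 60% target: scale 4 replicas up to 6.
print(desired_replicas(current=4, observed_util=0.90, target_util=0.60))  # 6
# Demand drops to 20%: scale back down toward the floor.
print(desired_replicas(current=6, observed_util=0.20, target_util=0.60))  # 2
```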
Cisco is betting heavily that artificial intelligence and machine learning will play an enormous part in future networks and data centers.
To get up and running efficiently on a self-service, big-data analytics platform, many data-center and network managers these days would likely think first of using a cloud service. But not so fast – there is some debate about whether the public cloud is the way to go for certain big-data analytics. For some big-data applications, the public cloud may be more expensive in the long run and, because of latency issues, slower than on-site private cloud solutions. In addition, keeping data storage on premises often makes sense for regulatory and security reasons.
With all this in mind, Dell EMC has teamed up with BlueData, provider of a container-based software platform for AI and big-data workloads, to offer Ready Solutions for Big Data, a big-data-as-a-service (BDaaS) package for on-premises data centers. The offering brings together Dell EMC servers, storage, networking, and services along with BlueData software, all optimized for big-data analytics.
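The cost argument above comes down to utilization: steady, heavily used analytics clusters can be cheaper to own than to rent. The sketch below shows the shape of that break-even math; every dollar figure is a hypothetical placeholder, not Dell EMC, BlueData, or cloud-provider pricing.

```python
# Hypothetical on-prem vs. public-cloud break-even sketch for an analytics
# cluster. All prices are made-up placeholders.
HOURS_PER_MONTH = 730

def cloud_monthly(node_hourly_rate: float, nodes: int, utilization: float) -> float:
    """Pay-per-hour cost: you only pay for hours the nodes actually run."""
    return node_hourly_rate * nodes * HOURS_PER_MONTH * utilization

def on_prem_monthly(node_capex: float, nodes: int, amortization_months: int = 36,
                    opex_factor: float = 0.4) -> float:
    """Amortized hardware plus a rough power/space/admin overhead factor."""
    capex = node_capex * nodes / amortization_months
    return capex * (1 + opex_factor)

for util in (0.25, 0.50, 0.90):
    cloud = cloud_monthly(node_hourly_rate=1.50, nodes=20, utilization=util)
    onprem = on_prem_monthly(node_capex=12_000, nodes=20)
    cheaper = "cloud" if cloud < onprem else "on-prem"
    print(f"utilization {util:.0%}: cloud ${cloud:,.0f}/mo vs "
          f"on-prem ${onprem:,.0f}/mo -> {cheaper}")
```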
Software-defined data-center (SDDC) networks hold the promise of quickly and automatically reallocating resources to best support applications without changing the underlying physical infrastructure, but they require the proper integration of management, automation and network orchestration (MANO).