Category Archives for "Network World Data Center"

10 hot micro-data-center startups to watch

Data-hungry technology trends such as IoT, smart vehicles, drone deliveries, smart cities and Industry 4.0 are increasing the demand for fast, always-on edge computing. One solution that has emerged to bring the network closer to the applications that generate data and the end users who consume it is the micro data center. The micro data center sector is a new space filled with more noise than signal. If you go hunting for a micro data center for your business, you'll find everything from suitcase-sized computing stacks that replace a server closet, to modular enclosures delivered by semi-trucks, to larger units that reside at the foot of cell towers, to dedicated edge data centers with standardized designs that can spring up wherever there's demand and where real estate or access rights are available, including easements, rooftops and industrial sites.

HPE, Nutanix deliver private cloud as a service

Nutanix announced the general availability of its integrated private-cloud-as-a-service offering with Hewlett Packard Enterprise (HPE), as well as a new integration with the popular IT management platform ServiceNow. Nutanix's strategy has been to integrate its hyperconverged infrastructure (HCI) software with all of the big server-hardware vendors in addition to selling its own hardware appliances. Nutanix shared the news at its .NEXT conference in Copenhagen. HPE's services are sold under the GreenLake brand, its metered on-premises service meant to counter the allure of the cloud. Customers can get ProLiant hardware under GreenLake without massive upfront acquisition costs and pay only for their use of the hardware.

The biggest risk to uptime? Your staff

There's an old joke: "To err is human, but to really foul up you need a computer." Now it seems the reverse is true. The reliability of data center equipment has vastly improved, but the humans running it have not kept up, and that's a threat to uptime. The Uptime Institute has surveyed thousands of IT professionals throughout the year on outages and found that the vast majority of data center failures, 70 to 75 percent, are caused by human error. And some of them are severe. More than 30 percent of IT service and data center operators experienced downtime they called a "severe degradation of service" over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million.

Schneider Electric launches wall-mounted server rack

Floor space is often at a premium in a cramped data center, and Schneider Electric believes it has a fix for that: a wall-mounted server rack. The EcoStruxure Micro Data Center Wall Mount is a 6U design, meaning it has the capacity of six rack units. Schneider is pushing its space-saving option as an edge solution. The company's EcoStruxure IT Expert remote management and vulnerability assessment service will be available for the wall-mount units, even when they are installed in non-secured edge locations.

High performance computing: Do you need it?

In today's data-driven world, high performance computing (HPC) is emerging as the go-to platform for enterprises looking to gain deep insights into areas as diverse as genomics, computational chemistry, financial risk modeling and seismic imaging. Initially embraced by research scientists who needed to perform complex mathematical calculations, HPC is now gaining the attention of a wider range of enterprises spanning an array of fields. "Environments that thrive on the collection, analysis and distribution of data – and depend on reliable systems to support streamlined workflow with immense computational power – need HPC," says Dale Brantly, director of systems engineering at Panasas, an HPC data-storage-systems provider.

Intel announces Optane for workstations, higher capacity NAND

At its Memory and Storage Day 2019 in Seoul last week, Intel made several announcements concerning its Optane persistent storage as well as NAND flash capacity. Optane is a new form of non-volatile memory from Intel that has the storage capacity of a solid-state drive (SSD) but speed almost equal to DRAM. It sits between memory and storage to act as a large, fast cache. While some models come in a PCI Express card design, the predominant design is DRAM-style memory sticks that plug into the motherboard. And they cost a fortune: a 512GB Optane stick will run you $8,000. Intel announced a new generation of Optane codenamed "Alder Stream," which it said has a 50x lower failure rate than 3D NAND and also triples the transfers per second compared with the current generation of Optane on the market today.
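For a sense of scale, here is a back-of-the-envelope cost-per-gigabyte calculation based on the $8,000 / 512GB Optane figure quoted above. The NAND SSD capacity and price below are illustrative assumptions for comparison, not vendor quotes.

```python
# Back-of-the-envelope $/GB comparison for the Optane pricing quoted above.
# The NAND SSD figures are illustrative assumptions, not vendor quotes.
optane_capacity_gb = 512
optane_price_usd = 8_000

nand_capacity_gb = 1_000   # assumed commodity enterprise NAND SSD
nand_price_usd = 200       # assumed street price

optane_per_gb = optane_price_usd / optane_capacity_gb   # ~ $15.63/GB
nand_per_gb = nand_price_usd / nand_capacity_gb         # $0.20/GB
premium = optane_per_gb / nand_per_gb                   # ~ 78x

print(f"Optane ${optane_per_gb:.2f}/GB vs NAND ${nand_per_gb:.2f}/GB ({premium:.0f}x premium)")
```

Even with generous assumptions for the NAND side, the per-gigabyte premium is well into double digits, which is why Optane is positioned as a cache tier rather than bulk storage.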

Data center gear will increasingly move off-premises

I've said that colocation and downsizing in favor of the cloud are happening, and the latest research from 451 Research confirms the trend. More than half of globally utilized racks will be located at off-premises facilities, such as cloud and colocation sites, by the end of 2024, the company found. As companies get out of data center ownership, hardware will move to colocation sites like Equinix and DRT or to cloud providers. As a result, the total worldwide data center installed base will dip at a 0.1% CAGR between 2019 and 2024, according to the report, but overall capacity in terms of space, power, and racks will continue to shift toward larger data centers.

How to decommission a data center

About the only thing harder than building a data center is dismantling one, because the potential for business disruption is much greater when shutting down a data center than when constructing one. The recent decommissioning of the Titan supercomputer at Oak Ridge National Laboratory (ORNL) shows just how complicated the process can be. More than 40 people were involved in the project, including staff from ORNL, supercomputer manufacturer Cray, and external subcontractors. Electricians were required to safely shut down the 9-megawatt-capacity system, and Cray staff were on hand to disassemble and recycle Titan's electronics, metal components and cabinets. A separate crew handled the cooling system. In the end, 350 tons of equipment and 10,800 pounds of refrigerant were removed from the site.

Cisco spreads ACI to Microsoft Azure, multicloud and SD-WAN environments

Cisco is significantly extending its Application Centric Infrastructure (ACI) technology to help customers grow and control hybrid, multicloud and SD-WAN environments. ACI is Cisco's flagship software-defined networking (SDN) data-center package, but it also delivers the company's intent-based networking technology, which gives customers the ability to automatically implement network and policy changes on the fly and ensure data delivery.

Oracle updates Exadata big iron and its cloud commitment

Oracle OpenWorld 2019 is the platform for countless software announcements, but the company has also been in the hardware business since its 2010 purchase of Sun Microsystems, and it remains committed to delivering integrated hardware and software systems. Proving the point, the company took the wraps off the Oracle Exadata X8M, designed to accelerate Oracle's database applications and featuring new data analytics and business intelligence capabilities along with Oracle's newfound religion on automation. The new Exadata X8M server platform uses second-generation Xeon Scalable processors and Intel's Optane DC persistent memory to boost performance. That's a big win for Intel, which is facing quite a bit of momentum from AMD's Epyc processor. And it's another win for Optane, which pretty much every server vendor now supports.

Microsoft brings IBM iron to Azure for on-premises migrations

When Microsoft launched Azure as a cloud-based version of its Windows Server operating system, it didn't make it exclusively Windows. It also included Linux support, and in just a few years Linux instances have come to outnumber Windows instances. It's nice to see Microsoft finally shed the not-invented-here attitude that was so toxic for so long, but the company's latest move is really surprising. Microsoft has partnered with a company called Skytap to offer IBM Power9 instances on its Azure cloud service, so customers can run Power-based systems inside the Azure cloud as Azure virtual machines (VMs) alongside the Xeon and Epyc server instances it already offers.

Dell EMC updates PowerMax storage systems

Dell EMC has updated its PowerMax line of enterprise storage systems to offer Intel's Optane persistent storage and NVMe-over-Fabrics, both of which give the PowerMax a big boost in performance. Last year, Dell launched the PowerMax line with high-performance storage specifically targeting industries that need very low latency and high resiliency, such as banking, healthcare, and cloud service providers. The company claims the new PowerMax is first to market with dual-port Intel Optane SSDs and the use of storage-class memory (SCM) as persistent storage. Optane is a new type of non-volatile storage that sits between SSDs and memory: it has the persistence of an SSD but almost the speed of DRAM. Optane storage also carries a ridiculous price tag; a 512GB stick, for example, costs nearly $8,000.

IBM z15 mainframe amps up cloud, security features

IBM has rolled out a new generation of mainframes, the z15, that not only bolsters the speed and power of the Big Iron but promises to integrate hybrid cloud, data privacy and security controls for modern workloads. On the hardware side, the z15 mainframe systems ramp up performance and efficiency. For example, IBM claims 14 percent more performance per core, 25 percent more system capacity, 25 percent more memory, and 20 percent more I/O connectivity than the previous iteration, the z14. IBM also says the system can save customers 50 percent in costs compared with operating x86-based servers and use 40 percent less power than a comparable x86 server farm. And the z15 can handle scalable environments such as supporting 2.4 million Docker containers on a single system.

Cisco adds speed, smarts to MDS storage networking family

Looking to support ever-increasing workloads, Cisco has pumped up the speed and intelligence of its storage area networking family. The company addressed the need for increased speed by saying it would add support for 64Gbps-ready SAN fabric across its 9700 line of MDS storage directors. The MDS family includes the 18-slot 9718, the 10-slot 9710 and the six-slot 9706. The main idea is that customers can upgrade to the new fabric module and upgrade their software to add speed and capacity for high-speed fabrics without having to rip and replace any of the directors, according to Adarsh Viswanathan, Cisco product manager for data center switching. A 64Gbps line card for all three chassis will be available in the future, Cisco said.

Can AMD convert its growing GPU presence into a data center play?

AMD's $5.4 billion purchase of ATI Technologies in 2006 seemed like an odd match. Not only were the companies in separate markets, they were on separate coasts, with ATI in the Toronto, Canada, region and AMD in Sunnyvale, California. They made it work, and arguably it saved AMD from extinction: the graphics business kept the company afloat while the Athlon/Opteron business was going nowhere, and there were many quarters when graphics brought in more revenue than CPUs. But those days are over. AMD is once again a highly competitive CPU company, with quarterly sales closing in on the $2 billion mark, and while the CPU business is on fire, the GPU business continues to do well.

Two AMD Epyc processors crush four Intel Xeons in tests

Tests by the evaluation and testing site ServeTheHome found that a server with two AMD Epyc processors can outperform a four-socket Intel system that costs considerably more. If you don't read ServeTheHome, you should. It's cut from the same cloth as Tom's Hardware Guide and AnandTech but focuses on server hardware, mostly the low end, though it throws in some enterprise gear as well. ServeTheHome ran tests comparing the AMD Epyc 7742, which has 64 cores and 128 threads, with the Intel Xeon Platinum 8180M and its 28 cores and 56 threads. The dollars, though, show a real difference. Each Epyc 7742 costs $6,950, while each Xeon Platinum 8180M goes for $13,011. So two Epyc 7742 processors cost you $13,900, and four Xeon Platinum 8180M processors cost $52,044, nearly four times as much as the AMD chips.
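The price math in that comparison works out as follows. This is a sketch using only the per-CPU list prices quoted above; whole-system costs (boards, memory, power) would shift the totals.

```python
# CPU list prices from the ServeTheHome comparison quoted above (USD).
epyc_7742_price = 6_950
xeon_8180m_price = 13_011

amd_total = 2 * epyc_7742_price      # two-socket Epyc config: $13,900
intel_total = 4 * xeon_8180m_price   # four-socket Xeon config: $52,044

ratio = intel_total / amd_total      # ~ 3.74, i.e. nearly 4x
print(f"AMD ${amd_total:,} vs Intel ${intel_total:,} ({ratio:.2f}x)")
```

Note the silicon-only ratio is about 3.74x, which is why "nearly four times" is the fair characterization.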

HPE’s vision for the intelligent edge

It’s not just speeds and feeds anymore, it's intelligent software, integrated security and automation that will drive the networks of the future.That about sums up the networking areas that Keerti Melkote, HPE's President, Intelligent Edge, thinks are ripe for innovation in the next few years.He has a broad perspective because his role puts him in charge of the company's networking products, both wired and wireless.Now see how AI can boost data-center availability and efficiency “On the wired side, we are seeing an evolution in terms of manageability," said Melkote, who founded Aruba, now part of HPE. "I think the last couple of decades of wired networking have been about faster connectivity. How do you go from a 10G to 100G Ethernet inside data centers? That will continue, but the bigger picture that we’re beginning to see is really around automation.” To read this article in full, please click here

Data center cooling: Electricity-free system sends excess building heat into space

We all know that blocking incoming sunlight helps cool buildings and that indoor thermal conditions can be improved with the added shade. More recently, though, scientists have been experimenting with ways to augment that passive cooling by capturing any superfluous, unwanted solar heat and expelling it, preferably into outer space, where it can't add to global warming. The difficulties in getting that kind of radiative cooling to work are twofold. First, directing the heat optimally is hard. "Normally, thermal emissions travel in all directions," says Qiaoqiang Gan, an associate professor of electrical engineering at the University at Buffalo, which is working on radiative cooling concepts. That's bad for heat spill-over, because it can send the thermal energy where it's not wanted, such as into other buildings.
