
Category Archives for "Network World Data Center"

Data centers are set to grow and become more complex, survey finds

Companies will invest more in data centers in the coming years, but not necessarily in compute. That's according to a new survey by AFCOM, the data center and IT management education company.

This is AFCOM's first study on the subject in two years, and it found that ownership, renovation, and construction of new data centers are all on the rise. Fifty-eight percent of survey respondents currently own between two and nine data centers, and on average 5.3 data centers will be renovated per organization, a figure that rises to 7.8 over the course of 12 months.

Once again, the notion that companies are shutting down their data centers and moving everything to the cloud is evaporating.

Mass data fragmentation requires a storage rethink

Companies are experiencing a growing problem of mass data fragmentation (MDF). Data is siloed and scattered all over the organization, on and off premises, and businesses are unable to use it strategically. When data is fragmented, only a small portion of it is available to be analyzed.

In my last post, I described MDF as a single trend, but it can occur in a number of ways. The most common forms of MDF are:

- Fragmentation across IT silos: Secondary IT operations such as backups, file sharing/storage, provisioning for test/development, and analytics are typically done in completely separate silos that don't share data or resources and offer no central visibility or control. The result is overprovisioning and waste, as well as difficulty meeting service-level agreements (SLAs) and availability targets.
- Fragmentation within a silo: There are even "silos within silos." Backup is a good example: it is not uncommon to have four or five separate backup solutions from different vendors to handle different workloads such as virtual, physical, database, and cloud. On top of that, each solution needs associated target storage, dedupe appliances, media servers, and so on, which propagates the silo problem.
- Fragmentation due to copies

Hyperconvergence: Not just for specific workloads anymore

Hyperconvergence has come a long way in a relatively short time, and enterprises are taking advantage of the new capabilities.

Hyperconverged infrastructure (HCI) combines storage, computing, and networking into a single system; hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking.

HCI platforms were initially aimed at virtual desktop infrastructure (VDI), video storage, and other discrete workloads with predictable resource requirements. Over time, they have advanced to become suitable platforms for enterprise applications, databases, private clouds, and edge-computing deployments.

Chip-cooling breakthrough will reduce data-center power costs

Traditional passive heatsinks affixed to microprocessors don't cool well enough for today's high-speed computations and data throughputs and should be junked, says a group of mechanical engineering researchers.

A better option, they say, is "spirals or mazes that coolant can travel through" within tiny channels on the processor itself. That technique could massively improve efficiency, says Scott Schiffres, an assistant professor at Binghamton University in New York, in an article on the school's website. The school has developed this new method for cooling chips.

AMD’s road to the data center and HPC isn’t as long as you think

Last week, AMD announced it was ready to take on Nvidia in the GPU space for the data center, a market the company had basically ignored for the last several years in its struggle just to survive. But now, buoyed by its new CPU business, AMD is ready to take the fight to Nvidia.

It would seem a herculean, or perhaps quixotic, task. Nvidia has spent the past decade tilling the soil for artificial intelligence (AI) and high-performance computing (HPC), but it turns out AMD has a few things in its favor.

For starters, AMD has both a CPU and a GPU business, and it can tie them together in a way Nvidia and Intel cannot. Yes, Intel has a GPU product line, but those GPUs are integrated into its consumer CPUs, not the Xeons. And Nvidia has no x86 line.

Gotcha pricing from the cloud pushes workloads back on premises

A new survey by cloud software vendor Nutanix finds that most firms are embracing the hybrid model, but few have actually achieved it. And many are shifting their workloads back on premises because of cloud costs.

This was Nutanix's first global Enterprise Cloud Index, so it has no historical data to measure against, but its initial findings match what we've known for a while. The hybrid cloud, a mix of on-premises and public cloud computing working in tandem, is the preferred model for most firms: 91 percent, to be exact. But only 19 percent of firms surveyed said they have that model today. One reason is that app vendors make it hard to operate in hybrid mode, said Wendy Pfeiffer, CIO at Nutanix.

Network operations: A new role for AI and ML

Artificial intelligence and machine learning are still viewed with skepticism by many in IT, despite a decades-long history, continuing advances in academia and industry, and numerous successful applications. It's not hard to understand why: the very concept of an algorithm running on a digital computer that can duplicate, and even improve upon, the knowledge and judgment of a highly experienced professional, and refine those results over time via machine learning, still sounds at the very least a bit off in the future. And yet, thanks to advances in AI/ML algorithms, significant gains in processor and storage performance, and especially the price/performance of solutions available today, AI and ML are already hard at work in network operations, as we'll explore below.
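To make the idea concrete, here is a deliberately minimal sketch, not from the article, of the kind of statistical baselining that underpins many ML-driven network-operations tools: flag measurements that deviate sharply from the series' own baseline. Real products use far richer models; the data and threshold here are illustrative assumptions.

```python
# Toy anomaly detection over a series of latency samples:
# flag values far from the mean, measured in standard deviations.
from statistics import mean, stdev

def anomalies(samples_ms, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations away from the mean of the series."""
    mu = mean(samples_ms)
    sigma = stdev(samples_ms)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, s in enumerate(samples_ms)
            if abs(s - mu) / sigma > threshold]

latencies = [12, 11, 13, 12, 11, 12, 250, 13, 12, 11]  # one obvious spike
print(anomalies(latencies))  # -> [6]
```

A production system would compute the baseline over a rolling window and adapt the threshold, but the core judgment, "is this measurement abnormal relative to history?", is the same.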

GPUs are vulnerable to side-channel attacks

Computer scientists at the University of California, Riverside have found that GPUs are vulnerable to side-channel attacks, the same kind of exploits that have hit Intel and AMD CPUs.

Two professors and two researchers, a computer science doctoral student and a postdoctoral researcher, reverse-engineered an Nvidia GPU to demonstrate three attacks on both the graphics and computational stacks, as well as across them. The researchers believe these are the first reported side-channel attacks on GPUs.

A side-channel attack is one in which the attacker exploits how a technology operates, in this case a GPU, rather than a bug or flaw in the code. It takes advantage of how the processor is designed and exploits it in ways the designers hadn't anticipated.
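A toy illustration of the side-channel idea (not the researchers' method): a naive byte-by-byte comparison "leaks" how far a guess matched through the amount of work it performs, even though it never returns the secret itself. The secret and guesses below are made up for the example.

```python
# A leaky string comparison: early exit on the first mismatch means
# the amount of work done reveals the length of the matching prefix.
def leaky_compare(secret, guess):
    """Return (match, work), where `work` counts byte comparisons.
    The early exit is the side channel."""
    work = 0
    for s, g in zip(secret, guess):
        work += 1
        if s != g:
            return False, work
    return len(secret) == len(guess), work

_, w1 = leaky_compare("hunter2", "xunter2")  # wrong in the first byte
_, w2 = leaky_compare("hunter2", "hunter3")  # wrong only in the last byte
print(w1, w2)  # -> 1 7: a longer matching prefix means measurably more work
```

In practice an attacker observes timing, power, or contention rather than a work counter, which is why security-sensitive code uses constant-time comparisons such as Python's `hmac.compare_digest`.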

Making the right hyperconvergence choice: HCI hardware or software?

Once a niche technology, primarily attractive to organizations with specific needs such as streamlining operations at branch offices, hyperconverged infrastructure (HCI) is rapidly finding a wide customer base.

HCI is an IT framework that combines storage, computing, and networking into a single system; hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking.

Enterprises planning an HCI adoption can choose between two main approaches: hardware or software. HCI hardware typically comes in the form of an integrated appliance, a hardware/software package created and delivered by a single vendor; appliance vendors include Dell EMC, Nutanix, and HPE/SimpliVity. A software-only offering lets customers deploy HCI on a bring-your-own-technology basis; HCI software vendors include Maxta and VMware (vSAN).

What is Transport Layer Security (TLS)?

Despite the goal of keeping Web communications private, flaws in the design and implementation of Transport Layer Security have led to breaches. The latest version, TLS 1.3, is an overhaul that strengthens and streamlines the crypto protocol.

What is TLS? TLS is a cryptographic protocol that provides end-to-end communications security over networks and is widely used for internet communications and online transactions. It is an IETF standard intended to prevent eavesdropping, tampering, and message forgery. Common applications that employ TLS include Web browsers, instant messaging, email, and voice over IP.
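As a small, practical aside not drawn from the article: applications can insist on TLS 1.3 explicitly. Here is a sketch using Python's standard `ssl` module (the `TLSVersion` enum requires Python 3.7+):

```python
# Build a client-side TLS context that refuses anything older than TLS 1.3.
import ssl

ctx = ssl.create_default_context()            # sane defaults: certificate validation on
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and earlier

print(ctx.minimum_version.name)  # -> TLSv1_3
```

Wrapping a socket with this context (via `ctx.wrap_socket(...)`) would then fail the handshake against any server that cannot speak TLS 1.3.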

Cisco, Amazon marriage simplifies hybrid cloud app development

Cisco and Amazon Web Services (AWS) will soon offer enterprise customers an integrated platform that promises to help them more simply build, secure, and connect Kubernetes clusters across private data centers and the AWS cloud.

The new package, Cisco Hybrid Solution for Kubernetes on AWS, combines Cisco, AWS, and open-source technologies to reduce complexity and eliminate challenges for customers who use Kubernetes to deploy applications across on-premises environments and the AWS cloud in a secure, consistent manner, said David Cope, senior director of Cisco's Cloud Platform & Solutions Group (CPSG).

"The significance of Amazon teaming with Cisco means more integration between product lines from AWS and Cisco, thus reducing the integration costs, notably on the security and management fronts, for joint customers," said Stephen Elliot, program vice president with IDC. "It also provides customers with some ideas on how to migrate workloads from private to public clouds."

AMD now wants to take on Nvidia in the data center

There's no doubt that AMD's graphics business kept the company afloat when its CPU business stunk. More than once I saw quarterly numbers showing that all of the profitability was coming from the GPU side of the market.

Still, Nvidia outsells AMD by about 2:1, according to Steam analytics. Nvidia has tremendous momentum and hasn't lost it, and that momentum allowed the company to branch out into artificial intelligence (AI) so thoroughly that gaming has almost become secondary to the firm. Not that Nvidia is leaving gamers hanging; they just aren't the top priority any more.

With its CPU business on the upswing, AMD has decided to finally stop ceding the whole data center to Nvidia. This week it introduced two new GPUs with data-center and HPC/AI workloads in mind.

AMD continues server push, introduces Zen 2 architecture

Advanced Micro Devices (AMD) revealed Zen 2, the architecture for the family of desktop, laptop, and server microprocessors it plans to launch in 2019, promising twice the performance throughput of the previous generation. The news came at a briefing in San Francisco that included a number of AMD announcements.

Zen is the core architecture. On desktops and notebooks, it's sold under the Ryzen brand; for servers, under the Epyc brand. The next generation of Epyc, code-named Rome, is due next year.

Zen made AMD competitive with Intel once again after the disastrous line of subpar processors named after heavy equipment (Bulldozer, Piledriver, Steamroller). With Zen 2, AMD hopes to surpass Intel in all aspects of performance.

Intel responds to the Epyc server threat from AMD

I do love seeing the chip market get competitive again. Intel has formally announced a new class of Xeon Scalable processors, code-named Cascade Lake-AP (Advanced Performance), that in many ways leapfrogs the best AMD has to offer.

The news comes ahead of the Supercomputing 18 show and was likely timed to avoid being drowned out by the coming flood of announcements. It also comes one day ahead of an AMD announcement, which should be hitting the wires as you read this. I don't think that's a coincidence.

The Cascade Lake-AP processors come with up to 48 cores and support for 12 channels of DDR4 memory, a big leap over the old design and over AMD's Epyc server processors as well. Intel's current top-of-the-line processor, the Xeon Platinum 8180, has only 28 cores and six memory channels, while the AMD Epyc has 32 cores and eight memory channels.
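The channel counts above translate directly into theoretical peak memory bandwidth: channels times transfer rate times the 8-byte DDR4 bus width. A back-of-the-envelope sketch, assuming a uniform DDR4-2933 transfer rate purely for comparison (actual supported speeds vary by SKU):

```python
# Theoretical peak DDR4 bandwidth implied by memory-channel count.
def peak_bw_gbs(channels, mt_per_s=2933, bus_bytes=8):
    """Peak GB/s = channels x megatransfers/s x bus width (8 bytes)."""
    return channels * mt_per_s * bus_bytes / 1000

for name, ch in [("Cascade Lake-AP", 12),
                 ("Xeon Platinum 8180", 6),
                 ("AMD Epyc", 8)]:
    print(f"{name}: {peak_bw_gbs(ch):.1f} GB/s")
```

Under that assumption, 12 channels work out to roughly double the Xeon 8180's peak bandwidth and about 1.5x the Epyc's, which is why the channel count matters as much as the core count for HPC workloads.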

Tariffs on China cause new data center equipment prices to increase

As if the end of the year didn't present enough challenges for IT professionals, now there is the added concern of the tariffs the Trump administration imposed on China on Sept. 24.

Companies including Cisco, Dell, HPE, and Juniper Networks all called for networking and server equipment to be dropped from the tariff regulations, but they were unable to persuade the U.S. government to do so.

"By raising the cost of networking products, the proposed duties would impede the development and adoption of cloud-based services and infrastructure," the group told trade regulators before the tariff was imposed, according to Reuters.

What’s hot in network certifications

Network certifications typically serve as a litmus test of a network professional's knowledge of technologies most companies already use. Increasingly, however, network professionals are looking beyond what is, and getting a leg up on certifications that will set them apart from their peers in the near future.

Latest supercomputer runs Red Hat Enterprise Linux (RHEL)

On Oct. 26, the National Nuclear Security Administration (NNSA), part of the Department of Energy, unveiled its latest supercomputer. Named Sierra, it is now the third-fastest supercomputer in the world.

Sierra runs at 125 petaflops (peak performance) and will primarily be used by the NNSA for modeling and simulation as part of its core mission of ensuring the safety, security, and effectiveness of the U.S. nuclear stockpile. It will be used by three separate nuclear security labs: Lawrence Livermore National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory. And it runs none other than Red Hat Enterprise Linux (RHEL).
