
Category Archives for "Network World Data Center"

Making the right hyperconvergence choice: HCI hardware or software?

Once a niche technology, primarily attractive to organizations with specific needs, such as streamlining operations at branch offices, hyperconverged infrastructure (HCI) is rapidly finding a wide customer base.

HCI is an IT framework that combines storage, computing and networking into a single system; hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking.

Enterprises planning an HCI adoption can select from two main approaches: hardware or software. HCI hardware typically comes in the form of an integrated appliance, a hardware/software package created and delivered by a single vendor. Appliance vendors include Dell EMC, Nutanix and HPE/SimpliVity. A software-only offering allows customers to deploy HCI on a bring-your-own-technology basis. HCI software vendors include Maxta and VMware (vSAN).

What is Transport Layer Security (TLS)?

Despite the goal of keeping Web communications private, flaws in the design and implementation of Transport Layer Security have led to breaches. But the latest version, TLS 1.3, is an overhaul that strengthens and streamlines the crypto protocol.

What is TLS? TLS is a cryptographic protocol that provides end-to-end communications security over networks and is widely used for internet communications and online transactions. It is an IETF standard intended to prevent eavesdropping, tampering and message forgery. Common applications that employ TLS include Web browsers, instant messaging, e-mail and voice over IP.
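For a concrete sense of what TLS negotiation looks like from an application's point of view, here is a minimal Python sketch using the standard-library ssl module (example.com is just a stand-in for any HTTPS endpoint); it opens a connection and reports the protocol version and cipher the two sides agreed on:

```python
import socket
import ssl

# The default context enables certificate validation, hostname checking,
# and modern protocol versions (TLS 1.3 where both sides support it).
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # (cipher suite, protocol version, key bits)
```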

Cisco-AWS marriage simplifies hybrid-cloud app development

Cisco and Amazon Web Services (AWS) will soon offer enterprise customers an integrated platform that promises to help them more simply build, secure, and connect Kubernetes clusters across private data centers and the AWS cloud.

The new package, Cisco Hybrid Solution for Kubernetes on AWS, combines Cisco, AWS and open-source technologies to reduce complexity for customers who use Kubernetes, letting them deploy applications on premises and across the AWS cloud in a secure, consistent manner, said David Cope, senior director of Cisco Cloud Platform & Solutions Group (CPSG).

“The significance of Amazon teaming with Cisco means more integration between product lines from AWS and Cisco, thus reducing the integration costs notably on the security and management fronts for joint customers,” said Stephen Elliot, program vice president with IDC. “It also provides customers with some ideas on how to migrate workloads from private to public clouds.”
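The article doesn't detail the new platform's interfaces, but the pain point it targets is easy to see: even a trivial task like listing nodes means juggling separate kubeconfig contexts for the on-premises cluster and the AWS one. A quick sketch with the official Kubernetes Python client; the context names here are hypothetical placeholders, not anything defined by the Cisco/AWS product:

```python
from kubernetes import client, config

# Placeholder kubeconfig contexts for the two environments.
for ctx in ("onprem-cluster", "aws-eks-cluster"):
    api_client = config.new_client_from_config(context=ctx)
    v1 = client.CoreV1Api(api_client=api_client)
    names = [node.metadata.name for node in v1.list_node().items]
    print(f"{ctx}: {names}")
```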

AMD now wants to take on Nvidia in the data center

There’s no doubt that AMD’s graphics business has kept the company afloat when its CPU business stunk. More than once I saw quarterly numbers showing that all the profitability was coming from the GPU side of the market.

Still, the split between AMD and Nvidia is about 2:1 in Nvidia’s favor, according to Steam analytics. Nvidia just has tremendous momentum and hasn’t lost it. That momentum allowed the company to branch out into artificial intelligence (AI) so thoroughly that gaming has almost become secondary to the firm. Not that it’s leaving gamers hanging; they just aren’t the top priority anymore.

With AMD on the upswing on the CPU side, the company has decided to finally stop ceding the whole data center to Nvidia. And this week it introduced two new GPUs with the data center and HPC/AI workloads in mind.

AMD continues server push, introduces Zen 2 architecture

Advanced Micro Devices (AMD) revealed the Zen 2 architecture for both its desktop/laptop and server microprocessor families, which it plans to launch in 2019 with a promise of twice the performance throughput of the previous generation. The news came at a briefing in San Francisco that included a number of AMD announcements.

Zen is the core architecture. On desktops and notebooks, it’s sold under the Ryzen brand name. For servers, it’s sold under the Epyc brand. The next generation of Epyc, code-named Rome, is due next year.

Zen made AMD competitive with Intel once again after the disastrous line of subpar processors named after heavy equipment (Bulldozer, Piledriver, Steamroller). With Zen 2, AMD hopes to surpass Intel in all aspects of performance.

Intel responds to the Epyc server threat from AMD

I do love seeing the chip market get competitive again. Intel has formally announced a new class of Xeon Scalable processors, code-named “Cascade Lake-AP,” or Cascade Lake Advanced Performance, that in many ways leapfrogs the best AMD has to offer.

The news comes ahead of the Supercomputing 18 show and was likely timed to avoid being drowned out by the coming flood of announcements. It also comes one day ahead of an AMD announcement, which should be hitting the wires as you read this. I don’t think that’s a coincidence.

The Cascade Lake-AP processors come with up to 48 cores and support for 12 channels of DDR4 memory, a big leap over the old design and over AMD’s Epyc server processors as well. Intel’s current top-of-the-line processor, the Xeon Platinum 8180, has only 28 cores and six memory channels, while the AMD Epyc has 32 cores and eight memory channels.
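Those channel counts translate directly into theoretical peak memory bandwidth. A back-of-the-envelope comparison, assuming DDR4-2666 DIMMs (the memory speed is my assumption; the announcement doesn't specify it):

```python
# Peak bandwidth per socket = channels x transfer rate (MT/s) x 8 bytes per 64-bit transfer
MTS = 2666            # DDR4-2666, assumed
BYTES_PER_XFER = 8    # one 64-bit transfer per channel

for name, channels in [("Cascade Lake-AP", 12),
                       ("Xeon Platinum 8180", 6),
                       ("AMD Epyc", 8)]:
    gb_per_s = channels * MTS * BYTES_PER_XFER / 1000
    print(f"{name}: ~{gb_per_s:.0f} GB/s peak")
# -> ~256 GB/s vs ~128 GB/s vs ~171 GB/s
```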

Tariffs on China cause new data center equipment prices to increase

As if the end of the year doesn’t present enough challenges for IT professionals, now there is added concern over the tariffs the Trump administration imposed on China back on Sept. 24.

Companies including Cisco, Dell, HPE, and Juniper Networks all called for networking and server equipment to be dropped from the tariff regulations, but they were unable to persuade the U.S. government to do so.

“By raising the cost of networking products, the proposed duties would impede the development and adoption of cloud-based services and infrastructure,” the group told trade regulators before the tariff was imposed, according to Reuters.

What’s hot in network certifications

Network certifications typically serve as a litmus test of a network professional’s knowledge of technologies that most companies already use. Increasingly, however, network professionals are looking beyond what is, and they’re getting a leg up on certifications that will set them apart from their peers in the near future.

Latest supercomputer runs Red Hat Enterprise Linux (RHEL)

On Oct. 26, the National Nuclear Security Administration (NNSA), part of the Department of Energy, unveiled its latest supercomputer. It’s named Sierra and is now the third-fastest supercomputer in the world.

Sierra runs at 125 petaflops (peak performance) and will primarily be used by the NNSA for modeling and simulations as part of its core mission of ensuring the safety, security, and effectiveness of the U.S. nuclear stockpile. It will be used by three separate nuclear security labs: Lawrence Livermore National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory. And it’s running none other than Red Hat Enterprise Linux (RHEL).

Is Oracle’s silence on its on-premises servers cause for concern?

When Oracle consumed Sun Microsystems in January 2010, founder Larry Ellison promised new hiring and new investment in the hardware line, plus a plan to offer fully integrated, turnkey systems.

By and large, he kept that promise. Oracle dispensed with the commodity server market in favor of high-end, decked-out servers such as Exadata and Exalogic, fully loaded with Oracle software, which included Java.

Earlier this year, word leaked that the company had gutted its Solaris Unix and Sparc processor development. But after eight years of spinning its wheels, no one could say Oracle had been impatient. It had invested rather heavily in Sparc for a long time, but the writing was on the wall.

Cray introduces a multi-CPU supercomputer design

Supercomputer maker Cray announced what it calls its last supercomputer architecture before entering the era of exascale computing. It is code-named “Shasta,” and the Department of Energy, already a regular supercomputing customer, said it will be the first to deploy it, in 2020.

The Shasta architecture is unique in that it will be the first server (unless someone beats Cray to it) to support multiple processor types. Users will be able to deploy a mix of x86, GPU, ARM and FPGA processors in a single system.

Up to now, servers came with either x86 or, in a few select cases, ARM processors, with GPUs and FPGAs as add-in cards plugged into PCI Express slots. This will be the first case of all of these running as fully native onboard processors, and I hardly expect Cray to be alone in using this design.

Optical networking breakthrough will run networks 100x faster

Researchers reckon they could speed up the internet a hundredfold with a new technique that twists light beams within fiber optic cable rather than sending them along a straight path.

“What we’ve managed to do is accurately transmit data via light at its highest capacity in a way that will allow us to massively increase our bandwidth,” Dr. Haoran Ren of Australia’s RMIT University said in a press release.

The corkscrewing configuration, in development over the last few years and only recently miniaturized, uses a technique called orbital angular momentum (OAM).
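The hundredfold figure follows from simple multiplexing arithmetic: each usable OAM mode acts as an independent spatial channel, so aggregate capacity scales linearly with the mode count. A toy model (both numbers below are illustrative, not from the RMIT work):

```python
# Toy model: aggregate capacity = per-channel rate x number of OAM modes.
per_channel_gbps = 100   # illustrative per-channel data rate
oam_modes = 100          # illustrative count of usable twisted-light modes

aggregate_tbps = per_channel_gbps * oam_modes / 1000
print(f"~{aggregate_tbps:.0f} Tb/s aggregate, a {oam_modes}x gain over one channel")
```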

Understanding mass data fragmentation

The digital transformation era is upon us, and it’s changing the business landscape faster than ever.

I’ve seen numerous studies showing that digital companies are more profitable and hold more share in their respective markets. Businesses that master being digital will be able to sustain market leadership, and those that can’t will struggle to survive; many will go away.

This is why digital transformation is now a top initiative for every business and IT leader. A recent ZK Research study found that a whopping 89% of organizations now have at least one digital initiative under way, showing the level of interest across all industry verticals.

Rackspace launches disaster recovery as a service program

Give managed cloud computing provider Rackspace points for timing. Coming right after the Uptime Institute issued a warning for data center operators to improve their environmental disaster plans, the company announced it is broadening its existing disaster recovery as a service (DRaaS) program for on-premises, colocation, and multi-cloud environments.

The expansion uses Zerto’s disaster recovery software, which is specifically designed to provide business continuity and disaster recovery in cloud and virtualized environments.
