Author Archives: Andy Patrizio

Study shows admins are doing a terrible job of patching servers

Open source has taken over the server side of things, but admins are doing a terrible job of keeping the software patched and up to date. Black Duck Software, a developer of auditing software for open-source security, has released its annual Open Source Security and Risk Analysis, which finds enterprise open source to be full of security vulnerabilities and compliance issues. According to the study, open-source components were found in 96% of the applications the company scanned last year, with an average of 257 instances of open source code per application.

SSDs get bigger, while prices get smaller

With so much going on in the enterprise storage world, two bits of good news have come out, and it's only Tuesday: capacity is going up, and prices are coming down. According to a report from DRAMeXchange, the enterprise SSD market has been growing fast. It projects enterprise SSD sales to top 30 million units this year, up from fewer than 20 million units in 2016, and that rate of growth is expected to continue over the next three years. That's despite tight supply of memory chips in the first quarter, which resulted in high average selling prices. For the second quarter, which we are in the midst of, DRAMeXchange expects a rebound in demand due to increased supply.

Why NVMe over Fabric matters

In my earlier blog post on SSD storage news from HPE, Hitachi and IBM, I touched on the significance of NVMe over Fabric (NoF). But not wanting to distract from the main story, I didn't go into detail. I will do so in this blog post.

Hitachi Vantara goes all in on NVMe over Fabric

First, though, an update on the news from Hitachi Vantara, which I initially said had not commented yet on NoF. It turns out they are all in. "Hitachi Vantara currently offers, and continues to expand support for, NVMe in our hyperconverged UCP HC line. As NVMe matures over the next year, we see opportunities to introduce NVMe into new software-defined and enterprise storage solutions. More will follow, but it confuses the conversation to pre-announce things that customers cannot implement today," said Bob Madaio, vice president, Infrastructure Solutions Group at Hitachi Vantara, in an email to me.

What Qualcomm’s rumored exit from data centers means

The tech industry got a jolt last week worse than the 3.5-magnitude quake that hit Oakland, California, on Monday. A report by Bloomberg, citing the usual anonymous sources, said that after a whole lot of R&D and hype, Qualcomm was looking to shut down or sell its Centriq line of ARM-based data center processors. Qualcomm launched the 48-core Centriq 2400 last November. At the time, potential customers such as Microsoft, Alibaba and HPE took to the stage to voice their support and interest.

Hitachi, HPE and IBM enhance their SSD-based storage products

When three major vendors all make similar product announcements, you know things are cooking in that space. In this case, Hitachi Vantara, Hewlett Packard Enterprise, and IBM all made news around SSD-based storage, much of it related to de-duplication and other ways to get control over data creep. With users generating gigabytes of data every week, the solution for many enterprises has been to throw storage at the problem. That can get expensive, especially with SSD: SSD averages about 40 cents per gigabyte, while HDD storage averages about 5 cents per gigabyte. To get control over data sprawl, storage vendors are offering de-duplication, or in the case of Hitachi Vantara, better de-duplication with their new systems. We'll run down the news alphabetically.
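At its core, de-duplication works by fingerprinting chunks of data and storing each unique chunk only once, keeping just a list of references to rebuild the original. Here is a minimal sketch of content-addressed block de-duplication in Python — illustrative only, not any vendor's actual implementation:

```python
import hashlib


def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, index): store maps a SHA-256 digest to the block's
    bytes, and index is the ordered list of digests needed to rebuild
    the original data.
    """
    store = {}
    index = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy seen
        index.append(digest)
    return store, index


def rebuild(store, index):
    """Reassemble the original data from the ordered digest index."""
    return b"".join(store[d] for d in index)


# Highly redundant data: 100 copies of the same 4 KB block
data = b"x" * 4096 * 100
store, index = dedupe_blocks(data)
print(len(index), "logical blocks,", len(store), "unique block stored")
assert rebuild(store, index) == data
```

Real systems layer on variable-size chunking, compression, and persistence, but the payoff is the same: the more redundant the data, the fewer unique blocks actually hit the (expensive) flash.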

GPUs: Designed for gaming now crucial to HPC and AI

It’s rare to see a processor find great success outside the area it was intended for, but that’s exactly what has happened to the graphics processing unit (GPU). A chip originally intended to speed up gaming graphics and nothing more now powers everything from Adobe Premiere and databases to high-performance computing (HPC) and artificial intelligence (AI). GPUs are now offered in servers from every major OEM plus off-brand vendors like Supermicro, but they aren’t doing graphics acceleration. That’s because the GPU is in essence a giant math co-processor, now being used to perform computation-intensive work ranging from 3D simulations to medical imaging to financial modeling.
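The "giant math co-processor" point comes down to data parallelism: the same arithmetic applied independently to every element of a huge array, which is exactly what thousands of GPU cores can do at once. A plain-Python sketch of the classic example, SAXPY (a*x + y), shows the shape of such work — illustrative only; real GPU code would use CUDA, OpenCL, or a library that targets them:

```python
def saxpy(a, x, y):
    """Compute a*x + y element-wise, the classic data-parallel kernel.

    Each output element depends only on the inputs at the same index,
    so on a GPU every element could be handled by a separate thread.
    """
    return [a * xi + yi for xi, yi in zip(x, y)]


x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
print(saxpy(2.0, x, y))  # [12.0, 24.0, 36.0, 48.0]
```

Because no element's result depends on any other element's, the loop parallelizes trivially — the property that lets GPUs outrun CPUs on simulation, imaging, and modeling workloads.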

Cavium launches ThunderX2 ARM-based server processors

Cavium this week announced general availability of ThunderX2, its second-generation 64-bit, ARM-based system-on-a-chip (SoC) line of server processors. And it’s coming with some big-name endorsements. The first generation, ThunderX, had a more muted launch two years ago. No one wanted to get on Intel’s bad side, it seemed, and Intel was viewing ARM, not AMD, as its biggest threat. Fast forward two years, and Cavium has endorsements from HPE, Cray, and Atos, as well as HPC users such as Sandia National Labs and the U.K.’s GW4 Isambard project. Cavium announced ThunderX2 almost two years ago. It’s not an upgrade to ThunderX; it’s a whole new chip. Cavium acquired the line of ARM server processors, code-named Vulcan, from Broadcom after Broadcom was acquired by Avago and decided to shed its data center effort.

Enterprises are moving SD-WAN beyond pilot stages to deployment

Research conducted by market research firm IHS Markit found that 74 percent of firms surveyed ran SD-WAN lab trials in 2017, and many of them plan to move into production this year. The report, titled “The WAN Strategies North America” (PDF, registration required), found that security is the number one network concern by a wide margin and the top reason to invest in new infrastructure, as companies must fend off the constant threat of cyberattacks. There are other reasons as well, such as traffic growth, company expansion, adoption of the Internet of Things (IoT), the need for greater control over the WAN, and the need to put WAN costs on a sustainable path.

Intel job posting hints at major overhaul to the processor core

A job listing on Intel’s official webpage for a senior CPU micro-architect and designer to build a revolutionary microprocessor core has fueled speculation that the company is finally going to redesign its Core-branded CPU architecture after more than 12 years. Intel introduced the Core architecture in 2006, and that was an iteration of the P6 microarchitecture first introduced with the Pentium Pro in 1995. So, in some ways, Intel in 2018 is running on a 1995 design. Even though its tick-tock model called for a new microarchitecture every other year, each new architecture was, in fact, just a tweak of the old one and not a clean-sheet design. The job is based at Intel’s Hillsboro, Oregon, facility, where all of the major development work is done. The listing initially said “join the Ocean Cove team to deliver Intel’s next-generation core design in Hillsboro, Oregon.” That entry has since been removed from the posting.

Startup RStor promises a new type of distributed compute fabric

A startup funded by Cisco and featuring some big-name talent has come out of stealth mode with the promise of unifying data stored across multiple distributed data centers. RStor is led by Giovanni Coglitore, the former head of the hardware team at Facebook and before that CTO at Rackspace. The company also features C-level talent who are veterans of EMC’s technology venture capital arm, Amazon Web Services, Microsoft, Google, VMware, Dropbox, Yahoo, and Samsung. Buoyed by $45 million in venture capital money from Cisco, the company has announced a “hyper-distributed multicloud platform” that enables organizations to aggregate and automate compute resources from private data centers, public cloud providers, and trusted supercomputing centers across its networking fabric.

Rackspace offers on-premises ‘cloud’ and a bare-metal cloud offering

Rackspace’s latest project is called Private Cloud Everywhere, a collaboration with VMware to offer what it calls Private Cloud as a Service (PCaaS), making on-demand provisioning of virtualized servers available at most colocation facilities and data centers. PCaaS basically means provisioning data center hardware the same way you would on Amazon Web Services, Microsoft Azure, or Google Cloud, but instead of using those cloud providers, you use your own hardware, use Rackspace data centers, or set it up in a third-party colocation facility. Because customers have the option of deploying a private cloud wherever they want physically, it can help with data sovereignty requirements, such as rules in Europe that require data to stay inside national borders.

Red Hat launches turnkey storage solution

Red Hat has always been a software company. It still is, but with an OEM partner, it will now offer a plug-and-play software-defined storage (SDS) system called Red Hat Storage One. Red Hat Storage One is built on the company’s software-defined Gluster storage product, but it includes hardware from Supermicro, which will manufacture and sell the hardware. When you purchase a Storage One box from a Red Hat partner, support for both the hardware and software is rolled up into one package with “a single part number,” as Red Hat puts it. Support contracts run for one-, three-, or five-year periods, and they cover everything, hardware and software alike. The hardware vendor is the first line of defense, with Red Hat taking over for more serious issues.

Schneider Electric announces Edge Module for IoT processing

Schneider Electric is the latest player to jump into the edge computing game for Internet of Things (IoT) devices with the announcement of its Edge Module for mobile and IoT applications. It follows the trend of processing IoT data where it is generated rather than sending it to a remote data center. Schneider Electric is a European giant that specializes mostly in energy management and power systems, so it’s no surprise that the Edge Module comes with integrated power and cooling systems. That includes single- or three-phase power with a flexible power train in multiple ranges, N+1 standard cooling, and package cooling units mounted on the outside of the module to eliminate the need for external condensers or piping.