Author Archives: Ann Bednarz

What is NVMe, and how is it changing enterprise storage

NVMe (non-volatile memory express) is shaking up the enterprise storage industry. A communications protocol developed specifically for all-flash storage, NVMe enables faster performance and greater density than legacy protocols. It's geared for enterprise workloads that require top performance, such as real-time data analytics, online trading platforms and other latency-sensitive applications.

NVMe vs. serial-attached SCSI (SAS)

NVMe is aimed at reducing the software overhead between applications and storage in all-flash systems.
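
A big part of NVMe's performance advantage is its queuing model: the NVMe spec allows up to 65,535 I/O queues, each tens of thousands of commands deep, where SAS offers a single queue of a few hundred commands. A back-of-the-envelope sketch (the SAS depth is a typical figure, not a hard spec limit):

```python
# Commonly cited protocol queuing limits; actual devices expose far fewer
# queues, but the architectural ceiling illustrates the gap.
NVME_MAX_QUEUES = 65535       # NVMe spec: up to 65,535 I/O queues
NVME_MAX_QUEUE_DEPTH = 65535  # commands per queue
SAS_QUEUES = 1                # single command queue
SAS_QUEUE_DEPTH = 254         # typical SAS queue depth

def max_outstanding(queues: int, depth: int) -> int:
    """Upper bound on commands a protocol can keep in flight."""
    return queues * depth

nvme_cmds = max_outstanding(NVME_MAX_QUEUES, NVME_MAX_QUEUE_DEPTH)
sas_cmds = max_outstanding(SAS_QUEUES, SAS_QUEUE_DEPTH)
print(f"NVMe ceiling: ~{nvme_cmds:,} commands in flight vs ~{sas_cmds:,} for SAS")
```

That parallelism is what lets flash arrays keep many CPU cores issuing I/O simultaneously instead of serializing behind one queue.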

AI boosts data center availability, efficiency

Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers. Today's hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments, and enterprises are finding that a traditional approach to managing data centers isn't optimal. By applying artificial intelligence through machine learning, there's enormous potential to streamline the management of complex computing facilities. AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.
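
Machine-learning monitoring of facility components often starts with simple statistical anomaly detection on sensor telemetry. The sketch below flags temperature readings that deviate sharply from the baseline; the sensor values and threshold are invented for illustration, not taken from any vendor's product:

```python
from statistics import mean, stdev

def anomalous_readings(readings, threshold=2.5):
    """Flag readings more than `threshold` standard deviations from the mean.

    A minimal z-score detector of the kind used to watch cooling or power
    telemetry for early signs of equipment trouble.
    """
    if len(readings) < 2:
        return []
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Hypothetical rack inlet-temperature samples (degrees C); one spike:
temps = [22.1, 22.3, 22.0, 22.2, 22.4, 22.1, 31.5, 22.2, 22.3, 22.1]
print(anomalous_readings(temps))  # [(6, 31.5)]
```

Production systems learn baselines per sensor and per time of day, but the principle is the same: model normal behavior, alert on deviation.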

Data center management: What does DMaaS deliver that DCIM doesn’t?

Data-center downtime is crippling and costly for enterprises, so it's easy to see the appeal of tools that provide visibility into data-center assets, interdependencies, performance and capacity – and turn that visibility into actionable knowledge that anticipates equipment failures or capacity shortfalls. Data center infrastructure management (DCIM) tools are designed to monitor the utilization and energy consumption of both IT and building components, from servers and storage to power distribution units and cooling gear. DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. With DCIM software, enterprises can simplify capacity planning and resource allocation as well as ensure that power, equipment and floor space are used as efficiently as possible.
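
At its core, the capacity-planning side of DCIM is a comparison of measured draw against provisioned capacity. A minimal sketch (the rack names and kW figures are invented for illustration):

```python
def utilization_report(racks, warn_at=0.8):
    """Return per-rack power utilization and flag racks nearing capacity.

    `racks` maps rack name -> (measured draw in kW, provisioned capacity in kW).
    """
    report = {}
    for name, (draw_kw, capacity_kw) in racks.items():
        util = draw_kw / capacity_kw
        report[name] = (round(util, 2), util >= warn_at)
    return report

racks = {
    "row1-rack01": (4.2, 6.0),  # comfortable headroom
    "row1-rack02": (5.5, 6.0),  # nearing its provisioned limit
}
print(utilization_report(racks))
```

Real DCIM suites do this continuously across power, cooling, and floor space, and trend the results so shortfalls are predicted rather than discovered.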

Cost-savings theme pervades IBM storage news

A flurry of storage announcements from IBM share a common theme: helping customers achieve greater efficiency and wring cost savings from their multitier, multi-cloud storage environments. Anchoring the news is IBM Storage Insights, a new AI- and cloud-based storage-management platform that's designed to give users a fast view of storage capacity and performance, as well as make tiering recommendations to help cut storage costs. A single dashboard shows the status of block storage and captures trend information. “Imagine you have an up-to-the-second event feed where you can see everything happening, not just on one of your arrays but across your entire environment,” said Sam Werner, vice president of offering management for IBM’s software-defined infrastructure (SDI) and storage software.
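
Tiering recommendations of this kind generally rest on access-age heuristics: data untouched for long enough becomes a candidate for a cheaper tier. A toy illustration of the idea (the threshold and volume records are assumptions, not IBM's actual logic):

```python
from datetime import datetime, timedelta

def tiering_candidates(volumes, cold_after_days=90, now=None):
    """Suggest volumes to demote to a cheaper tier based on last access time.

    `volumes` is a list of (name, last_accessed) pairs.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=cold_after_days)
    return [name for name, last_accessed in volumes if last_accessed < cutoff]

now = datetime(2018, 6, 1)
volumes = [
    ("sales-db", datetime(2018, 5, 30)),      # hot: accessed two days ago
    ("2016-archive", datetime(2017, 1, 15)),  # cold: untouched for a year
]
print(tiering_candidates(volumes, now=now))  # ['2016-archive']
```

Products like Storage Insights layer trend analysis and per-tier cost models on top, but last-access age is the usual starting signal.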

Don’t get left behind: SDN, programmable networks change how network engineers work

The rise of programmable networks has changed the role of the network engineer, and accepting those changes is key to career advancement. Network engineers need to become software-fluent and embrace automation, according to a panel of network professionals brought together by Cisco to discuss the future of networking careers. “The whole concept of engineer re-skilling has become a pretty hot topic over the last four or five years. What’s notable to me is that the engineers themselves are now embracing it,” says Zeus Kerravala, founder of ZK Research, who moderated the panel.
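
The automation the panel has in mind often starts small: generating device configuration from structured data instead of typing it by hand. A minimal sketch using only the standard library (the interface names and VLAN scheme are invented):

```python
from string import Template

# Template for a switch access-port stanza; $iface, $desc, $vlan are filled
# in per port from a data source instead of being hand-typed.
ACCESS_PORT = Template(
    "interface $iface\n"
    " description $desc\n"
    " switchport access vlan $vlan\n"
)

def render_configs(ports):
    """Render one config stanza per (iface, description, vlan) entry."""
    return [ACCESS_PORT.substitute(iface=i, desc=d, vlan=v)
            for i, d, v in ports]

ports = [
    ("GigabitEthernet1/0/1", "printer-floor2", 20),
    ("GigabitEthernet1/0/2", "ap-floor2", 30),
]
for stanza in render_configs(ports):
    print(stanza)
```

The same pattern scales up to full intent-driven pipelines: the data model becomes the source of truth, and configs are rendered and pushed rather than edited box by box.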

NetApp partners with Google for cloud-native file-storage service

NetApp is expanding its cloud data services range through a new partnership with Google that integrates NetApp’s flash-powered data services with Google’s cloud platform. Announced today, NetApp Cloud Volumes for Google Cloud Platform is a fully managed storage service that's designed to make it easier for the companies' joint customers to run new and existing workloads in the cloud. The cloud-native file-storage service links NetApp’s data services with Google Cloud’s application development, analytics and machine-learning functions with the goal of speeding access to resources and simplifying management. At the same time, NetApp rolled out a new high-end enterprise all-flash storage array and updated its ONTAP enterprise data-management software. Software advances target increased data-retention compliance, and new machine learning-driven analytics are aimed at reducing capacity costs.

AI, analytics drive Dell EMC storage, server upgrades

Dell EMC this week unveiled storage, server and hyperconvergence upgrades aimed at enterprises that are grappling with new application types, ongoing digital-transformation efforts, and the pressure to deliver higher performance and greater automation in the data center. On the storage front, Dell EMC rearchitected its flagship VMAX enterprise product line, which is now called PowerMax, to include NVMe support and a built-in machine-learning engine. Its XtremIO all-flash array offers native replication for the first time and a lower entry-level price.

Product-services bundles boost AI

The infrastructure required to run artificial-intelligence algorithms and train deep neural networks is so dauntingly complex that it’s hampering enterprise AI deployments, experts say. “55% of firms have not yet achieved any tangible business outcomes from AI, and 43% say it’s too soon to tell,” says Forrester Research about the challenge of turning AI excitement into tangible, scalable AI success. “The wrinkle? AI is not a plug-and-play proposition,” the analyst group says. “Unless firms plan, deploy, and govern it correctly, new AI tech will provide meager benefits at best or, at worst, result in unexpected and undesired outcomes.”

What is composable infrastructure?

Composable infrastructure treats compute, storage and network devices as pools of resources that can be provisioned as needed, depending on what different workloads require for optimum performance. It’s an emerging category of infrastructure aimed at optimizing IT resources and improving business agility. The approach is like a public cloud in that resource capacity is requested and provisioned from shared capacity – except composable infrastructure sits on-premises in an enterprise data center. IT resources are treated as services, and the composable aspect refers to the ability to make those resources available on the fly, depending on the needs of different physical, virtual and containerized applications. A management layer is designed to discover and access the pools of compute and storage, ensuring that the right resources are in the right place at the right time.
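
The pool-and-compose model described above can be sketched as a tiny allocator: resources are drawn from shared pools when a logical system is composed and returned when it is decomposed. Names and capacities are purely illustrative:

```python
class ResourcePool:
    """A shared pool of one resource type (e.g. CPU cores or TB of storage)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = 0

    def acquire(self, amount):
        if self.allocated + amount > self.capacity:
            raise RuntimeError("pool exhausted")
        self.allocated += amount

    def release(self, amount):
        self.allocated -= amount

class ComposableFabric:
    """Compose logical systems from shared compute/storage pools on demand."""
    def __init__(self, cores, storage_tb):
        self.pools = {"cores": ResourcePool(cores),
                      "storage_tb": ResourcePool(storage_tb)}
        self.systems = {}

    def compose(self, name, cores, storage_tb):
        self.pools["cores"].acquire(cores)
        self.pools["storage_tb"].acquire(storage_tb)
        self.systems[name] = (cores, storage_tb)

    def decompose(self, name):
        cores, storage_tb = self.systems.pop(name)
        self.pools["cores"].release(cores)
        self.pools["storage_tb"].release(storage_tb)

fabric = ComposableFabric(cores=64, storage_tb=100)
fabric.compose("analytics-job", cores=32, storage_tb=40)
print(fabric.pools["cores"].allocated)  # 32
fabric.decompose("analytics-job")
print(fabric.pools["cores"].allocated)  # 0
```

The management layer in a real product does the same bookkeeping against physical disaggregated hardware, exposed through an API rather than Python objects.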

Penn State secures building automation, IoT traffic with microsegmentation

It was time to get a handle on BACnet traffic at Penn State. BACnet is a communications protocol for building automation and control (BAC) systems such as heating, ventilating and air conditioning (HVAC), lighting, access control and fire detection. Penn State standardized on BACnet because of its openness. “Any device, any manufacturer – as long as they talk BACnet, we can integrate them,” says Tom Walker, system design specialist in the facility automation services group at Penn State. “It’s a really neat protocol, but you have to know the quirks that come with deploying it, especially at scale.”
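
Microsegmentation of building-automation traffic typically comes down to rules like "BACnet/IP (UDP port 47808) is allowed only between designated controller subnets." A simplified policy check using only the standard library (the subnets are invented; real enforcement happens in switches or firewalls, not application code):

```python
from ipaddress import ip_address, ip_network

BACNET_PORT = 47808  # standard BACnet/IP UDP port (0xBAC0)

# Hypothetical segments permitted to exchange BACnet traffic:
BAC_SEGMENTS = [ip_network("10.20.0.0/16"), ip_network("10.21.0.0/16")]

def allow_bacnet(src, dst, dst_port):
    """Allow a flow only if it is BACnet between approved BAC segments."""
    if dst_port != BACNET_PORT:
        return False
    src_ok = any(ip_address(src) in net for net in BAC_SEGMENTS)
    dst_ok = any(ip_address(dst) in net for net in BAC_SEGMENTS)
    return src_ok and dst_ok

print(allow_bacnet("10.20.4.7", "10.21.1.2", 47808))    # True: controller to controller
print(allow_bacnet("192.168.5.9", "10.20.4.7", 47808))  # False: source outside segments
```

Confining an open, chatty protocol like BACnet to its own segments limits both broadcast noise and the blast radius if a building controller is compromised.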
