Author Archives: Ann Bednarz
IBM has an answer for some of the biggest trends in enterprise data storage – including Non-Volatile Memory Express (NVMe), artificial intelligence, multi-cloud environments and containers – and it comes in a 2U package.

The new FlashSystem 9100 is an all-flash, NVMe-accelerated storage platform. It delivers up to 2 petabytes of effective storage in 2U and can provide up to 32 petabytes of all-flash storage in a 42U rack.

NVMe is a protocol for accessing high-speed storage media that's designed to reduce latency and increase system and application performance. It's optimized for all-flash storage systems and is aimed at enterprise workloads that require low latency and top performance, such as real-time data analytics and high-performance relational databases.
Google is adding to its cloud storage portfolio with the debut of a network-attached storage (NAS) service.

Google Cloud Filestore is managed file storage for applications that require a file-system interface and a shared file system for data. It lets users stand up managed NAS with their Google Compute Engine and Kubernetes Engine instances, promising high throughput, low latency and high IOPS.

The managed NAS option brings file storage capabilities to Google Cloud Platform for the first time. Google's cloud storage portfolio already includes Persistent Disk, a network-attached block storage service, and Google Cloud Storage, a distributed system for object storage. Cloud Filestore fills the need for file workloads, says Dominic Preuss, director of product management at Google Cloud.
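For readers curious about what standing up a Filestore share involves in practice, here is a minimal sketch that shells out to the gcloud CLI to create an instance and then mounts the share over NFS from a VM. The instance name, zone, share name, capacity and IP address are placeholders, and the exact gcloud flag names are an assumption based on the Filestore documentation at launch, so check them against your gcloud release.

"""Sketch: provision a Cloud Filestore share and mount it from a VM.

Assumes the gcloud CLI is installed and authenticated; flag names below
are an assumption and may differ across gcloud versions.
"""
import subprocess

def create_filestore_instance(name, zone, share, capacity, network="default"):
    # Create a managed NFS share (STANDARD tier in this sketch).
    subprocess.run([
        "gcloud", "filestore", "instances", "create", name,
        f"--zone={zone}",
        "--tier=STANDARD",
        f"--file-share=name={share},capacity={capacity}",
        f"--network=name={network}",
    ], check=True)

def mount_share(server_ip, share, mount_point="/mnt/filestore"):
    # On a Compute Engine or Kubernetes node, the share is mounted over NFS.
    subprocess.run(["sudo", "mkdir", "-p", mount_point], check=True)
    subprocess.run(["sudo", "mount", f"{server_ip}:/{share}", mount_point], check=True)

if __name__ == "__main__":
    create_filestore_instance("nfs-server", "us-central1-c", "vol1", "1TB")
    # The share's IP address is reported once the instance is created;
    # 10.0.0.2 below is purely a placeholder.
    mount_share("10.0.0.2", "vol1")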
NVMe (non-volatile memory express) is shaking up the enterprise storage industry.

A communications protocol developed specifically for all-flash storage, NVMe enables faster performance and greater density compared to legacy protocols. It's geared for enterprise workloads that require top performance, such as real-time data analytics, online trading platforms and other latency-sensitive workloads.

NVMe vs. serial-attached SCSI (SAS): NVMe is aimed at reducing the software overhead between applications and storage in all-flash systems.
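On a Linux host, a quick way to confirm that storage is attached over NVMe rather than SAS is to look at the controllers the nvme driver registers in sysfs. The short sketch below reads them; it assumes a Linux system with the in-kernel nvme driver, and the attribute paths, while standard, should be treated as an assumption on unusual platforms.

"""Sketch: enumerate NVMe controllers on a Linux host via sysfs.

Assumes the in-kernel nvme driver; on systems without NVMe devices the
/sys/class/nvme directory is simply absent.
"""
from pathlib import Path

def list_nvme_controllers():
    # Each NVMe controller appears under /sys/class/nvme with its model
    # and serial number exposed as small text files.
    root = Path("/sys/class/nvme")
    if not root.exists():
        return []
    controllers = []
    for ctrl in sorted(root.iterdir()):
        model = (ctrl / "model").read_text().strip()
        serial = (ctrl / "serial").read_text().strip()
        controllers.append((ctrl.name, model, serial))
    return controllers

if __name__ == "__main__":
    for name, model, serial in list_nvme_controllers():
        print(f"{name}: {model} (serial {serial})")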
Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers.

Today's hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments. And enterprises are finding that a traditional approach to managing data centers isn't optimal. By using artificial intelligence, as played out through machine learning, there's enormous potential to streamline the management of complex computing facilities.

AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.
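To make the monitoring use case concrete, here is a minimal sketch that flags anomalous rack-inlet temperatures using a rolling mean and standard deviation. The readings, window size and 3-sigma threshold are all illustrative; in practice the data would come from the facility's sensors or DCIM platform, and production systems rely on far more sophisticated models.

"""Sketch: flag anomalous rack-inlet temperatures with a rolling z-score.

The readings list and the 3-sigma threshold are invented for illustration;
a real deployment would pull telemetry from facility sensors or a DCIM API.
"""
from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Flag a sample that sits far outside the recent behavior.
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append((i, readings[i]))
    return anomalies

if __name__ == "__main__":
    # Steady readings around 24°C with one simulated hot spot.
    temps = [24.0 + 0.1 * (i % 5) for i in range(60)]
    temps[45] = 31.5
    for index, value in find_anomalies(temps):
        print(f"Sample {index}: {value}°C looks anomalous; check cooling in that rack.")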
Data-center downtime is crippling and costly for enterprises. It's easy to see the appeal of tools that can provide visibility into data-center assets, interdependencies, performance and capacity – and turn that visibility into actionable knowledge that anticipates equipment failures or capacity shortfalls.

Data center infrastructure management (DCIM) tools are designed to monitor the utilization and energy consumption of both IT and building components, from servers and storage to power distribution units and cooling gear.

DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. With DCIM software, enterprises can simplify capacity planning and resource allocation as well as ensure that power, equipment and floor space are used as efficiently as possible.
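One of the simplest efficiency figures a DCIM tool derives from those power readings is power usage effectiveness (PUE): total facility power divided by IT equipment power. The sketch below computes it from two hypothetical meter readings; the function and the numbers are placeholders for whatever a real monitoring system reports.

"""Sketch: compute power usage effectiveness (PUE) from meter readings.

PUE = total facility power / IT equipment power; a value of 1.0 would mean
every watt entering the facility reaches IT gear. The readings are made up.
"""

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

if __name__ == "__main__":
    # Hypothetical readings: 1,200 kW at the utility feed, 750 kW at the IT racks.
    print(f"PUE: {pue(1200.0, 750.0):.2f}")  # -> PUE: 1.60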
A flurry of storage announcements from IBM share a common theme: helping customers achieve greater efficiency and wring cost savings from their multitier, multi-cloud storage environments.

Anchoring the news is IBM Storage Insights, a new AI- and cloud-based storage management platform that's designed to give users a fast view of storage capacity and performance, as well as make tiering recommendations to help cut storage costs. A single dashboard shows the status of block storage and captures trend information.

"Imagine you have an up-to-the-second event feed where you can see everything happening, not just on one of your arrays but across your entire environment," said Sam Werner, vice president of offering management for IBM's software-defined infrastructure (SDI) and storage software.
The rise of programmable networks has changed the role of the network engineer, and accepting those changes is key to career advancement. Network engineers need to become software fluent and embrace automation, according to a panel of network professionals brought together by Cisco to discuss the future of networking careers.

"The whole concept of engineer re-skilling has become a pretty hot topic over the last four or five years. What's notable to me is that the engineers themselves are now embracing it," says Zeus Kerravala, founder of ZK Research, who moderated the panel.
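For engineers starting down that automation path, a first step is often as modest as generating device configurations from data instead of typing them by hand. The sketch below renders a small interface snippet per device from a Python template; the hostnames, VLANs and addresses are invented, and a real workflow would hand the output to a configuration-management or network-automation tool.

"""Sketch: generate per-device interface configs from a template.

Device names, VLANs and addresses are invented for illustration.
"""
DEVICES = [
    {"hostname": "leaf-01", "vlan": 110, "ip": "10.1.10.2", "mask": "255.255.255.0"},
    {"hostname": "leaf-02", "vlan": 120, "ip": "10.1.20.2", "mask": "255.255.255.0"},
]

TEMPLATE = """\
hostname {hostname}
!
interface Vlan{vlan}
 description server segment
 ip address {ip} {mask}
!
"""

def render_configs(devices):
    # Expand the template once per device.
    return {d["hostname"]: TEMPLATE.format(**d) for d in devices}

if __name__ == "__main__":
    for name, config in render_configs(DEVICES).items():
        print(f"### {name}")
        print(config)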
NetApp is expanding its cloud data services range through a new partnership with Google that integrates NetApp's flash-powered data services with Google's cloud platform.

Announced today, NetApp Cloud Volumes for Google Cloud Platform is a fully managed storage service that's designed to make it easier for the companies' joint customers to run new and existing workloads in the cloud. The cloud-native file storage service links NetApp's data services with Google Cloud's application development, analytics and machine learning functions, with the goal of speeding access to resources and simplifying management.

At the same time, NetApp rolled out a new high-end enterprise all-flash storage array and updated its ONTAP enterprise data management software. Software advances target increased data-retention compliance, and new machine-learning-driven analytics are aimed at reducing capacity costs.
Dell EMC this week unveiled storage, server and hyperconvergence upgrades aimed at enterprises that are grappling with new application types, ongoing digital transformation efforts, and the pressure to deliver higher performance and greater automation in the data center.

On the storage front, Dell EMC rearchitected its flagship VMAX enterprise product line, which is now called PowerMax, to include NVMe support and a built-in machine learning engine. Its XtremIO all-flash array offers native replication for the first time and a lower entry-level price.
The infrastructure required to run artificial intelligence algorithms and train deep neural networks is so dauntingly complex that it's hampering enterprise AI deployments, experts say.

"55% of firms have not yet achieved any tangible business outcomes from AI, and 43% say it's too soon to tell," says Forrester Research about the challenges of transitioning from AI excitement to tangible, scalable AI success.

"The wrinkle? AI is not a plug-and-play proposition," the analyst group says. "Unless firms plan, deploy, and govern it correctly, new AI tech will provide meager benefits at best or, at worst, result in unexpected and undesired outcomes."
Composable infrastructure treats compute, storage, and network devices as pools of resources that can be provisioned as needed, depending on what different workloads require for optimum performance. It's an emerging category of infrastructure that's aimed at optimizing IT resources and improving business agility.

The approach is like a public cloud in that resource capacity is requested and provisioned from shared capacity – except composable infrastructure sits on-premises in an enterprise data center.

IT resources are treated as services, and the composable aspect refers to the ability to make those resources available on the fly, depending on the needs of different physical, virtual and containerized applications. A management layer is designed to discover and access the pools of compute and storage, ensuring that the right resources are in the right place at the right time.
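That management layer is typically driven through a REST API, so composing a node looks roughly like the sketch below: describe the resources a workload needs and let the platform assemble them from the pools. The endpoint URL, payload fields, token and response handling are hypothetical and not modeled on any particular vendor's schema.

"""Sketch: request a composed node from a hypothetical composable-infrastructure API.

The URL, payload shape and token are placeholders; real products define
their own endpoints and schemas.
"""
import json
from urllib import request

COMPOSER_URL = "https://composer.example.com/api/v1/composed-nodes"  # hypothetical endpoint
TOKEN = "replace-with-session-token"

def compose_node(cpus, memory_gib, storage_gib):
    # Describe the desired resources; the management layer is expected to
    # assemble a node from the compute and storage pools.
    payload = {"cpus": cpus, "memoryGiB": memory_gib, "storageGiB": storage_gib}
    req = request.Request(
        COMPOSER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    node = compose_node(cpus=8, memory_gib=64, storage_gib=500)
    print("Composed node:", node)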