Author Archives: John Edwards

Hybrid cloud management requires new tools, skills

Hybrid cloud environments can deliver an array of benefits, but in many enterprises, they're becoming increasingly complex and difficult to manage. To cope, adopters typically turn to some type of management software. What soon becomes apparent, however, is that hybrid cloud management tools can be as complex and confounding as the environments they're designed to support.

A hybrid cloud typically includes a mix of computing, storage and other services. The environment is formed by a combination of on-premises infrastructure resources, private cloud services, and one or more public cloud offerings, such as Amazon Web Services (AWS) or Microsoft Azure, as well as orchestration among the various platforms.
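
To make the orchestration problem concrete, here is a minimal sketch of the kind of provider-abstraction layer hybrid cloud management tools are built on: a single inventory view spanning on-premises and public cloud platforms. All class and method names below are hypothetical illustrations, not any vendor's actual API; real tools wrap the AWS, Azure and on-prem SDKs behind a similar common interface.

```python
# Illustrative sketch only: a unified inventory across on-prem and public clouds.
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Workload:
    name: str
    location: str      # e.g., "on-prem", "aws", "azure"
    vcpus: int
    memory_gb: int


class Provider(Protocol):
    def list_workloads(self) -> Iterable[Workload]: ...


class OnPremProvider:
    def list_workloads(self) -> Iterable[Workload]:
        # In practice this would query vCenter, OpenStack or similar.
        return [Workload("erp-db", "on-prem", 16, 128)]


class PublicCloudProvider:
    def __init__(self, cloud: str):
        self.cloud = cloud

    def list_workloads(self) -> Iterable[Workload]:
        # In practice this would call the cloud vendor's SDK or API.
        return [Workload(f"web-frontend-{self.cloud}", self.cloud, 4, 16)]


def unified_inventory(providers: Iterable[Provider]) -> list[Workload]:
    """Aggregate workloads from every platform into one view."""
    return [w for p in providers for w in p.list_workloads()]


if __name__ == "__main__":
    fleet = unified_inventory([OnPremProvider(),
                               PublicCloudProvider("aws"),
                               PublicCloudProvider("azure")])
    for w in fleet:
        print(f"{w.name:25} {w.location:8} {w.vcpus:>3} vCPU {w.memory_gb:>4} GB")
```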

Serverless computing: Ready or not?

Until a few years ago, physical servers were a bedrock technology, the beating digital heart of every data center. Then the cloud materialized. Today, as organizations continue to shovel an ever-growing number of services toward cloud providers, on-premises servers seem to be on the verge of becoming an endangered species.

Serverless computing is doing its share to accelerate the demise of on-premises servers. The concept of turning to a cloud provider to dynamically manage the allocation of machine resources and bill users only for the actual amount of resources consumed by applications is gaining increasing acceptance. A late 2019 survey conducted by technical media and training firm O'Reilly found that four out of 10 enterprises, spanning a wide range of locations and industries, have already adopted serverless technologies.
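
The usage-based billing model is easy to reason about with a back-of-the-envelope estimator. The sketch below uses the common memory-duration (GB-second) plus per-request pricing structure; the rates are illustrative placeholders, not any provider's actual price list, so always check current pricing.

```python
# Rough cost estimator for usage-based serverless billing.
# Rates below are placeholders for illustration only.

GB_SECOND_RATE = 0.0000167            # placeholder $ per GB-second
PER_REQUEST_RATE = 0.20 / 1_000_000   # placeholder $ per request


def monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate monthly spend for a single serverless function."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * GB_SECOND_RATE + invocations * PER_REQUEST_RATE


if __name__ == "__main__":
    # 10 million invocations, 120 ms average runtime, 512 MB of memory
    print(f"${monthly_cost(10_000_000, 120, 512):.2f} per month (estimate)")
```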

5 disruptive storage technologies for 2020

For decades, storage technology progress was measured primarily in terms of capacity and speed. No longer. In recent times, those steadfast benchmarks have been augmented, and even superseded, by sophisticated new technologies and methodologies that make storage smarter, more flexible and easier to manage.

Next year promises to bring even greater disruption to the formerly staid storage market, as IT leaders seek more efficient ways of coping with the data tsunami generated by AI, IoT devices and numerous other sources. Here's a look at the five storage technologies that will create the greatest disruption in 2020, as enterprise adoption gains ground.

High performance computing: Do you need it?

In today's data-driven world, high performance computing (HPC) is emerging as the go-to platform for enterprises looking to gain deep insights into areas as diverse as genomics, computational chemistry, financial risk modeling and seismic imaging. Initially embraced by research scientists who needed to perform complex mathematical calculations, HPC is now gaining the attention of a wider number of enterprises spanning an array of fields.

"Environments that thrive on the collection, analysis and distribution of data – and depend on reliable systems to support streamlined workflow with immense computational power – need HPC," says Dale Brantly, director of systems engineering at Panasas, an HPC data-storage-systems provider.

Using predictive analytics to troubleshoot network issues: Fact or fiction?

Predicting the future is getting easier. While it's still not possible to accurately forecast tomorrow's winning lottery number, the ability to anticipate various types of damaging network issues — and nip them in the bud — is now available to any network manager.

Predictive analytics tools draw their power from a variety of technologies and methodologies, including big data, data mining and statistical modeling. A predictive analytics tool can be trained, for instance, to use pattern recognition — the automated recognition of patterns and regularities in data — to identify issues before they become significant problems or result in partial or total network failures.
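
The pattern-recognition idea can be illustrated in a few lines. The sketch below trains an anomaly detector on synthetic "normal" interface metrics and flags readings that deviate before they turn into an outage; the metrics, values and model choice (scikit-learn's IsolationForest) are purely illustrative, not a description of any specific product.

```python
# Minimal anomaly-detection sketch using synthetic network metrics.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Columns: interface latency (ms), link utilization (%), packet loss (%)
normal_samples = np.column_stack([
    rng.normal(12, 2, 5000),      # typical latency
    rng.normal(45, 10, 5000),     # typical utilization
    rng.normal(0.1, 0.05, 5000),  # typical loss
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_samples)

# New readings: one healthy, one drifting toward trouble
new_readings = np.array([
    [13.0, 50.0, 0.12],   # looks normal
    [85.0, 97.0, 2.50],   # high latency, saturated link, rising loss
])

for reading, label in zip(new_readings, detector.predict(new_readings)):
    status = "ANOMALY - investigate" if label == -1 else "ok"
    print(reading, status)
```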

How to get a handle on multicloud management

As enterprises pile more cloud activities onto the platforms of more cloud providers, many IT and network managers are feeling overwhelmed because each cloud provider comes with its own toolset, rules and user demands. In a multicloud environment, this convoluted mixture quickly leads enterprises into a pit of complexity, confusion and cost.

Coming to the rescue are more than a dozen vendors, ranging from IT stalwarts to startups, offering multicloud management tools designed to bring order, control and insight to data centers juggling multiple cloud services. IBM, BMC Software, Cisco, Dell Technologies Cloud, DXC Technology, VMware, HyperGrid and DivvyCloud are just some of the firms promising stable and reliable multicloud management. Many cloud services also provide some degree of management and integration with other cloud providers.

For enterprise storage, persistent memory is here to stay

It's hard to remember a time when semiconductor vendors haven't promised a fast, cost-effective and reliable persistent memory technology to anxious data center operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition.

High-capacity persistent memory, also known as storage class memory (SCM), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower traditional persistent storage components, such as hard disk drives (HDD) and solid-state drives (SSD).
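
The programming model is the key difference from disk: data is stored with ordinary memory loads and stores rather than block I/O. The sketch below approximates that idea with an ordinary memory-mapped file; real storage-class memory is typically programmed through DAX-mounted filesystems and libraries such as PMDK, so treat this only as an illustration of "write to memory, read it back after a restart." The file name is a hypothetical placeholder.

```python
# Rough approximation of byte-addressable persistent storage via mmap.
import mmap
import os

PATH = "counter.pmem"   # hypothetical backing file standing in for pmem
SIZE = 4096

# Create the backing file once, sized to a page
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as pmem:
        # Read the current counter value directly from the mapping
        count = int.from_bytes(pmem[:8], "little")
        print("restart count so far:", count)

        # Update it in place -- a memory store rather than a write() syscall
        pmem[:8] = (count + 1).to_bytes(8, "little")
        pmem.flush()    # push the update toward durable media
```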

NVMe over Fabrics creates data-center storage disruption

It's quite a mouthful, but Non-Volatile Memory Express over Fabrics (NVMeoF) is shaping up to become perhaps the most disruptive data center storage technology since the introduction of solid-state drives (SSD), promising to bring new levels of performance and economy to rapidly expanding storage arrays.

NVMe over Fabrics is designed to deliver the high speed and low latency of NVMe SSD technology over a network fabric. There are currently three basic NVMe fabric implementations available: NVMe over Fibre Channel, NVMe over remote direct memory access (RDMA), and NVMe over TCP.
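
As a feel for what attaching a fabric namespace looks like in practice, the sketch below wraps the Linux nvme-cli utility to discover and connect to an NVMe/TCP target, one of the three transports mentioned above. The target address, port and NQN are placeholders, nvme-cli must be installed, the commands need root, and exact flags can vary by version.

```python
# Hedged illustration: attach an NVMe/TCP namespace via nvme-cli.
import subprocess

TARGET_ADDR = "192.168.0.50"                   # placeholder target IP
TARGET_PORT = "4420"                           # conventional NVMe/TCP port
TARGET_NQN = "nqn.2019-06.io.example:array1"   # placeholder NVMe Qualified Name


def run(cmd: list[str]) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Ask the target which subsystems it exports over TCP
    run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT])

    # Attach the remote namespace; it then appears as a local /dev/nvmeXnY device
    run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR,
         "-s", TARGET_PORT, "-n", TARGET_NQN])
```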

5 times when cloud repatriation makes sense

A growing number of enterprises are pulling selected applications out of the cloud and returning them to their brick-and-mortar data centers. Cloud repatriation is gaining momentum as enterprises realize the cloud isn't always the best solution to IT cost, performance and other concerns.

Dave Cope, senior director of market development for Cisco's CloudCenter, believes that technology has evolved to the point where enterprises now have unprecedented freedom to locate applications wherever maximum cost, performance and security benefits can be achieved. "There’s an ability to place workloads where they best reside based on business priorities, not IT constraints," he notes. "We’re starting to get this natural distribution of workloads across existing and new environments … where they make the most sense."
