Archive

Category Archives for "Network World SDN"

You’re probably doing your IIoT implementation wrong

The Industrial Internet of Things (IIoT) promises a quantum leap forward in automation, centralized management, and a wealth of new data and insight that is often too tempting to pass up. But automating a factory floor or a fleet of vehicles is far from simple, and many would-be IIoT adopters are going about the process all wrong, according to experts. To make an IIoT transition a success, the process has to be led by the line-of-business side of the company – not IT. Successful IIoT adopters frame the entire operation as a matter of digital transformation, aimed at addressing specific business problems, rather than as a fun challenge for IT architects to solve.

IBM strengthens mainframe cloud services with CA’s help

IBM continues to mold Big Iron into a cloud and devops beast. This week IBM and its long-time ally CA teamed up to link the mainframe and its Cloud Managed Services on z Systems (zCloud) software with cloud workload-development tools from CA, with the goal of better-performing applications for private, hybrid or multicloud operations. IBM says zCloud offers customers a way to move critical workloads into a cloud environment with the flexibility and security of the mainframe. In addition, the company offers the IBM Services Platform with Watson, which provides another level of automation within zCloud to assist clients with their moves to cloud environments.

Cato Networks adds threat hunting to its Network as a Service

Enterprises that have grown comfortable with Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) are increasingly accepting of Network as a Service (NaaS). NaaS is a rapidly growing market. According to Market Research Future, NaaS is expected to become a US $126 billion market by 2022, sustaining an annual growth rate of 28.4 percent. One of the key benefits of cloud-based networking is increased security for applications and data. Given that the traditional perimeter of on-premises networks has been decimated by mobile and cloud computing, NaaS builds a new perimeter in the cloud. Now it’s possible to unify all traffic – from data centers, branch locations, mobile users, and cloud platforms – in the cloud. This means an enterprise can set all its security policies in one place, and it can push traffic through cloud-based security functions such as next-generation firewall, secure web gateway, advanced threat protection, and so on.

REVIEW: 6 enterprise-scale IoT platforms

There's little need to tell anyone in IT that the Internet of Things (IoT) is a big deal and that it's growing insanely fast; BI Intelligence estimates that there will be some 23.3 billion IoT devices by 2019. As IoT support becomes more of an enterprise concern, there are four key issues about enterprise IoT (EIoT) deployments to consider:

- The sheer number of enterprise IoT endpoint devices – there will be 1 billion by 2019.
- The frequency of data generated by IoT devices – IDC estimates that by 2025, an average connected person anywhere in the world will interact with connected devices nearly 4,800 times per day, or one interaction every 18 seconds.
- The incredible volume of IoT data – of the 163 zettabytes (a zettabyte is 10^21 bytes) of data that will be created in 2025, IDC estimates that 60% will come from IoT endpoints, and half of that (roughly 49 zettabytes) will be stored in enterprise data centers.
- The challenges of maintaining security for your device constellation – IDC estimates that by 2025, 45% of stored enterprise data will be sensitive enough to require being secured, but will not be.
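The one-interaction-every-18-seconds figure follows directly from the 4,800-interactions-per-day estimate; a quick arithmetic check:

```shell
# 86,400 seconds in a day spread across ~4,800 interactions
echo $((86400 / 4800))   # prints 18
```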

AMD’s Epyc server encryption is the latest security system to fall

It’s a good thing AMD had the sense not to rub Intel’s nose in the Meltdown/Spectre vulnerability, because it would be getting it right back for this one: Researchers from the Fraunhofer Institute for Applied and Integrated Security in Germany have published a paper detailing how to compromise a virtual machine encrypted by AMD's Secure Encrypted Virtualization (SEV). The news is a bit of a downer for AMD, since it just added Cisco to its list of customers for the Epyc processor. Cisco announced today plans to use Epyc in its density-optimized Cisco UCS C4200 Series Rack Server Chassis and the Cisco UCS C125 M5 Rack Server Node.

Blockchain, service-centric networking key to IoT success

Connecting and securing the Internet of Things (IoT) should be achieved with a combination of service-centric networking (SCN) and blockchain, researchers say. A multi-university, multi-discipline group led by Zhongxing Ming, a visiting scholar at Princeton University, says IoT’s adoption will face an uphill battle due in part to bottlenecks between potentially billions of devices, along with the mobile nature of much of the ecosystem. The scientists, who call their IoT architecture Blockcloud, presented their ideas at GENESIS C.A.T., an innovation-in-blockchain technology event held recently in Tokyo.

Nvidia aims to unify AI, HPC computing in HGX-2 server platform

Nvidia is refining its pitch for data-center performance and efficiency with a new server platform, the HGX-2, designed to harness the power of 16 Tesla V100 Tensor Core GPUs to satisfy requirements for both AI and high-performance computing (HPC) workloads. Data-center server makers Lenovo, Supermicro, Wiwynn and QCT said they would ship HGX-2 systems by the end of the year. Some of the biggest customers for HGX-2 systems are likely to be hyperscale providers, so it's no surprise that Foxconn, Inventec, Quanta and Wistron are also expected to manufacture servers that use the new platform for cloud data centers. The HGX-2 is built using two GPU baseboards that link the Tesla GPUs via NVSwitch interconnect fabric. The HGX-2 baseboards handle 8 GPUs each, for a total of 16. The HGX-1, announced a year ago, handled only 8 GPUs.

Copying and renaming files on Linux

Linux users have for many decades been using the simple cp and mv commands to copy and rename files. These commands are some of the first that most of us learned, and they are used every day by possibly millions of people. But there are other techniques, handy variations, and another command for renaming files that offers some unique options. First, let’s think about why you might want to copy a file. You might need the same file in another location, or you might want a copy because you’re going to edit the file and want a handy backup in case you need to revert to the original. The obvious way to do that is to use a command like “cp myfile myfile-orig”.
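A quick sketch of those basics (the file and directory names here are illustrative):

```shell
# Scratch file for demonstration
echo "original contents" > myfile

# The backup copy described above; -p also preserves mode and timestamps
cp -p myfile myfile-orig

# Renaming is just a move; mv -i would prompt before clobbering an existing target
mv myfile-orig myfile.bak

# Copy an entire directory tree with -r
mkdir -p project
cp -r project project-backup
```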

IDG Contributor Network: The evolution of storage from on-premises to cloud

Anyone who has kept up with this column knows I tend to focus on one storage architecture more than any other – the hybrid-cloud storage architecture. That’s because I truly believe in its ability to meet the challenges of today’s IT storage – ever-expanding data, multiple sites, and a need for flexibility and scale, all while meeting specific performance demands. For this month’s column, I thought we would take a look at how we got to this point and see whether this evolution informs where we might go in the near future.

Early days – pre-NAS

The very earliest business storage systems were designed for a world long gone, one in which a business would be expected to manage maybe thousands of files. Even the largest enterprise would have a storage system supporting hundreds of concurrent users, no more. These legacy systems had regularly scheduled downtime for maintenance, but it was not unusual to lose access for unscheduled reasons as well.

Q&A: Cisco’s Theresa Bui on the company’s Kinetic IoT platform

It's been almost a year since Cisco announced Kinetic, a cloud-managed IoT platform aimed at capturing a large and profitable share of the rapidly growing business and industrial IoT market. The executive in charge of Kinetic, Theresa Bui, spoke to us about the platform and how it's architected, in the wake of a flagship customer announcement – the Port of Rotterdam – and a limited partnership with IBM.

What’s a customer getting for their money when they buy Cisco Kinetic?

As a whole, the platform enables three core functional capabilities. It allows you to easily and automatically extract data, and how we do that is we ship a library of automated connectors that help you extract data from various data pipes and put it into a model – whether it’s CoAP or MQTT or whatever flavor works for you.

AI boosts data-center availability, efficiency

Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers. Today’s hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments, and enterprises are finding that a traditional approach to managing data centers isn’t optimal. By using artificial intelligence, as played out through machine learning, there’s enormous potential to streamline the management of complex computing facilities. AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.

Data-center management: What does DMaaS deliver that DCIM doesn’t?

Data-center downtime is crippling and costly for enterprises, so it’s easy to see the appeal of tools that can provide visibility into data-center assets, interdependencies, performance and capacity – and turn that visibility into actionable knowledge that anticipates equipment failures or capacity shortfalls. Data center infrastructure management (DCIM) tools are designed to monitor the utilization and energy consumption of both IT and building components, from servers and storage to power distribution units and cooling gear. DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. With DCIM software, enterprises can simplify capacity planning and resource allocation, as well as ensure that power, equipment and floor space are used as efficiently as possible.

22 essential Linux security commands

There are many aspects to security on Linux systems – from setting up accounts to ensuring that legitimate users have no more privilege than they need to do their jobs. This is a look at some of the most essential security commands for day-to-day work on Linux systems.
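A small sample of the kind of day-to-day checks the article covers (a sketch only; output varies by system, and some commands need root):

```shell
# Check for accounts other than root with UID 0 -- there should be none
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Find world-writable regular files in /tmp, a common audit step
find /tmp -maxdepth 1 -type f -perm -o+w

# Show which sudo privileges the current user holds (prompts for a password)
# sudo -l
```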

IDG Contributor Network: Overcoming kludges to secure web applications

When it comes to technology, nothing is static; everything is evolving. Either we keep inventing mechanisms that dig out new security holes, or we are forced to implement existing kludges to cover up the inadequacies in the security on which our web applications depend. The assault on the changing digital landscape, with all its new requirements, has created a black hole that needs attention. The shift in technology, while creating opportunities, has a bias toward creating security threats. Unfortunately, with the passage of time, these trends will continue to escalate, putting web application security at center stage. Business relies on web applications. Loss of service to business-focused web applications not only affects the brand but also results in financial loss. The web application acts as the front door to valuable assets. If you don’t efficiently lock the door, or at least know when it has been opened, valuable revenue-generating web applications are left compromised.

DNS in the cloud: Why and why not

As enterprises consider outsourcing their IT infrastructure, they should consider moving their public authoritative DNS services to a cloud provider’s managed DNS service – but first they should understand the advantages and disadvantages.

Why NVMe over Fabric matters

In my earlier blog post on SSD storage news from HPE, Hitachi and IBM, I touched on the significance of NVMe over Fabric (NoF), but not wanting to distract from the main story, I didn’t go into detail. I will do so in this blog post.

Hitachi Vantara goes all in on NVMe over Fabric

First, though, an update on the news from Hitachi Vantara, which I initially said had not commented yet on NoF. It turns out they are all in. “Hitachi Vantara currently offers, and continues to expand support for, NVMe in our hyperconverged UCP HC line. As NVMe matures over the next year, we see opportunities to introduce NVMe into new software-defined and enterprise storage solutions. More will follow, but it confuses the conversation to pre-announce things that customers cannot implement today,” said Bob Madaio, vice president, Infrastructure Solutions Group at Hitachi Vantara, in an email to me.

IDG Contributor Network: Defining network performance with Google’s 4 golden signals

You’re supposed to meet someone for coffee. If they’re three minutes late, no problem, but if they’re thirty minutes late, it’s rude. Was the change from “no problem” to “rude” a straight line, or were there steps of increasing rudeness? Do we care why? A good reason certainly increases our tolerance; someone who is always late reduces it. Network performance follows many of the same dynamics. We used to talk about outages, but they have become less frequent. “Slow” is the new “out.” But how slow is slow? Do we try to understand the user experience and adjust our performance monitoring to reflect it? Or is the only practical answer to just wait until someone complains?