Category Archives for "Network World SDN"

Automation critical to scalable network security

Securing the business network has been, and continues to be, one of the top initiatives for engineers. Suffering a breach can have catastrophic consequences for a business, including lawsuits, fines, and brand damage from which some companies never recover. To combat this, security professionals have deployed a number of security tools, including next-generation firewalls (NGFW) such as Cisco’s Firepower, one of the most widely deployed in the industry. Managing a product like Firepower has become increasingly difficult, though, because the speed at which changes need to be made has increased. Digital businesses operate at a pace never before seen in the business world, and infrastructure teams need to keep up; if they can’t operate at this accelerated pace, the business will suffer. Meanwhile, firewall rules continue to grow in number and complexity, making it nearly impossible to update them manually.

End-to-end data, analytics key to solving application performance problems

As someone who used to work in corporate IT, I can attest that workers and IT are generally at odds. Part of the problem is that the tools IT uses have never provided the right information to help technical people understand what the user is experiencing. That is why help desks are often referred to as “the no help desk” or “helpless desk” by internal employees: users call the help desk when an application isn’t performing the way it should, while IT is looking at a dashboard where everything is green and indicates things should be working. The main reason for this mismatch is that traditional network management tends to look at the IT environment through the lens of infrastructure rather than what the user experiences. Looking at specific infrastructure components doesn’t provide any view of the end-to-end environment, leading to a false sense of how things are running.

IBM launches new availability zones worldwide for hybrid enterprise clouds

CIOs and data center managers who run large hybrid clouds worldwide have a good chance of hearing IBM knock on their doors in the next few months. That’s because IBM is opening 18 new “availability zones” for its public cloud across the U.S., Europe, and Asia-Pacific. An availability zone is an isolated physical location within a cloud data center that has its own separate power, cooling, and networking to maximize fault tolerance, according to IBM. Along with uptime service-level agreements and high-speed network connectivity, users have gotten used to accessing corporate databases wherever they reside, but proximity to cloud data centers matters: distance can degrade network performance, resulting in slow uploads or downloads.

Supermicro is the latest hardware vendor with a security issue

Security researchers with Eclypsium, a firm created by two former Intel executives that specializes in rooting out vulnerabilities in server firmware, have uncovered vulnerabilities affecting the firmware of Supermicro servers. The good news is that these vulnerabilities can be exploited only via malicious software already running on a system, so the challenge for an attacker is getting the malicious code onto the servers in the first place. The bad news is that once that code is in place, the vulnerabilities are easily exploitable and can give malware the same effect as having physical access to this kind of system. “A physical attacker who can open the case could simply attach a hardware programmer to bypass protections. Using the attacks we have discovered, it is possible to scale powerful malware much more effectively through malicious software instead of physical access,” Eclypsium said in a blog post announcing its findings.

IoT has an obsolescence problem

The Internet of Things (IoT) is a long way from becoming a mature technology. From wearable devices to industrial sensors and consumer conveniences, IoT vendors and users are still trying to figure out what the technology does best as it grows into a $9 trillion market by 2020 (according to some estimates). And yet IoT is somehow already faced with a huge and growing problem of obsolescence. The problem, ironically, lies in the “things” themselves. Don’t believe me? Consider the solid gold Apple Watch Edition, launched in 2015 and sold for $10,000 to as much as $17,000 a pop. A traditional watch at that price would be expected to last decades, perhaps even generations as it turns into a family heirloom. But with the announcement of watchOS 5 at the company’s Worldwide Developers Conference this week, the original version of these fancy timepieces can no longer keep up. They simply won’t run the latest version of the operating system due out this fall, and they won’t have the features of brand-new Apple Watches that cost a tiny fraction of the price.

What is digital twin technology? [and why it matters]

Digital twin technology has moved beyond manufacturing and into the merging worlds of the Internet of Things, artificial intelligence, and data analytics. As more complex “things” become connected and able to produce data, having a digital equivalent gives data scientists and other IT professionals the ability to optimize deployments for peak efficiency and to create other what-if scenarios. What is a digital twin? The basic definition: a digital representation of a physical object or system. The technology behind digital twins has expanded to include larger items such as buildings, factories, and even cities, and some have said that people and processes can have digital twins, expanding the concept even further.

IDG Contributor Network: Living on the edge: 5 reasons why edge services are critical to your resiliency strategy

When it comes to computing, living on the edge is currently all the rage. Why? Edge computing decentralizes computing power, moving processing closer to the endpoints where users and devices access the internet and where data is generated. This allows for better control of the user experience and lets data be processed faster at the edge of the network, on devices such as smartphones and IoT devices. As enterprise organizations extend their corporate digital-channel strategies to websites with rich media and personalized content, a strong resiliency strategy is vital. Deploying a combination of cloud and edge services can help by: reducing unplanned downtime; improving security and performance; extending the benefits of multi-cloud infrastructure; speeding application development and delivery; and improving user experience.

Comparing files and directories with diff and comm

There are a number of ways to compare files and directories on Linux systems. The diff, colordiff, and wdiff commands are just a sampling of the commands you’re likely to run into. Another is comm. The command (think “common”) lets you compare the contents of two files in side-by-side columns. Where diff shows you the lines that are different and where those differences occur, comm offers some different options with a focus on common content. Let’s look at the default output and then some other features. Here’s some diff output, displaying the lines that differ in the two files and using < and > signs to indicate which file each line came from.
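
As a minimal sketch of the contrast (the fruit-list files here are hypothetical), consider two sorted files: file1 containing apple, banana, cherry and file2 containing apple, blueberry, cherry.

$ diff file1 file2
2c2
< banana
---
> blueberry
$ comm file1 file2
                apple
banana
        blueberry
                cherry

diff reports only the change on line 2, while comm prints three tab-separated columns: lines unique to file1 flush left, lines unique to file2 indented one tab stop, and lines common to both indented two. Note that comm expects sorted input, and its -1, -2, and -3 flags suppress the corresponding columns, so comm -12 file1 file2 prints only the lines the two files share.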

Network-intelligence platforms, cloud fuel a run on faster Ethernet

2018 is shaping up to be a banner year for all things Ethernet. The ubiquitous networking technology is already thriving in the data center, where in the first quarter alone the switching market recorded its strongest year-over-year revenue growth in more than five years, and 100G Ethernet port shipments more than doubled year over year, according to a report by Dell’Oro Group researchers. The 16 percent switching growth was “driven by the large-tier cloud hyperscalers such as Amazon, Google, Microsoft and Facebook but also by enterprise customers,” said Sameh Boujelbene, senior director at Dell’Oro.

IDG Contributor Network: A digital-first enterprise needs SD-WAN

Not since the advent of the internet and IP has networking technology seen a seismic shift of the magnitude now occurring in enterprise networks. As organizations move from on-premises application hosting to a cloud-based approach, they are inundated with the inherent challenges of legacy network solutions. The conventional network architectures in most of today’s enterprises were not built to handle the workloads of a cloud-first organization. Moreover, the increasing use of broadband to connect to multi-cloud applications has escalated concerns around application performance, agility, and network security. Software-defined WAN (SD-WAN) has gained immense traction among CIOs lately. Gartner forecasts that SD-WAN will grow at a 59% compound annual growth rate through 2021 to become a $1.3 billion market. That is because there are myriad payoffs in moving to SD-WAN: primarily, it enables easier access to cloud and SaaS-based applications for geographically distributed branch offices and a mobile workforce. Here are just a few of the other important benefits SD-WAN brings to digital-first organizations:

IBM strengthens mainframe cloud services with CA’s help

IBM continues to mold the Big Iron into a cloud and devops beast. This week, IBM and its long-time ally CA teamed up to link IBM’s Cloud Managed Services on z Systems (zCloud) software with cloud workload-development tools from CA, with the goal of better-performing applications for private, hybrid, or multicloud operations. IBM says zCloud offers customers a way to move critical workloads into a cloud environment with the flexibility and security of the mainframe. In addition, the company offers the IBM Services Platform with Watson, which provides another level of automation within zCloud to assist clients with their moves to cloud environments.

6 years on, IPv4 still dominates IPv6

IPv6, the modern version of the Internet Protocol framework that underlies just about everything on the network, is seeing steady uptake among service providers but still hasn’t pushed its predecessor, IPv4, into obsolescence, according to a report released today by the Internet Society. There are 24 countries where IPv6 accounts for more than 15% of overall IP traffic, and 49 that have topped the 5% threshold. Even so, the Internet Society, a non-profit that works to promulgate internet standards and lobby for open access to the internet, describes the technology as having moved from the “early adoption” stage to the “early majority” phase.

You’re probably doing your IIoT implementation wrong

The industrial internet of things (IIoT) promises a quantum leap forward in automation, centralized management, and a wealth of new data and insight that is often too tempting to pass up. But automating a factory floor or a fleet of vehicles is far from simple, and many would-be IIoT adopters are going about the process all wrong, according to experts. To make an IIoT transition a success, the process has to be led by the line-of-business side of the company, not IT. Successful IIoT adopters frame the entire operation as a matter of digital transformation aimed at addressing specific business problems, rather than as a fun challenge for IT architects to solve.

Cato Networks adds threat hunting to its Network as a Service

Enterprises that have grown comfortable with Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) are increasingly accepting of Network as a Service (NaaS). NaaS is a rapidly growing market: according to Market Research Future, it is expected to become a US $126 billion market by 2022, sustaining an annual growth rate of 28.4 percent. One of the key benefits of cloud-based networking is increased security for applications and data. Given that the traditional perimeter of on-premises networks has been decimated by mobile and cloud computing, NaaS builds a new perimeter in the cloud. Now it’s possible to unify all traffic, from data centers, branch locations, mobile users, and cloud platforms, in the cloud. This means an enterprise can set all its security policies in one place and push traffic through cloud-based security functions such as next-generation firewall, secure web gateway, advanced threat protection, and so on.

REVIEW: 6 enterprise-scale IoT platforms

There's little need to tell anyone in IT that the Internet of Things (IoT) is a big deal and that it's growing insanely fast; BI Intelligence estimates there will be some 23.3 billion IoT devices by 2019. As IoT support becomes more of an enterprise concern, there are four key issues about enterprise IoT (EIoT) deployments to consider:

- The sheer number of enterprise IoT endpoint devices: there will be 1 billion by 2019.
- The frequency of data generated by IoT devices: IDC estimates that by 2025, the average connected person anywhere in the world will interact with connected devices nearly 4,800 times per day, or one interaction every 18 seconds.
- The incredible volume of IoT data: of the 163 zettabytes (that's 10²¹ bytes) of data that will be created in 2025, IDC estimates that 60% will come from IoT endpoints and that half of that (roughly 49 zettabytes) will be stored in enterprise data centers.
- The challenge of maintaining security for your device constellation: IDC estimates that by 2025, 45% of stored enterprise data will be sensitive enough to require securing, yet will not be secured.

AMD’s Epyc server encryption is the latest security system to fall

It’s a good thing AMD had the sense not to rub Intel’s nose in the Meltdown/Spectre vulnerabilities, because it would be getting it right back for this one: researchers from the Fraunhofer Institute for Applied and Integrated Security in Germany have published a paper detailing how to compromise a virtual machine encrypted by AMD’s Secure Encrypted Virtualization (SEV). The news is a bit of a downer for AMD, since it just added Cisco to its list of customers for the Epyc processor; Cisco announced plans today to use Epyc in its density-optimized Cisco UCS C4200 Series Rack Server Chassis and the Cisco UCS C125 M5 Rack Server Node.

Blockchain, service-centric networking key to IoT success

Connecting and securing the Internet of Things (IoT) should be achieved with a combination of service-centric networking (SCN) and blockchain, researchers say. A multi-university, multi-discipline group led by Zhongxing Ming, a visiting scholar at Princeton University, says IoT adoption will face an uphill battle due in part to bottlenecks between potentially billions of devices, along with the mobile nature of much of the technology. The scientists, who call their IoT architecture Blockcloud, presented their ideas at GENESIS C.A.T., an innovation-in-blockchain technology event held recently in Tokyo.

Nvidia aims to unify AI, HPC computing in HGX-2 server platform

Nvidia is refining its pitch for data-center performance and efficiency with a new server platform, the HGX-2, designed to harness the power of 16 Tesla V100 Tensor Core GPUs to satisfy requirements for both AI and high-performance computing (HPC) workloads. Data-center server makers Lenovo, Supermicro, Wiwynn, and QCT said they would ship HGX-2 systems by the end of the year. Some of the biggest customers for HGX-2 systems are likely to be hyperscale providers, so it’s no surprise that Foxconn, Inventec, Quanta, and Wistron are also expected to manufacture servers that use the new platform for cloud data centers. The HGX-2 is built from two GPU baseboards that link the Tesla GPUs via the NVSwitch interconnect fabric. Each baseboard carries eight GPUs, for a total of 16; the HGX-1, announced a year ago, handled only eight.

Copying and renaming files on Linux

Linux users have for decades been using the simple cp and mv commands to copy and rename files. These commands are among the first that most of us learned, and they are used every day by possibly millions of people. But there are other techniques, handy variations, and another command for renaming files that offers some unique options. First, let’s think about why you might want to copy a file. You might need the same file in another location, or you might want a copy because you’re going to edit the file and want a handy backup in case you need to revert to the original. The obvious way to do that is to use a command like “cp myfile myfile-orig”.
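
As a quick sketch of those variations (the file names are hypothetical, and note that rename ships in two flavors: the Perl-style version shown here, common on Debian and Ubuntu, and a simpler util-linux variant used elsewhere):

$ cp myfile myfile-orig                        # keep a handy backup before editing
$ cp -p myfile /tmp/myfile                     # -p preserves mode, ownership, and timestamps
$ cp --backup=numbered myfile backups/myfile   # GNU cp saves any overwritten copy as myfile.~1~, myfile.~2~, ...
$ mv draft.txt report.txt                      # renaming a file is just a move
$ rename 's/\.txt$/.bak/' *.txt                # Perl-style rename: swap every .txt suffix for .bak

The --backup option only kicks in when the destination file already exists; it squirrels the old copy away under a numbered name rather than silently overwriting it.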
