Forced to keep pace with rapidly emerging business requirements, networks are changing faster than ever. The business-facing side of networking is under continuous pressure to do more, in more places, faster. Challenging as it is, the network-to-business interaction is simpler than what is going on behind the scenes, as network professionals transform almost every area of their networks to meet new demands.

New technologies such as cloud, NFV and SDN are turning traditional networks into hybrid ones. In fact, Gartner predicts that cloud infrastructure services will grow 35.9 percent in 2018, and IDC predicts that SD-WAN adoption will grow at a 40.4 percent CAGR from 2017 to 2022. These numbers imply a great deal of change in networks, change that introduces significant risk of service disruption, from minor (a few inconvenienced users) to major (significant outages visible to customers and executives). Reducing that risk during significant transitions is critical. That's where network performance management and diagnostics (NPMD) products play a significant role.
The edge is top-of-mind for many IT and OT professionals across a wide range of industries and sectors. This interest is driven by the need to use data more effectively to maintain operations, optimize performance and increase uptime.

Existing IT and OT infrastructures typically don't collect, store and analyze data at the edge. They instead send this data either to the cloud or to enterprise-level computing systems for storage and analysis, the domain of IT personnel.

A better solution, specifically for applications where access to data needs to happen quickly, is to perform data collection, storage and analysis at the edge using technologies designed for these specific tasks. The benefits of this approach include reduced latency, improved data security and more efficient use of bandwidth.
The first of the upcoming 5G network technologies won't provide significant reliability gains over existing wireless such as 4G LTE, according to a developer involved in 5G.

Additionally, the millisecond levels of latency that the new 5G wireless will attempt to offer (when some of it is commercially launched, possibly later this year) won't be enough of an advantage for a society that's now completely data-driven and needs near-instant, microsecond connectivity.

"Ultra-reliability will be basically not there," Ari Pouttu, professor for Dependable Wireless at the University of Oulu, told me during a visit to the university in Finland. 5G's principal benefits over current wireless platforms are touted as latency reduction and improved reliability by marketers pitching the still-to-be-released technology.
To get up and running on a self-service, big-data analytics platform efficiently, many data-center and network managers these days would likely think about using a cloud service. But not so fast: there is some debate about whether the public cloud is the way to go for certain big-data analytics.

For some big-data applications, the public cloud may be more expensive in the long run and, because of latency issues, slower than on-site private cloud solutions. In addition, having data storage reside on premises often makes sense due to regulatory and security considerations.
With all this in mind, Dell EMC has teamed up with BlueData, the provider of a container-based software platform for AI and big-data workloads, to offer Ready Solutions for Big Data, a big data as a service (BDaaS) package for on-premises data centers. The offering brings together Dell EMC servers, storage, networking and services along with BlueData software, all optimized for big-data analytics.
Application and network engineers see the world differently. Unfortunately, these differences often result in resentment, with each party keeping score. Recently, application engineers have encroached on networking in a much bigger way. Sadly, if technical history repeats itself, we will revisit many long-ago problems as application engineers rediscover the wisdom held by network engineers.

There are many areas of network engineering and application engineering where there is no overlap or contention. However, the number of overlapping areas is increasing as the roles of network and application engineers expand and evolve.

Application engineers will try to do anything they can with code. I've spoken to many network engineers who struggle to support multicast. When I ask them why they are using multicast, they nearly always say, "The application engineers chose it, because it's in the Unix Network Programming book." The Berkeley socket programming interface permits using multicast, and application engineers then layer their own lost-packet recovery techniques on top to deliver files and real-time media over unicast and multicast. The Berkeley socket API does not easily support VLANs, so VLANs have always been the sole property of the network engineer. Linux kernel network programming capabilities have become much more extensive in recent years.
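To make the application engineer's side of this concrete, here is a minimal sketch of how a receiver joins a multicast group through the Berkeley socket API, as exposed in Python. The group address and port are placeholders for illustration, not values from any particular application.

```python
import socket
import struct

GROUP = "239.1.1.1"  # placeholder administratively scoped multicast group
PORT = 5000          # placeholder port

def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq struct: multicast group address plus local interface."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def open_multicast_receiver(group: str = GROUP, port: int = PORT) -> socket.socket:
    """Create a UDP socket joined to a multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP asks the kernel (and, via IGMP, the network)
    # to deliver traffic sent to this group -- the part network
    # engineers then have to support end to end.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

Note that this is all the application side sees; the routing, IGMP snooping and lost-packet behavior the code quietly depends on is what lands on the network engineer's desk.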
This month marks the 20th anniversary of Nmap, the open-source network mapping tool that became the standard used by many IT professionals. But Nmap can be a bit much if you only need to do general network maintenance and are intimidated by its command-line interface.

There are alternatives, though not many, that range in technical sophistication from tools with GUIs that can ease you into performing the essentials of network maintenance to more advanced software similar to Nmap itself.
Like Nmap, all these network tools are free.
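For readers curious what these tools do under the hood, the simplest technique they build on is a TCP connect scan: attempt a connection to each port and record which ones accept. A bare-bones sketch in Python (hosts and ports here are placeholders):

```python
import socket

def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports) -> list:
    """Nmap-style TCP connect scan: list the open ports among `ports`."""
    return [port for port in ports if scan_port(host, port)]

# Example: scan("192.0.2.10", range(20, 1025)) returns the open ports found.
```

Real scanners add parallelism, SYN scanning and service fingerprinting on top of this idea, which is where Nmap's complexity (and power) comes from.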
We took a look at open-source Zabbix network monitoring software version 3.4.9 and found it to be a solid, straightforward offering that's easy to install, provides the configurability and granularity that enterprises demand, and delivers fast discovery.
Enterprises are investing in their networks at an accelerating rate. As legacy on-premises IT infrastructure gives way to hybrid cloud and virtualized environments, and an escalating data tsunami drives data-center expansions, increasing investments of time and money are raising the stakes ever higher. Unfortunately, end users' expectations for service are growing as well, piling additional demands onto network operators and engineers who are already wrestling with network migration challenges.

Yet despite the fact that the enterprise networking environment is rapidly changing, IT support teams are still using the same network performance metrics to monitor their networks and evaluate whether service delivery is up to par. The problem is that they're using a one-dimensional tool to measure a subjective experience the tool was never designed to understand, much less help troubleshoot. It's like trying to tighten a screw with a hammer.
Vapor IO, the edge-computing specialist that builds mini data centers for deployment at locations such as cell-phone towers, has secured Series C financing, which the company says will help accelerate the deployment of its Kinetic Edge Platform as a national network for edge colocation.

Vapor IO has been all about developing a model for a distributed network of edge colocation sites, with micro modular data centers about the size of a shipping container. The company had been working with Crown Castle, the nation's largest provider of shared wireless infrastructure, on an edge collaboration project under the name Project Volutus.

Vapor IO has now acquired the assets of Project Volutus from Crown Castle and will offer it under the brand name The Kinetic Edge. It uses both wired and wireless connections to create a low-latency network of its colocation sites, allowing cloud providers, wireless carriers and web-scale companies to deliver cloud-based edge computing applications via its data centers.
Cisco has introduced its first Unified Compute System (UCS) server designed specifically to handle artificial intelligence (AI) and machine learning (ML) workloads. The Cisco UCS C480 ML is built for data scientists to perform AI and ML at every stage of the lifecycle.

It's not like Cisco whipped up all kinds of special sauce for this server; it's just a lot of very high-end components. The UCS C480 ML M5 rack server is a 4U device with the latest Intel Xeon processors and eight Nvidia Tesla V100-32G GPUs with NVLink interconnects.

The top-of-the-line configuration features two Xeon processors, up to 128GB of DDR4 RAM, 24 SATA hard drives or SSDs, six NVMe SSD drives, and four 100G Virtual Interface Cards (VICs). The UCS C480 ML M5 is designed to work with Cisco's various servers and HyperFlex systems with GPUs.
Some people love to use the expression "before it was cool." In hindsight, it can be applied to almost anything that gains acclaim. According to this Reddit thread, for example, Facebook was already cool when it was still known simply as "The Facebook" way back in 2004. My point: the "before it was cool" expression is really about when something's value or significance is recognized very early on, and this can certainly be applied to many of the technological advancements we see today. Connecting devices, or instrumenting machinery with some form of connectivity, to capture data and provide control was used in many industries before the term "Internet of Things," or "IoT," became cool and all-pervasive.
Ciena
Chris Sweetapple, Consultant, Managed Service Providers
In our final post in this three-part series covering one hero's journey on the road to streamlined enterprise networking operations, Ciena's Chris Sweetapple describes how Our Hero embraces business Ethernet to shed complexity and simplify operations, creating a network that grows with the business.
The free and open-source network monitoring software Nagios Core has a long and strong reputation, providing the base for other monitoring suites (Icinga, Naemon and OP5 among them) and a history dating back to 2002, when it launched under the name NetSaint.

For this review we tested Nagios Core version 4.4.2 for Linux, which monitors common network services such as HTTP, SMTP, POP3, NNTP and PING. There's a Windows port in the form of a plugin, but many users say it's unstable. The version we tested also tracks the usage of host resources such as processor load, memory and disk utilization.
Hardware requirements vary depending on the number and types of items being monitored, but generally speaking Nagios recommends a server configuration with at least two to four cores, 4-8GB of RAM and adequate storage for the intended application.
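Nagios performs its service checks through plugins: small executables that print a one-line status message and exit with a conventional status code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). As a rough sketch of the idea, a hypothetical TCP connectivity check written in Python might look like this (the host and port arguments are illustrative, not from any shipped plugin):

```python
import socket
import sys

# Conventional Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_tcp(host: str, port: int, timeout: float = 5.0):
    """Return an (exit_code, status_line) pair in Nagios plugin style."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return OK, f"TCP OK - {host}:{port} is accepting connections"
    except OSError as exc:
        return CRITICAL, f"TCP CRITICAL - {host}:{port} unreachable ({exc})"

if __name__ == "__main__" and len(sys.argv) == 3:
    code, message = check_tcp(sys.argv[1], int(sys.argv[2]))
    print(message)  # Nagios displays this status line
    sys.exit(code)  # ...and alerts based on the exit code
```

Nagios Core ships its own compiled plugins for HTTP, SMTP, POP3 and the other services mentioned above; the point here is only how simple the plugin contract is.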
The OpenStack Foundation has announced the general availability of the 18th iteration of its cloud platform, called OpenStack Rocky. The major new capabilities are faster upgrades and enhanced support for bare-metal infrastructure.

Bare-metal cloud is a term for cloud services that come with zero software. When you rent an ordinary instance from a public cloud such as Amazon Web Services or Microsoft Azure, you get a virtualized environment that runs on a hypervisor and shares hardware with other, unknown tenants. This often causes performance issues, since you never know what kind of neighbor you will get each time.
High-quality, reliable network hardware and data-center cabling are requirements for a high-performing technology infrastructure and for a successful IT team that helps drive more business. It's the life cycle for your network.

However, in these days of shrinking budgets and rising demands, CIOs, IT professionals and buyers are being pressured to do more while reducing costs. How can this be done?

Having the right approach to network hardware and data-center cabling is a powerful way to enable your IT organization to do a lot more while optimizing your budget.
The IT value within the life cycle
There are many nuances to a hardware investment that some organizations don't take into account. The opportunity to reduce capital expenditure (CAPEX) exists, but it requires incorporating pre-owned hardware into the equation.
Turning a list of names, addresses and related information into a Google map is a lot easier than you might think. The effort required depends, as you might imagine, on the information you're starting with. But if the format is fairly consistent, it's relatively easy to massage the information into a form that can be uploaded.

First, what you can expect
Once you've loaded a list of names and addresses into a Google map, you will be able to view the location of each person and set up your map so that clicking on any of the map markers displays the information collected for that address.
Cisco's announcement earlier this month that it will add the Viptela SD-WAN technology to the IOS XE software running its ISR/ASR routers will be a mixed blessing for enterprises. On the one hand, it brings SD-WAN migration closer to Cisco customers. On the other hand, two preliminary indicators (one-on-one conversations and Cisco's refusal to participate in an SD-WAN test) suggest enterprises should expect reduced throughput if they enable the SD-WAN capabilities on their routers.

Cisco's easy migration to SD-WAN
By including the SD-WAN code with IOS XE, Cisco will provide a migration path for the more than one million ISR/ASR edge routers in the field. There's been a lot of conversation about whether SD-WAN is going to kill router performance. Delivering SD-WAN code on the ISRs is Cisco's answer: routers are here to stay, but they'll morph into SD-WAN appliances.
Welcome to Agility City! Let me set the scene.

In the castle, the Wonderful Wizard orchestrates networks in beautiful and powerful ways. Point-to-point tunnel connections are heralded as "architectural wonders," which decades ago were called bridges with disdain.

Meanwhile, the Wicked Witch of the West brews a primordial potion of complexity that is hidden behind curtains of automated provisioning. Packets of information are heavily laden with unnecessary information and double encryption.
It almost makes you want Dorothy Gale to appear and click her ruby slippers: "There's no place like home. There's no place like home." If only we started talking about true networking and not the orchestration of bridges.
Cybercrime damage is projected to reach $6 trillion annually by 2021. That's creating lots of demand for security protection, estimated at over $1 trillion cumulatively between 2017 and 2021. As a result, an estimated 1,200 vendors are competing to provide enterprise-class cybersecurity products. So how do you go about choosing which solution to use?

There's no doubt cyberthreats are real. According to the Online Trust Alliance (OTA), the number of cyber incidents targeting businesses almost doubled, from 82,000 in 2016 to 159,700 in 2017, and because many incidents go unreported, the actual number for 2017 could well have exceeded 360,000.
Ready or not, the upgrade to an important internet security operation may soon be launched. Then again, it might not.

The Internet Corporation for Assigned Names and Numbers (ICANN) will meet the week of Sept. 17 and will likely decide whether to give the go-ahead on its multi-year project to upgrade the top pair of cryptographic keys used in the Domain Name System Security Extensions (DNSSEC) protocol, commonly known as the root zone key signing key (KSK), which secures the internet's foundational servers.
Changing these keys and making them stronger is an essential security step, in much the same way that regularly changing passwords is considered a practical habit for any internet user, ICANN says. The update will help prevent certain nefarious activities, such as attackers taking control of a session and directing users to a site that, for example, might steal their personal information.