Enterprises are investing in their networks at an accelerating rate. As legacy on-premises IT infrastructure gives way to hybrid cloud and virtualized environments, and an escalating data tsunami drives data center expansions, growing investments of time and money are raising the stakes ever higher. Meanwhile, end users' expectations for service keep rising, piling additional demands onto network operators and engineers who are already wrestling with migration challenges. Yet although the enterprise networking environment is changing rapidly, IT support teams still rely on the same network performance metrics to monitor their networks and judge whether service delivery is up to par. The problem is that they're using a one-dimensional tool to measure a subjective experience the tool was never designed to understand, much less help troubleshoot. It's like trying to tighten a screw with a hammer.
Vapor IO, the edge computing specialist that builds mini data centers for deployment at locations such as cell towers, has secured Series C financing, which the company says will help accelerate the rollout of its Kinetic Edge platform as a national network for edge colocation. Vapor IO has been developing a model for a distributed network of edge colocation sites built from shipping-container-sized micro modular data centers. The company had been working with Crown Castle, the nation's largest provider of shared wireless infrastructure, on an edge collaboration project under the name Project Volutus. Vapor IO has now acquired the assets of Project Volutus from Crown Castle and will offer it under the brand name The Kinetic Edge. It uses both wired and wireless connections to create a low-latency network of colocation sites, allowing cloud providers, wireless carriers and web-scale companies to deliver cloud-based edge computing applications from its data centers.
The free and open-source network monitoring software Nagios Core has a long and strong reputation, providing the base for other monitoring suites (Icinga, Naemon and OP5 among them) and boasting a history dating back to 2002, when it launched under the name NetSaint. For this review we tested Nagios Core version 4.4.2 for Linux, which monitors common network services such as HTTP, SMTP, POP3, NNTP and PING. There's a Windows port available as a plugin, but many users report it's unstable. The version we tested also tracks the usage of host resources such as processor load, memory and disk utilization.
Hardware requirements vary depending on the number and types of items being monitored, but generally speaking Nagios recommends a server with at least two to four cores, 4-8 GB of RAM and adequate storage for the intended application.
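To make the plugin model concrete, here is a minimal sketch of a Nagios-style service check in Python. It is a hypothetical example, not part of Nagios Core or its standard plugin pack, but it follows the plugin contract Nagios relies on: print a single status line and exit with 0 for OK, 1 for WARNING, 2 for CRITICAL, 3 for UNKNOWN.

```python
#!/usr/bin/env python3
"""Toy Nagios-style check plugin for an HTTP service (illustrative only)."""
import sys
import urllib.request

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def status_to_exit(http_status: int) -> int:
    """Map an HTTP status code to a Nagios plugin exit code."""
    if 200 <= http_status < 300:
        return OK
    if http_status < 500:
        return WARNING   # reachable, but not a clean success
    return CRITICAL

def check_http(url: str, timeout: float = 5.0) -> int:
    """Fetch the URL, print one Nagios-style status line, return exit code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            code = status_to_exit(resp.status)
            label = "OK" if code == OK else "WARNING"
            print(f"HTTP {label} - {url} returned {resp.status}")
            return code
    except Exception as exc:
        # Unreachable, timed out, or server error: CRITICAL
        print(f"HTTP CRITICAL - {url}: {exc}")
        return CRITICAL

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(check_http(sys.argv[1]))
```

Nagios Core would invoke such a script through a command definition and act on the exit code; warning/critical thresholds and performance data output are omitted here for brevity.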
The rise of the cloud settled the age-old debate over whether IT teams should choose an array of exceptional technologies from various providers or a fully integrated stack of (mostly unexceptional) applications from a single vendor. Thanks to cloud computing, you can have the very best applications, and the very best clouds, for the IT tasks at hand, without the headache of investing heavily in infrastructure and building it yourself. Here's a quick look at five ways the cloud and related technologies enable your IT team to launch, integrate, scale, and secure the full spectrum of applications.

1. You can pick the best cloud for the job
While businesses may end up running most applications in a single cloud, there are lots of reasons to diversify and adopt a multi-cloud strategy. The major cloud infrastructure providers have individual strengths, and a well-planned multi-cloud strategy lets you pick the platform that offers the best combination of technical features, pricing, and performance for each application. Examples of workloads that may run better on one hyperscale cloud than another include enterprise business applications, big data, and high-performance computing (HPC).
Cisco's announcement earlier this month that it will add Viptela SD-WAN technology to the IOS XE software running its ISR/ASR routers will be a mixed blessing for enterprises. On the one hand, it brings SD-WAN migration closer for Cisco customers. On the other hand, two preliminary indicators (one-on-one conversations and Cisco's refusal to participate in an SD-WAN test) suggest enterprises should expect reduced throughput if they enable the SD-WAN capabilities on their routers.

Cisco's easy migration to SD-WAN
By including the SD-WAN code with IOS XE, Cisco provides a migration path for the more than one million ISR/ASR edge routers in the field. There has been a lot of debate about whether SD-WAN will kill the router. Delivering SD-WAN code on the ISRs is Cisco's answer: routers are here to stay, but they'll morph into SD-WAN appliances.
Over the last few years, I have been spread across so many technologies that I had forgotten where my roots began: the data center. So I decided to delve deeper into what's current and headed straight to Ivan Pepelnjak's Ethernet VPN (EVPN) webinar, hosted by Dinesh Dutt. I have known of the distinguished Dinesh since he was chief scientist at Cumulus Networks, and to me he is a leader in this field. Before reading his book on EVPN, I gave Dinesh a call to exchange views about the origins of EVPN. We talked about the practicalities and limitations of the data center. Here is an excerpt from our discussion.
Welcome to Agility City! Let me set the scene. In the castle, the Wonderful Wizard orchestrates networks in beautiful and powerful ways. Point-to-point tunnel connections are heralded as "architectural wonders," though decades ago they were called bridges, with disdain. Meanwhile, the Wicked Witch of the West brews a primordial potion of complexity, hidden behind curtains of automated provisioning. Packets are heavily laden with unnecessary information and double encryption.
It almost makes you want Dorothy Gale to appear and click her ruby slippers: "There's no place like home. There's no place like home." If only we would start talking about true networking and not the orchestration of bridges.
In a world with vastly increasing amounts of data and dependency on the Internet, digital transformation is now paramount to the long-term survival of enterprises. But what will digital transformation involve in the years ahead? A crucial task for companies will be ensuring they have enough interconnection bandwidth to handle future business demands. Interconnection bandwidth is the capacity to support direct, private data exchange across a variety of hubs and interconnection points within a network, bypassing the public Internet. These private connections matter because they offer scalability, security, and direct connections to partners and service providers that companies cannot get otherwise.
Starting in 2019, NASA will begin using laser communications technology to "enable greater return of science data from space." The reason: laser is more bandwidth-efficient than classic radio for data delivery, and it's more secure, NASA says in a newly released explainer of its plans. Laser signals from space will be much harder to hack than old-school radio because the beam is more concentrated, the agency says on its website. The higher frequencies also provide more bandwidth, which is important for space data crunching. And laser equipment is lighter, allowing for longer missions, among other benefits.
Getting wide-area network links up and running securely and quickly, with minimal IT irritation, has always been Cisco Meraki's strong suit. Equipping customers tasked with securely supporting more cloud applications and mobile devices, with ever more throughput and the latest connectivity options, is the chief goal behind a raft of new model additions to Cisco Meraki's MX and Z branch-office security appliances.
Meraki's MX family supports everything from SD-WAN and Wi-Fi features to next-generation firewall and intrusion prevention in a single package.
If you still rely on a script-driven approach for service provider or enterprise networks, you are going to hit limits. There is only so far you can go alone: scripts leave a gap where higher-layer modeling and a database should be. Production-grade service provider and enterprise networks require a production-grade automation framework. In today's environment, the network infrastructure acts as the core centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially, and it now heavily influences critical business functions in both service provider and enterprise environments.
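The gap between ad-hoc scripts and a model-driven framework can be illustrated with a small sketch: desired state lives in a structured model, and the framework computes a diff against actual device state instead of pushing raw commands. The device names and attributes below are hypothetical, and real frameworks would back the model with a database and render vendor-specific configuration from the diff.

```python
# Desired state: a structured model of what each device should look like.
desired = {
    "leaf1": {"hostname": "leaf1", "ntp": "10.0.0.1", "mtu": 9214},
    "leaf2": {"hostname": "leaf2", "ntp": "10.0.0.1", "mtu": 9214},
}

# Actual state, as discovered from the devices (drifted in two places).
actual = {
    "leaf1": {"hostname": "leaf1", "ntp": "10.0.0.1", "mtu": 1500},
    "leaf2": {"hostname": "leaf2", "ntp": "10.9.9.9", "mtu": 9214},
}

def plan(desired: dict, actual: dict) -> dict:
    """Return the per-device changes needed to converge actual -> desired."""
    changes = {}
    for device, want in desired.items():
        have = actual.get(device, {})
        delta = {k: v for k, v in want.items() if have.get(k) != v}
        if delta:
            changes[device] = delta
    return changes

print(plan(desired, actual))
# {'leaf1': {'mtu': 9214}, 'leaf2': {'ntp': '10.0.0.1'}}
```

The point of the model is that the same `plan` step works for every device and every attribute, whereas a per-task script encodes the change and the target in one brittle artifact.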
Imagine you are in a crowded ER, and doctors are running from room to room. In the waiting area, patients are checking in via an online portal, and hospital staff are quickly capturing their confidential medical and insurance information on mobile devices. You look down the hall, where admitted patients are receiving treatment through Wi-Fi-enabled biomedical devices, and some are even streaming their favorite show on Netflix while they wait. In a hospital, hundreds of individuals are conducting critical tasks at any given moment, and they rely on thousands of connected devices to get the job done. Which raises the question: what happens if that network fails?
To the surprise of many not living it every day, a robust, resilient, and reliable network is one of the most important drivers of success in today's business world. Organizations must continuously improve their network infrastructure to meet organizational requirements and deliver the experiences their customers expect. Recent changes in the network market mean this continuous improvement needs to go beyond optimization and extend all the way to re-architecting the network. The forces driving network re-architecture are twofold: new demands on the network, and innovations in network technology and solutions. The new demands stem from enterprise-wide digital transformation initiatives such as cloud, SD-WAN, machine learning and AI, IoT, and edge computing. While these new requirements offer a host of business benefits, they also introduce disruptive complexity, driving the need to simplify and accelerate the way all IT services are delivered.
Network traffic is, by nature, often unbalanced. For example, a client that requests video on demand may receive ten times more bandwidth than it sends for that service. Likewise, most web applications are very one-sided, with the bulk of the traffic flowing from server to client. The opposite is true for many backup applications, where the bulk of the traffic originates at the client and terminates at the server. The United States is like your network: suffering from a trade imbalance. For every packet we ship to a foreign network, we receive four or five in return. Just as there are barriers to trade, we apply barriers to our inbound traffic. For most of us, the barrier is the actual size of our Internet service interface. Packets queue up and drop at our carrier's equipment before our equipment ever sees them. If you purchase a 50 Mbps download service, any packets that arrive at a faster rate (even for a fraction of a second) will be dropped without prejudice. This is a barrier, restriction and tariff on your services that limits your business. The only solution: buy more bandwidth!
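The carrier-side drop behavior described above is commonly implemented as a token-bucket policer: the bucket refills at the purchased rate, and packets that arrive when the bucket is empty are dropped. Here is a rough Python sketch; the rate and burst values are illustrative, not any carrier's actual configuration.

```python
class TokenBucketPolicer:
    """Toy model of a carrier policer: forward if tokens remain, else drop."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.burst = burst_bytes      # bucket depth (burst allowance)
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of the last packet

    def allow(self, now: float, packet_bytes: int) -> bool:
        # Refill proportionally to elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True               # forward the packet
        return False                  # drop: arriving faster than the rate

# A 50 Mbps service with a 64 KB burst allowance.
policer = TokenBucketPolicer(rate_bps=50e6, burst_bytes=64_000)

# A sub-second burst of 1500-byte packets, all arriving at time 0:
sent = sum(policer.allow(0.0, 1500) for _ in range(100))
print(f"{sent} of 100 packets in the burst were forwarded")
```

With these numbers only the first 42 packets fit the 64 KB burst allowance; the remaining 58 are dropped even though the average rate over a full second would be far below 50 Mbps, which is exactly the sub-second behavior the paragraph describes.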
Microsoft and Salesforce have separately announced plans to release key software products as open source for anyone to use in their data centers. Microsoft plans to release its Open Network Emulator (ONE), a simulator of its entire Azure network infrastructure that it uses to find and troubleshoot problems before they cause network outages. The announcement was made by Victor Bahl, a distinguished scientist with Microsoft Research, on a Microsoft podcast.
Whether users are looking to stabilize cloud-connected resources, better manage remote networks or simply upgrade a timeworn wide-area environment, software-defined WAN (SD-WAN) technology is what's on the purchasing menu. The proof: this segment of the networking market will hit $4.5 billion by 2022, growing at a 40.4% compound annual growth rate from 2017 to 2022. In 2017 alone, SD-WAN infrastructure revenues increased 83.3% to reach $833 million, according to IDC's recent SD-WAN Infrastructure Forecast.
A related report from researchers at the Dell'Oro Group predicts that revenue from SD-WAN software components, including controllers and virtual network functions, will grow almost twice as fast as revenue from hardware components. Over the next five years, SD-WAN software revenue will grow at a 41% compound annual growth rate, compared to 21% for hardware.
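For a sense of what those growth rates compound to, here is the arithmetic (the multiples are computed from the cited rates, not taken from the report itself):

```python
def compound_growth(cagr: float, years: int) -> float:
    """Total growth multiple after compounding `cagr` annually for `years` years."""
    return (1 + cagr) ** years

software = compound_growth(0.41, 5)   # 41% CAGR -> about 5.57x in five years
hardware = compound_growth(0.21, 5)   # 21% CAGR -> about 2.59x in five years
print(f"software grows {software:.2f}x vs hardware {hardware:.2f}x over five years")
```

So "almost twice as fast" in CAGR terms translates to software revenue multiplying more than twice as much as hardware revenue over the forecast period.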
The ability to network devices quickly and easily is critical in a hyper-connected world, and although it has been around for decades, DHCP remains an essential way to ensure that devices can join networks and are configured correctly. DHCP greatly reduces the errors made when IP addresses are assigned manually, and it stretches the supply of IP addresses by limiting how long a device can keep an individual address.
DHCP definition
DHCP stands for Dynamic Host Configuration Protocol, a network protocol used on IP networks in which a DHCP server automatically assigns an IP address and other configuration information to each host on the network so it can communicate efficiently with other endpoints.
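The lease mechanics described above can be sketched in a few lines: the server hands out addresses from a pool for a limited time, and expired leases return to the pool for reuse. This toy model covers only the address bookkeeping, not the actual DISCOVER/OFFER/REQUEST/ACK packet exchange defined by the protocol.

```python
import ipaddress
from typing import Optional

class LeasePool:
    """Toy DHCP address pool: time-limited leases, expired addresses reused."""

    def __init__(self, network: str, lease_seconds: int):
        self.free = [str(ip) for ip in ipaddress.ip_network(network).hosts()]
        self.lease_seconds = lease_seconds
        self.leases = {}  # mac -> (ip, expiry_time)

    def request(self, mac: str, now: float) -> Optional[str]:
        self.expire(now)
        if mac in self.leases:            # renew the client's existing lease
            ip, _ = self.leases[mac]
        elif self.free:
            ip = self.free.pop(0)         # hand out a fresh address
        else:
            return None                   # pool exhausted
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip

    def expire(self, now: float) -> None:
        """Return addresses from expired leases to the free pool."""
        for mac, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[mac]
                self.free.append(ip)

pool = LeasePool("192.168.1.0/30", lease_seconds=3600)
print(pool.request("aa:bb:cc:dd:ee:01", now=0))     # 192.168.1.1
print(pool.request("aa:bb:cc:dd:ee:02", now=0))     # 192.168.1.2
print(pool.request("aa:bb:cc:dd:ee:03", now=0))     # None: pool exhausted
print(pool.request("aa:bb:cc:dd:ee:03", now=4000))  # 192.168.1.1 (lease expired, address reused)
```

The last call shows the "stretching" behavior the article describes: once the one-hour leases lapse, the same addresses serve new clients.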
A CIO of a retail chain recently issued an edict that all requirements for networking must be stated as business needs, including in all RFIs, RFQs and internal proposals. No networking protocols, features or jargon are permitted. At first glance this seems like a relatively simple instruction, but the IT staff struggled to articulate business needs and map them to network capabilities. The CIO is imposing a discipline of asking "why" three times to separate the inertia of past choices from what the business needs today. I believe the CIO is wise to connect business needs to network capabilities.

Speaking the language of the industry
Networking professionals are being left out of the narrative. We are deemed a necessary evil rather than a partner in producing products and services. We are the people who slow things down, make things harder and budget for things people neither understand nor value. Becoming part of the narrative requires that each networking professional understand and anticipate the business's needs. In fact, I would argue that public cloud, bring-your-own-device and shadow IT are the result of networking not being part of the narrative.
As enterprises endeavor to expand domestic and global footprints, agile network connectivity across geographies remains an ongoing challenge. In particular, ensuring that data shared over these networks is protected from unauthorized access is a primary directive in today's evolving cyber threat landscape. These often-contradictory demands call for IT decision makers to invest in innovation that delivers network flexibility and agility without compromising security, productivity or performance. This challenge raises a simple question: how can a WAN deliver the flexibility and agility necessary to help an organization grow without increasing exposure to data breaches and other security problems? After all, if the cost of convenience is increased network vulnerability, can it be considered a sound approach?