A recent Gartner report on network performance monitoring and diagnostics (NPMD) estimated the market at a whopping $2.1 billion, growing at a compound annual growth rate (CAGR) of 15.9 percent, with more growth in sight. Wow. So what will drive this growth, and why? The answer: new approaches to harvesting network data using sophisticated big data analytics techniques combined with cloud computing and machine learning technologies. This perfect confluence of technologies is poised to redefine the conventional infrastructure management market. Central to this shift is the use of analytics technologies and strategies to extract new insights from data produced by and collected from the network to drive business value.
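To get a feel for what a 15.9 percent CAGR implies, here is a quick compounding sketch. The $2.1 billion base and growth rate come from the report above; the five-year horizon is purely illustrative:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Project a market size forward `years` years at a compound annual growth rate."""
    return base * (1 + cagr) ** years

# NPMD market: $2.1B base, 15.9% CAGR (horizon chosen for illustration only).
for year in range(1, 6):
    print(f"Year {year}: ${project(2.1, 0.159, year):.2f}B")
```

At that rate the market roughly doubles in about five years, which is consistent with the "more growth in sight" framing.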
Traditional networking architectures over the past two decades or so prescribe that the hub of the network be built around a specific location, such as a data center or a company’s headquarters building. This location houses most of the equipment for compute, storage, communications, and security, and it is where enterprise applications are traditionally hosted. For people in branch and other remote locations, traffic is typically backhauled to this hub before going out to other locations, including to the cloud. Though that formula has been standard operating procedure for many years, it doesn’t fit the way many enterprises work today. For one thing, there has been a major migration to the cloud. The enterprise applications that run the business are now hosted in cloud platforms such as Amazon Web Services or Microsoft Azure, either as private applications or as SaaS apps such as Office 365 and Salesforce. In fact, companies often use multiple cloud platforms these days.
In comedy, unexpected actions make for good fun. The pratfalls. The eye pokes. But in networking, the unexpected is hardly funny. And yet it was the antics of the Three Stooges that came to mind as I reviewed the results of Cato Networks’ latest networking survey. The survey canvassed more than 700 enterprise IT buyers from around the globe about the drivers and challenges facing their networking and security deployments. What we observed serves as both a promise and a warning for anyone considering SD-WAN. SD-WAN is supposed to be the answer to network complexity. And like any good slapstick setup, we can almost see how SD-WAN meets that objective. As an overlay aggregating traffic from MPLS, broadband, and any other underlying data transport, SD-WAN hides the complexity of building a network from multiple data transports. Policies provide the intelligence for SD-WAN to select the optimum network for each application, freeing IT from making those calculations and changes manually, if that was even possible.
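The policy-driven path selection described above can be sketched as a simple rule lookup. This is a toy model, not any vendor's actual implementation; the application names, transports, and link metrics are all invented for illustration:

```python
# Toy SD-WAN path selection: pick the best transport per application policy.
# Link metrics and policies are hypothetical examples.

links = {
    "mpls":      {"latency_ms": 20, "loss_pct": 0.1},
    "broadband": {"latency_ms": 45, "loss_pct": 1.5},
    "lte":       {"latency_ms": 60, "loss_pct": 2.0},
}

# Per-application policy: maximum tolerable latency and packet loss.
policies = {
    "voip":   {"max_latency_ms": 30,  "max_loss_pct": 0.5},
    "backup": {"max_latency_ms": 200, "max_loss_pct": 5.0},
}

def select_link(app: str) -> str:
    """Return the lowest-latency link that satisfies the app's policy."""
    policy = policies[app]
    candidates = [
        name for name, m in links.items()
        if m["latency_ms"] <= policy["max_latency_ms"]
        and m["loss_pct"] <= policy["max_loss_pct"]
    ]
    # Prefer a policy-compliant link; fall back to best effort if none qualify.
    pool = candidates or list(links)
    return min(pool, key=lambda name: links[name]["latency_ms"])

print(select_link("voip"))    # mpls: the only link meeting voip's tight policy
print(select_link("backup"))  # all links qualify; lowest latency wins
```

A real SD-WAN controller recomputes this continuously as link conditions change, which is exactly the calculation IT would otherwise have to make by hand.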
Reno-based analyst firm Synergy Research Group released a review of the 2017 cloud market on January 4. The report, which estimated the total scope of the industry at $180 billion, gauged the year-over-year growth rate of infrastructure as a service (cloud hosting) and platform as a service (combined cloud hardware and software) at 47%. Such astronomical growth in cloud infrastructure is fueling the growth of data centers. The extent to which cloud is becoming the new form of infrastructure cannot be overstated, with Cisco predicting that 95% of data center traffic will flow through cloud infrastructure by 2021.
An important side effect of digital transformation is that your network is likely to become a digital crime scene. As such, it needs a systematic approach to identify the culprit. In this analogy, a crime can be equivalent to a network outage or gray failure. And this is where intent-based networking (IBN) can help. The general approach in solving a crime like this is to collect as much information as possible, as soon as possible, and to narrow down the pool of suspects. So, let’s see via an example what role IBN plays in all this.

Digital crime scene profiling
Without “intent” you don’t even know that a crime has been committed. Finding traces of blood in a room of a blood bank or hospital is expected. Finding traces of blood in the home of a missing person is a different matter. But without intent it’s hard to distinguish a blood bank from a home. In a similar manner, dropping a packet from an intruder or forbidden traffic source is a good thing. Dropping a packet from a customer because of a misconfigured ACL is a bad thing. Intent helps you differentiate the two.
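A minimal sketch of that idea: classify each observed packet drop against declared intent, so expected drops (a deny rule doing its job) are separated from unexpected ones worth investigating. The subnet and event values here are hypothetical:

```python
import ipaddress

# Declared intent: traffic from these sources SHOULD be dropped (hypothetical).
intent_deny_sources = {"10.0.9.0/24"}

def classify_drop(src_ip: str) -> str:
    """Label a dropped packet as expected (matches intent) or unexpected."""
    addr = ipaddress.ip_address(src_ip)
    for net in intent_deny_sources:
        if addr in ipaddress.ip_network(net):
            return "expected"    # the "blood bank": drops here are by design
    return "unexpected"          # the "missing person's home": investigate

print(classify_drop("10.0.9.17"))    # expected
print(classify_drop("192.168.1.5"))  # unexpected - perhaps a misconfigured ACL
```

With intent captured explicitly, the monitoring system can suppress the expected drops and surface only the unexpected ones as evidence.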
In 2010, pop singer Katy Perry released a song called Firework. Some of its lyrics are: “Cause baby you're a firework, come on show 'em what you're worth, make 'em go oh, oh, oh.” In addition to being one of my favorite Katy Perry songs, it has always reminded me of the firework that was Riverbed and its charismatic and often outspoken CEO, Jerry Kennelly.

Riverbed was the face of WAN optimization
Riverbed was indeed a firework, as it hit the market with a bang and became the face of WAN optimization. (Note: Riverbed is a client of ZK Research.) Riverbed wasn’t the first vendor in this market — that was Packeteer — but Riverbed evangelized it and became synonymous with the technology.
"I am all about useful tools. One of my mottos is 'the right tool for the right job.'" –Martha Stewart

If your "right job" involves wrangling computer networks and figuring out how to do digital things effectively and efficiently, or diagnosing why digital things aren't working as they're supposed to, you've got your hands full. Not only does your job evolve incredibly quickly, becoming ever more complex, but whatever tools you use need frequent updating and/or replacing to keep pace. That's what we're here for: to help in your quest for the right tools.
We've done several roundups of free network tools in the past, and since the last one, technology has, if anything, sped up even more. To help you keep up, we've compiled a new shortlist of seven of the most useful tools that you should add to your toolbox.
Networking used to be all about specialized “boxes,” but that era is fading fast. By a specialized box, I mean a piece of hardware built to perform an individual function. Physical firewalls, routers, servers, load balancers, etc., are all examples of these different boxes, and they are still everywhere. But new technology is seriously disrupting the old ways of doing things. Virtualization has made it possible to separate the software functionality of all those boxes from the specific appliance-type hardware in which it resides. Network functions virtualization (NFV) software can replicate an appliance’s function on a more cost-effective commodity server, which is easy to obtain and deploy and can hold the software for numerous functions at once. People like the improved simplicity, cost, agility, and speed that come with this change.
It’s fair to say that there has never been a bigger driver of network evolution than the cloud. The reason is that the cloud is a fundamentally different compute paradigm, enabling applications, data, and architecture changes to be made seemingly instantly. Cloud-native infrastructure is what enables mobile app developers to roll out new versions daily if they so choose.

The cloud is network-centric
Another fact about the cloud is that it is a network-centric compute model, so a poorly performing network leads to equally poorly performing applications. A lack of network agility means DevOps teams need to sit around twiddling their thumbs while network operations make changes to the network.
The technology that powers businesses is evolving faster than ever before, allowing us to do more than we ever thought possible. Things that were once only seen in science fiction movies are actually coming to life. One of these areas is the field of artificial intelligence (AI). We’re on the verge of having machines diagnose cancer, map out the universe, take over dangerous jobs, and drive us around. The downside to this rapid evolution has been a rise in complexity. Putting together the infrastructure and software to power AI-based systems can often take months of building, tuning, and tweaking before everything runs optimally. Compounding the difficulty is that AI infrastructure is often deployed by data scientists who do not have the same level of technical acumen as the IT team.
The Open Compute Project began in 2011 when Facebook published the designs of some homebrew servers it had built to make its data centers run more efficiently. Facebook hoped that other companies would adopt and adapt its initial designs, pushing down costs and improving quality – and they have: Sales of hardware built to Open Compute Project designs topped $1.2 billion in 2017, double the previous year, and are expected to reach $6 billion by 2021.
Those figures, from IHS Markit, exclude hardware spending by OCP board members Facebook, Intel, Rackspace, Microsoft and Goldman Sachs, which all use OCP to some degree. The spend is still a small part of the overall market for data-center systems, which Gartner estimated was worth $178 billion in 2017, but IHS expects OCP’s part to grow 59 percent annually, while Gartner forecasts that the overall market will stagnate, at least through 2019.
As long as I have been an industry analyst, network engineers have tried to build multifunction boxes capable of addressing a wide range of network functions. These all-purpose network boxes have been lost to history as single-function platforms optimized for network performance (e.g., routing or WAN optimization) dominated the market. The branch network is poised to benefit from advances in software networking that collapse all network functions onto a single platform — the software-defined branch (SD-Branch). A total addressable market (TAM) analysis of the SD-Branch market starts with understanding the total spend on branch networking hardware and software. Worldwide spending on routers, WAN optimization, SD-WAN, network security, Wi-Fi, and Ethernet switches at branch locations is approximately $15 billion, according to Doyle Research. (Disclosure: I’m the principal analyst at Doyle Research.)
There’s no question that Wi-Fi networks continue to grow in importance for most companies. Workers rely on Wi-Fi to do their jobs, students are being educated on mobile tablets, doctors are pulling up records at patients' bedsides, and millions of Internet of Things (IoT) devices are now being connected to Wi-Fi. Wireless is no longer the connection of convenience — it’s mission critical, and a poor-performing wireless network means a key process is likely to fail.
Wi-Fi troubleshooting a continued source of pain for network engineers
If the wireless network is so critical, why aren’t there better Wi-Fi troubleshooting tools? A recent ZK Research survey about Wi-Fi troubleshooting uncovered just how difficult it is, and surfaced some interesting data points.
The notion of disaggregation – separating the operating system and applications from the underlying hardware – has always been a conundrum for Cisco. In a nutshell, why would the company risk losing all of the millions of dollars in development and the key networking features tied up in current Cisco hardware and software packages? But in the new world of all-things-software in which Cisco plans to be king, the disaggregation strategy is gaining momentum.
This week the company took things a step further, announcing a variety of disaggregation steps that enterprise and service provider customers could be interested in.
Last month it was Catalyst 9000 switches, and this month it's routers. Yes, my project engineering staff have fielded a surprising number of inquiries regarding routers.

Routers vital to enterprise networks
When looking at distribution for an enterprise network, well-planned routing is the key to success. Routers can be absolutely vital for networks, as they connect a large number of worksites within one large, umbrella-like network. At the enterprise level, they provide redundant paths, connect ISPs, and can translate data between different media.
It’s not good enough to run cables and just hope they work, or to simply declare it all good if they provide a working network connection to the computer or device. You should double-check by testing or qualifying the cable runs before you call the job complete. Use a tester to check that all the cable pairs are intact and correctly wired, and to see if the cable can truly handle the data rates you desire. Network testers can also be a lifesaver when troubleshooting network issues or making changes to the wired network. They could, for instance, tell you which cable pairs you might have mixed up when terminating the cable. Or, if you’re working on someone else’s network install that didn’t document or label any cable runs, you can use the tester to help identify where the cables are running.
Cisco is well known for many things. It’s the world’s largest networking vendor; it has typically been the bellwether for IT spending, as it has often predicted upticks or downticks in spending before other vendors; and its ability to catch market transitions has been remarkable, which is why it holds a market-leading position in so many technology areas adjacent to the network. I’ve always felt that one of the more under-appreciated attributes of Cisco is the work its corporate social responsibility (CSR) group does in trying to solve some of the globe’s biggest problems. Cisco has been very active at the World Economic Forum held annually in Davos, Switzerland, where world leaders, celebrities, and business leaders gather to discuss issues such as ending hunger and creating greater equality.
Troubleshooting WiFi problems has been the bane of the network engineer’s existence for nearly a decade. So often do these problems go undiagnosed that clients have since stopped reporting them; bad WiFi is chalked up as just part of everyday life. Yet the enterprise WLAN has become a critical part of an ever-growing ecosystem of both end-user and IoT devices. Add to that the technology advancements in 802.11, and maintaining a reliable WiFi network is nearly out of reach of the average WLAN engineer. To solve this conundrum, WLAN vendors have a long history of attempting to address the problem with hardware sensors and detailed active site surveys.
Few of us think about filters until we take our car in for its 50,000-mile service. Looking at the service invoice, there’s an air filter, oil filter, fuel filter, cabin air filter, transmission filter… Sheesh, how many filters does this thing have?! We may also think about them at family dinners. All of us have at least one relative who could use a filter. I’m looking at you, Aunt Sondra. But most of the time, filters are out of sight, out of mind. Most people are gobsmacked when they discover we carry a dozen or more around in our pockets – and they’re not for pocket lint, Snapchat or Instagram.

The basics of RF filters
Filters, like antennas, are an increasingly important part of the networking mix.