Throughput testing has long been regarded as the best way to identify great Wi-Fi products, validate WLAN designs and troubleshoot user Wi-Fi issues. It isn't. A Wi-Fi throughput test generates a single data point under a specific scenario in a highly dynamic environment. That's it. Today's enterprise networks demand a lot more than that.

It's tempting, for example, to use Wi-Fi throughput tests to evaluate vendor equipment by determining the maximum TCP data rate (or speed) that, say, an access point can achieve with one or more concurrently connected client devices. But these tests don't really reflect reality, because you won't see how that equipment measures up until the network is fully loaded and deployed.
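If you do run throughput tests, it helps to treat them as repeated samples rather than a single verdict. Below is a minimal sketch that drives iperf3 from Python to collect several TCP throughput samples and report their spread; it assumes iperf3 is installed and an iperf3 server is already running at the placeholder address shown.

```python
# Collect repeated iperf3 TCP throughput samples instead of one data point.
import json
import statistics
import subprocess

SERVER = "192.168.0.10"   # placeholder address of the iperf3 server
SAMPLES = 5

rates = []
for _ in range(SAMPLES):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "10", "-J"],   # -J = JSON output
        capture_output=True, text=True, check=True).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    rates.append(bps / 1e6)                           # convert to Mbps

print(f"mean: {statistics.mean(rates):.1f} Mbps, "
      f"stdev: {statistics.stdev(rates):.1f} Mbps over {SAMPLES} runs")
```

Even a handful of runs like this will show how much a single-number result can swing in a dynamic RF environment.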
To solidify its position at the center of the industrial internet of things (IIoT), GE Digital is adding features to its Predix platform as a service (PaaS) that will let industrial enterprises run predictive analytics as close as possible to data sources, whether those are pumps, valves, heat exchangers, turbines or even machines on the move.

The main idea behind edge computing is to analyze data in near real time, optimize network traffic and cut costs. At its annual Minds + Machines conference this week in San Francisco, GE Digital, the software arm of industrial conglomerate GE, is offering an array of new applications and features designed to run at the network edge and let companies plan service times and predict equipment failure more efficiently and precisely.
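As a concrete illustration of that idea, here is a toy sketch of edge-side analytics: readings are summarized locally, and only compact summaries and urgent alerts are sent upstream. Everything here (thresholds, the uplink stub, the simulated sensor) is an invented example, not GE's Predix edge software.

```python
# Edge-side filtering: raw samples stay local, the cloud gets summaries.
import random
import statistics

def send_upstream(kind, payload):
    print(f"uplink [{kind}]: {payload}")    # stub for the cloud connection

batch = []
for _ in range(600):
    reading = random.gauss(100, 2)          # simulated pump pressure, psi
    if reading > 106:                       # urgent: forward immediately
        send_upstream("alert", f"{reading:.1f} psi")
    batch.append(reading)
    if len(batch) == 60:                    # e.g., once a minute at 1 Hz
        send_upstream("summary", {
            "mean": round(statistics.mean(batch), 2),
            "max": round(max(batch), 2),
            "n": len(batch),
        })
        batch = []                          # 60 samples became 1 message
```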
Have you ever wondered how a hacker breaks into a live system? Would you like to keep a potential attacker occupied so you can gather information about him without risking a production system? Would you like to detect immediately when an attacker attempts to log in to your system or retrieve data?

One way to see and do all of those things is to deploy a honeypot: a system on your network that acts as a decoy and lures potential attackers the way honey lures bears. Honeypots contain no live data or information, though they may contain false information. A honeypot should also prevent the intruder from reaching protected areas of your network.
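To make the idea concrete, here is a minimal sketch of a low-interaction honeypot in Python: it pretends to be an SSH service, records every connection attempt to a log file, and never grants access. The port and banner are arbitrary choices for illustration.

```python
# honeypot.py: a minimal low-interaction honeypot sketch.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222          # listen on all interfaces
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"   # fake service banner

def log(addr, data):
    stamp = datetime.datetime.now().isoformat()
    with open("honeypot.log", "a") as f:
        f.write(f"{stamp} {addr[0]}:{addr[1]} {data!r}\n")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(BANNER)          # look like a real service
            conn.settimeout(10)
            try:
                data = conn.recv(1024)    # capture whatever the attacker sends
            except socket.timeout:
                data = b""
            log(addr, data)               # record the attempt for analysis
```

Pointing the decoy at an unused address and watching the log fill up is often the quickest way to see how noisy automated attacks really are.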
From an infrastructure perspective, Fidelity Investments uses a combination of private cloud hosted in company data centers and multiple public cloud platforms, which raises the question: how do you manage such a hybrid infrastructure?

One key is being flexible, says Maria Azua Himmel, senior vice president of distributed systems at the 71-year-old multinational with $2.13 trillion in assets under management. Azua is implementing strategies among Fidelity's application developers to ensure that when new apps are built, they can run in almost any environment, whether that's one of the public clouds the company uses or its own data centers. To do this, Azua advocates application containers and software-defined infrastructure that can be controlled via application programming interfaces (APIs).
The cloud revolution began in the Linux and Unix world, and for a long time the cloud wasn't a welcoming environment for workloads that run on Windows Server.
To understand where we are going, we first must understand where we have been. This applies as much to computers and computer networking as it does to the history of nations.

With that in mind, we're taking a slow (somewhat meandering) stroll through the history of how computers talk to each other. Last time, we talked a bit about dial-up Bulletin Board Systems (BBSs), popular through the 1980s and the bulk of the 1990s.
Today, I’d like to talk about one of the most influential, but rarely discussed, networking protocol suites: PARC Universal Packet (PUP).
Earlier this year, Cisco surprised many industry watchers when it forked out a cool $3.7 billion to acquire AppDynamics, about twice the valuation the company had going into its IPO. Most people know Cisco as the de facto standard and market leader in networking. AppDynamics lives higher up the stack, providing a view into how applications are performing by collecting data from users, applications, databases and servers.

One might surmise that Cisco will use AppDynamics to go after a different kind of buyer, and that assumption is correct. AppDynamics paves the way for Cisco to have a meaningful discussion with lines of business, application developers and company leaders. However, it would be wrong to think AppDynamics has nothing to offer Cisco's core customers, network engineers; it can provide an equal amount of value to that audience.
The ifconfig and netstat commands are incredibly useful, but there are many other commands that can help you see what's up with your network on Linux systems. Today's post explores some very handy commands for examining network connections.

ip command
The ip command shows much of the same information you'll get from ifconfig. Some of it appears in a different format (e.g., "192.168.0.6/24" instead of "inet addr:192.168.0.6 Bcast:192.168.0.255"), and while ifconfig is better for packet counts, the ip command has many useful options.

First, here's the ip a command, which lists information on all network interfaces.
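Beyond interactive use, the ip command is also easy to script, because modern iproute2 can emit JSON. The sketch below, which assumes an iproute2 release recent enough to support the -j/-json flag, prints each interface's state and addresses, the same information ip a shows.

```python
# List interface names, states and addresses by parsing 'ip -j addr'.
import json
import subprocess

raw = subprocess.run(["ip", "-j", "addr"], capture_output=True,
                     text=True, check=True).stdout

for iface in json.loads(raw):
    name = iface["ifname"]
    state = iface.get("operstate", "UNKNOWN")
    addrs = [f'{a["local"]}/{a["prefixlen"]}'
             for a in iface.get("addr_info", [])]
    print(f"{name:10} {state:8} {', '.join(addrs)}")
```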
Cisco today announced plans to acquire San Jose-based startup Perspica, a company that specializes in using machine learning to analyze streams of data. Cisco says it will integrate the Perspica technology into its AppDynamics product, which provides network and application monitoring and analytics.

One reason Cisco was attracted to Perspica, the company says, is its ability to analyze data in real time. Processing data as it's created, or very soon afterward, shortens the time it takes end users to gain insights from that data. "Perspica is known for its stream-based processing with the unique ability to apply machine learning to data as it comes in instead of waiting until it's neatly stored," says Bhaskar Sunkara, VP of engineering at AppDynamics.
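To illustrate what "applying machine learning to data as it comes in" can look like in the simplest case, here is a toy stream scorer that updates its statistics one sample at a time (using Welford's online algorithm) and flags outliers without ever storing a batch. This shows the general stream-processing pattern only, not Perspica's actual technology.

```python
# Online anomaly scoring: no batch storage, statistics update per sample.
import math

class StreamScorer:
    """Tracks running mean/variance with Welford's online algorithm."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def score(self, x: float) -> float:
        """Z-score of x against the stream seen so far (0 if too early)."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / self.n)
        return abs(x - self.mean) / std if std > 0 else 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

scorer = StreamScorer()
for latency_ms in [12, 11, 13, 12, 14, 11, 95, 12]:  # simulated metric stream
    if scorer.score(latency_ms) > 3:                  # 3-sigma rule of thumb
        print(f"anomaly: {latency_ms} ms")
    scorer.update(latency_ms)                         # learn from each sample
```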
General Electric outfitted 650 British Petroleum (BP) oil rigs with sensors and software that report operational data to a central GE platform, which analyzes it to optimize how the rigs run, making them 2 to 4 percent more efficient than before.
GE CIO Jim Fowler credits most of the improvement not to workers but to machines. "Machines are telling people what to do more than people are telling machines what to do," Fowler said at a meeting of the Open Networking User Group (ONUG) this week in New York. The sensors and accompanying software platform produced incremental improvements in production and helped avoid downtime. Fowler calls it the merging of information technology and operational technology to create value.
As we enter the last quarter of 2017, business and IT executives are turning more of their attention to how they can use technology to accomplish their 2018 business objectives. We've compiled a list of five trends in cloud computing that strategic businesses will prepare for in the coming year.

1. Exponential growth in cloud services solutions
Software as a Service (SaaS) opened a flexible and financially attractive door for businesses and consumers to try early cloud services. The growth of infrastructure and platform as a service (IaaS and PaaS, respectively) has expanded the number of cloud solutions available in the public and private sectors. In 2018, we expect to see many more organizations take advantage of the simplicity and high performance the cloud promises.
Artificial intelligence is all the rage these days. There's broad consensus that AI is the next game-changing technology, poised to affect virtually every aspect of our lives in the coming years, from transportation to medical care to financial services. Gartner predicts that by 2020, AI will be pervasive in almost every new software product and service, and that the technology will be a top-five investment priority for more than 30 percent of CIOs.

One area where AI is already showing enormous value is wireless networking. Applying machine learning to WLANs can simplify operations, expedite troubleshooting and provide unprecedented visibility into the user experience.
A decade ago, one of the big knocks on Cisco was that its products were difficult to deploy and often even harder to manage. Over the past few years, though, particularly since Chuck Robbins took the helm as CEO, the company has been laser-focused on making its products simpler to operate.

It's important to understand that making products easy to use is actually much more difficult than making ones that are hard to use. As an example, Cisco's intent-based networking solution enables campus network operations to be fully automated, dramatically cutting the operational overhead required of network engineers.
This week, Cisco is bringing the benefits of intent-based networking to the data center with version 3.0 of its Application Centric Infrastructure (ACI) software-defined networking (SDN) product. The latest release of ACI increases network automation, simplifies operational tasks and makes it easier to secure agile workloads, regardless of whether they run in containers, in virtual machines, on bare metal or in on-premises data centers.
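To make the "intent-based" idea concrete, here is a toy sketch of the underlying pattern: the operator declares the desired state, and a controller loop computes the changes needed to get there. All of the names and structures below are illustrative assumptions, not ACI's actual API.

```python
# Intent-based pattern: declare *what* you want; a loop works out *how*.
desired_state = {
    "vlan10": {"ports": ["eth1", "eth2"], "isolated_from": ["vlan20"]},
    "vlan20": {"ports": ["eth3"], "isolated_from": ["vlan10"]},
}

def read_actual_state():
    # A real controller would poll switches here; this is a stub.
    return {"vlan10": {"ports": ["eth1"], "isolated_from": []}}

def reconcile(desired, actual):
    """Emit the changes needed to move the network toward the intent."""
    for name, spec in desired.items():
        current = actual.get(name, {"ports": [], "isolated_from": []})
        for port in set(spec["ports"]) - set(current["ports"]):
            print(f"apply: add {port} to {name}")
        for peer in set(spec["isolated_from"]) - set(current["isolated_from"]):
            print(f"apply: block traffic {name} <-> {peer}")

reconcile(desired_state, read_actual_state())
```

The point of the pattern is that the operator never writes the "apply" steps by hand; the controller rederives them whenever the declared intent or the observed network changes.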
Technologies like content delivery networks, cloud compute and storage, container schedulers, load balancers, web application firewalls, DDoS mitigation services and many more make up the building blocks that serve the online applications of organizations today. But the entry point to every one of those applications is an often-ignored bit of infrastructure: DNS. As the internet has mushroomed in size and traffic, DNS has adapted to become a critical factor in application delivery. Organizations that rely on content delivery networks (CDNs) can work with their DNS provider(s) to create a CDN strategy that best serves them and their customers.

CDN: the what and the why
A CDN's job is what it sounds like: delivering content such as images, video, HTML files and JavaScript from a network of distributed systems to end users. CDNs have been around for about as long as managed DNS companies. Akamai is usually considered the first serious CDN player, having risen to prominence during the first dot-com boom. CDNs generally deliver content over the web protocols HTTP and HTTPS, although there are occasional use cases, such as video delivery, where other protocols come into play.
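A quick way to see the DNS-to-CDN handoff in practice is to inspect a CDN-fronted hostname's records. The sketch below uses the dnspython library (version 2.x, which provides resolver.resolve); www.example.com is a placeholder, so substitute a hostname you know sits behind a CDN.

```python
# Inspect how DNS hands a hostname off to a CDN.
import dns.resolver

host = "www.example.com"   # placeholder; use a CDN-fronted hostname

# CDN-fronted names are typically CNAMEs pointing into the CDN's domain.
try:
    for rr in dns.resolver.resolve(host, "CNAME"):
        print(f"{host} is an alias for {rr.target}")
except dns.resolver.NoAnswer:
    print(f"{host} has no CNAME; it may be served directly")

# The A records show which edge addresses were returned for you; note
# the short TTLs CDNs use so they can steer traffic quickly.
answer = dns.resolver.resolve(host, "A")
print(f"TTL: {answer.rrset.ttl}s")
for rr in answer:
    print(f"edge address: {rr.address}")
```

Running this from two different networks will usually return different edge addresses, which is the DNS-based traffic steering the article alludes to.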
"I don't drink water, period. I live in Los Angeles, and the water I get from the tap is lackluster in terms of quality."

If someone said this, what's the first thought that comes to mind (assuming the person isn't wearing hemp clothing and sporting dreadlocks)? "Have you tried using a water filter?" is probably what you and I would ask, right?

After you read what follows, you'll say the same kind of thing when you hear someone claim, "VoIP isn't a good fit for our company because we only have one ISP in the area, and the connection is shaky at best." Your response will be, "Have you tried using SD-WAN to fix your call quality?"
The rise of SD-WANs has raised an interesting debate: Is the internet good enough to replace a private network for an enterprise WAN?

A decade ago, no one would even have considered this, but broadband speeds have increased and more workloads have moved to the cloud. SD-WAN technology also allows dynamic path selection, which protects the WAN from outages, so companies can use multiple broadband connections instead of something like MPLS.

Global SD-WAN vendor Aryaka recently examined this question in its "State of SD-WAN Connectivity" report (registration required), which measured and compared data transport between the same pairs of locations over both the internet and Aryaka's own global private network. The test consisted of transferring a randomly generated 100 KB file, capturing connect time and transfer time; application response time was then calculated as the sum of these two metrics.
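A rough version of that methodology is easy to reproduce. The sketch below separates connect time from transfer time for a single HTTPS fetch and sums them, the same way the report computes application response time; the host and path are placeholders for wherever your 100 KB test file lives.

```python
# Time TCP/TLS connect and file transfer separately, then sum them.
import http.client
import time

HOST, PATH = "speed.example.com", "/testfile-100kb.bin"   # placeholders

t0 = time.perf_counter()
conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.connect()                      # TCP + TLS handshake
t1 = time.perf_counter()

conn.request("GET", PATH)
body = conn.getresponse().read()    # pull the full file
t2 = time.perf_counter()
conn.close()

connect_ms = (t1 - t0) * 1000
transfer_ms = (t2 - t1) * 1000
print(f"connect:  {connect_ms:8.1f} ms")
print(f"transfer: {transfer_ms:8.1f} ms  ({len(body)} bytes)")
print(f"response: {connect_ms + transfer_ms:8.1f} ms")
```

Run it repeatedly, from several sites, at different times of day: the variance across runs is exactly what distinguishes internet transport from a private backbone.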
Network-based firewalls have become almost ubiquitous across US enterprises, thanks to their proven defense against an ever-increasing array of threats.

A recent study by network testing firm NSS Labs found that up to 80% of large US businesses run a next-generation firewall. Research firm IDC estimates the firewall and related unified threat management market was a $7.6 billion industry in 2015 and is expected to reach $12.7 billion by 2020.

What is a firewall?
A firewall acts as a perimeter defense tool that monitors traffic and either allows it or blocks it. Over the years firewall functionality has grown: most firewalls can now not only block a set of known threats and enforce advanced access-control-list policies, but also deeply inspect individual packets and test them to determine whether they're safe. Most firewalls are deployed as network hardware that processes traffic, paired with software that lets administrators configure and manage the system. Increasingly, software-only firewalls are being deployed in highly virtualized environments to enforce policies on segmented networks or in the IaaS public cloud.
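At its core, the classic packet-filtering behavior described above is first-match rule evaluation. The toy sketch below illustrates that idea only; real firewalls layer stateful connection tracking and deep packet inspection on top of it, and the rules shown are invented for the example.

```python
# First-match packet filtering: apply the first rule a packet satisfies.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Rules are evaluated top to bottom; the last rule is the default deny.
RULES = [
    ({"dst_port": 443, "protocol": "tcp"}, "allow"),    # HTTPS in
    ({"dst_port": 22, "protocol": "tcp",
      "src_ip": "10.0.0.5"}, "allow"),                  # SSH from one admin host
    ({}, "deny"),                                       # default: block
]

def evaluate(packet: Packet) -> str:
    for conditions, action in RULES:
        if all(getattr(packet, field) == value
               for field, value in conditions.items()):
            return action
    return "deny"

print(evaluate(Packet("203.0.113.9", 443, "tcp")))   # allow
print(evaluate(Packet("203.0.113.9", 22, "tcp")))    # deny
print(evaluate(Packet("10.0.0.5", 22, "tcp")))       # allow
```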
Old habits die hard, especially when it comes to buying network gear and accessories based on long-standing procurement practices. While it may seem easier to sustain the status quo, doing so can expose you to undue costs created by manufacturers' price-gouging practices.

Case in point: optical transceivers, which Gartner says account for 10 to 15 percent of enterprise network capital spending. That may not seem like a big budget buster, but the huge markups on optics are the subject of a new Gartner report titled "How to Avoid the Biggest Rip-Off in Networking."
Retail hasn't lost its "cool." It's just reinventing it. We know this, but Amazon's recent purchase of Whole Foods sure gave everyone a wake-up call: innovate or get left in the dust.

I know, you're in charge of IT, not corporate strategy, but bear with me. This ends up being an IT thing.

As Forbes recently detailed, when Amazon unveiled its plans for Whole Foods (which include decreased prices and the addition of industry-disrupting in-store technology), the market reacted. That same afternoon, several major brick-and-mortar retailers and grocery chains saw significant drops in their stock prices.
In a world of ever more complex systems, there is nothing more fragile than an attempt to make nothing fail. A system that assumes everything must work is a system designed to fail. The reality is that things will fail, and those failures cannot be allowed to bring down the whole business. As British Airways has amply demonstrated, a fragile system where everything fails at once is not good for business.

Many years ago I wrote some posts on the challenges of five nines in a distributed world. As systems deliver functionality through ever more combinations of services, microservices and networks, designing for failure becomes ever more important, and the foundation of designing for failure is assuming that failure will happen.
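One common way to put "assume failure will happen" into practice is a circuit breaker, which stops calling a failing dependency instead of letting its failures cascade. Here is a minimal sketch of the pattern; the thresholds and the flaky service are illustrative.

```python
# Circuit breaker: after repeated failures, fail fast with a fallback
# instead of hammering (and waiting on) a broken dependency.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after     # seconds before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback            # circuit open: fail fast
            self.opened_at = None          # half-open: try the call again
        try:
            result = fn(*args)
            self.failures = 0              # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback

breaker = CircuitBreaker()
def flaky_service():
    raise TimeoutError("dependency down")

for _ in range(5):
    print(breaker.call(flaky_service, fallback="cached response"))
```

The business keeps serving a degraded answer while the broken dependency recovers, which is precisely the opposite of the everything-must-work design the paragraph above warns against.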