You’ve seen the headlines: Artificial intelligence is a job killer. It will end humanity. At the very least, it will lead to social unrest. The truth is, no one really knows what changes AI will bring. What’s indisputable is that it’s already here and it’s getting more prevalent by the day. IDC predicts that worldwide revenues from cognitive and AI systems will grow 59.3% this year to $12.5 billion and will achieve a compound growth rate of 54.4% through 2020, when revenues will hit $46 billion.
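Those two IDC figures are mutually consistent: compounding $12.5 billion at 54.4% per year for the three years from 2017 to 2020 lands almost exactly on $46 billion. A quick sanity check (the 2017 base year is inferred from the article’s “this year”):

```python
# Sanity-check IDC's numbers: $12.5B in 2017 compounding at 54.4%/year.
revenue = 12.5                       # $B, 2017 forecast
for year in (2018, 2019, 2020):
    revenue *= 1.544                 # 54.4% compound annual growth
    print(f"{year}: ${revenue:.1f}B")
# 2018: $19.3B
# 2019: $29.8B
# 2020: $46.0B  -- matches IDC's 2020 figure
```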
As network pros rely more and more on SD-WAN to streamline connections among enterprise sites, the market for this technology will balloon from $225 million in 2015 to $1.19 billion by the end of this year, according to IDC. Over the next five years, SD-WAN sales will grow at a 69% compound annual growth rate, hitting $8.05 billion in 2021, according to IDC’s Worldwide SD-WAN Forecast, 2017–2021. As businesses adopt what IDC calls “third-platform” technologies such as cloud, mobile, big data and analytics, they put increased strain on the network. As organizations look to better connect their remote and branch office employees and provide them better quality network services, SD-WAN will continue to grow.
Much ink has been spilled on the topic of what constitutes true “line rate,” and in the past we’ve advocated offering traffic at, and only at, 100.00 percent of theoretical line rate to determine if frame loss exists. However, the distinction between 99.99 percent (which we used in these tests) and 100.00 percent load is not all that meaningful, especially at higher Ethernet speeds, for a couple of reasons. First, Ethernet is inherently an asynchronous technology, meaning each device (in this case, the device under test and the test instrument) uses one or more of its own free-running clocks, without synchronization. Thus, throughput measurements may just be artifacts of minor differences in the speeds of clock chips, not descriptions of a system’s fabric capacity.
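To see why that 0.01 percent distinction washes out: IEEE 802.3 allows each device’s transmit clock to drift by ±100 parts per million, so two free-running clocks can legitimately disagree by up to 200 ppm, twice the 100 ppm gap that separates a 99.99 percent load from a 100.00 percent one. A rough sketch of the arithmetic (the ±100 ppm tolerance is the standard Ethernet figure; the rest follows from it):

```python
# Clock-tolerance arithmetic behind the 99.99% vs. 100.00% load question.
LINE_RATE_BPS = 50e9        # nominal 50G Ethernet line rate
CLOCK_TOL_PPM = 100         # IEEE 802.3 clock tolerance: +/-100 ppm

max_skew = 2 * CLOCK_TOL_PPM / 1e6      # worst case between two devices
load_gap = 1.0000 - 0.9999              # gap between the two test loads

print(f"max clock skew between DUT and tester: {max_skew:.4%}")   # 0.0200%
print(f"99.99% vs. 100.00% load gap:           {load_gap:.4%}")   # 0.0100%
print(f"skew at 50G: {LINE_RATE_BPS * max_skew / 1e6:.0f} Mbps")  # 10 Mbps
# The permitted clock skew is twice the load gap, so "loss" at exactly
# 100.00% load can be an artifact of whose clock chip runs faster.
```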
The device under test for this project was the Cisco Nexus 9516 data center core switch/router, a 16-slot chassis equipped with 1,024 50-gigabit Ethernet interfaces and two supervisor modules. Cisco equipped the switch with its N9K-X9732C-EX line cards, each of which offers 32 ports of 100-gigabit, 64 ports of 50-gigabit, or 128 ports of 25-gigabit Ethernet capacity. The traffic generator/analyzer was Spirent TestCenter equipped with its 10/25/40/50/100G MX3 modules. The Spirent instrument has a measurement precision of +/- 2.5 nanoseconds.
How many ports are enough at the core of the data center? How does 1,024 sound? That’s the configuration we used to assess Cisco Systems’ Nexus 9516 data center core switch. In this exclusive Clear Choice test, we assessed the Cisco data center core switch with more than 1,000 50G Ethernet ports. That makes this by far the largest 50G test, and for that matter the highest-density switch test, Network World has ever published. As its name suggests, the Nexus 9516 accepts up to 16 N9K-X9732C-EX line cards, built around Cisco’s leaf-and-spine engine (LSE) ASICs. These multi-speed chips can run at 100G rates, for up to 512 ports per chassis; 50G rates for up to 1,024 ports; or 25G rates for up to 2,048 ports. We picked the 50G rate, and partnered with test and measurement vendor Spirent Communications to fully load the switch’s control and data planes.
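The chassis arithmetic behind those numbers is straightforward: 16 line cards, each with 32 physical 100G ports that the LSE ASICs can run as two 50G or four 25G lanes apiece. A quick check of the totals:

```python
# Port-count arithmetic for a fully loaded Nexus 9516 chassis.
LINE_CARDS     = 16   # N9K-X9732C-EX cards per chassis
PORTS_PER_CARD = 32   # physical 100G ports per card

# Each 100G port can run as 1x100G, 2x50G, or 4x25G.
for speed_gbps, lanes in [(100, 1), (50, 2), (25, 4)]:
    total = LINE_CARDS * PORTS_PER_CARD * lanes
    print(f"{speed_gbps:>3}G: {total:>4} ports")
# 100G:  512 ports
#  50G: 1024 ports
#  25G: 2048 ports
```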
Big data visualization tools are great for graphing past data to see patterns and trends, but predicting future trends is a tougher nut to crack, since past performance is often no indicator of future actions. But graph database technology vendor Franz Inc. is doing just that with the latest version of its graph visualization software. Gruff 7.0 adds a new feature called a “time slider” that serves as a kind of time machine for temporal graph analytics. The new feature is intended to allow both novices and graph experts alike to visually build queries and explore connections as they develop over time and uncover hidden relationships within time-based data.
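Gruff’s time slider is proprietary, but the core idea of temporal graph analytics is easy to sketch: every edge carries a timestamp, and the visible graph is just the subgraph whose edges fall at or before the slider’s cutoff. A minimal conceptual sketch (the data and function names are invented for illustration; this is not Gruff’s API):

```python
from datetime import date

# Toy temporal graph: each edge records when the relationship appeared.
edges = [
    ("alice", "acme_corp", date(2015, 3, 1)),
    ("bob",   "acme_corp", date(2016, 7, 15)),
    ("alice", "bob",       date(2017, 1, 10)),
]

def snapshot(edges, cutoff):
    """Return the subgraph visible when the 'time slider' sits at cutoff."""
    return [(a, b) for a, b, ts in edges if ts <= cutoff]

# Dragging the slider forward reveals connections as they develop.
print(snapshot(edges, date(2015, 12, 31)))  # [('alice', 'acme_corp')]
print(snapshot(edges, date(2017, 12, 31)))  # all three edges
```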
In the late 1990s and early 2000s when it became too difficult for large companies to manage their own WAN footprints, they adopted managed multiprotocol label switching (MPLS) services. These offered a simple connection at every location and offloaded the complexities of building large-scale routed networks from enterprises to the service provider. The advent of cloud computing, however, changed the dynamics of MPLS forever. Enterprises not only needed ubiquitous site-to-site connectivity, but also required better performance from the network to support Software as a Service-based business applications hosted in third-party data centers. In addition, video was becoming a standard mode of communication for corporate meeting and training applications, boosting the need for more bandwidth across the network.
Lately, I have been spending a lot of time integrating security systems, and specifically focusing a lot of my energy on Cisco’s Advanced Threat Security product family. (Disclosure: I am employed by Cisco.) Which is what brings me to Cisco’s Advanced Malware Protection (AMP), a solution that enables malware detection, blocking, continuous analysis, and retrospective actions and alerting. In fact, when the Talos cyber-vigilantes parachute into an environment and perform their forensic analysis and active defense against attacks, AMP is one of the primary tools they use.
Without simulation, complex systems would fail. Satellites would not reach an accurate orbit, semiconductor circuits would not function, and bridges would not carry the load. Businesses and governments would not invest in these projects without robust simulation software. And without a simulation proving value and functionality, IoT networks of hundreds of thousands or millions of inexpensive devices adding up to large capital investments will not be built. Researchers from the University of Bologna published an analysis of IoT simulation and a smart-cities vehicular transportation system case study (PDF). They recommend a networked simulation of orchestrated simulators that model specific IoT features that fit the diversity of IoT devices and use cases.
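The researchers’ recommendation amounts to co-simulation: a coordinator steps several special-purpose simulators in lockstep and relays events between them. A minimal sketch of that orchestration pattern (the simulator classes here are invented stand-ins, not the University of Bologna toolchain):

```python
# Sketch of fixed-step co-simulation: a coordinator advances each
# domain-specific simulator in lockstep and routes events between them.
class StubSimulator:
    """Stand-in for one specialized simulator (mobility, radio, ...)."""
    def __init__(self, name):
        self.name = name
        self.pending = []                 # events delivered by the others

    def step(self, t):
        """Advance internal state to time t; return any emitted events."""
        inbound, self.pending = self.pending, []
        # A real simulator would react to `inbound` here.
        return [(self.name, t)] if t % 3 == 0 else []

sims = [StubSimulator("mobility"), StubSimulator("network")]
for t in range(6):                        # the orchestrator's clock
    events = [e for s in sims for e in s.step(t)]
    for s in sims:                        # route events to the other sims
        s.pending.extend(e for e in events if e[0] != s.name)
```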
Supercomputer specialist Cray announced it is acquiring Seagate’s ClusterStor HPC storage array business for an undisclosed sum as part of a strategic deal and partnership. The deal should close in the third quarter. Cray will take over development, manufacturing, support and sales of the ClusterStor product line, picking up 100 Seagate employees in the process. Seagate acquired Xyratex, the maker of ClusterStor, for $374 million in 2014. Cray already sells ClusterStor under its Sonexion brand of scale-out Lustre arrays; since Sonexion is based on ClusterStor, the product line simply comes in-house. Cray is the biggest OEM for the ClusterStor line. Even though Cray was already knee-deep in ClusterStor, it brought the technology in-house so it could reduce margins and push further on development to align with its strategy, which sounds like it intends to compete with Dell EMC.
The Internet of Things is often thought of as primarily an industrial and consumer technology. But there’s a growing consensus that IoT is also taking a leading role in digital transformation in a wide variety of business applications in locations around the world.

Survey says: Global enterprises bullish on IoT
A recent study by satellite communications vendor Inmarsat, for example, reveals that IoT is the top priority for 92 percent of the more than 500 enterprises surveyed across the globe. Titled “The Future of IoT in Enterprise 2017,” the report assembles responses from companies that have more than 1,000 workers in agritech, energy production, transportation and mining.
Any organization that creates and promotes industry standards should operate in an open and transparent way. Any lack of visibility will cause tremendous doubt and concerns around those standards. Case in point: the World Wide Web Consortium (W3C). A few weeks back, I wrote about one of its most recent standards—Encrypted Media Extensions (EME)—which sought to create a standard framework for Digital Rights Management (DRM) on the web. When the W3C officially approved this standard, it generated massive backlash from every corner of the technology world.
Google has long run a distant third behind Amazon and Microsoft in the cloud services business, but it finally seems to be catching some momentum, if the most recent quarter is an indicator of future trajectory. During an earnings call with Wall Street analysts, Google CEO Sundar Pichai said that Google Cloud Platform continues to experience “impressive growth across products, sectors and geographies and increasingly with large enterprise customers in regulated sectors.” To be more specific, Pichai said Google closed three times as many $500,000-plus deals in the most recent quarter as it did in the same time period last year. Of course, that is kind of pointless without knowing the exact number. And given Alphabet, Google’s parent company, reported overall revenue of $25.8 billion for the quarter, it’s likely a few drops in the bucket.
On Unix systems, there are several ways to send signals to processes—with a kill command, with a keyboard sequence (like control-C), or through your own program (e.g., using the kill() system call in C). Signals are also generated by hardware exceptions such as segmentation faults and illegal instructions, timers and child process termination. But how do you know what signals a process will react to? After all, what a process is programmed to handle and what it is able to ignore is another issue. Fortunately, the /proc file system makes information about how processes handle signals (and which they block or ignore) accessible: each process’s /proc/&lt;pid&gt;/status file carries its signal masks, and in a shell, "$$" stands for the current process, so /proc/$$/status describes the login shell itself.
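For instance, here is a small, Linux-only Python sketch that reads those mask fields out of /proc and decodes them into signal names (/proc/self plays the role of the shell’s $$):

```python
import signal

# Linux-only: read the kernel's signal-disposition bitmasks for this
# process from /proc/self/status (fields look like "SigCgt:\t0000...").
with open("/proc/self/status") as f:
    fields = dict(line.rstrip("\n").split(":\t", 1) for line in f)

def decode(hex_mask):
    """Map a /proc hex bitmask to signal names (bit N-1 = signal N)."""
    bits, names = int(hex_mask, 16), []
    for n in range(1, 65):
        if bits & (1 << (n - 1)):
            try:
                names.append(signal.Signals(n).name)
            except ValueError:          # e.g. unnamed real-time signals
                names.append(f"SIG_{n}")
    return names

for key in ("SigBlk", "SigIgn", "SigCgt"):  # blocked, ignored, caught
    print(f"{key}: {decode(fields[key])}")
```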
Moving to the cloud is supposed to reduce the headaches for IT administrators, but often it has the opposite effect of increasing their workload, especially around security and visibility. Not only do they have to make sure on-premises systems adhere to regulatory compliance, but that their cloud services do as well. Security specialist Qualys addresses these issues of security and visibility with its new app framework, CloudView, which complements existing Qualys services for security, compliance and threat intelligence with real-time monitoring of all enterprise cloud services from a single dashboard.
"Accelerated cloud adoption requires new adaptive security solutions that support fast-moving digital transformation efforts," said Philippe Courtot, Qualys CEO, in a statement. "Our new CloudView and its apps add unparalleled visibility and continuous security of all cloud workloads to provide customers complete cloud security in a single, integrated platform and drastically reduce their spend."To read this article in full or to leave a comment, please click here
Networking performance monitoring and diagnostics (NPMD) software, whether running as an independent appliance or embedded in networking equipment, can help stave off productivity issues for internal corporate users as well as those interacting with the network from the outside. But with ever-increasing traffic on corporate networks, users attempting to optimize connections to the cloud and new Internet of Things devices bombarding the network, enterprises and network performance monitoring vendors face growing challenges.
There’s a trend emerging among many Internet-based companies that I find intriguing: they are creating their own edge delivery networks. Why? So that they can service their applications via these networks to enable greater resilience and performance for their users. Rather than the standard, garden-variety content delivery networks (CDNs), these edge delivery networks are tailored specifically for the applications they’ve been built to service. In some cases, this means the edge networks leverage highly specific connectivity to regional internet service providers or between application facilities; in other cases, it means placing specialized hardware tuned to specific needs of the application in delivery facilities around the world. And most importantly, these networks are operating application-specific software and configurations that are customized beyond what’s possible in general-purpose, shared networks.
A couple of months ago I was having dinner with a fairly well-known Silicon Valley executive who predicted that success for an IT vendor is based on two things: having lots of data and a robust artificial intelligence (AI) engine to discover new insights. If that is true, then Mist Systems seems to be in a strong position, as the company’s solutions were designed to use AI to solve some of the bigger challenges in Wi-Fi today. This week the wireless network company announced several new access points, as well as use cases, for its solution. Specifics are as follows:

Introduction of client service-level expectations (SLE)
In telecommunications, the concept of a service-level agreement (SLA) is a threshold that service providers are contracted to meet. The SLE from Mist is similar, although more proactive than a carrier’s SLA. With Mist, administrators can use data to set, monitor and enforce things that impact performance both pre- and post-connection. Examples of this are time to connect, failed connection attempts, roaming, coverage, capacity and AP uptime. The SLEs can be monitored in real time and watched over time to provide up-to-the-minute insight into the health of the Wi-Fi network.
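Conceptually, an SLE boils down to a target plus continuous measurement against it. A toy sketch of that idea (all field names and thresholds here are invented for illustration; this is not Mist’s data model):

```python
# Toy service-level-expectation check: compare measured connection
# metrics against admin-defined targets. All names/thresholds invented.
SLE_TARGETS = {
    "time_to_connect_s": 2.0,   # connections should complete within 2s
    "min_rssi_dbm":     -67,    # coverage floor
}

def sle_met(samples):
    """Fraction of client samples meeting every SLE target."""
    ok = sum(1 for s in samples
             if s["time_to_connect_s"] <= SLE_TARGETS["time_to_connect_s"]
             and s["rssi_dbm"] >= SLE_TARGETS["min_rssi_dbm"])
    return ok / len(samples)

samples = [
    {"time_to_connect_s": 1.2, "rssi_dbm": -60},
    {"time_to_connect_s": 3.5, "rssi_dbm": -62},  # slow connect -> miss
    {"time_to_connect_s": 0.8, "rssi_dbm": -71},  # weak signal  -> miss
]
print(f"SLE met for {sle_met(samples):.0%} of samples")  # 33%
```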
For many decades, the term “random numbers” meant “pseudo-random numbers” to anyone who thought much about the issue and understood that computers simply were not equipped to produce anything that was truly random. Manufacturers did what they could, grabbing some signals from the likes of mouse movement, keyboard activity, system interrupts, and packet collisions just to get a modest sampling of random data to improve the security of their cryptographic processes. And the bad guys worked at breaking the encryption. We used longer keys and better algorithms. And the bad guys kept at it. And life went on. But something recently changed all that. No, not yesterday or last week. But it was only back in November of last year that something called the Entropy Engine won an Oscar of Innovation award for collaborators Los Alamos National Laboratory and Whitewood Security. This Entropy Engine is capable of delivering as much as 350 Mbps of true random numbers—sufficient to feed an entire data center with enough random data to dramatically improve all cryptographic processes.
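The distinction the article draws is worth making concrete: a pseudo-random generator is fully determined by its seed, while an entropy-backed source is not reproducible at all. A small illustration using Python’s standard library (standing in for, not implementing, the Entropy Engine):

```python
import random
import secrets

# Pseudo-random: same seed, same "random" bits, every time.
rng1, rng2 = random.Random(42), random.Random(42)
assert rng1.getrandbits(128) == rng2.getrandbits(128)  # fully predictable

# Entropy-backed: secrets draws on the OS entropy pool, so there is
# no seed to recover and no way to replay the output.
key1, key2 = secrets.token_bytes(16), secrets.token_bytes(16)
assert key1 != key2   # collision odds ~2**-128, effectively never
```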
Wi-Fi networks have many variables and points of frustration. Different types of walls, materials and objects can impact the Wi-Fi signal in varying ways. Visualizing how the signals move about the area is difficult without the right tools. A simple Wi-Fi stumbler is great for quickly checking signal levels, but a map-based surveying tool helps you visualize the coverage, interference and performance much more easily. These tools allow you to load your floor plan map, walk the building to take measurements, and then give you heatmaps of the signals and other data. Most Windows-based Wi-Fi surveying tools offer more features and functionality than Android-based tools provide, such as detecting noise levels and providing more heatmap visualizations. However, if you don’t require all the bells and whistles, using an app on an Android-based smartphone or tablet can lighten your load. (And in case you’re wondering why we’re not discussing iOS apps, it’s because Apple won’t allow developers access to the Wi-Fi data, so there can’t be any legit Wi-Fi surveying apps without jailbreaking the device.)
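The heatmap idea behind these tools reduces to interpolating scattered signal readings across the floor plan. A bare-bones sketch of that step (nearest-neighbor only, with invented sample data; real survey tools use far more sophisticated interpolation and propagation models):

```python
# Bare-bones Wi-Fi "heatmap": nearest-neighbor interpolation of RSSI
# survey samples onto a coarse floor-plan grid. Sample data invented.
samples = [  # (x, y, rssi_dbm) measured while walking the floor
    (0, 0, -40), (8, 0, -65), (0, 6, -55), (8, 6, -75),
]

def rssi_at(x, y):
    """Estimate RSSI at (x, y) from the nearest survey point."""
    return min(samples, key=lambda s: (s[0] - x)**2 + (s[1] - y)**2)[2]

for y in range(0, 7, 2):              # crude ASCII rendering of the map
    row = ""
    for x in range(0, 9, 2):
        r = rssi_at(x, y)
        row += "#" if r > -50 else "+" if r > -65 else "."
    print(row)                        # '#' strong, '+' usable, '.' weak
```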