Hybrid IT networking has come a long way in the past decade, as enterprises have gradually come to embrace and trust cloud computing. Yet, despite the growing popularity of both private and public clouds, many enterprise IT teams are still struggling with how to handle the resulting migration challenges.

Originally envisioned as simply a way to reduce costs, migration to the cloud has escalated in large part due to a drive for greater agility and flexibility. In fact, according to a recent State of the Network global survey of more than 600 IT professionals, the top two reasons enterprises are moving to the cloud are to increase IT scalability and agility, and to improve service availability and reliability. The need to lower costs was ranked number four, tied with the desire to deliver new services faster.
There is currently a strong trend toward application modularization: splitting the large, hard-to-change monolith into a focused, cloud-native microservices architecture. A monolith keeps much of its state in memory and replicates that state between instances, which makes it hard to split and scale. Scaling up can be expensive, and scaling out requires replicating the state and the entire application rather than just the parts that need more capacity.

Microservices, by contrast, separate the logic from the state. That separation lets the application be broken into a number of smaller, more manageable units that are easier to scale individually. A microservices environment therefore consists of multiple services communicating with each other: all communication between services is initiated and carried out with network calls, and the services are exposed via application programming interfaces (APIs). Each service has its own purpose and serves a distinct piece of business value.
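To make the pattern concrete, here is a minimal sketch of a single stateless service, written against Python's standard library only; the service name, port and endpoint are invented for illustration. Each such service owns one narrow business capability and is reached by its peers through the same kind of network call.

```python
# Minimal sketch of one stateless microservice exposing a single API endpoint.
# The "inventory" service name, port and /items path are invented for this example.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a backing store; in practice state lives outside the process.
ITEMS = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each service exposes its one business capability as an API over the network.
        if self.path == "/items":
            body = json.dumps(ITEMS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Peer services would call this endpoint, e.g. GET http://inventory:8080/items
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```

In a real environment there would be many such services, each independently deployable and scalable, typically fronted by an API gateway or service mesh.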
Network administrators, IT managers and security professionals face a never-ending battle, constantly checking on what exactly is running on their networks and the vulnerabilities that lurk within. While there is a wealth of monitoring utilities available for network mapping and security auditing, nothing beats Nmap's combination of versatility and usability, making it the widely acknowledged de facto standard.

What is Nmap?
Nmap, short for Network Mapper, is a free, open-source tool for vulnerability scanning and network discovery. Network administrators use Nmap to identify which devices are running on their systems, discover the hosts that are available and the services they offer, find open ports and detect security risks.
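Nmap itself does far more, but the core idea behind its simplest technique, the TCP connect scan, can be sketched in a few lines of Python. The target host and port range below are just example values, and you should only scan hosts you are authorized to test.

```python
# Toy TCP connect scan -- illustrates the idea behind Nmap's simplest scan type.
# Target and port range are example values; scan only hosts you may legally test.
import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        # A completed TCP handshake means something is listening on this port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan("127.0.0.1", range(20, 1025)))
```

What Nmap adds on top of this basic probe is scale and intelligence: raw-packet SYN scans, service and version detection, OS fingerprinting, and a scripting engine for deeper auditing.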
Cisco’s strategy of diversifying into a more software-optimized business is paying off – literally.

The software differentiation was perhaps never more obvious than in its most recent set of year-end and fourth quarter results. (Cisco's 2018 fiscal year ended July 28.) Cisco said deferred revenue for the fiscal year was $19.7 billion, up 6 percent overall, “with deferred product revenue up 15 percent, driven largely by subscription-based and software offers, and deferred service revenue was up 1 percent.”
The portion of deferred product revenue that is related to recurring software and subscription offers increased 23 percent over 2017, Cisco stated. In addition, Cisco reported deferred revenue from software and subscriptions increasing 23 percent to $6.1 billion in the fourth quarter alone.
Domain Name System (DNS) is our root of trust and one of the most critical components of the internet. It is a mission-critical service because if it goes down, a business’s web presence goes down.

DNS is a virtual database of names and numbers. It serves as the backbone for other services critical to organizations, including email, internet site access, voice over internet protocol (VoIP), and the management of files.

You hope that when you type a domain name, you are really going where you are supposed to go. DNS vulnerabilities do not get much attention until an actual attack occurs and makes the news. For example, in April 2018, public DNS servers that managed the domain for Myetherwallet were hijacked and customers were redirected to a phishing site. Many users reported losing funds out of their accounts, which brought a lot of public attention to DNS vulnerabilities.
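At its simplest, that "database of names and numbers" can be exercised with a one-line lookup. The sketch below uses Python's standard library and the system resolver; the domain is only an example, and the addresses returned depend on your resolver and the zone's records. The Myetherwallet incident abused exactly this trust: if the answer that comes back is forged, the user is sent to the attacker's server instead.

```python
# Minimal name-to-address lookup using the system resolver (standard library only).
# The domain is an example; results depend on your resolver and the zone's records.
import socket

def resolve(name):
    # getaddrinfo consults DNS (via the OS) and returns one entry per address found.
    results = socket.getaddrinfo(name, None)
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("example.com"))
```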
Fiber transmission could be more efficient, go farther, carry more traffic and be cheaper to implement if the work of scientists in Sweden and Estonia is successful.

In a recent demonstration, researchers at Chalmers University of Technology, Sweden, and Tallinn University of Technology, Estonia, used new, ultra-low-noise amplifiers to increase the normal fiber-optic transmission link range six-fold.

And in a separate experiment, researchers at DTU Fotonik, Technical University of Denmark, used a unique frequency comb to push more than the total of all internet traffic down one solitary fiber link.
Fiber transmission limits
Signal noise and distortion have always been behind the limits of traditional (and pretty inefficient) fiber transmission. They’re the main reasons the technology's transmission distance and capacity are restricted. Experts believe, however, that if the noise found in the amplifiers used to gain distance could be cleaned up, and the signal distortion inherent in the fiber itself could be eliminated, fiber could become more efficient and less costly to implement.
Dynamic Host Configuration Protocol (DHCP) is the standard way network administrators assign IP addresses in IPv4 networks, but eventually organizations will have to pick between two protocols created specifically for IPv6 as the use of the newer IP protocol grows.

DHCP, which dates back to 1993, is an automated way to assign IPv4 addresses, but when IPv6 was designed, it was given an auto-configuration feature dubbed SLAAC (stateless address autoconfiguration) that could eventually make DHCP irrelevant. To complicate matters, a new DHCP – DHCPv6 – that performs the same function as SLAAC was independently created for IPv6.
Deciding between SLAAC and DHCPv6 isn’t something admins will have to do anytime soon, since the uptake of IPv6 has been slow, but it is on the horizon.
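To make the difference between the two approaches concrete: with SLAAC, a host builds its own address from the /64 prefix advertised by the local router plus an interface identifier, classically derived from the MAC address using modified EUI-64 (many hosts now use randomized privacy addresses instead), whereas with DHCPv6 a server hands out addresses and options from a pool it manages. Below is a rough sketch of the classic SLAAC derivation; the prefix and MAC are documentation and example values only.

```python
# Sketch of how a SLAAC host can form an address: 64-bit prefix from a Router
# Advertisement plus a modified EUI-64 interface identifier built from its MAC.
# The prefix and MAC below are documentation/example values only.
import ipaddress

def eui64_interface_id(mac: str) -> int:
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02                 # flip the universal/local bit
    b[3:3] = b"\xff\xfe"         # insert ff:fe between the two MAC halves
    return int.from_bytes(bytes(b), "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    return net[eui64_interface_id(mac)]   # index into the /64 by interface ID

if __name__ == "__main__":
    print(slaac_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e"))
    # -> 2001:db8:1:2:21a:2bff:fe3c:4d5e
    # DHCPv6, by contrast, would assign a server-chosen address (plus options
    # such as DNS servers) from a lease pool it tracks.
```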
More than 120 million Microsoft Office accounts have moved from on-premises to the cloud since the launch of Microsoft Office 365. Many of those accounts belong to users in large enterprises that weren’t fully prepared for the transition. The fact is, as many as 30 to 40 percent of enterprises struggle with some level of application performance problems as they make the shift to the cloud.

Some of the signs of poor performance (and the source of users’ frustration) include Outlook responding slowly when the user tries to open messages, VoIP calls over Skype for Business having rough spots, and documents being slow to open, close and save in Word. Performance problems in the Office applications manifest in many other ways as well.
The financial services industry is experiencing a period of dramatic change as a result of the growth of digitalization and its effect on customer behavior. In an emerging landscape of cryptocurrencies, frictionless trading, and consolidated marketplace lending, traditional banks have found themselves shaken by the arrival of new, disruptive, digitally native and mobile-first brands.

With a reputation for being somewhat conservative and slow to innovate, many financial service providers are now modernizing and improving their systems, transforming their business models and technologies in an effort to stay ahead of the more agile challengers snapping at their heels.
After nearly four years of slashing at each other in court with legal swords, Cisco and Arista have agreed to disagree, mostly.

To settle the litigation mêlée, Arista has agreed to pay Cisco $400 million, which will result in the dismissal of all pending district court and International Trade Commission litigation between the two companies.
For Arista, the agreement should finally end any customer fear, uncertainty and doubt caused by the lawsuit. In fact, Zacks Equity Research wrote that the settlement is likely to immensely benefit Arista.
Network packet brokers (NPBs) have played a key role in helping organizations get the most out of their management and security tools. The tool space has exploded, and there is literally a tool for almost everything. Cybersecurity, probes, network performance management, forensics, application performance and other tools have become highly specialized, causing companies to experience “tool sprawl,” in which connecting a large number of tools to the infrastructure creates a big, complex mesh of connections.

Ideally, every tool would receive information from every network device, enabling it to have a complete view of what’s happening, who is accessing what, where they are coming in from, and when events occurred.
Cisco is moving rapidly toward its ultimate goal of making SD-WAN features ubiquitous across its communication products, promising to boost the network performance and reliability of distributed branches and cloud services.

The company this week took a giant step in that direction by adding Viptela SD-WAN technology to the IOS XE software that runs its core ISR/ASR routers. More than a million ISR/ASR edge routers, such as the ISR 1000 and 4000 models and the ASR 5000, are in use by organizations worldwide.
For years it has been normal practice for organizations to store as much data as they can. More economical storage options, combined with the hype around big data, encouraged data hoarding, with the idea that value would be extracted at some point in the future.

With advances in data analysis, many companies are now successfully mining their data for useful business insights, but the sheer volume of data being produced and the need to prepare it for analysis are prime reasons to reconsider your strategy. To balance cost and value, it’s important to look beyond data hoarding and find ways of processing and reducing the data you’re collecting.
If you’re reading this, you’ve got RF power. Power is a necessity for networking, allowing us to charge our batteries, connect millions of devices, communicate over long distances and keep our signals clear. Don’t believe me? Kill the power and see what happens to your network.

But with great RF power comes great responsibility. Power management is the art and science of optimizing input and output signals to maximize the efficiency and performance of RF devices – and it’s no easy feat. Each networking device has its own unique power requirements. Higher data rates often mean more power consumption and complexity, which can introduce losses that reduce reliability and increase cost. Low-data-rate devices, such as those supporting the Internet of Things (IoT), draw very little power in order to conserve every last millisecond of precious battery life.
It’s in our phones, TVs, toasters, cars, watches, toothbrushes – even in the soles of our shoes. The internet is everywhere. Right?

Well, no. About 47 percent of the global population of 7.6 billion people doesn’t have internet access, as tough as that is for those of us in internet-rich locales to imagine. But companies are working on ways to bridge this digital divide, and systems based on low-earth-orbit (LEO) satellites are becoming a big part of the conversation.

The benefits of satellite internet are obvious in places where land-based network infrastructure doesn’t exist. But while systems based on high-orbit satellites need only minimal ground equipment to reach remote places, a range of complications – including cost, speed and performance – prevent them from being a global solution. LEO systems aim to get past the problems by getting closer to earth.
When most people encounter headlines about high-profile cloud outages, they think about the cloud vendor's name, or how the negative publicity might affect stock prices. I think about the people behind the scenes—the ones tasked with fixing the problem and getting customer systems back up and running.

Despite their best efforts, the occasional outage is inevitable. The internet is a volatile place, and nobody is completely immune to this danger. Fortunately, there are some straightforward steps businesses can take to guard against the possibility of unplanned downtime. Here are four ways to avoid cloud outages while improving security and performance in the process.
Perimeter-based firewalls
When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls: one internal and one external to the wide area network (WAN). Such a layout was good enough in those days.

I remember the time when connected devices were corporate-owned. Everything was hard-wired, and I used to define the access control policies on a port-by-port and VLAN-by-VLAN basis. There were numerous manual end-to-end policy configurations, which were not only time-consuming but also error-prone.

There was a complete lack of visibility and global policy throughout the network, and every morning I relied on the multi router traffic grapher (MRTG) to manually inspect the traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life.” Have you ever heard of the 20-year-old PC that no one can find, but that still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, making perimeter-level firewalling alone insufficient.
As distributed resources from wired, wireless, cloud and Internet of Things networks grow, the need for a more intelligent network edge is growing with them.

Network World’s 8th annual State of the Network survey shows the growing importance of edge networking, finding that 56% of respondents have plans for edge computing in their organizations.
Typically, edge networking entails sending data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center or infrastructure-as-a-service (IaaS) cloud.
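A toy sketch of that split is shown below, using only Python's standard library; the sensor readings and the central ingest URL are invented placeholders. The edge device reduces a batch of raw readings to a compact summary locally and forwards only that summary upstream, which is the essence of the pattern described above.

```python
# Toy edge-processing loop: aggregate raw readings locally, forward only a summary.
# The readings and the central URL are invented placeholders for this example.
import json
import statistics
import urllib.request

def summarize(readings):
    # Reduce a batch of raw samples to the few numbers the data center actually needs.
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def forward(summary, url="https://central.example.com/ingest"):
    # Send only the compact summary to the central repository or IaaS cloud.
    req = urllib.request.Request(
        url,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    batch = [21.3, 21.4, 22.0, 21.9, 21.7]   # stand-in for locally collected data
    print(summarize(batch))
```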
Cisco today laid out $2.35 billion in cash and stock for network identity and authentication security company Duo.

According to Cisco, Duo helps protect organizations against cyber breaches through its cloud-based software, which verifies the identity of users and the health of their devices before granting access to applications, with the goal of preventing breaches and account takeovers.

A few particulars of the deal include:
Cisco currently provides on-premises network access control via its Identity Services Engine (ISE) product. Duo's software-as-a-service (SaaS) model will be integrated with Cisco ISE, extending ISE to provide cloud-delivered application access control.
By verifying user and device trust, Duo will add trusted identity awareness into Cisco's Secure Internet Gateway, Cloud Access Security Broker, Enterprise Mobility Management, and several other cloud-delivered products.
Cisco's in-depth visibility of over 180 million managed devices will be augmented by Duo's broad visibility of mobile and unmanaged devices.
Cisco said that integration of its network, device and cloud security platforms with Duo Security’s zero-trust authentication and access products will let customers quickly secure users to any application on any networked device. In fact, about 75% of Duo’s customers are up and running in less than …