Dynamic Host Configuration Protocol (DHCP) is the standard way network administrators assign IP addresses in IPv4 networks, but as use of IPv6 grows, organizations will eventually have to choose between two protocols created specifically for the newer IP version. DHCP, which dates back to 1993, automates IPv4 address assignment, but IPv6 was designed with an auto-configuration feature, dubbed SLAAC, that could eventually make DHCP irrelevant. To complicate matters, a new DHCP – DHCPv6 – that performs the same function as SLAAC was independently created for IPv6.
Deciding between SLAAC and DHCPv6 isn’t something admins will have to do anytime soon, since the uptake of IPv6 has been slow, but it is on the horizon.
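Part of SLAAC’s appeal is that a host can derive its own address with no server at all, classically by expanding its MAC address into a 64-bit interface identifier (the EUI-64 method). A minimal sketch of that derivation; the prefix and MAC values are illustrative examples, not anything from a real deployment:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a 64-bit interface identifier from a 48-bit MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

def slaac_address(prefix: str, mac: str) -> str:
    """Combine an advertised /64 prefix with the derived interface ID."""
    return prefix.rstrip(":") + ":" + eui64_interface_id(mac)

# Hypothetical MAC under the documentation prefix 2001:db8::/32
print(slaac_address("2001:db8:1:2", "00:1a:2b:3c:4d:5e"))
# → 2001:db8:1:2:21a:2bff:fe3c:4d5e
```

Modern stacks often substitute random or privacy identifiers for EUI-64, but the server-free principle is the same.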
More than 120 million Microsoft Office accounts have moved from on-premises to the cloud since the launch of Microsoft Office 365. Many of those accounts belong to users in large enterprises that weren’t fully prepared for the transition. In fact, as many as 30 to 40 percent of enterprises struggle with some level of application performance as they make the shift to the cloud. Signs of poor performance (and sources of users’ frustration) include Outlook responding slowly when the user tries to open messages, VoIP calls over Skype for Business hitting rough spots, and documents being slow to open, close and save in Word. Performance problems in the Office applications manifest in many other ways as well.
The financial services industry is experiencing a period of dramatic change as a result of the growth in digitalization and its effect on customer behavior. In an emerging landscape of cryptocurrencies, frictionless trading and consolidated marketplace lending, traditional banks have found themselves shaken by the introduction of new, disruptive, digitally native and mobile-first brands. With a reputation for being somewhat conservative and slow to innovate, many financial service providers are now modernizing and improving their systems, transforming their business models and technologies in an effort to stay ahead of the more agile challengers snapping at their heels.
After nearly four years of slashing at each other in court with legal swords, Cisco and Arista have agreed to disagree, mostly. To settle the litigation mêlée, Arista has agreed to pay Cisco $400 million, which will result in the dismissal of all pending district court and International Trade Commission litigation between the two companies.
For Arista, the agreement should finally end any customer fear, uncertainty and doubt caused by the lawsuit. In fact, Zacks Equity Research wrote that the settlement is likely to benefit Arista immensely.
Network packet brokers (NPBs) have played a key role in helping organizations manage their network-management and security tools. The tool space has exploded, and there is a tool for almost everything: cybersecurity, probes, network performance management, forensics, application performance and more. These tools have become highly specialized, causing companies to experience “tool sprawl,” where connecting a large number of tools to the infrastructure creates a big, complex mesh of connections. Ideally, every tool would receive information from every network device, enabling it to have a complete view of what’s happening, who is accessing what, where they are coming in from, and when events occurred.
Cisco is moving rapidly toward its ultimate goal of making SD-WAN features ubiquitous across its communication products, promising to boost the network performance and reliability of distributed branches and cloud services. The company this week took a giant step in that direction by adding Viptela SD-WAN technology to the IOS XE software that runs its core ISR/ASR routers. More than a million ISR/ASR edge routers, such as the ISR 1000 and 4000 and the ASR 5000 models, are in use by organizations worldwide.
For years it has been normal practice for organizations to store as much data as they can. More economical storage options combined with the hype around big data encouraged data hoarding, with the idea that value would be extracted at some point in the future. With advances in data analysis, many companies are now successfully mining their data for useful business insights, but the sheer volume of data being produced and the need to prepare it for analysis are prime reasons to reconsider that strategy. To balance cost and value, it’s important to look beyond data hoarding and find ways of processing and reducing the data you’re collecting.
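One common reduction technique is to roll raw samples up into periodic summaries before storing them, keeping min/max/mean rather than every point. A minimal sketch; the 60-second window and field names are illustrative assumptions:

```python
from collections import defaultdict

def summarize(samples, window=60):
    """Reduce (timestamp, value) samples to per-window min/max/mean."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[(ts // window) * window].append(value)   # group by window start
    return {
        start: {"min": min(v), "max": max(v), "mean": sum(v) / len(v)}
        for start, v in sorted(buckets.items())
    }

# Five raw readings collapse into two 60-second summaries
raw = [(0, 10.0), (15, 14.0), (59, 12.0), (61, 40.0), (90, 44.0)]
print(summarize(raw))
```

The trade-off is the usual one: summaries are dramatically cheaper to store and query, at the cost of discarding the raw points.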
If you’re reading this, you’ve got RF power. Power is a necessity for networking, allowing us to charge our batteries, connect millions of devices, communicate over long distances and keep our signals clear. Don’t believe me? Kill the power and see what happens to your network. But with great RF power comes great responsibility. Power management is the art and science of optimizing input and output signals to maximize the efficiency and performance of RF devices – and it’s no easy feat. Each networking device has its own unique power requirements. Higher data rates often mean more power consumption and complexity, which can introduce losses that reduce reliability and increase cost. Low-data-rate devices, such as those supporting the Internet of Things (IoT), draw very little power in order to conserve precious battery capacity.
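The power-versus-distance trade-off can be quantified with the standard free-space path loss formula and a simple link budget. The sketch below uses illustrative 2.4 GHz numbers; the transmit power and antenna gains are made-up example values:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Simple link budget: transmit power plus antenna gains minus path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# 20 dBm transmitter, 2 dBi antennas on each end, 100 m at 2.4 GHz
print(round(received_power_dbm(20, 2, 2, 0.1, 2400), 1))
```

Doubling the distance adds about 6 dB of loss, which is why every dB of transmit power or antenna gain matters.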
It’s in our phones, TVs, toasters, cars, watches, toothbrushes – even in the soles of our shoes. The internet is everywhere. Right? Well, no. About 47 percent of the global population of 7.6 billion people doesn’t have internet access, as tough as that is for those of us in internet-rich locales to imagine. But companies are working on ways to bridge this digital divide, and systems based on low-earth-orbit (LEO) satellites are becoming a big part of the conversation. The benefits of satellite internet are obvious in places where land-based network infrastructure doesn’t exist. But while systems based on high-orbit satellites need only minimal ground equipment to reach remote places, a range of complications – including cost, speed and performance – prevents them from being a global solution. LEO systems aim to get past those problems by getting closer to Earth.
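Why altitude matters so much comes down to speed-of-light propagation delay. A quick back-of-the-envelope comparison, using typical published altitudes for geostationary and LEO satellites:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_latency_ms(altitude_km: float) -> float:
    """Best-case one-way propagation delay straight up to a satellite."""
    return altitude_km / SPEED_OF_LIGHT_KM_S * 1000

# Geostationary orbit (~35,786 km) vs. a typical LEO shell (~550 km)
print(round(one_way_latency_ms(35_786), 1))  # roughly 119 ms each way
print(round(one_way_latency_ms(550), 1))     # under 2 ms each way
```

A full round trip through a GEO satellite (up, down, and back) approaches half a second, which is why interactive traffic over high-orbit links feels sluggish; LEO shrinks that by two orders of magnitude.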
When most people encounter headlines about high-profile cloud outages, they think about the cloud vendor's name, or how the negative publicity might affect stock prices. I think about the people behind the scenes—the ones tasked with fixing the problem and getting customer systems back up and running. Despite their best efforts, the occasional outage is inevitable. The internet is a volatile place, and nobody is completely immune to this danger. Fortunately, there are some straightforward steps businesses can take to guard against the possibility of unplanned downtime while improving security and performance in the process.
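One such straightforward step is to retry failed calls with exponential backoff and jitter rather than failing the moment a dependency blips. A hedged sketch, not tied to any particular cloud SDK; the flaky dependency is simulated:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry fn() with exponential backoff plus jitter on each failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise                                   # budget exhausted: surface it
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated flaky dependency: fails twice, then succeeds
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient outage")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
print(result)
```

The jitter matters in practice: without it, many clients retry in lockstep and hammer the recovering service at the same instant.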
When I stepped into the field of networking, everything was static and security was based on perimeter-level firewalling. It was common to have two perimeter-based firewalls: one internal and one external to the wide area network (WAN). Such a layout was good enough in those days. I remember when connected devices were corporate-owned. Everything was hard-wired, and I used to define access control policies on a port-by-port and VLAN-by-VLAN basis. These manual end-to-end policy configurations were not only time-consuming but also error-prone. There was a complete lack of visibility and global policy throughout the network, and every morning I relied on the multi router traffic grapher (MRTG) to manually inspect traffic spikes indicating variations from baselines. Once something was plugged in, it was “there for life.” Have you ever heard of the 20-year-old PC that no one can locate but that still replies to ping? In contrast, we now live in an entirely different world. The perimeter has dissolved, making perimeter-level firewalling alone insufficient.
As distributed resources from wired, wireless, cloud and Internet of Things networks grow, the need for a more intelligent network edge is growing with them. Network World’s 8th annual State of the Network survey shows the growing importance of edge networking, finding that 56% of respondents have plans for edge computing in their organizations.
Typically, edge networking entails sending data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center or infrastructure-as-a-service (IaaS) cloud.
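The split described above — process locally, forward only a subset — can be illustrated with a small filter that summarizes every reading at the edge but forwards only out-of-range values upstream. The threshold and field names are invented for the sketch:

```python
def process_at_edge(readings, limit=75.0):
    """Summarize everything locally; forward only out-of-range readings."""
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
    }
    to_central = [r for r in readings if r > limit]   # small fraction leaves the edge
    return summary, to_central

# Five local sensor readings; only the anomaly travels to the data center
readings = [70.1, 71.4, 69.8, 88.2, 70.5]
summary, forwarded = process_at_edge(readings)
print(summary["count"], len(forwarded))
```

Even this toy version shows the bandwidth win: the central repository sees one reading and one compact summary instead of the full stream.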
Cisco today laid out $2.35 billion in cash and stock for network identity and authentication security company Duo. According to Cisco, Duo helps protect organizations against breaches through cloud-based software that verifies the identity of users and the health of their devices before granting access to applications, with the idea of preventing breaches and account takeover. A few particulars of the deal:
Cisco currently provides on-premises network access control via its Identity Services Engine (ISE) product. Duo's software-as-a-service (SaaS) model will be integrated with Cisco ISE to extend ISE with cloud-delivered application access control.
By verifying user and device trust, Duo will add trusted identity awareness into Cisco's Secure Internet Gateway, Cloud Access Security Broker, Enterprise Mobility Management, and several other cloud-delivered products.
Cisco's in-depth visibility of over 180 million managed devices will be augmented by Duo's broad visibility of mobile and unmanaged devices.
Cisco said that integration of its network, device and cloud security platforms with Duo Security’s zero-trust authentication and access products will let customers quickly secure users to any application on any networked device. In fact, about 75% of Duo’s customers are up and running in less than …
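Second-factor products like Duo’s are commonly built on time-based one-time passwords, where server and client derive the same short code from a shared secret and the clock. A minimal RFC 6238-style sketch using only the standard library; the secret here is a throwaway example, not anything Duo-specific:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = struct.pack(">Q", timestamp // step)          # 30-second time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides compute the same code independently from the shared secret
secret = b"example-shared-secret"
print(totp(secret, int(time.time())))
```

Because the code depends on the current time step, a stolen code is useless once the window rolls over, which is what makes it a meaningful second factor.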
Recently, I was reading a blog post by Ivan Pepelnjak on intent-based networking. He notes that the definition of intent is “a usually clearly formulated or planned intention,” while the word “intention” is defined as “what one intends to do or bring about.” I started to ponder his point that the definition is confusing, as there are many variations. To guide my understanding, I decided to delve deeper into the building blocks of intent-based networking, which led me to a variety of closed-loop automation solutions. After extensive research, my view is that closed-loop automation is a prerequisite for intent-based networking. Given current requirements, it’s a solution that businesses can deploy today.
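At its core, closed-loop automation is a repeating cycle of observe, compare against the declared intent, and remediate the drift. A toy sketch of that loop; the device model and remediation hook are invented for illustration:

```python
def closed_loop(desired, observe, remediate, max_cycles=10):
    """Drive observed state toward desired state: observe, diff, act, repeat."""
    for _ in range(max_cycles):
        actual = observe()
        drift = {k: v for k, v in desired.items() if actual.get(k) != v}
        if not drift:
            return actual            # intent satisfied, loop converged
        remediate(drift)             # push only the out-of-compliance settings
    raise RuntimeError("intent not reached within cycle budget")

# Toy device: starts out of compliance; remediation patches it in place
device = {"vlan": 10, "mtu": 1500}
desired = {"vlan": 20, "mtu": 9000}
final = closed_loop(desired,
                    observe=lambda: dict(device),
                    remediate=lambda drift: device.update(drift))
print(final)
```

An intent-based system layers translation and assurance on top of this loop, but without the loop itself there is nothing continuously enforcing the intent.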
In our companion article, “Making network-services deals: Sourcing and service-delivery strategies that work,” we examine how enterprises should approach sourcing and designing managed network service arrangements under current outsourcing market conditions by applying best practices that help identify the optimal providers and service-delivery approaches.
We all know and appreciate DNS as the domain name system that maps names like Networkworld.com to the IP address a browser actually connects to in order to get content from a website. DNS is obviously a foundational piece of the internet. However, the technology is a bit stale and needs a refresh to keep up with the times. Legacy DNS is a simple protocol. It is essentially a phonebook that maps a domain name to an IP address. Most commercial DNS products or services on the market today are based on an open-source software product called BIND, put out by the Internet Software Consortium. The name BIND stands for “Berkeley Internet Name Domain” because the software originated in the early 1980s at the University of California at Berkeley. Not much about the DNS protocol has changed since then.
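The “phonebook” model above is easy to picture as a name-to-address map with a cache in front of it. A toy resolver sketch; the zone data is a hard-coded illustration, not a live lookup (93.184.216.34 is the well-known example.com documentation address):

```python
ZONE = {  # toy "authoritative" data standing in for the global hierarchy
    "example.com.": "93.184.216.34",
    "www.example.com.": "93.184.216.34",
}

CACHE = {}

def resolve(name: str) -> str:
    """Look a name up, consulting the cache before the authoritative zone."""
    fqdn = name if name.endswith(".") else name + "."   # normalize to a FQDN
    if fqdn in CACHE:
        return CACHE[fqdn]
    address = ZONE.get(fqdn)
    if address is None:
        raise LookupError(f"NXDOMAIN: {name}")
    CACHE[fqdn] = address                               # cache for next time
    return address

print(resolve("www.example.com"))
```

Real DNS adds recursion, TTL-bounded caching, and dozens of record types, but the lookup at the heart of it is this simple, which is both its strength and the reason its plaintext legacy form is showing its age.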
When selecting VPN routers, small businesses want ones that support the VPN protocols they need, fit their budgets, are easy to use and have good documentation.
Internet architecture doesn't need continuous paths between endpoints, says NASA in an announcement that may one day change the way the internet is envisioned. The U.S. government space agency says Delay or Disruption Tolerant Networking (DTN) — something it’s been working on for disruption-prone space internet applications — doesn’t need continuous network connectivity, unlike the traditional internet. Importantly, it says the delay- and fault-tolerant technology could be used down on Earth, too. The networking protocol suite would be particularly well suited to internet access in remote locations, the agency says in a press release related to demonstrations of the technology.
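The key DTN idea is store-and-forward at the bundle level: when no link is available, a node holds data instead of dropping it, and forwards when a contact window opens. A toy sketch of one such node; the class and bundle names are invented for illustration:

```python
from collections import deque

class DTNNode:
    """Hold bundles while the link is down; flush them when a contact opens."""
    def __init__(self):
        self.buffer = deque()
        self.link_up = False
        self.delivered = []

    def send(self, bundle):
        if self.link_up:
            self.delivered.append(bundle)
        else:
            self.buffer.append(bundle)      # store instead of dropping

    def contact(self):
        """A contact window opened: forward everything stored, in order."""
        self.link_up = True
        while self.buffer:
            self.delivered.append(self.buffer.popleft())

node = DTNNode()
node.send("telemetry-1")
node.send("telemetry-2")     # no continuous path exists yet; both are buffered
node.contact()               # the disruption ends and the bundles flow
print(node.delivered)
```

Contrast this with TCP, which treats a long outage as failure; a DTN node simply waits, which is what makes the approach viable for space links and intermittently connected sites on Earth.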
High-speed Ethernet is quickly becoming the networking norm as customer data-center servers grow to handle a ton of traffic from new, smarter applications, IoT devices, video and more.