By creating a single view of all network data, you can do things better, such as correlating threat information to identify real attacks or keeping a log of packet statistics to diagnose intermittent networking problems.
However, to turn data into value within the limitations of traditional systems, we must be creative with the solution. We must find ways to integrate the different repositories held in the various appliances. It’s not an easy task, but an architectural shift that I’ve written about in the past, SASE (Secure Access Service Edge), should help significantly.
SASE is a new enterprise networking technology category introduced by Gartner in 2019. It represents a change in how we connect our sites, users, and cloud resources.
During the market’s inception, we had the early adopters and the pure-play SD-WAN vendors. Soon it became obvious that something was missing, and that missing component was “security.” Security vendors, however, have highlighted the importance of security from the very beginning.
Today, the market seems to be moving in a direction where the security vendors focus on delivering SD-WAN features around pervasive security. The Magic Quadrant for WAN Edge Infrastructure makes a substantial prediction. It states, “By 2024, 50% of new firewall purchases in distributed enterprises will utilize SD-WAN features with the growing adoption of cloud-based services, up from less than 20% today.”
In a cloud-centric world, users and devices require access to services everywhere. The focal point has changed: it is now the identity of the user and device, as opposed to the traditional model that focused solely on the data center. These environmental changes have created a new landscape that we need to protect and connect.
This new landscape is challenged by many common problems. Enterprises are loaded with complexity and overhead because of the appliances deployed for each technology stack. Legacy network and security designs increase latency. In addition, the world is encrypted, and that encrypted traffic needs to be inspected carefully without degrading application performance.
Today the telecom industry has identified the need for faster end-user data rates. Previously, users were happy to call and text each other. Now, however, mobile communication has transformed our lives so dramatically that it is hard to imagine being limited to that kind of communication anymore.
Nowadays, we are leaning more towards image- and VR/AR video-based communication, and these applications are looking for a new type of network. Immersive experiences with 360° video applications require a lot of data and a near-zero-lag network.
To give you a quick idea, VR with a resolution equivalent to 4K TV would require a bandwidth of 1 Gbps for smooth play or 2.5 Gbps for an interactive experience, both with a round-trip latency of no more than about 10 ms. Soon these applications will target the smartphone, putting additional strain on networks. As AR/VR services grow in popularity, the proposed 5G networks should yield the speed and performance needed.
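To put those figures in perspective, here is a back-of-the-envelope sketch. The 1 Gbps and 2.5 Gbps bitrates come from the paragraph above; the 60 and 90 fps frame rates are assumptions added purely for illustration.

```python
# Back-of-the-envelope data budget for the VR bitrates quoted above.
# The 1 Gbps (smooth) and 2.5 Gbps (interactive) figures come from the text;
# the 60/90 fps frame rates are assumptions for illustration only.

BITRATES_GBPS = {"smooth 4K-equivalent VR": 1.0, "interactive VR": 2.5}
FRAME_RATES_FPS = (60, 90)

for label, gbps in BITRATES_GBPS.items():
    bytes_per_sec = gbps * 1e9 / 8          # convert bits/s to bytes/s
    print(f"{label}: {bytes_per_sec / 1e6:.0f} MB transferred every second")
    for fps in FRAME_RATES_FPS:
        per_frame_mb = bytes_per_sec / fps / 1e6
        print(f"  at {fps} fps, roughly {per_frame_mb:.1f} MB per frame must "
              f"arrive within one {1000 / fps:.1f} ms frame interval")
```

With a 10 ms round-trip budget sitting on top of multi-megabyte frames, it is easy to see why these applications are looking for 5G-class capacity rather than incremental upgrades.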
The WAN consists of a network stack and a security stack, both of which have gone through several phases of evolution. On the networking side, we began with the router, introduced WAN optimization, and then moved to edge SD-WAN. On the security side, a number of firewall generations have led to network security-as-a-service. We have now advanced to another stage, one better suited to today’s environment: the convergence of network and security in the cloud.
For some, the network and security trends have been thought of in silos. However, the new market category of secure access service edge (SASE) challenges this thinking and recommends a converged, cloud-delivered service.
In this day and age, demands on networks come from a variety of sources: internal end users, external customers, and changes in application architecture. Such demands put pressure on traditional architectures.
To deal effectively with these demands, the network domain must become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point is that networks still rely on manual operations and lack fabric-wide automation. This must be addressed if organizations are to roll out new products and services ahead of the competition.
There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then, Barracuda has highlighted SASE in a recent PR update, and Zscaler discussed it on its earnings call. Most recently, Cato Networks announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.
Microsoft has introduced a new virtual WAN as a competitive differentiator and is getting enough traction that AWS and Google may follow. At present, Microsoft is the only company to offer a virtual WAN of this kind. This made me curious to discover the highs and lows of this technology, so I sat down with Sorell Slaymaker, Principal Consulting Analyst at TechVision Research, to discuss it. The following is a summary of our discussion.
But before we proceed, let’s gain some understanding of cloud connectivity.
Cloud connectivity has evolved over time. When the cloud was introduced about a decade ago, an enterprise would connect directly to what's known as a cloud service provider (CSP). Over the last 10 years, however, many providers like Equinix have started to offer carrier-neutral colocation, so there is now the opportunity to meet a variety of cloud companies in a carrier-neutral colocation facility. On the other hand, cloud connectivity still has certain limitations.
Actions speak louder than words, and reliable actions build lasting trust in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in that house, correct? Now, what if that wall were dismantled? You might start to feel your security is under threat, since anyone could have easy access to your house.
Traditional security products work much the same way: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust could even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on SDP (software-defined perimeter).
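To make the “only trusted individuals can even see your house” idea concrete, here is a minimal, self-contained sketch of the concept. It is not any vendor’s implementation: the client identities, the HMAC “knock”, and the service table are hypothetical stand-ins for the mutual TLS and single-packet-authorization mechanisms real SDP products use.

```python
# Minimal sketch of the "you can't even see the house" idea behind SDP.
# A controller hands out a shared secret to trusted clients; the gateway
# only acknowledges a service to requests carrying a valid HMAC "knock".
# All names here are hypothetical; real SDP products use mutual TLS,
# single-packet authorization, and a separate controller/gateway split.
import hmac, hashlib, os

SECRET = os.urandom(32)              # provisioned out-of-band to trusted clients
SERVICES = {"crm": "10.0.5.20:443"}  # services hidden behind the gateway

def sign_knock(client_id: str, service: str, secret: bytes) -> str:
    msg = f"{client_id}|{service}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def gateway(client_id: str, service: str, knock: str) -> str:
    expected = sign_knock(client_id, service, SECRET)
    if not hmac.compare_digest(knock, expected):
        return ""                    # silently drop: the service stays invisible
    return SERVICES.get(service, "")

# A trusted client holding the secret gets an answer...
print(gateway("alice", "crm", sign_knock("alice", "crm", SECRET)))
# ...while an attacker probing blindly gets nothing back at all.
print(repr(gateway("mallory", "crm", "0" * 64)))
```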
Networking has gone through various transformations over the last decade. In essence, the network has become complex and hard to manage using traditional mechanisms. There is now a significant need to design and integrate devices from multiple vendors and to employ new technologies, such as virtualization and cloud services.
As a result, every network is a unique snowflake; you will never come across two identical networks. The products offered by the vendors act as building blocks for engineers to design solutions that work for them. If we all had simple and predictable networks, this would not be a problem. But there are no global references to follow, and designs vary from organization to organization. This leads to network variation even among organizations offering similar services.
We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from a management perspective, is now redundant. Today, the users and the data are omnipresent.
Customer expectations have surged because of this evolution. There is now a greater expectation of high-quality service and far less customer patience. In the past, one could patiently wait 10 hours to download content, but that is certainly not the scenario today. Nowadays we have high expectations and high performance requirements, yet there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, bufferbloat and a list of other performance-related problems that I wrote about on Network Insight. [Disclaimer: the author is employed by Network Insight.]
Deploying a zero-trust software-defined perimeter (SDP) architecture is not about completely replacing virtual private network (VPN) technologies and firewalls. By and large, the firewall demarcation points that mark the inside and the outside are not going away anytime soon, and the VPN concentrator will also hold its position for the foreseeable future.
A rip and replace is a very aggressive deployment approach regardless of the age of the technology. And while SDP is new, care should be taken when choosing the right vendor. An SDP adoption should be a gradual migration process as opposed to a one-off rip and replace. As I wrote on Network Insight [Disclaimer: the author is employed by Network Insight], while SDP is a disruptive technology, after discussing it with numerous SDP vendors I have found that the current SDP landscape tends to be driven by specific use cases and projects, as opposed to a technology that has to be implemented globally. To start with, you can implement SDP for only certain user segments.
Networks were initially designed to create internal segments separated from the external world by a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. This is still the foundation for most networking professionals, even though a lot has changed since the design’s inception.
More often than not, the fixed perimeter consists of a number of network and security appliances, creating a service-chained stack and resulting in appliance sprawl. The appliances a user must pass through to reach the internal LAN vary, but the stack would typically consist of global load balancers, an external firewall, a DDoS appliance, a VPN concentrator, an internal firewall and, eventually, the LAN segments.
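As a rough illustration of that service-chained stack, here is a small sketch that walks inbound traffic through the appliances listed above. The per-hop latency figures are invented placeholders, included only to show how cost accumulates across the chain.

```python
# Illustrative model of the appliance sprawl described above: inbound
# traffic is service-chained through a fixed stack before it reaches the LAN.
# The ordering mirrors the stack listed in the text; the latency figures
# are made-up placeholders purely to show how per-hop cost accumulates.
PERIMETER_STACK = [
    ("global load balancer", 2.0),   # assumed per-hop latency in ms
    ("external firewall",    1.5),
    ("DDoS appliance",       1.0),
    ("VPN concentrator",     3.0),
    ("internal firewall",    1.5),
]

def traverse(stack):
    total = 0.0
    for name, cost_ms in stack:
        total += cost_ms
        print(f"passed {name:22s} cumulative latency ~{total:.1f} ms")
    print("reached LAN segment")

traverse(PERIMETER_STACK)
```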
In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud presence alongside on-premises data centers and remote site locations.
The vast majority of organizations that I have consulted with have over 10 locations, and it is common to have headquarters in both the US and Europe, along with remote sites spanning North America, Europe, and Asia.
A WAN transformation project requires this diversity to be taken into consideration when choosing the SD-WAN vendor best able to satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more aspects to consider.
As the cloud service providers and search engines scaled their businesses, they quickly ran into problems managing their networking equipment. Ultimately, after a few rounds of trying to get the network vendors to understand their problems, these hyperscale network operators revolted.
Primarily, the operators were looking for a level of control in managing their networks that the network vendors couldn’t offer. The revolt blazed the path that introduced open networking and network disaggregation to the world of networking. Let us first look at disaggregation, followed by open networking.
Disaggregation
The concept of network disaggregation involves breaking up the vertically integrated networking stack into individual pieces, where each piece can be used in the best way possible. The hardware can be separated from the software, along with open or closed IP routing suites. This enables network operators to use the best of breed for the hardware, the software and the applications.
I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.
The first wave is networking moving into software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, which removes the barriers to entry for new product innovation and rapid market access. This is especially evident in the SD-WAN market rush.
BGP (Border Gateway Protocol) is considered the glue of the internet. Looking ahead, however, one question remains unanswered: will BGP ever have the ability to route on the best path rather than the shortest path?
There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as sending out pings to monitor the network and then modifying BGP attributes, such as AS-path prepending, to make BGP do performance-based routing (PBR). However, this falls short in a number of ways.
The problem with BGP is that it is not capacity- or performance-aware, and therefore its decisions can sink an application’s performance. The attributes BGP relies upon for path selection, for example AS-path length and multi-exit discriminators (MEDs), do not always correlate with the network’s performance.
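Here is a toy sketch of that mismatch between shortest AS-path and best-performing path. The prefix, AS numbers and latency figures are invented for illustration; a real performance-aware selector would feed in live measurements rather than static values.

```python
# Toy illustration of why shortest AS-path (what BGP prefers) is not
# necessarily the best-performing path. The routes and latency numbers
# are invented for the example; real deployments would use live probe
# measurements rather than static values.
routes = [
    {"prefix": "203.0.113.0/24", "as_path": [64500, 64510],        "rtt_ms": 180},
    {"prefix": "203.0.113.0/24", "as_path": [64500, 64520, 64530], "rtt_ms": 35},
]

bgp_choice  = min(routes, key=lambda r: len(r["as_path"]))  # BGP: fewest AS hops
perf_choice = min(routes, key=lambda r: r["rtt_ms"])        # performance-aware choice

print("BGP would pick AS path", bgp_choice["as_path"], "at", bgp_choice["rtt_ms"], "ms")
print("A performance-aware selector would pick", perf_choice["as_path"],
      "at", perf_choice["rtt_ms"], "ms")
```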
The transformation to the digital age has introduced significant changes to cloud and data center environments and has compelled organizations to innovate more quickly than ever before. This, however, brings both advantages and disadvantages.
The network and security need to keep up with this rapid pace of change. If you cannot match the speed of the digital age, then bad actors will ultimately become a hazard. Therefore, organizations must move to a zero-trust environment: default deny, with least-privilege access. In today’s evolving digital world, this is the primary key to success.
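As a minimal sketch of what “default deny, with least-privilege access” means in practice, consider the toy policy check below. The identities, resources and actions are hypothetical, and a real zero-trust deployment would also evaluate device posture and context and continuously re-authorize rather than consult a static table.

```python
# Minimal sketch of a default-deny, least-privilege policy check.
# The identities, resources, and policy entries are hypothetical; a real
# zero-trust deployment would evaluate device posture and context and
# continuously re-authorize rather than use a static table.
POLICY = {
    # (identity, resource): allowed actions -- anything absent is denied
    ("alice@example.com", "payroll-db"):     {"read"},
    ("ci-runner",         "artifact-store"): {"read", "write"},
}

def authorize(identity: str, resource: str, action: str) -> bool:
    allowed = POLICY.get((identity, resource), set())  # default deny
    return action in allowed

print(authorize("alice@example.com", "payroll-db", "read"))   # True
print(authorize("alice@example.com", "payroll-db", "write"))  # False: least privilege
print(authorize("mallory", "payroll-db", "read"))             # False: not in policy
```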
With the introduction of cloud, BYOD, IoT and virtual offices scattered around the globe, the traditional architectures not only hold us back in terms of productivity but also create security flaws that leave gaps for compromise.
The network and security architectures that are commonly deployed today are not fit for today's digital world. They were designed for another time, a time now past. This may sound daunting, and it indeed is.
What we had in the past
Traditionally, we have had a static network and security perimeter with clear network and security demarcation points. In terms of security, the perimeter-based approach never worked. It did, however, create a multi-billion-dollar industry. But the fact is, it neither did, nor will it, provide competent security.
The internet was designed to connect things easily, but a lot has changed since its inception. Users now expect the internet to find the “what” (i.e., the content), yet the current communication model is still focused on the “where.”
The internet has evolved to be dominated by content distribution and retrieval, but networking protocols still focus on the connection between hosts, which surfaces many challenges.
The most obvious solution is to replace the “where” with the “what,” and this is what Named Data Networking (NDN) proposes. NDN uses named content, as opposed to host identifiers, as its abstraction.
How traditional IP works
To deliver packets from a source to a destination, IP performs two phases of operation. The first phase is the routing plane, also known as the control plane. In this phase the routers share routing updates and select the best paths to construct the forwarding information base (FIB). The second phase is the forwarding plane, also known as the data plane. This is the phase where the packet is forwarded to the next hop based on a FIB lookup.
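A minimal sketch of that forwarding-plane step, assuming a tiny hand-written FIB: look the destination up with a longest-prefix match and return the next-hop interface. The prefixes and interface names are invented, and real routers use optimized structures such as tries or TCAM rather than a linear scan.

```python
# Sketch of the forwarding-plane step described above: look up the
# destination in the FIB with a longest-prefix match and forward to the
# next hop. The prefixes and next hops are invented for illustration;
# real routers use optimized structures (tries/TCAM), not a linear scan.
import ipaddress

FIB = {
    "10.0.0.0/8":  "ge-0/0/1",
    "10.1.0.0/16": "ge-0/0/2",
    "0.0.0.0/0":   "ge-0/0/0",   # default route
}

def lookup(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    matches = [ipaddress.ip_network(p) for p in FIB if dst in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)   # longest prefix wins
    return FIB[str(best)]

print(lookup("10.1.2.3"))    # -> ge-0/0/2 (the /16 beats the /8 and the default)
print(lookup("192.0.2.10"))  # -> ge-0/0/0 (falls through to the default route)
```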