Archive

Category Archives for "Ed Koehler’s Blog"

Privacy and Visibility – The dichotomy of encryption and inspection

The encoding or encryption of communications and information is a very old practice. The concept is relatively simple. One of the easiest examples is simply to reverse the alphabet: A for Z, B for Y, and so on. The reverse function is the ‘key’ to deciphering the message. We needn’t go into the detailed but fascinating history of the evolution of cryptography and the concept and method of the key. Instead we only need to touch on a few key historical milestones and how they have impacted the world today.
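To make the idea concrete, here is a minimal sketch in Python of the reversed-alphabet substitution just described (the function name and sample message are illustrative only). Applying the same mapping twice recovers the original text, which is exactly why the reverse function serves as the key.

```python
import string

# Map each letter to its mirror image in the alphabet (A->Z, B->Y, ...).
ALPHABET = string.ascii_uppercase
TABLE = str.maketrans(ALPHABET, ALPHABET[::-1])

def reverse_cipher(text: str) -> str:
    """Encode or decode a message with the reversed alphabet."""
    return text.upper().translate(TABLE)

message = "ATTACK AT DAWN"
ciphertext = reverse_cipher(message)           # "ZGGZXP ZG WZDM"
assert reverse_cipher(ciphertext) == message   # the cipher is its own inverse
```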

Cryptography is indeed an old practice. The ancient Spartans would write encrypted messages on strips of cloth that were wrapped around wooden staffs of various widths. They would then send just the cloth strip with the courier. Only if the right staff was used could the message be deciphered. Here the ‘key’ is the width of the staff. That information would either be known or communicated to the receiver ahead of time so that they would have the right staff on hand to decipher the message. Obviously, anyone who intercepted both the cloth strip and the information about the staff’s width could decipher the message as well. Continue reading
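In modern terms, the staff implements a simple columnar transposition, and the staff’s width is the key. The sketch below is a rough, hypothetical model of the idea rather than a historical reconstruction; the padding character and function names are my own.

```python
def scytale_encrypt(message: str, width: int) -> str:
    # Pad the message so it wraps around the staff in even rows.
    padded_len = -(-len(message) // width) * width  # ceiling division
    message = message.ljust(padded_len, "_")
    rows = [message[i:i + width] for i in range(0, len(message), width)]
    # Reading down each column simulates unwinding the cloth strip.
    return "".join(row[col] for col in range(width) for row in rows)

def scytale_decrypt(ciphertext: str, width: int) -> str:
    # Re-wrapping is the same transposition with the other dimension as key.
    return scytale_encrypt(ciphertext, len(ciphertext) // width).rstrip("_")

ct = scytale_encrypt("HELPMEIAMUNDERATTACK", 4)   # "HMMETEEURALINACPADTK"
assert scytale_decrypt(ct, 4) == "HELPMEIAMUNDERATTACK"
```

Without the right width, the same strip of letters reads as gibberish, which is the whole point of the device.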

Advanced Persistent Threats


Coming to a network near you, or maybe your network!

 

There are things that go bump in the night and that is all they do. But once in a while things not only go bump in the night, they can hurt you. Sometimes they make no bump at all! They hurt you before you even realize that you’re hurt. No, we are not talking about monsters under the bed or real home intruders; we are talking about Advanced Persistent Threats. This is a major trend that has been occurring at a terrifying pace across the globe. It targets not the typical servers in the DMZ or the Data Center, but the devices at the edge. More importantly, it targets the human at the interface. In short, the target is you.

Now I say ‘you’ to highlight the fact that it is you, the user, who is the weakest link in the security chain. And like all chains, the security chain is only as good as its weakest link. I also want to emphasize that it is not you alone, but myself, or anyone, or any device for that matter, that accesses the network and uses its Continue reading

Establishing a confidential Service Boundary with Avaya’s SDN Fx


 

Security is a global requirement. It is also global in the fashion in which it needs to be addressed. But the truth is, regardless of the vertical, the basic components of a security infrastructure do not change. There are firewalls, intrusion detection systems, encryption, networking policies and session border controllers for real-time communications. These components also plug together in rather standard fashions, or service chains, that look largely the same regardless of the vertical or vendor in question. Yes, there are some differences, but by and large they are minor.
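As a rough illustration, a service chain can be thought of as an ordered pipeline of inspection functions, any one of which can reject traffic. The sketch below is purely hypothetical; the function names, ports and packet representation are illustrative and not tied to any vendor’s API.

```python
from typing import Callable, Iterable

Check = Callable[[dict], bool]  # returns True if traffic may proceed

def firewall(pkt: dict) -> bool:
    # Hypothetical policy: allow only HTTPS and SIP-over-TLS.
    return pkt.get("dst_port") in {443, 5061}

def intrusion_detection(pkt: dict) -> bool:
    # Hypothetical signature match against the payload.
    return b"attack-signature" not in pkt.get("payload", b"")

def session_border_controller(pkt: dict) -> bool:
    # Real-time traffic must belong to a registered session.
    return pkt.get("dst_port") != 5061 or pkt.get("registered", False)

SERVICE_CHAIN: Iterable[Check] = (
    firewall, intrusion_detection, session_border_controller)

def admit(pkt: dict) -> bool:
    """Pass traffic through each stage in order; any stage may drop it."""
    return all(check(pkt) for check in SERVICE_CHAIN)

print(admit({"dst_port": 443, "payload": b"GET /index.html"}))  # True
print(admit({"dst_port": 5061, "registered": False}))           # False
```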

So the question begs: why is security so difficult? As it turns out, the difficulty is not really in the complexities of the technology components themselves, although they certainly have their share. The real challenge is deciding exactly what to protect, and here each vertical will be drastically different. Fortunately, the methods for identifying confidential data or critical control systems are also rather consistent, even though the data and applications being protected may vary greatly.

In order for micro-segmentation as a security strategy to succeed, you have to know where the data you need to protect resides. You also need to know how it flows through Continue reading

What’s the Big Deal about Big Data?


It goes without saying that knowledge is power. It gives one the power to make informed decisions and avoid miscalculations and mistakes. In recent years the definition of knowledge has changed slightly. This change is the result of increases in the ease and speed of computation, as well as the sheer volume of data that these computations can be exercised against. Hence, it is no secret that the rise of computers and the Internet has contributed significantly to enhancing this capability.
The term that is often bandied about is “Big Data”. This term has gained a certain mystique that is comparable to cloud computing. Everyone knows that it is important. Unless you have been living in a cave, you most certainly have at least read about it. After all, if such big names as IBM, EMC and Oracle are making it a focus, then it must have some sort of importance to the industry and market as a whole. When pressed for a definition of what it is, however, many folks will often struggle. Note that the issue is not that it deals with the computation of large amounts of data as its name implies, but more so that Continue reading

‘Dark Horse’ Networking – Private Networks for the control of Data

Next Generation Virtualization Demands for Critical Infrastructure and Public Services

 

Introduction

In recent decades, communication technologies have advanced significantly. These technologies now touch almost every part of our lives, sometimes in ways that we do not even realize. As this evolution continues, many systems that have previously been treated as discrete are now networked. Examples of these systems are power grids, metro transit systems, water authorities and many other public services.

While this evolution has brought a very large benefit to both those managing and those using the services, there is the rising spectre of security risks and a precedent of documented attacks on these systems. This has brought about strong concerns about this convergence and what it portends for the future. This paper will begin by discussing these infrastructure environments, which, while varied, have surprisingly common theories of operation and actually use the same set or class of protocols. Next we will take a look at the security issues and some of the reasons why they exist. We will provide some insight into some of the attacks that have occurred and what impacts they have had. Then we will discuss the traditional Continue reading

The evolution of E-911

NG911 and the evolution of ESInet

 

If you live within North America and have ever been in a road accident or had a house fire, then you are one of the fortunate ones who had the convenience and assurance of 911 services. I am old enough to remember how these types of things were handled prior to 911. Phones (dial phones!) had dozens of stickers for Police, Fire and Ambulance. If there were no stickers, then one had to resort to a local phone book that hopefully had an emergency services section. To think of how many lives have been saved by this simple three-digit number is simply mind-boggling. Yet to a large degree we all now take this service for granted and assume it will just work as it always has, regardless of the calling point. We also seem to implicitly assume that all of the next-generation capabilities and intelligence available today can just automatically be utilized within its framework. This article is intended to provide a brief history of 911 services and how they have evolved up to the current era of E911. It will also talk about the upcoming challenges Continue reading

How would you like to do IP Multicast without PIM or RPs? Seriously, let’s use Shortest Path Bridging and make it easy!

 

Why do we need to do this? What’s wrong with today’s network?

Anyone who has deployed or managed a large PIM multicast environment will relate to the answer to this question. PIM works on the assumption of an overlay protocol model. PIM stands for Protocol Independent Multicast, which means that it can utilize any IP routing table to establish a reverse path forwarding tree. These routes can be created by any independent unicast routing protocol, such as RIP or OSPF, or even be static routes or combinations thereof. In essence, there is an overlay of the different protocols to establish a pseudo-state within the network for the forwarding of multicast data. As any network engineer who has worked with large PIM deployments will attest, they are sensitive beasts that do not lend themselves well to topology changes or expansions of the network delivery system. The key word in all of this is the term ‘state’. If it is lost, then the tree truncates and the distribution service for that length of the tree is effectively lost. Consequently, changes need to be done carefully and be well tested and planned. And this is all due to the fact that the Continue reading
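To see why PIM leans so heavily on the unicast table, consider the reverse path forwarding (RPF) check at its core: a multicast packet is accepted only if it arrives on the interface the router would use to reach the source. The sketch below is a deliberately simplified model, with a hypothetical routing table and an exact-match lookup in place of real longest-prefix matching.

```python
import ipaddress

# Hypothetical unicast routes (learned via RIP, OSPF, or static config):
# prefix -> the interface used to send traffic toward that prefix.
UNICAST_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.2.0.0/16"): "eth1",
}

def rpf_check(source_ip: str, arrival_interface: str) -> bool:
    """Accept multicast only if it arrived on the interface that leads
    back to the source; otherwise the packet is dropped."""
    source = ipaddress.ip_address(source_ip)
    for prefix, interface in UNICAST_TABLE.items():
        if source in prefix:
            return interface == arrival_interface
    # No route back to the source: the check fails and the tree truncates.
    return False

print(rpf_check("10.1.5.9", "eth0"))  # True: matches the reverse path
print(rpf_check("10.1.5.9", "eth1"))  # False: wrong interface, dropped
```

Because the check depends entirely on unicast state, any loss or churn in that underlying table is what truncates the tree, which is exactly the fragility described above.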

Seamless Data Migration with Avaya’s VENA framework

There are very few technologies that come along which actually make things easier for IT staff. This is particularly true with new technology introductions. Very often, the introduction of a new technology is problematic from a service uptime perspective. With networking technologies in particular, new introductions often involve large amounts of intermittent downtime and a huge amount of human resources to properly plan the outages and migration processes to assure minimal downtime. More so than any other, network core technologies tend to be the most disruptive due to their very nature and function. MPLS is a good example. It requires a full redesign of the network infrastructure, as well as very detailed design within the network core itself, to provide connectivity. While some argue that things like MPLS-TP help to alleviate this, it is not without cost – and the disruption remains.

IEEE 802.1aq, or Shortest Path Bridging (SPB for short), is one of those very few technologies that can be introduced in a very seamless fashion with minimal disruption or downtime. It can also be introduced with minimal redesign of the existing network if so desired. A good case in point is a recent Continue reading

Next Generation Mesh Networks

 

The proper design of a network infrastructure should provide a number of key traits that are very desirable. First, the infrastructure needs to provide redundancy and resiliency without a single point of failure. Second, the infrastructure must be scalable in both geographic reach as well as bandwidth and throughput capacity.

Ideally, as one facet of the network is improved, such as resiliency, bandwidth and throughput capacity should improve as well. Certain technologies work on the premise of an active/standby method. In this manner, there is one primary active link; all other links are in a standby state and will only become active upon the primary link’s failure. Examples of this kind of approach are 802.1d spanning tree and its descendants, rapid and multiple spanning tree, in the layer 2 domain, and non-equal-cost distance vector routing technologies such as RIP.
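The contrast between the two models can be sketched in a few lines. The following is purely illustrative, with hypothetical link names: an active/standby selector leaves every link but one idle, while an active/active selector hashes flows across all healthy links, so capacity and resiliency improve together.

```python
import hashlib

LINKS = ["link-1", "link-2", "link-3", "link-4"]

def active_standby(healthy_links: list) -> str:
    # Only the first healthy link ever carries traffic; the rest sit idle
    # until a failure forces a switchover.
    return healthy_links[0]

def active_active(flow_id: str, healthy_links: list) -> str:
    # Hash each flow onto a link so that all links carry traffic at once,
    # while packets of a given flow stay in order on a single link.
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return healthy_links[digest % len(healthy_links)]

print(active_standby(LINKS))                           # always link-1
print(active_active("10.0.0.1->10.9.0.2:443", LINKS))  # spread across all four
```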

While these technologies do provide resiliency and redundancy, they do so on the assumption that half of the network infrastructure is unusable and that a state of failure needs to occur in order to leverage those resources. As a result, it becomes highly desirable to implement active/active resiliency Continue reading

IPv6 Deployment Practices and Recommendations

Communications technologies are evolving rapidly. This pace of evolution, while slowed somewhat by economic circumstances, still moves forward at a dramatic pace. This reflects the fact that, while the ‘bubble’ of the 1990s is past, society and business as a whole have arrived at the point where communications technologies and their evolution are a requirement for proper and timely interaction with the human environment.

This has profound impact on a number of foundations upon which the premise of these technologies rests. One of the key issues is that of the Internet Protocol, commonly referred to simply as ‘IP’. The current widely accepted version of IP is version 4. The protocol, referred to as IPv4, has served as the foundation of the current Internet since its practical inception in the public arena. As the success of the Internet attests, IPv4 has performed its job well and has provided the evolutionary scope to adapt over the twenty years that have transpired. Like all technologies, though, IPv4 is reaching the point where further evolution will become difficult and cumbersome, if not impossible. As a result, IPv6 was created as a next-generation evolution of the IP protocol to address these issues.
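The scale of the change is easy to state in numbers: IPv4 offers a 32-bit address space, IPv6 a 128-bit one. A quick worked example with Python’s standard ipaddress module:

```python
import ipaddress

ipv4_space = 2 ** 32    # about 4.3 billion addresses in total
ipv6_space = 2 ** 128   # about 3.4e38 addresses

print(f"IPv4 addresses: {ipv4_space:,}")    # 4,294,967,296
print(f"IPv6 addresses: {ipv6_space:.3e}")  # 3.403e+38

# Even a single /48 site assignment holds 65,536 standard /64 subnets,
# each of which is larger than the entire IPv4 address space.
site = ipaddress.ip_network("2001:db8::/48")  # within the documentation prefix
print(len(list(site.subnets(new_prefix=64)))) # 65536
```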

Continue reading

Storage as a Service – Clouds of Data

Storage as a Service (SaaS) – How in the world do you?

There is a very good reason why cloud storage has so much hype. It simply makes sense. It has an array of attractive use case models. It has a wide range of potential scope and purpose making it as flexible as the meaning of the bits stored. But most importantly, it has a good business model that has attracted some major names into the market sector.

If you read the blog posts and articles, most will say that Cloud Storage will never be accepted due to the lack of security and accountability. The end result is that many CISOs and CIOs have decided that it is just too difficult to prove due diligence for compliance. As a result, they have not widely embraced the cloud model. Now while this is partly correct, it is not the whole truth. As a matter of fact, most folks are actually using Cloud Storage within their environment. They just don’t think of it as such. This article is intended to provide some insight into the use models of SaaS as well as some of the technical and business considerations that need to be made in Continue reading

Infiniband and its unique potential for Storage and Business Continuity

It’s one of those technologies that many have only had cursory awareness of. It is certainly not a ‘mainstream’ technology in comparison to IP, Ethernet or even Fibre Channel. Those who have awareness of it know Infiniband as a high-performance compute clustering technology that is typically used for very short interconnects within the Data Center. While this is true, its uses and capacity have been expanded into many areas that were once thought to be out of its realm. In addition, many of the distance limitations that have prevented its expanded use are being extended, in some instances to rather amazing distances that rival the more Internet-oriented networking technologies. This article will look closely at this networking technology from both historical and evolutionary perspectives. We will also look at some of the unique solutions that are offered by its use.

Not your mother’s Infiniband

The InfiniBand (IB) specification defines the methods and architecture of the interconnect for the I/O subsystems of the next generation of servers, otherwise known as compute clustering. The architecture is based on a serial, switched fabric that currently defines link bandwidths between 2.5 and 120 Gbits/sec. It effectively resolves the Continue reading
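For context on where the 2.5 to 120 Gbit/s range comes from: InfiniBand links aggregate lanes (1x, 4x or 12x) at a per-lane signalling rate (2.5 Gbit/s for SDR, 5 for DDR, 10 for QDR), so the extremes are 1x SDR and 12x QDR. A small sketch of that arithmetic follows; these are raw signalling rates, and the early generations also use 8b/10b encoding, so the usable data rate is lower.

```python
# Raw per-lane signalling rates for the early InfiniBand generations.
LANE_RATES_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
LINK_WIDTHS = (1, 4, 12)  # lanes that may be aggregated into one link

for generation, lane_rate in LANE_RATES_GBPS.items():
    for width in LINK_WIDTHS:
        print(f"{width}x {generation}: {lane_rate * width:g} Gbit/s raw")
# Output ranges from 1x SDR at 2.5 Gbit/s up to 12x QDR at 120 Gbit/s,
# matching the span quoted from the specification above.
```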

Data Storage: The Foundation & potential Achilles Heel of Cloud Computing

In almost anything that you read about Cloud Computing, the statement that it is ‘nothing new’ is usually made at some point. The statement then goes on to qualify Cloud Computing as a cumulative epiphenomenon that serves more as a single label for a multi-faceted substrate of component technologies than as a single new technology paradigm. All of them used together comprise the constitution of what could be defined as a cloud. As the previous statement makes apparent, the definition is somewhat nebulous. Additionally, I could provide a long list of the component technologies within the substrate that could ‘potentially’ be involved. Instead, I will filter out the majority and focus on a subset of technologies that could be considered ‘key’ components to making cloud services work.

If we were to try to identify the most important component out of this substrate, most would agree that it is something known as virtualization. In the cloud, virtualization occurs at several levels. It can range from ‘what does what’ (server & application virtualization) to ‘what goes where’ (data storage virtualization) to ‘who is where’ (mobility and virtual networking). When viewed as such, one could even come to the conclusion Continue reading