When Docker launches a Linux container it will, by default, assign it a private IP address out of RFC 1918 space. It connects this container to the host OS using a bridge interface (docker0). Connectivity between the outside world and the container depends on NAT.
Outbound traffic is NATed using the host’s IP address. Inbound traffic requires explicit port mapping rules that map a port on the host to a port in the container. Given that one typically runs multiple containers on the same host, there needs to be a mapping between a host port (in the dynamic port range) and a service port on the container.
For example, the HTTP service port (80) in container-1 will be mapped to host port 49153 while container-2 would see its HTTP port mapped to host port 49154. Ports that are not explicitly mapped cannot receive incoming traffic. Also, the externally visible address and port for the same service will differ from container to container, both within a host and across hosts (not very ‘cloudy’).
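As a concrete illustration, the commands below start a container with an explicit port mapping and then inspect the result (a minimal sketch; the container name, host port and nginx image are made-up values, not taken from the post):

# map host port 49153 to the container’s HTTP port; without -p the
# service is not reachable from outside the host
docker run -d -p 49153:80 --name web-1 nginx

# show the host-port mapping for this container
docker port web-1

# the corresponding DNAT rules live in the host’s NAT table
iptables -t nat -nL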
This is the reason why using a network virtualization solution such as OpenContrail is so appealing. OpenContrail replaces Docker’s networking implementation, which can be disabled by using --net=none. It provides each container its own IP address in Continue reading
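For completeness, disabling Docker’s own networking is just a flag on the run command; the virtualization layer is then responsible for plumbing an interface into the container (a sketch; image and container name are made-up values):

# the container starts with only a loopback interface; an external system
# such as OpenContrail then attaches and addresses an interface for it
docker run -d --net=none --name web-2 nginx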
I initially joined Juniper Networks in 2001 and over the years I’ve had the opportunity to establish a relationship with a few of the field people, especially in Europe, where I just happen to know a lot of the old timers that built up Juniper’s business in the region.
Over the past few weeks I’ve had a couple of conversations with some of them that forced me to try to distill my perspective on the current trends in the networking industry into a small set of observations. Often the question that starts the conversation is how I see the applicability of OpenStack and OpenContrail to the key networking markets: carrier, enterprise and cloud/content provider. The question often implies a certain dose of healthy skepticism.
OpenStack and OpenContrail are tools; the evolution that we are seeing at the moment in the industry is deeper than that.
The traditional workflow for a network deployment is to go through architecture, design and operations phases. Traditionally the architecture group selects the top-level goals and the technology approach for the deployment and produces an architecture document; from that document the design team then starts working on qualification of equipment, a detailed design and an operational guide; when Continue reading
I’m often asked by some of my colleagues at Juniper, as well as by potential customers, whether OpenContrail is applicable to the enterprise virtualization market. This market is today dominated by VMware, while OpenContrail has chosen to focus its energy on OpenStack. The question often comes in the form of whether I see enterprises adopting OpenStack for virtualization. The answer is, of course, “no”.
To quote an analyst report, “The shift to SaaS is the leading agent of change” in enterprise I.T. This is the main driver of transformation, not OpenStack. While enterprises used to buy software packages and install them on premises, this is now becoming a quaint way of doing business. I.T. management and operations, like just about everything else, are more efficient at scale. It is simple to understand that it is cheaper to administer 1000 instances of a CRM application “as-a-service” than for 1000 enterprises to do so themselves.
It is also intuitive that the organization that developed a particular software application is the one that can most effectively administer, manage and maintain it. From an economic perspective, save for some exceptions, if an Continue reading
Several articles, including one in the Wall Street Journal, hit the press last week regarding RedHat’s policy of only supporting RedHat guests running on RedHat Linux, VMware or Hyper-V hosts.
While this policy had probably been around for a while, several RedHat customers I work with have recently changed their deployment plans towards dual-hypervisor solutions (Ubuntu + RHEL) in order to be able to run RHEL guests under support.
RedHat seems to be using this tactic to stem its market-share loss in the virtualization and OpenStack hypervisor space. In a blog post, RedHat seems to imply that its competitors providing Linux hosts “cavalierly compile and ship, untested OpenStack offerings”. Ironically, several people I spoke with last week echoed the opinion that RHEL 6.x is rather problematic for a cloud deployment, questioning whether it can be used in production.
One cloud provider I spoke with immediately replied that they had to replace the kernel and KVM versions in their CentOS 6.x installation when I questioned their choice of OS distribution. This seems to match the general consensus of what I hear through the grapevine. I understand that an anecdote is not data but in the Continue reading
At the OpenStack summit in Atlanta this week there was a very interesting phenomenon. Vendors that have traditionally been positioned in the I.T. space seemed to be directing their energy around OpenStack towards the carrier/telecom space, while vendors traditionally in this space were doing the best they could to get beyond it and into non-traditional I.T. deployments.
As an example, Canonical’s booth was primarily advertising their “Carrier Class OpenStack”, and RedHat seemed very interested in NFV, with several senior developers organizing a cross-project NFV subteam to focus on how OpenStack can be a better fit for carrier data-centers.
The traditional telecom vendors, on the other hand, seemed rather less sanguine about the NFV market, at least when it comes to the timelines required to get to production deployments: 2018 seems to be a reasonable target.
I don’t currently have access to market research data, but I would be very curious to take a look at it and at how it is being interpreted. Either the I.T. vendors are over-investing or the traditionally service-provider-focused vendors are under-investing in this space. Cisco, for instance, which is typically quite business savvy, is nowhere to Continue reading
There was a lot happening at the OpenStack summit in Atlanta this week. I got the opportunity to meet several of the most active OpenContrail developers and to evangelize the project to several people that are looking for an OpenStack networking solution that meets their needs.
The buzz on Neutron can be summarized by: the default implementation of Neutron doesn’t work. Many users find that running the neutron service with the l3-agent and dhcp-agent isn’t working out for them: the neutron router is a choke point for traffic, there is no resiliency, and some of the services (e.g. DHCP) are prone to melting down. This seemed to be the rough consensus of those I spoke with (admittedly a rather unscientific sample).
It is easy to explain the advantages of the OpenContrail implementation in this context. By implementing a fully distributed router, as well as distributing the DHCP, metadata proxy and floating-IP functionality, OpenContrail solves most of the current pain points of Neutron.
On the other hand, some of the users I spoke to were concerned with the relatively small size of the community. Hopefully this week’s announcement of the OpenContrail Advisory Board will help alleviate this concern. Continue reading
Docker is a tool that simplifies the process of building container images. One of the issues with OpenStack is that building Glance images is an off-line process. It is often difficult to track the contents of the images, how they were created and what software they contain. Docker also does not depend on virtualization; it creates Linux container images that can be run directly by the host OS. This provides a much more efficient use of memory as well as better performance. It is a very attractive solution for DC operators that run a private infrastructure that serves in-house-developed applications.
In order to run Docker as an OpenStack “hypervisor”, start with devstack on Ubuntu 12.04 LTS. devstack includes a docker installer that will add a Debian repository with the latest version of the docker packages.
After cloning the devstack repository one can issue the command:
tools/docker/install_docker.sh
For OpenContrail there isn’t yet a similar install tool. I built the OpenContrail packages from source and installed them manually, modifying the configuration files in order to have config, control and compute-node components all running locally.
Next, I edited the devstack localrc file to have the following settings:
VIRT_DRIVER=docker
disable_service n-net
enable_service neutron
Continue reading
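With those settings in place, the rest of the workflow followed the usual nova-docker pattern of the time; a hedged sketch (image names and the exact Glance CLI flags are assumptions and varied between releases):

# bring up devstack with the Docker virt driver
./stack.sh

# push a Docker image into Glance so Nova can boot it
docker pull busybox
docker save busybox | glance image-create --is-public=True --container-format=docker --disk-format=raw --name busybox

# boot an instance backed by a Linux container rather than a VM
nova boot --image busybox --flavor m1.tiny container-1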
I’d previously written about how to use OpenContrail with Linux network namespaces. I managed to find the cycles to put together a configuration wrapper that can be used as pre-start and post-stop scripts when starting a daemon out of init.d. The scripts are in a python package available on GitHub.
As in the previous post, the test application I used was the Apache web server. But most Linux services follow a rather similar pattern when it comes to their init scripts.
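The underlying mechanism the wrapper relies on is the standard Linux network-namespace tooling. The sketch below shows the generic pattern with a plain veth pair; in the OpenContrail case the pre-start script instead asks the vRouter agent to plumb and address the namespace interface. Namespace, interface names and addresses here are made-up values:

# create a namespace and a veth pair connecting it to the host
ip netns add apache-ns
ip link add veth-host type veth peer name veth-app
ip link set veth-app netns apache-ns

# address and enable both ends
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec apache-ns ip addr add 10.0.0.2/24 dev veth-app
ip netns exec apache-ns ip link set veth-app up
ip netns exec apache-ns ip link set lo up

# start apache inside the namespace (what a pre-start/init.d wrapper automates)
ip netns exec apache-ns apachectl start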
I started by installing two bare metal servers with the OpenContrail community packages; one server running the configuration service and both of them running both control-node and compute-node components.
For this exercise, the objective was to be able to select the routing for the outbound traffic of a specific application. To that end, I started by creating two virtual-networks: one used for incoming traffic and a separate one to be used for outbound traffic for a specific application. The script network_manage.py can be used for this purpose; it can create and delete virtual-networks as well as add and delete external route targets.
After creating the inbound and the app-specific outbound networks, one can use the netns-daemon-start script to create Continue reading
By now just about everyone has realized that OpenFlow is just vaporware. Technically, there was never any content behind the hype. The arguments used to promote OpenFlow’s revolutionary properties rested simply on ignorance of all the previous technologies that used the exact same design ideas, from SS7 to PBB-TE.
Rumor has it that even the most religious OpenFlow supporters, from Mountain View to Tokyo, have realized that OpenFlow is pretty much dead. If you look back at it, it was a pretty silly set of assumptions to start with: that hardware design, and not software, is the limiting factor in network devices; and that you can define a low-level forwarding language based on the concept of a TCAM match that is going to be efficient across general-purpose CPUs, ASICs and NPUs. Both assumptions can easily be proven to be false.
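For readers who have not looked at it, the “TCAM match” abstraction in question is a match-plus-action rule; through Open vSwitch it looks like the sketch below (bridge name, addresses and port numbers are made-up values):

# install a flow: IP packets destined to 10.1.1.1 are sent out port 2
ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.1.1.1,actions=output:2"

# list the flows currently programmed on the bridge
ovs-ofctl dump-flows br0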
But OpenFlow’s promise was “too good to be true”. So a lot of people preferred to ignore any hard questions in search of the illusory promises of a revolution in networking. By now though, everyone gets it.
As an industry, what is the expected reaction to the OpenFlow hangover? One would expect a more down-to-earth approach. Instead we get “Segment Continue reading
According to news reports, credit card information from Target’s point-of-sale systems was stolen after hackers gained access to the systems of an HVAC contractor that had remote access to Target’s network.
Network virtualization is an important tool that can be used to prevent (or at the very least place barriers in the way of) similar attacks in the future. Increasingly, retail stores deploy multiple applications that must be accessible remotely. HVAC systems are an example, but retail locations also often support signage applications (advertisement panels), wifi guest networks, etc.
Most of these applications will involve a mix of physical systems in the branch, applications running in the data-center, as well as remote access for contractors.
From a network segmentation perspective, it is important to be able to create virtual networks that can span the WAN and the data-center. The obvious technology choice for network virtualization in the branch is to use MPLS L3VPN. It is a technology that is supported in CE devices and that can be deployed over an enterprise- or carrier-managed private network.
The branch office CE will need to be configured with multiple VLANs, one per virtual-network in which physical systems reside. In order to have a Continue reading
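As a rough illustration of the kind of CE configuration involved, here is a minimal Junos-style sketch that places the HVAC VLAN into its own L3VPN routing instance (interface names, VLAN IDs, addresses and route targets are made-up values):

# the HVAC systems sit on VLAN 10 behind the CE
set interfaces ge-0/0/1 vlan-tagging
set interfaces ge-0/0/1 unit 10 vlan-id 10
set interfaces ge-0/0/1 unit 10 family inet address 192.168.10.1/24

# a dedicated VRF keeps HVAC traffic isolated across the L3VPN
set routing-instances hvac instance-type vrf
set routing-instances hvac interface ge-0/0/1.10
set routing-instances hvac route-distinguisher 65000:10
set routing-instances hvac vrf-target target:65000:10
set routing-instances hvac vrf-table-label

Each additional application (signage, guest wifi, and so on) would get its own VLAN unit and routing instance with a distinct route target.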
Recently, one customer prospect asked the Contrail team to build a POC lab using only non-Juniper network gear. The team managed to find a Cisco ASR 900 as a loaner device and we had to make that device work as a data-center gateway.
Typically we use the Juniper MX as the data-center gateway in our clusters. When you use an MX, the system somehow feels dated. It does feel like a 10+ year old design, which it is. But it is incredibly solid and feature-rich. So one ends up accepting that it feels a bit dated as a tradeoff for its “swiss army knife” powers.
The Cisco ASR 900 belongs to the 1k family and runs IOS as a user-space process on Linux. I’d not used IOS in 3 years. My first impression was: this artifact belongs in the Computer History Museum. In fact the CHM (which is a fantastic museum) has several pieces on exhibit that are more recent than 1984, the year IOS debuted.
And IOS (even the version 15 in this loaner box) is a history trip. You get to see a routing table format that predates classless internet addresses; the config still outputs “bgp Continue reading