I first met Elisa Jasinska when she had one of the coolest job titles I've ever seen: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
In our short chat she described some of the tools she’s working on, including an adaptation of pmacct to environments with numerous BGP exit points (more details in her NANOG presentation).
One of the confusing aspects of Internet operation is the difference between the types of providers and the types of peering. There are three primary types of peering, and three primary types of services that service providers actually provide. The figure below illustrates the three different kinds of peering. One provider can agree to provide transit […]
“The most interesting part of building our house was choosing the brick and trim,” explains Randy Cross, Director of Product Line Management at Avaya, “but in Texas with clay soils, the most IMPORTANT element was the foundation.” This podcast explains that much of the SDN hype today centers on the outer elements of SDN – […]
The post Show 202 – Avaya & The Critical Importance of the SDN Underlay – Sponsored appeared first on Packet Pushers Podcast and was written by Ethan Banks.
Original content from Roger's CCIE Blog – tracking the journey towards getting the ultimate Cisco certification: the Routing & Switching Lab Exam.
This Cisco MPLS Tutorial will guide you through building the simple MPLS topology below. It consists of a three-router MPLS core and two remote sites in the same VRF running OSPF as the PE-CE routing protocol. This will be quite a long post as I will be taking you through every single verification along […]
Post taken from CCIE Blog
Original post: MPLS Tutorial – including verifications
Great set of instructions for installing Kali Linux in VMware Player.
Originally posted on Cyber Warrior+:
First we need to download Kali from http://kali.org/downloads/. If you have a 64-bit capable computer (like me), then you probably will want the 64-bit version of Kali for performance reasons. Expand the drop-down menus to find the version you need. Select the 64-bit version ONLY if you have a 64-bit computer.
SDN a Mixed Bag for U.S. Enterprises
Juniper Networks recently surveyed 400 enterprise IT “decision makers” in government, education, financial services and healthcare about their SDN adoption plans. The results were split: while almost 53 percent have plans to deploy SDN, the remaining 48 percent have no plans to adopt the technology.
Nearly three-quarters of those who plan to implement SDN say they will do so within the next year. Their motivations are the perceived SDN benefits of improved network performance and efficiency (26 percent), simplified network operations (19 percent), and cost savings on operations (13 percent). The survey does not delve into how much of these enterprise networks will be SDN-enabled. Indeed, 63 percent of those surveyed said business networks in the next five years will be a mix of software-defined and traditional.
The gap between the perceived benefits and reality on the ground may be inhibiting SDN deployments. The survey respondents cited cost (50 percent), difficulty integrating with existing systems (35 percent), security concerns (34 percent), and lack of skills from existing employees (28 percent) as the top challenges to SDN adoption.
We recently published a program that we wrote in conjunction with our friends at MetaCloud: the VXLAN Flooder, or vxfld. vxfld is the basis of our Lightweight Network Virtualization feature (new with Cumulus Linux 2.2!), as well as MetaCloud’s next-generation OpenStack networking. It enables easy-to-deploy, scalable virtual switched networks built on top of an L3 fabric.
Of course, vxfld is just the latest in a series of contributions! There are projects we’ve written from scratch, such as ONIE, the Open Network Install Environment, which we contributed to the Open Compute Project; Prescriptive Topology Manager, which simplifies the deployment of large L3 networks; and ifupdown2, a rewrite of Debian’s network interface configuration tool that greatly simplifies large, complicated networking configurations.
And then there are our contributions back to the upstream projects that we include in Cumulus Linux. These include (in order of decreasing number of contributions) the Quagga routing protocol suite, the Linux Continue reading
The Moscone Center in San Francisco is a popular place for technical events. Apple’s World Wide Developer Conference (WWDC) is an annual user of the space. Cisco Live and VMworld also come back every few years to keep the location lively. This year, both conferences utilized Moscone to showcase tech advances and foster community discussion. Having attended both this year in San Francisco, I think I can finally state the following with certainty.
It’s time for tech conferences to stop using the Moscone Center.
Let’s face it. If your conference has more than 10,000 attendees, you have outgrown Moscone. WWDC works in Moscone because they cap the number of attendees at 5,000. VMworld 2014 had 22,000 attendees; Cisco Live 2014 had well over 20,000 as well. Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.
The main keynote hall in Moscone North is too small to hold the large number of audience members. In an age where every keynote address is streamed live, that shouldn’t be a problem. Except that people still want to be involved and close to the event. At both Cisco Continue reading
Building a private cloud infrastructure tends to be a cumbersome process. Even if you do it right, you often have to deal with four to six different components: orchestration system, hypervisors, servers, storage arrays, networking infrastructure, and network services appliances.
Read more ...

Last week in Chicago, at the annual SIGCOMM flagship research conference on networking, Arbor collaborators presented some exciting developments in the ongoing story of the IPv6 roll-out. This joint work (full paper here) between Arbor Networks, the University of Michigan, the International Computer Science Institute, Verisign Labs, and the University of Illinois highlighted how both the pace and nature of IPv6 adoption have made a pretty dramatic shift in just the last couple of years. This study is a thorough, well-researched, effective analysis and discussion of numerous published and previously unpublished measurements focused on the state of IPv6 deployment.
The study examined a decade of data reporting twelve measures drawn from ten global-scale Internet datasets, including several years of Arbor data that represents a third to a half of all interdomain traffic. This constitutes one of the longest and broadest published measurements of IPv6 adoption to date. Using this long and wide perspective, the University of Michigan, Arbor Networks, and their collaborators found that IPv6 adoption, relative to IPv4, varies by two orders of magnitude (100x!) depending on the measure one looks at and, because of this, care must really be taken when looking at individual measurements of IPv6. For example, examining only the fraction of IPv6 to IPv4 Continue reading
With the current interest in network automation, it’s imperative that the correct tools are chosen for the right tasks. It should be acknowledged that there isn’t a single silver-bullet approach, and the end solution will be based largely on customer requirements, customer abilities, customer desire to learn and an often overlooked factor: the abilities of the incumbent or the services provider.
The best projects are always delivered with a KISS! Keep It Simple, Stupid.
Note – I have used the term ‘playbook’ as a generic term for a set of automation tasks, commonly also known as a runbook or recipe.
Because a services provider may have delivered an automation project using a bulky generic workflow-automation tool in the past, it does not mean it is the correct approach or skill set for the current and future set of network automation requirements. To illustrate this, let’s take a hypothetical task: hanging a picture on the wall. We have many choices when it comes to this task, for example: Blu-Tack, Sellotape, gaffer tape, masking tape, duct tape or, my favourite, a glue gun! However, the correct way would be to frame Continue reading
Over the last few months, there’s been increased attention on networks and how they interconnect. CloudFlare runs a large network that interconnects with many others around the world. From our vantage point, we have incredible visibility into global network operations. Given our unique situation, we thought it might be useful to explain how networks operate, and the relative costs of Internet connectivity in different parts of the world.
The Internet is a vast network made up of a collection of smaller networks. The networks that make up the Internet are connected in two main ways. Networks can connect with each other directly, in which case they are said to be “peered”, or they can connect via an intermediary network known as a “transit provider”.
At the core of the Internet are a handful of very large transit providers that all peer with one another. These roughly twelve companies are known as Tier 1 network providers. Whether directly or indirectly, every ISP (Internet Service Provider) around the world connects with one of these Tier 1 providers. And, since the Tier 1 providers are all interconnected themselves, from any point on the network you should be able to reach any other point. That's what makes the Internet the Internet: it’s a huge group of networks that are all interconnected.
To be a part of the Internet, CloudFlare buys bandwidth, known as transit, from a number of different providers. The rate we pay for this bandwidth varies from region to region around the world. In some cases we buy from a Tier 1 provider. In other cases, we buy from regional transit providers that either peer with the networks we need to reach directly (bypassing any Tier 1), or interconnect themselves with other transit providers.
CloudFlare buys transit wholesale and on the basis of the capacity we use in any given month. Unlike some cloud services like Amazon Web Services (AWS) or traditional CDNs that bill for individual bits delivered across a network (called "stock"), we pay for a maximum utilization for a period of time (called "flow"). Typically, we pay based on the maximum number of megabits per second we use during a month on any given provider.
Most transit agreements bill the 95th percentile of utilization in any given month. That means you throw out approximately 36 not-necessarily-contiguous hours' worth of peak utilization when calculating usage for the month. Legend has it that in its early days, Google used to take advantage of these contracts by using very little bandwidth for most of the month and then shipping its indexes between data centers, a very high-bandwidth operation, during one 24-hour period. A clever, if undoubtedly short-lived, strategy to avoid high bandwidth bills.
Another subtlety is that when you buy transit wholesale you typically only pay for traffic coming in ("ingress") or traffic going out ("egress") of your network, not both. Generally you pay whichever one is greater.
CloudFlare is a caching proxy so egress (out) typically exceeds ingress (in), usually by around 4-5x. Our bandwidth bill is therefore calculated on egress so we don't pay for ingress. This is part of the reason we don't charge extra when a site on our network comes under a DDoS attack. An attack increases our ingress but, unless the attack is very large, our ingress traffic will still not exceed egress, and therefore doesn’t increase our bandwidth bill.
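To make the billing mechanics described above concrete, here is a minimal Python sketch of 95th-percentile billing applied to the larger of ingress and egress. The traffic samples and the $10/Mbps rate are made-up illustrative numbers (the same round benchmark used later in this post), not CloudFlare's actual data or billing code.

```python
import random

# One sample every 5 minutes for a 30-day month.
SAMPLES_PER_MONTH = 30 * 24 * 12

# Hypothetical traffic samples in Mbps; egress exceeds ingress by roughly
# 4-5x, as for the caching proxy described above.
ingress = [random.uniform(100, 300) for _ in range(SAMPLES_PER_MONTH)]
egress = [random.uniform(500, 1400) for _ in range(SAMPLES_PER_MONTH)]

def ninety_fifth_percentile(samples):
    """Discard the top 5% of samples (432 five-minute samples, roughly
    36 hours' worth) and return the highest remaining value."""
    ordered = sorted(samples)
    cutoff = int(len(ordered) * 0.95) - 1
    return ordered[cutoff]

# You are generally billed on whichever direction is greater.
billable_mbps = max(ninety_fifth_percentile(ingress),
                    ninety_fifth_percentile(egress))

price_per_mbps = 10  # hypothetical $/Mbps/month
print(f"Billable: {billable_mbps:.0f} Mbps "
      f"-> ${billable_mbps * price_per_mbps:,.0f}/month")
```

Under this scheme, a burst that lasts less than about 36 hours in total never shows up on the bill, which is exactly the loophole the Google story above describes.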
While we pay for transit, peering directly with other providers is typically free — with some notable exceptions recently highlighted by Netflix. In CloudFlare's case, unlike Netflix, at this time, all our peering is currently "settlement free," meaning we don't pay for it. Therefore, the more we peer the less we pay for bandwidth. Peering also typically increases performance by cutting out intermediaries that may add latency. In general, peering is a good thing.
The chart above shows how CloudFlare has increased the number of networks we peer with over the last three months (both over IPv4 and IPv6). Currently, we peer around 45% of our total traffic globally (depending on the time of day), across nearly 3,000 different peering sessions. The chart below shows the split between peering and transit and how it's improved over the last three months as we’ve added more peers.
We don't disclose exactly what we pay for transit, but I can give you a relative sense of regional differences. To start, let's assume as a benchmark that in North America you'd pay a blended average across all the transit providers of $10/Mbps (per megabit per second, per month). In reality, we pay less than that, but it serves as a benchmark and keeps the numbers round as we compare regions. If you assume that benchmark, for every 1,000Mbps (1Gbps) you'd pay $10,000/month (again, acknowledging that's higher than reality; it's just an illustrative benchmark that keeps the numbers round, so bear with me).
While that benchmark establishes the transit price, the effective price for bandwidth in the region is the blended price of transit ($10/Mbps) and peering ($0/Mbps). Every byte delivered over peering is a would-be transit byte that doesn't need to be paid for. While North America has some of the lowest transit pricing in the world, it also has below-average rates of peering. The chart below shows the split between peering and transit in the region. While it's gotten better over the last three months, North America still lags behind every other region in the world in terms of peering.
While we peer nearly 40% of traffic globally, we only peer around 20-25% in North America. Assuming the benchmark transit price of $10/Mbps in North America, peering brings the effective price down to roughly $8/Mbps. Based only on bandwidth costs, that makes it the second least expensive region in the world to provide an Internet service like CloudFlare. So what's the least expensive?
Europe's transit pricing roughly mirrors North America's so, again, assume a benchmark of $10/Mbps. While transit is priced similarly to North America, in Europe there is a significantly higher rate of peering. CloudFlare peers 50-55% of traffic in the region, making the effective bandwidth price $5/Mbps. Because of the high rate of peering and the low transit costs, Europe is the least expensive region in the world for bandwidth.
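As a sanity check on that arithmetic, here is a small sketch of the effective-price calculation: the transit price discounted by the share of traffic delivered over settlement-free peering. The figures are the illustrative benchmarks quoted in this post, not real rates.

```python
def effective_price(transit_price_per_mbps, peering_share):
    """Blend paid transit with settlement-free ($0) peering: only the
    non-peered share of traffic is paid for at the transit rate."""
    return transit_price_per_mbps * (1 - peering_share)

# Benchmark transit price of $10/Mbps in both regions.
print(effective_price(10, 0.225))  # North America, ~20-25% peered -> ~$8/Mbps
print(effective_price(10, 0.50))   # Europe, 50-55% peered -> ~$5/Mbps
```

The same arithmetic reproduces the Asia, Latin America, and Australia figures quoted later in the post.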
The higher rate of peering is due in part to the organization of the region's “peering exchanges”. A peering exchange is a service where networks can pay a fee to join, and then easily exchange traffic between each other without having to run individual cables between each others' routers. Networks connect to a peering exchange, run a single cable, and then can connect to many other networks. Since using a port on a router has a cost (routers cost money, have a finite number of ports, and a port used for one network cannot be used for another), and since data centers typically charge a monthly fee for running a cable between two different customers (known as a "cross connect"), connecting to one service, using one port and one cable, and then being able to connect to many networks can be very cost effective.
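To illustrate that cost argument, here is a toy comparison using entirely hypothetical round-number monthly fees; real router-port, cross-connect, and exchange prices vary widely.

```python
# Entirely hypothetical monthly costs, for illustration only.
ROUTER_PORT = 500     # cost attributed to dedicating one router port
CROSS_CONNECT = 300   # data-center fee for one cable between two customers
IX_MEMBERSHIP = 1500  # fee to join a peering exchange

def direct_peering_cost(num_peers):
    """Peering directly: each peer needs its own port and cross connect."""
    return num_peers * (ROUTER_PORT + CROSS_CONNECT)

# Peering over an exchange: one port and one cable reach every member.
exchange_cost = IX_MEMBERSHIP + ROUTER_PORT + CROSS_CONNECT

for peers in (5, 50, 500):
    print(f"{peers:>3} peers: direct ${direct_peering_cost(peers):,} "
          f"vs exchange ${exchange_cost:,}")
```

The exchange cost stays flat no matter how many member networks you reach, which is why the value of an exchange grows with the number of networks connected to it.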
The value of an exchange depends on the number of networks that are a part of it. The Amsterdam Internet Exchange (AMS-IX), Frankfurt Internet Exchange (DE-CIX), and the London Internet Exchange (LINX) are three of the largest exchanges in the world. (Note: these links point to PeeringDB.com which provides information on peering between networks. You'll need to use the username/password guest/guest in order to login.)
In Europe, and most other regions outside North America, these and other exchanges are generally run as non-profit collectives set up to benefit their member networks. In North America, while there are Internet exchanges, they are typically run by for-profit companies. The largest of these for-profit exchanges in North America are run by Equinix, a data center company, which uses exchanges in its facilities to increase the value of locating equipment there. Since they are run with a profit motive, pricing to join North American exchanges is typically higher than exchanges in the rest of the world.
CloudFlare is a member of many of Equinix's exchanges, but, overall, fewer networks connect with Equinix compared with Europe's exchanges (compare, for instance, Equinix Ashburn, which is their most popular exchange with about 400 networks connected, versus 1,200 networks connected to AMS-IX). In North America, the combination of relatively cheap transit and relatively expensive exchanges lowers the value of joining an exchange. With fewer networks joining exchanges, there are fewer opportunities for networks to easily peer. The corollary is that in Europe transit is also cheap but peering is very easy, making the effective price of bandwidth in the region the lowest in the world.
Asia's peering rates are similar to Europe's. As in Europe, CloudFlare peers 50-55% of traffic in Asia. However, transit pricing is significantly more expensive. Compared with the benchmark of $10/Mbps in North America and Europe, Asia's transit pricing is approximately 7x as expensive ($70/Mbps, based on the benchmark). When peering is taken into account, however, the effective price of bandwidth in the region is $32/Mbps.
There are three primary reasons transit is so much more expensive in Asia. First, there is less competition, and a greater number of large monopoly providers. Second, the market for Internet services is less mature. And finally, if you look at a map of Asia you’ll see a lot of one thing: water. Running undersea cabling is more expensive than running fiber optic cable across land so transit pricing offsets the cost of the infrastructure to move bytes.
Latin America is CloudFlare's newest region. When we opened our first data center in Valparaíso, Chile, we delivered 100 percent of our traffic over transit, which you can see from the graph above. To peer traffic in Latin America you need to either be in a "carrier neutral" data center — which means multiple network operators come together in a single building where they can directly plug into each other's routers — or you need to be able to reach an Internet exchange. Both are in short supply in much of Latin America.
The country with the most robust peering ecosystem is Brazil, which also happens to be the largest country and largest source of traffic in the region. You can see that as we brought our São Paulo, Brazil data center online about two months ago we increased our peering in the region significantly. We've also worked out special arrangements with ISPs in Latin America to set up facilities directly in their data centers and peer with their networks, which is what we did in Medellín, Colombia.
While today our peering ratio in Latin America is the best anywhere in the world at approximately 60 percent, the region's transit pricing is 8x the North America and Europe benchmark ($80/Mbps). That means the effective bandwidth pricing in the region is $32/Mbps, or approximately the same as Asia.
Australia is the most expensive region in which we operate, but for an interesting reason. We peer with virtually every ISP in the region except one: Telstra. Telstra, which controls approximately 50% of the market and was traditionally the monopoly telecom provider, charges some of the highest transit pricing in the world, 20x the benchmark ($200/Mbps). Given that we are able to peer approximately half of our traffic, the effective benchmark bandwidth price is $100/Mbps.
To give you some sense of how out-of-whack Australia is, at CloudFlare we pay about as much every month for bandwidth to serve all of Europe as we do for Australia. That’s in spite of the fact that approximately 33x as many people live in Europe (750 million) as in Australia (22 million).
If Australians wonder why Internet access and many other services are more expensive in their country than anywhere else in the world, they need only look to Telstra. What's interesting is that Telstra maintains this high pricing even when delivering traffic only inside the country. Given that Australia is one large land mass with relatively concentrated population centers, it's difficult to justify the pricing on anything other than Telstra's market power. In regions like North America where there is increasing consolidation of networks, Australia's experience with Telstra provides a cautionary tale.
While we keep our pricing at CloudFlare straightforward, charging a flat rate regardless of where traffic is delivered around the world, actual bandwidth prices vary dramatically between regions. We’ll continue to work to decrease our transit pricing and to increase our peering in order to offer the best possible service at the lowest possible price. In the meantime, if you’re an ISP who wants to offer better connectivity to the increasing portion of the Internet behind CloudFlare’s network, we have an open peering policy and are always happy to peer.
Introduction
In the best of worlds we would all be using native IPv6 by now, or at least dual stack. That is not the case, however, and IPv4 will be around for a long time yet. For as long as both protocols coexist, there will be a need to translate between the two, like it or not.
Different Types of NAT
Before we begin, let’s define some different forms of NAT:
NAT44 – NAT from IPv4 to IPv4
NAT66 – NAT from IPv6 to IPv6
NAT46 – NAT from IPv4 to IPv6
NAT64 – NAT from IPv6 to IPv4
The most commonly used type is definitely NAT44, but here we will focus on translating between IPv4 and IPv6.
NAT64
There are two different forms of NAT64, stateless and stateful. The stateless version maps the IPv4 address into an IPv6 prefix. As the name implies, it keeps no state. It does not conserve any IP addresses, since every v4 address maps to exactly one v6 address.
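As a rough illustration of that stateless mapping, here is a small Python sketch that embeds an IPv4 address in the low 32 bits of the NAT64 well-known prefix 64:ff9b::/96. RFC 6052 also allows network-specific prefixes and other embedding positions; this shows only the common /96 case.

```python
import ipaddress

def map_ipv4_to_ipv6(ipv4, prefix="64:ff9b::/96"):
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix."""
    net = ipaddress.IPv6Network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(net.network_address) | int(v4))

print(map_ipv4_to_ipv6("192.0.2.33"))  # 64:ff9b::c000:221
```

A DNS64 resolver performs essentially the same embedding when it synthesizes an AAAA record from an A record.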
Here is a comparison of stateless and stateful NAT64:
DNS64
When resolving names to numbers in IPv4, A records are used. When doing the same in IPv6, AAAA records are used. When using NAT64, the device doing Continue reading
It's now been 3 months since I transitioned from Networking to Software. This is a retrospective piece on my reasons for giving up on Networking.
You might be reading this thinking:
"another networking guy moving to software... network engineering is doomed".
If you are, stop thinking right now. There is one important thing about my story that is very different. I've been writing software for longer than I have been doing networking, albeit not in a professional capacity. Software Engineering is where my passion lies right now, so let me explain why...
DevOps for Networking is still, very slowly, becoming reality. Elsewhere DevOps is very much in full swing. Tools like:
Vagrant, Packer, Puppet, Chef, SaltStack, Ansible, Fig, Docker, Jenkins/TravisCI, Dokku, Heroku, OpenShift (the list goes on)...
have redefined how I work, and being in an environment where I can build things with them day to day is a dream come true for me.
I get gersburms just thinking about building Continuous Integration/Continuous Delivery pipelines, automated creation of Dev/Test environments and Configuration as Code.
Software-Defined Networking was the turning point in my career. It enabled me to make the switch in career paths Continue reading
VMware announced the vCloud Hybrid Service a while back, and it was mostly known as vCheese for short. This week it was rebranded as "vCloud Air Network", and that is too much of a mouthful to keep saying as well. Don't these marketing people live in the real world? Let me share my suggestion...
The post Rant: VMware vCheese Becomes vChAir – Logo Parody appeared first on EtherealMind.
I was finally catching up on a number of posts I'd saved to read later and noticed the prevalent use of "Northbound" and "Southbound". I'm now starting to question whether these terms are necessary or accurate.
Let's start with the Oxford English Dictionary definition of these terms.
northbound | ˈnɔːθbaʊnd | adjective travelling or leading towards the north: northbound traffic.
southbound | ˈsaʊθbaʊnd | adjective travelling or leading towards the south: southbound traffic | the southbound carriageway of the A1.
As our interfaces are static and can't travel, one can assume the intent of these adjectives in our context is to indicate that the interfaces lead in the specified direction.
Categorizing an API by directionality is rather perplexing IMHO.
Specifying directionality without a reference point is misleading. For example, OVSDB is a northbound API for Open vSwitch but a southbound API for an SDN controller.
For SDN controllers, there are two types of interfaces:
User-Facing or Application-Facing (formerly Northbound)
This API is designed to expose higher-order functions in such a way that they can easily be consumed by humans and programmers.
By this logic, we can include any "