In my last post, we wrapped up the base components required to deploy NSX. In this post, we’re going to configure some logical routing and switching. I’m specifically referring to this as ‘logical’ since we are only going to deal with VM-to-VM traffic in this post. NSX allows you to logically connect VMs at either layer 2 or layer 3. So let’s look at our lab diagram…
If you recall, we had just finished creating the transport zones at the end of the last post. The next step is to provision logical switches. Since we want to test layer 2 and layer 3 connectivity, we’re going to provision NSX in two different ways. The first method will use the logical distributed router functionality of NSX. In this method, tenant 1 will have two logical switches: one for the app layer and one for the web layer. We will then use the logical distributed router to allow the VMs to route to one another. The second method will be to have both the web and app VMs on the same logical layer 2 segment. We Continue reading
I’m always wondering whether the addresses I’m assigning to interfaces are already in DNS. So I came up with a little BASH script that takes a list of IP addresses and performs an nslookup on each to ensure they’re not already in use:
$ nslookup < input-filename > output-filename
The addresses in the input file are listed one per line.
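Spelled out as an explicit loop, the same check might look like this (a minimal sketch; it uses dig rather than nslookup because dig is easier to script, and the script name is illustrative):

```bash
#!/usr/bin/env bash
# Reverse-resolve each address in the input file (one per line) and
# report whether it already has a PTR record.
# Usage: ./check-ips.sh input-filename
while IFS= read -r ip; do
  if name=$(dig +short -x "$ip") && [ -n "$name" ]; then
    echo "$ip is already in DNS: $name"
  else
    echo "$ip appears to be unused"
  fi
done < "$1"
```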
A better use for this would be to check if DNS entries already have an IP address assigned to them.
Having dual or multiple sites in Active/Active mode aims to offer elasticity of resources across different locations, just as with a single logical data center. This solution also brings business continuity with disaster avoidance, achieved by manually or dynamically moving the applications and software framework to wherever resources are available. When “hot”-moving virtual machines from one DC to another, there are some important requirements to take into consideration:
As with several other network and security services, the Continue reading
Firefly Perimeter is a virtual security appliance that provides security and networking services at the perimeter in virtualized private or public cloud environments. It runs as a virtual machine (VM) on a standard x86 server and delivers security and networking features similar to those available on branch SRX Series devices.
However, not all of the features supported by SRX hardware devices are available. Here is the list of features supported by the current Firefly 12.1X46-D10 release.
Firefly Perimeter Hardware Specifications
Thanks to Juniper’s software evaluation program we can download the Firefly Perimeter security solution for free and test it out for 60 days. In this tutorial we are going to connect Firefly Perimeter to GNS3 and create a simple lab to test connectivity between two vSRX instances. As GNS3 has built-in support for both VirtualBox and Qemu/KVM, either can be used as the hypervisor.
Firefly Perimeter virtual machines can be downloaded here. You have to use your Juniper account to proceed with the download, but a valid service contract is not required to download the Firefly Perimeter virtual machine.
Picture 1 - Juniper Login Window
Notice that they Continue reading
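For the Qemu/KVM route, preparing the downloaded image might look something like this (a minimal sketch; the file names are illustrative and should be replaced with the release actually downloaded):

```bash
tar -xvf junos-vsrx-12.1X46-D10.4-domestic.ova        # an OVA is just a tar archive
qemu-img convert -O qcow2 junos-vsrx-12.1X46-D10.4-domestic-disk1.vmdk vsrx.qcow2
# Optional: boot the image standalone once before importing it into GNS3
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda vsrx.qcow2 -nographic
```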
Big Switch Networks (BSN) has launched Version 4.0 of Big Cloud Fabric, its hardware-centric SDN data centre fabric. The solution clearly shows the maturity gained from five years of shipping products while adding innovation in switch hardware through the Switch Light operating system. At the same time, BSN has completed the transition from platform to product: a product that really has what you need in a hardware-centric SDN platform and addresses nearly all of the issues the competitors have not. And it is shipping now.
The post Big Switch Networks Launches Mature Hardware-Centric Data Centre SDN Solution appeared first on EtherealMind.
The race to make things just a little bit faster in the networking world has heated up in recent weeks thanks to the formation of the 25Gig Ethernet Consortium. Arista Networks, along with Mellanox, Google, Microsoft, and Broadcom, has decided that 40Gig Ethernet is too expensive for most data center applications. Instead, they’re offering up an alternative in the 25Gig range.
This podcast with Greg Ferro (@EtherealMind) and Andrew Conry-Murray (@Interop_Andrew) does a great job of breaking down the technical details and the reasoning behind 25Gig Ethernet. In short, the current 10Gig connection is made of four multiplexed 2.5Gig connections. To get to 25Gig, all you need to do is overclock those connections a little. That’s not unprecedented, as 40Gig Ethernet accomplishes this by overclocking them to 10Gig, albeit with different optics. Aside from a technical merit badge, you have to ask yourself “Why?”
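Reading that summary literally, the lane math works out to 4 × 2.5Gig = 10Gig today, 4 × 6.25Gig = 25Gig for the proposed standard, and 4 × 10Gig = 40Gig.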
High Hopes
As always, money is the factor here. The 25Gig Consortium is betting that you don’t like paying a lot of money for your 40Gig optics. They want to offer an alternative that is faster than 10Gig but cheaper than the next standard step up. By giving you a cheaper option Continue reading
The recent violence in Iraq and the government’s actions to block social media and other Internet services have put a spotlight on the Iraqi Internet. However, an overlooked but important dynamic in understanding the current Iraqi Internet is the central role Kurdish ISPs play in connecting the entire country to the global Internet.
In the past five years, the Internet of Iraq has gone from about 50 networks (routed prefixes) to over 600. What is most noteworthy is that the growth has not occurred as a result of increased connectivity from the submarine cable landing at Al Faw, as would be expected in a typical environment. Instead, the dominant players in the Iraqi wholesale market are two Kurdish ISPs that connect to the global Internet through Turkey and Iran: Newroz and IQ Networks.
Help from the Kurds
The Iraqi Kurdistan region contains four main cities: Erbil, Duhok, Zakho and Sulaymaniyah. Newroz covers the first three, while IQ Networks provides service in the last. However, it would be incorrect to simply classify these providers as city-level retail ISPs. They also carry significant amounts of traffic for the rest of the country.
From the relative peace and stability of Continue reading
With the World Cup at an end, so too is our latest round of data center expansion. Following deployments in Madrid, Milan and São Paulo, we are thrilled to announce our 28th data center in Medellin, Colombia. Most of Colombia’s 22 million Internet users are now mere milliseconds away from a CloudFlare data center.
Our deployment in Medellin is launched in partnership with Internexa, operators of the largest terrestrial communications network (IP backbone) in Latin America. Internexa operates over 28,000 km of fibre crossing seven countries in the continent. Our partnership was formed over a shared vision to build a better Internet—in this case, by localizing access to content within the region. Today, it is estimated that as much as 80% of content accessed in Latin America comes from overseas. It is with great pride that, as of now, all 2 million sites using CloudFlare are available locally over Internexa’s IP backbone. Let’s just say we’ve taken a bite out of this percentage (and latency)!
If your Internet service provider (ISP) is not connected to Internexa, Continue reading
That's Julian in the center, waving at me to stop taking pictures. That's Michael facing away on his right.
Repeat guest and friend of the Packet Pushers, Ron Fuller, chats with Greg Ferro and Ethan Banks about the latest updates to both the hardware and software in the ever-growing and capable Cisco Nexus product line. We get a thorough update in this show, hitting lots and lots of highlights. Discussion: What’s new with the […]
The post Show 197 – Cisco Nexus Updates with Ron Fuller – Sponsored appeared first on Packet Pushers Podcast and was written by Ethan Banks.
Nicolas Vermandé (VCDX#055) is practice lead for Private Cloud & Infrastructure at Kelway, a VMware partner. Nicolas covers the Software-Defined Data Center on his blog, www.my-sddc.com.
This series of posts describes a specific use case for VMware NSX in the context of Disaster Recovery. The goal is to demonstrate the routing and programmability capabilities through a lab scenario. This first part presents the NSX components and details the use case. The second part will show how to deploy the lab and the third part will deal with APIs and show how to use python to execute REST API calls to recreate the required NSX components at the recovery site.
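As a taste of what part three will automate in Python, a single logical-switch creation against the NSX Manager REST API might look like the following sketch (the hostname, credentials and scope ID are placeholders, not values from this lab):

```bash
curl -k -u admin:password -X POST \
  -H "Content-Type: application/xml" \
  -d '<virtualWireCreateSpec><name>recovered-web-tier</name><tenantId>tenant-1</tenantId></virtualWireCreateSpec>' \
  https://nsx-manager.example.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires
```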
Introduction
When considering a dual-datacenter strategy with VM recovery in mind, one important decision is whether to adopt an active/active or active/standby model. The former is generally much more complex to manage because it requires double the work in terms of procedures, testing and change controls. In addition, capacity management becomes challenging, as you need enough physical resources to be able to run all workloads at either site. On top of that, stretched VLANs are sometimes deployed across datacenters so that recovered VMs can keep their IP addresses. This Continue reading
There are many algorithms that can be used for flow-based hashing to provide the best load balancing over multiple IP or Ethernet connections, but I recently learned that Cuckoo Hashing is the preferred method.
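For a feel of the mechanics, here is a toy sketch of cuckoo hashing in bash (4+ for associative arrays); the keys are small integers purely for demonstration, and this is not how any switch actually implements it:

```bash
#!/usr/bin/env bash
# Toy cuckoo hash: two tables, two hash functions. An insert that
# collides evicts the resident key, which is re-inserted into the
# other table, possibly cascading.
declare -A T1 T2
h1() { echo $(( $1 % 11 )); }
h2() { echo $(( ($1 / 11) % 11 )); }

insert() {
  local cur=$1 table=1 slot evicted i
  for i in {1..20}; do                     # bound the displacement chain
    if (( table == 1 )); then
      slot=$(h1 "$cur"); evicted=${T1[$slot]:-}; T1[$slot]=$cur; table=2
    else
      slot=$(h2 "$cur"); evicted=${T2[$slot]:-}; T2[$slot]=$cur; table=1
    fi
    [ -z "$evicted" ] && return 0          # landed in an empty slot
    cur=$evicted                           # otherwise displace the old key
  done
  echo "displacement loop for $cur: a real table would rehash" >&2
}

for k in 3 14 25 36; do insert "$k"; done  # all four collide in T1 slot 3
declare -p T1 T2                           # show where each key ended up
```

The appeal for flow tables is the bounded worst-case lookup: any key can only ever live in one of two slots, so a lookup checks at most two locations.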
The post Response: Improving Flow Based Hashing on ECMP with Cuckoo hashing appeared first on EtherealMind.
The CCIE Routing & Switching Advanced Technologies Class v5 resumes Wednesday, July 23rd at 8:00 AM PDT (15:00 UTC) at live.ine.com, where we will be discussing MPLS Layer 3 VPN. In the meantime, you will find that the streaming and download playlists have been updated and now include over 63 hours of content.
We have some other great news as well. The CCIE R&S v5 Rack Control panel has been released, with built-in telnet, config loading and saving, and one-click device configuration and reset requests. Also, new content will be posted to the workbook this week, including all-new troubleshooting labs.