{LISP} IPv6 over IPv4 Transition

First and foremost, I want to thank Ivan Pepelnjak of ipspace.net for taking the time to read, correct and validate these concepts. Taking time out of his busy schedule to make sure this information...

CCIE Home Lab inventory

For reference, here is the complete list of Cisco devices, including RAM, Flash, installed modules and IOS versions, that I’ve used to build my home lab.

Device  Platform       RAM    Flash  Modules                   IOS
SW1     WS-C3560-24TS  128MB  32MB   n/a                       c3560-advipservicesk9-mz.122-44.SE.bin
SW2     WS-C3550-24    64MB   16MB   n/a                       c3550-ipservicesk9-mz.122-25.sec2.bin
SW3     WS-C3550-24    64MB   16MB   n/a                       c3550-ipservicesk9-mz.122-25.sec2.bin
SW4     WS-C3550-48    64MB   16MB   n/a                       c3550-ipservicesk9-mz.122-25.sec2.bin
R1      2610XM         128MB  32MB   WIC-2T=                   c2600-adventerprisek9-mz.124-25d.bin
R2      2610XM         128MB  32MB   WIC-2T=                   c2600-adventerprisek9-mz.124-25d.bin
R3      2651XM         160MB  32MB   2x WIC-2T=                c2600-adventerprisek9-mz.124-25d.bin
R4      2801           256MB  64MB   WIC-2T=                   c2801-adventerprisek9-mz.124-24.T4.bin
R5      1841           384MB  128MB  WIC-2T=                   c1841-adventerprisek9-mz.124-24.T4.bin
R6      2691           256MB  64MB   WIC-2T=                   c2691-adventerprisek9-mz.124-15.T14.bin
R7      3725           256MB  128MB  2x WIC-1T=, NM-2FE2W-V2=  c3725-adventerprisek9-mz.124-15.T14.bin
BB1     2522           16MB   16MB   n/a                       c2500-is-l.122-15.T17.bin
BB2     2520           16MB   16MB   n/a                       c2500-is-l.122-15.T17.bin
BB3     2520           16MB   16MB   n/a                       c2500-is-l.122-15.T17.bin
CON     2610           64MB   16MB   NM-16A=, WIC-1T=          c2600-ik9o3s3-mz.123-26.bin

Disable WordPress Plugins From the Shell

Lately I've been working with a separate instance of my WordPress site for development and testing of plugins, my theme, and so on. I have a helper script that orchestrates pulling the files and copying the database from the production server to the dev server. I found it would be useful to disable, from within this script, certain plugins that I don't want running in the dev instance (e.g., plugins that notify search indexes when new posts are made).
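
One way to script this today, not necessarily the method from the original post, is to call WP-CLI from the helper script. Below is a minimal Python sketch; the WordPress path and plugin slugs are made-up placeholders, and it assumes the wp command is installed on the dev server.

#!/usr/bin/env python3
# Deactivate selected plugins in a dev copy of a WordPress site.
# Assumes WP-CLI ("wp") is installed; the path and slugs below are placeholders.
import subprocess

WP_PATH = "/var/www/dev"                # hypothetical dev WordPress root
PLUGINS = [
    "google-sitemap-generator",         # example slug: pings search indexes
    "jetpack",                          # example slug: talks to external services
]

for plugin in PLUGINS:
    # "wp plugin deactivate" switches the plugin off without deleting its files
    subprocess.run(
        ["wp", "plugin", "deactivate", plugin, "--path=" + WP_PATH],
        check=False,                    # keep going if a plugin is already inactive
    )

Running something like this after the database copy leaves the listed plugins deactivated in the dev instance while the production site stays untouched.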

Building the lab – part 2

With all the equipment in the rack it is time to connect everything together. I’ve used a total of 11 back-to-back serial, 12 router-to-switch Ethernet and 18 switch-to-switch Ethernet connections. Altogether that makes for quite a spaghetti of cabling :)

CCIE Home Lab cabling

But with some Rip-Tie cable straps I was able to organize the cabling in a pretty decent way. Below is the end result, without the console connections.

CCIE Lab Rack

IPv6 over an IPv4 Internet Using DMVPN

Well, this one really doesn’t need much of an explanation anymore: IPv6 is here and IPv4 has been here for a long time. In most networks the two must co-exist, side by side or one on top of the other...
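
To make the idea concrete, here is a rough hub-and-spoke sketch of carrying IPv6 across an IPv4-only underlay with DMVPN (mGRE plus NHRP for IPv6). The interface names, the hub's IPv4 NBMA address and the 2001:DB8:: tunnel prefix are placeholders, not the configuration from the full article.

! Hub router (reachable on the IPv4 underlay at 203.0.113.1)
ipv6 unicast-routing
!
interface Tunnel0
 description mGRE tunnel carrying IPv6 over the IPv4 underlay
 ipv6 address 2001:DB8:FFFF::1/64
 ipv6 nhrp map multicast dynamic
 ipv6 nhrp network-id 100
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint

! Spoke router (registers its IPv4 address with the hub via NHRP)
ipv6 unicast-routing
!
interface Tunnel0
 ipv6 address 2001:DB8:FFFF::2/64
 ipv6 nhrp map 2001:DB8:FFFF::1/128 203.0.113.1
 ipv6 nhrp map multicast 203.0.113.1
 ipv6 nhrp network-id 100
 ipv6 nhrp nhs 2001:DB8:FFFF::1
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint

An IPv6 routing protocol (OSPFv3, EIGRP for IPv6) or static routes would then run across the tunnel to exchange the sites' IPv6 prefixes.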

Building the lab – part 1

I finally have all the equipment I need to build the lab. I also managed to get a Skeletek C24U rack, and it looks really nice. The fun starts with assembling the rack, as it comes in a relatively small box containing six pieces and a whole bunch of nuts and bolts.

Skeletek C24U

After 45 minutes or so the rack is fully assembled and ready for the first pieces of Cisco gear.

Assembled Skeletek Rack

About two hours later I had also racked all the Cisco equipment into the rack, including two PDUs for some power juice.

Skeletek CCIE Home Lab Rack

The next step will be to put all the cabling in place. Hopefully the two octal cables (cab-octal-async) I ordered will arrive shortly, so I can also connect all the console ports to the terminal server/router.

IETF Provides New Guidance on IPv6 End-Site Addressing

I've always been at odds with the recommendation in RFC 3177 of allocating /48 IPv6 prefixes to end-sites. To me this seemed rather short-sighted, akin to saying that 640K of memory should be enough for anybody. It's essentially equivalent to giving out /12s in the IPv4 world, which in this day and age might seem completely ridiculous, but let us not forget that in the early days of IPv4 it wasn't uncommon to get a /16, or even a /8 in some cases.

Granted, I know there are quite a few more usable bits in IPv6 than there are in IPv4, but allocating huge swaths of address space simply because it's there and we haven't thought of all the myriad ways it could be used in the future just seems outright wasteful.

So you can imagine my surprise, and my elation, last week when the IETF published RFC 6177, entitled 'IPv6 Address Assignment to End Sites'. In it, the general recommendation of allocating /48s to end-sites, which has been the de facto standard since the original publication of RFC 3177 in 2001, has finally been reversed.

It seems that sanity has finally prevailed and...

Routing an IPv6 Core on Link-Local Addresses

Can routing an IPv6 core on link-local addresses be done? Will IPv6 work in a network backbone that only has link-local addresses configured? For this test I’ll be using OSPFv3, but the protocol itself...
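
As a rough sketch of the setup under test (interface names and the router ID below are placeholders, not the article's exact lab): a core interface gets only a link-local address by configuring ipv6 enable without any global address, and OSPFv3 is enabled per interface. Because there is no IPv4 address to borrow, the 32-bit OSPFv3 router ID has to be set manually.

ipv6 unicast-routing
!
interface Serial0/0
 ! no global IPv6 address; only the FE80:: link-local address exists
 ipv6 enable
 ipv6 ospf 1 area 0
!
interface Serial0/1
 ipv6 enable
 ipv6 ospf 1 area 0
!
ipv6 router ospf 1
 ! mandatory when there is no IPv4 address to derive the router ID from
 router-id 1.1.1.1

OSPFv3 forms its adjacencies and advertises next hops over link-local addresses anyway, so in principle the core can carry transit traffic; one obvious trade-off is that routers with only link-local addresses cannot be pinged or managed from outside their directly connected links.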

LISP is serious business

Some perspective with regards to the maturity of the protocol specifications and implementations of the Locator/Identifier Separation Protocol (LISP):

The first LISP specification was published in January 2007 as an individual submission. After 13 revisions the Internet-Draft was adopted as an IETF LISP Working Group document. Within the LISP Working Group there have been 12 versions of the main Internet-Draft. Literally hundreds of contributors from many different companies have made suggestions and fixed bugs to make the LISP specification what it is today.

The first implementation started at the Prague IETF conference in 2007. As of today there are about 10 implementations: Linux, Android, FreeBSD (OpenLISP), Zlisp, LISP-Click, FNSC FITELnet G21, IOS, NX-OS, IOS-XR and IOS-XE. Please note that not all of them have been released yet or are of production quality. I recommend using the implementations developed by Cisco because they are the most mature and feature-rich.

Better yet, Cisco recently announced its first production software releases, 15.1(4)M and 15.1(2)S, which support the Locator/Identifier Separation Protocol (LISP). Cisco has committed to making LISP, as an emerging standard, available on all its major routers and switches in...

IPv4 Address Exhaustion Causing Harmful Effects on the Earth

Today, I received a very disturbing email on NANOG, forwarded from a recipient on the Global Environment Watch (GEW) mailing list. If this is true, we all need to take steps to make an orderly and smooth transition to IPv6 as quickly as possible, lest we suffer from the harmful effects described in this email.


From: Stephen H. Inden
To: Global Environment Watch (GEW) mailing list
Date: Fri, 1 Apr 2011 00:19:08 +0200
Subject: IPv4 Address Exhaustion Effects on the Earth

At a ceremony held on February 3, 2011 the Internet Assigned Numbers Authority (IANA) allocated the remaining last five /8s of IPv4 address space to the Regional Internet Registries (RIRs). With this action, the free pool of available IPv4 addresses was completely depleted.

Since then, several scientists have been studying the effects of this massive IPv4 usage (now at its peak) on the Earth.

While measuring electromagnetic fields emanating from the world's largest IPv4 Tier-1 backbones, NASA scientists calculated how the IPv4 exhaustion is affecting the Earth's rotation, length of day and planet's shape.

Dr. Ron F. Stevens, of NASA's Goddard Space Flight Center, said all packet switching based communications have some effect on the Earth's...

Monitoring Direct Attached Storage Under ESXi

One of the first things I wanted to do with my ESXi lab box was to simulate a hard drive failure and see what alarms would be raised by ESXi. This exercise doesn't serve any purpose in the “real world”, where ESXi hosts are likely to be using shared storage in all but the most esoteric of installations, but since my lab box isn't using shared storage I wanted to make sure I understood the behavior of ESXi during a drive failure. This post is also a guide to my future self should a drive fail for real :-).