One of the topics I discussed in the IPv6 High Availability webinar is the problem of dual-stack deployments – what do you do when the end-to-end path for one of the protocol stacks breaks down? Happy Eyeballs is one of the solutions, as is an IPv6-only data center (Facebook is moving in that direction really fast). For more details, watch the short End-to-End High Availability in Dual Stack Networks demo video.
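To make the Happy Eyeballs idea concrete, here's a minimal sketch of the algorithm from RFC 6555: start the IPv6 connection attempt first, give it a short head start, then race an IPv4 attempt against it and use whichever succeeds. The function and the simulated connection attempts are mine, purely for illustration; a real client would call `socket.create_connection` per address family.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def happy_eyeballs(connect_v6, connect_v4, v4_delay=0.3):
    """Race IPv6 against IPv4: start the IPv6 attempt first, give it
    a short head start, then fire off IPv4 and take whichever
    address family connects first."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        v6 = pool.submit(connect_v6)
        done, _ = wait([v6], timeout=v4_delay)
        if done and v6.exception() is None:
            return "IPv6", v6.result()          # IPv6 won its head start
        v4 = pool.submit(connect_v4)            # IPv6 slow or broken
        pending = {f for f in (v6, v4) if not (f.done() and f.exception())}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                if fut.exception() is None:
                    return ("IPv6" if fut is v6 else "IPv4"), fut.result()
    raise ConnectionError("both address families failed")

# Simulated connection attempts: the IPv6 path is down, IPv4 works
def broken_v6(): raise OSError("IPv6 path down")
def working_v4(): return "connected"
```

The point of the head start is that a working IPv6 path always wins, while a broken one costs the user only a few hundred milliseconds instead of a multi-second timeout.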
Occasionally I’d invite a vendor speaker (usually working for an interesting startup) to present in my Data Center Fabrics webinar series. Dan Backman from Plexxi was talking about affinity networking in 2013, and in the May 2015 update session we’ll have Dinesh Dutt from Cumulus Networks talking about their software platform, architectures you can build with whitebox (or britebox) switches running Cumulus Linux, exciting network automation options, and cool new features they’re constantly adding to their software.
One of my readers sent me this question:
After reading this blog post and a lot of blog posts about zero trust mode versus security zones, what do you think about replacing L3 Data Center core switches by High Speed Next Generation Firewalls?
Long story short: just because someone writes about an idea doesn’t mean it makes sense. Some things are better left in PowerPoint.
I recently read a must-read blog post by Russ White in which he argued that you need to understand both theory and practice (see also Knowledge or Recipes and my other certification rants) and got a painful flashback of a discussion I had with a corner-cutting SE (fortunately he was an exception) almost two decades ago when I was teaching my Advanced OSPF course at Cisco.
I was talking about “application-layer gateways” on firewalls and NAT boxes with a fellow engineer, and we came to an interesting conclusion: in most cases they are not gateways; they don’t add any significant functionality apart from payload fixups for those broken applications that think carrying network endpoint information in application packets is a good idea (I’m looking at you, SIP and FTP). These things should thus be called Application Layer Fixups or ALFs ;)
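To illustrate just how little an “ALF” actually does, here's a sketch of the classic FTP PORT fixup: the client leaks its inside IP address and data port in the command payload, and the NAT device has to rewrite them to match the translation. The function and its parameters are hypothetical; real ALGs do this (plus sequence-number adjustment) in the forwarding path.

```python
import re

def fixup_ftp_port(payload, inside_ip, outside_ip, port_map):
    """Rewrite the endpoint information an FTP client leaks in its
    PORT command so it matches the NAT translation -- the only real
    job of most 'application layer gateways'.
    PORT h1,h2,h3,h4,p1,p2 encodes IP h1.h2.h3.h4, port p1*256+p2."""
    m = re.match(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)$", payload)
    if not m:
        return payload                  # not a PORT command: leave alone
    ip = ".".join(m.group(i) for i in range(1, 5))
    if ip != inside_ip:
        return payload                  # not the host we're translating
    port = int(m.group(5)) * 256 + int(m.group(6))
    new_port = port_map(port)           # NAT's inside->outside port mapping
    o = outside_ip.split(".")
    return "PORT %s,%s,%s,%s,%d,%d" % (
        o[0], o[1], o[2], o[3], new_port // 256, new_port % 256)
```

Had FTP (or SIP) simply not embedded layer-3 information in the application payload, none of this machinery would be needed.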
Whenever software switching nerds get together and start discussing the challenges of high-speed x86-based switching, someone inevitably mentions PF_RING, an open-source library that gives you blazingly fast packet processing performance on a Linux server.
I started recording a podcast with Luca Deri, the author of PF_RING, but we diverted into discussing ntopng, Luca’s network monitoring software. We quickly fixed that and recorded another podcast – this time, it’s all about PF_RING, and we discussed these topics:
25 years ago when I started my networking career, mainframes were all the rage, and we were doing some crazy stuff with small distributed systems that quickly adapted to topology changes, and survived link, port, and node failures. We called them routers.
Yes, we were crazy and weird, but our stuff worked. We won and we built the Internet, proving that we can build networks bigger than any mainframe-based solution could ever hope to be.
One of my readers was reading the Preparing an IPv6 Addressing Plan document on the RIPE web site, and found that the document proposes two approaches to IPv6 addressing: encode location in high-order bits and subnet type in low-order bits (the traditional approach) or encode subnet type in high-order bits and location in low-order bits (totally counterintuitive to most networking engineers). His obvious question was: “Is anyone using type-first addressing in a production network?”
The Terastream project seems to be using the type-first format; if you’re doing something similar, please leave a comment!
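The difference between the two approaches is easiest to see in actual prefixes. Here's a small sketch using a hypothetical /32 allocation with 8-bit location and subnet-type fields filling bits 32–47 (the field widths and the documentation prefix are my assumptions, not from the RIPE document):

```python
import ipaddress

ALLOCATION = ipaddress.ip_network("2001:db8::/32")   # hypothetical /32

def subnet_prefix(location, subnet_type, type_first=False):
    """Build a /48 from two 8-bit fields placed in bits 32-47 of the
    allocation, in either order (location-first vs type-first)."""
    hi, lo = (subnet_type, location) if type_first else (location, subnet_type)
    offset = (hi << 8) | lo                  # the 16 bits after the /32
    base = int(ALLOCATION.network_address) + (offset << (128 - 48))
    return ipaddress.ip_network((base, 48))

# Location 0x10, subnet type 0x02 ('server segment', say):
#   location-first -> 2001:db8:1002::/48
#   type-first     -> 2001:db8:210::/48
```

The practical consequence: with type-first addressing all subnets of one type fall into a single aggregate, so you can match an entire security zone (e.g. every server segment in every location) with one prefix in a filter, at the cost of losing per-location aggregation in the routing table.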
A long long time ago Colin Dixon wrote the following tweet in response to my Controller Cluster Is a Single Failure Domain blog post:
He’s obviously right, but I wasn’t talking about interconnected domains, but failure domains (yeah, I know, you could argue they are the same, but do read on).
Every now and then someone actually looks at the VXLAN packet format and eventually figures out that VXLAN encapsulation doesn’t provide any intrinsic security.
TL&DR Summary: That’s old news, the sky is not falling, and deploying VXLAN won’t make your network less secure than traditional VLAN- or MPLS-based networks.
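A glance at the header format (RFC 7348) shows why there's nothing to discover here: the 8-byte VXLAN header carries flags, a VNI, and reserved bits, and that's it. The parser below is a minimal sketch of mine; note what's *not* in the header.

```python
import struct

def parse_vxlan_header(packet):
    """Parse the 8-byte VXLAN header (RFC 7348): 8 flag bits, 24
    reserved bits, a 24-bit VNI, and 8 more reserved bits. Note what
    is NOT here: no authentication and no integrity check -- exactly
    like an 802.1Q tag or an MPLS label."""
    flags, vni_and_reserved = struct.unpack("!B3xI", packet[:8])
    if not flags & 0x08:            # I flag: VNI field is valid
        raise ValueError("VNI flag not set")
    return vni_and_reserved >> 8    # top 24 bits carry the VNI
```

Anyone who can inject UDP packets toward a VTEP can thus claim any VNI, which is precisely why you protect the transport network (and the VTEPs) the same way you'd protect a VLAN trunk or an MPLS core, not the encapsulation itself.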
Whenever I’m running an SDDC workshop or doing on-site SDN/SDDC-related consulting, the question of hardware gateways between overlay virtual networks and the physical world inevitably pops up.
My usual answer: You have to understand (A) what type of gateway you need, (B) what performance you need and (C) what form factor will give you that performance. For more details, watch the Hardware Gateways video from the Scaling Overlay Virtual Networks webinar.
In the last few months I ran into a sweet problem: dozens of organizations would like to have an on-site SDN, SDDC or IPv6 workshop. Obviously I had to turn many of them down, and my calendar is almost full till early November.
A week ago I also found a solution: my friends at NIL Data Communications will start offering the same workshops with their instructors.
One of the responses I got on my “What is Layer-2” post was:
Ivan, are you saying to use L3 switches everywhere with /31 on the switch ports and the servers/workstations?
While that solution would work (and I know a few people who are using it with reasonable success), it’s nothing more than creative use of existing routing paradigms; we need something better.
Update 2015-04-22 14:30Z - Added a link to Cumulus Linux Redistribute Neighbor feature.
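To see what the /31-everywhere approach looks like in practice, here's a sketch that carves point-to-point /31 subnets out of an addressing pool (the pool and function are hypothetical, just to show the bookkeeping you'd sign up for):

```python
import ipaddress

def p2p_links(pool, count):
    """Carve /31 point-to-point subnets out of an address pool and
    return (switch_ip, host_ip) pairs -- the addressing scheme you
    end up with when every access port is a routed /31 link."""
    subnets = ipaddress.ip_network(pool).subnets(new_prefix=31)
    return [tuple(str(ip) for ip in s) for s, _ in zip(subnets, range(count))]

# First two access ports carved out of 10.1.0.0/24:
#   [('10.1.0.0', '10.1.0.1'), ('10.1.0.2', '10.1.0.3')]
```

Every host now burns two addresses and contributes a /31 to the routing table (unless you aggregate per switch), and every server must be configured with a different next hop, which is exactly why I'd call this a creative stretch of existing routing paradigms rather than a solution.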
You might remember my blog post claiming we had a system with SDN-like properties more than 20 years ago.
It turns out SDN is older than that – Rob Faulds found an old ComputerWorld ad from 1989 promoting AT&T SDN service, and it seems SDN was in operation as early as 1985.
DNS is a crucial component in modern scale-out application architectures, so when Alex Vayl and Kris Beevers from NSONE contacted me just as I was starting to work on my Active-Active Data Centers presentation, I was more than interested to hear what their solution can do.
The result: Episode 29 of Software Gone Wild in which we discussed a number of topics including:
Here’s a short question I got from one of my readers:
I am a CCIE in SP/DC & working as Technical Architect in US. I follow your website but I don’t know where to start for SDN/Virtualization/Openstack…
I guess he’s not alone, so here’s a long list of resources I put together in the last 5+ years.
Before I get started: you’ll find links to most of these resources on ipSpace.net SDN Resources page.
One of my readers sent me this question:
What is best practice to get a copy of the VM image from DC1 to DC2 for DR when you have subrate (155 Mbps in my case) Metro Ethernet services between DC1 and DC2?
The slow link between the data centers effectively rules out any ideas of live VM migration; to figure out what you should be doing, you have to focus on business needs.
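A quick back-of-the-envelope calculation shows why live migration is out of the question; the function and the 80% usable-throughput assumption are mine:

```python
def transfer_hours(image_gb, link_mbps=155, efficiency=0.8):
    """Back-of-the-envelope copy time for a VM image over a subrate
    Metro Ethernet link, assuming roughly 80% usable throughput
    after protocol overhead and competing traffic."""
    bits = image_gb * 8 * 1e9
    return bits / (link_mbps * 1e6 * efficiency) / 3600

# A single 100 GB image over 155 Mbps takes close to two hours --
# fine for scheduled replication, hopeless for live migration.
```

Numbers like these usually push the conversation back where it belongs: what recovery point and recovery time does the business actually need, and is periodic (possibly incremental) replication good enough to meet them?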
The video of my Troopers 15 IPv6 Microsegmentation presentation has been published on YouTube. As with the Automating Network Security video, it’s hard to read the slides; you might want to look at the slide deck on my public content web site.
You’ll find more about this topic, including tested Cisco IOS configurations, in the IPv6 Microsegmentation webinar.
I was listening to one of the HP SDN Packet Pushers podcasts in which Greg made an interesting comment along the lines of “people say that OpenFlow doesn’t scale, but what HP does with its IMC is it verifies the amount of TCAM in the switches, checks whether it can install new flows, and throws an alert if it runs out of TCAM.”
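The controller-side logic Greg described boils down to a capacity check before every flow install. Here's a minimal sketch of that idea; the data model and function names are mine, not IMC's API:

```python
def try_install_flow(switch, flow, alert):
    """Check the switch's remaining TCAM capacity before pushing a
    flow entry: install only if there's room, and raise an alert
    instead of letting the install fail silently on the switch."""
    if switch["tcam_used"] >= switch["tcam_size"]:
        alert("TCAM exhausted on %s, flow not installed" % switch["name"])
        return False
    switch.setdefault("flows", []).append(flow)
    switch["tcam_used"] += 1
    return True
```

In other words, the controller doesn't make the TCAM any bigger; it just turns a hard scaling limit into a visible operational event.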
Iwan Rahabok sent me a link to a nice vRealize setup he put together to measure maximum utilization across all uplinks of a VMware host. Pretty handy when the virtualization people start deploying servers with two 10GE uplinks with all sorts of traffic haphazardly assigned to one or both of them.
Oh, if the previous paragraph sounds like Latin but you should know a bit about vSphere/ESXi, take a hefty dose of my vSphere 6 webinar ;)