Author Archives: Ivan Pepelnjak

vLAG Caveats in Brocade VCS Fabric

Brocade VCS fabric has one of the most flexible multichassis link aggregation group (LAG) implementations – you can terminate member links of an individual LAG on any four switches in the VCS fabric. Using that flexibility is not always a good idea.

2015-01-23: Added a few caveats on load distribution

Read more ...

SDN Router @ Spotify on Software Gone Wild

Imagine you need a data center WAN edge router with multiple 10GE uplinks. You’d probably go for an ASR or an MX-series router, right? How about using a 2 Tbps ToR switch and an SDN solution to make it work with the full Internet routing table?
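
A commodity ToR switch can hold only a fraction of the full Internet routing table in its hardware forwarding table, so any solution along these lines has to be selective about what it installs. The Python sketch below illustrates one plausible approach (programming only the prefixes that carry most of the traffic and pointing everything else at a default route); the FIB size, names, and data sources are assumptions for illustration, not details from the episode.

```python
# Minimal sketch of the "partial FIB" idea: a commodity ToR switch cannot hold
# the full Internet routing table, so a controller programs only the prefixes
# that carry most of the traffic and points everything else at a default route.
# FIB size, names, and data sources are illustrative assumptions.

FIB_SIZE = 30_000            # assumed number of usable hardware FIB entries
DEFAULT_ROUTE = "0.0.0.0/0"

def select_fib_entries(bgp_table, traffic_stats, fib_size=FIB_SIZE):
    """Pick the routes to program into the switch.

    bgp_table     -- dict: prefix -> best next hop (full BGP feed)
    traffic_stats -- dict: prefix -> observed traffic volume (e.g. from NetFlow)
    """
    # Rank prefixes by how much traffic they actually carry
    ranked = sorted(bgp_table, key=lambda p: traffic_stats.get(p, 0), reverse=True)

    # Keep the busiest prefixes that fit, reserving one slot for the default route
    fib = {prefix: bgp_table[prefix] for prefix in ranked[:fib_size - 1]}
    fib[DEFAULT_ROUTE] = "upstream-transit-router"   # catch-all for the long tail
    return fib
```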

If you happen to have iTunes on your computer, please spend 10 seconds rating the podcast before you start listening to it. Thank you!

Read more ...

Pick a Topic for NSX Deep Dive Software Gone Wild Episode

Dmitri Kalintsev, one of the networking guys from the VMware NSX team, has kindly agreed to do an NSX technical deep dive Software Gone Wild episode… and you have the opportunity to tell him what you’d like to hear. It’s as easy as writing a comment, and we’ll pick one of the most popular topics.

Do keep in mind that we plan to do a technical deep dive, and it has to fit within an hour or so, or nobody will ever listen to it, so please keep your suggestions focused. “Troubleshooting NSX”, “NSX Design”, or “NSX versus ACI” is not what we’re looking for ;)

Palo Alto Virtual Firewalls on Software Gone Wild

One of the interesting challenges in the Software-Defined Data Center world is the integration of network and security services with the compute infrastructure and network virtualization. Palo Alto claims to have tightly integrated their firewalls with VMware NSX and numerous cloud orchestration platforms - it was time to figure out how that’s done, so we decided to go on a field trip into the scary world of security.

Read more ...

Latency: the Killer of Spread-Out Application Stack Ideas

A few months ago I described how bandwidth limitations shatter the dreams of spread-out application stacks with elements residing (or being dynamically migrated) between data centers. Today let’s focus on bandwidth’s ugly cousin: latency.

TL&DR Summary: Spreading the server components of an application across multiple locations (multiple data centers or hybrid cloud deployments) can easily result in dismal performance even when there’s plenty of bandwidth available.
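
To see how quickly latency dominates, consider a back-of-the-envelope calculation (a hedged illustration with assumed numbers, not figures from the article): a transaction that makes a few hundred sequential backend calls pays the round-trip time on every one of them, so every millisecond of RTT gets multiplied by the number of calls.

```python
# Back-of-the-envelope sketch of why latency (not bandwidth) kills spread-out
# application stacks: a chatty application issuing sequential requests pays the
# round-trip time on every request. All numbers are illustrative assumptions.

def response_time_ms(sequential_calls, rtt_ms, per_call_processing_ms=1.0):
    """Total response time of a transaction that makes `sequential_calls`
    dependent (non-parallelizable) calls to a backend `rtt_ms` away."""
    return sequential_calls * (rtt_ms + per_call_processing_ms)

# A few hundred sequential DB/API calls per web page is not unusual for a chatty app
for rtt in (0.2, 10, 50):     # same rack, metro data centers, cross-continent (ms)
    print(f"RTT {rtt:>5} ms -> page render time {response_time_ms(200, rtt):8.1f} ms")
```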

Read more ...

How Does MPLS-TE Interact with QoS?

MPLS Traffic Engineering is sometimes promoted as a QoS solution (it seems bandwidth calendaring is a permanent obsession of some networking engineers, and OpenFlow is no more a solution than MPLS-TE was ;), but in reality it’s pretty hard to make the two work together seamlessly (just ask anyone who had to implement auto-bandwidth MPLS-TE in a large network).
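
For readers who haven’t dealt with auto-bandwidth: the head-end router measures the traffic flowing through a tunnel and periodically re-signals the tunnel bandwidth to match. The sketch below models that loop in a heavily simplified way (collect samples, then resignal when the peak sample drifts too far from the current reservation); the class name, timers, and thresholds are illustrative assumptions, not any particular vendor’s implementation.

```python
# Simplified model of MPLS-TE auto-bandwidth: the head-end periodically samples
# tunnel traffic and re-signals the tunnel with a bandwidth reservation based on
# the highest sample seen in the adjustment interval. Values and the adjustment
# rule are illustrative assumptions, not vendor defaults.

class AutoBandwidthTunnel:
    def __init__(self, signaled_bw_kbps, adjust_threshold_pct=10):
        self.signaled_bw = signaled_bw_kbps
        self.adjust_threshold_pct = adjust_threshold_pct
        self.samples = []

    def collect_sample(self, measured_kbps):
        """Called every sampling interval (e.g. every few minutes)."""
        self.samples.append(measured_kbps)

    def adjustment_interval_expired(self):
        """Re-signal the tunnel if the highest sample differs enough from the
        current reservation; the reservation always lags behind the actual
        traffic, which is one reason combining this with QoS gets tricky."""
        if not self.samples:
            return self.signaled_bw
        peak = max(self.samples)
        change_pct = abs(peak - self.signaled_bw) / max(self.signaled_bw, 1) * 100
        if change_pct >= self.adjust_threshold_pct:
            self.signaled_bw = peak      # triggers make-before-break resignaling
        self.samples = []
        return self.signaled_bw
```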

Not surprisingly, we addressed the topic during our MPLS Tech Talk.

BGP Deaggregation with Conditional Route Injection

Whenever there’s a weird request to do something totally illogical with BGP, there’s a knob in Cisco IOS to get it done (and increase the heartburn of CCIE candidates). Conditional Route Injection (the ability to insert more specific prefixes into BGP without having them in the IP routing table) is one of them.
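
To make the mechanism concrete, here’s a toy Python model of what conditional route injection does (check that the aggregate exists in the BGP table and was learned from the expected source, then insert configured more-specific prefixes that inherit its attributes). It’s a conceptual sketch, not the Cisco IOS configuration syntax, and the addresses are documentation prefixes used purely for illustration.

```python
# Toy model of BGP conditional route injection: if an aggregate prefix is
# present in the BGP table (and learned from the expected source), inject the
# configured more-specific prefixes, inheriting the aggregate's attributes.
# Conceptual illustration only, not router code.

def conditionally_inject(bgp_table, aggregate, expected_source, more_specifics):
    """bgp_table: dict prefix -> {"source": ip, "attributes": {...}}"""
    route = bgp_table.get(aggregate)
    if route is None or route["source"] != expected_source:
        return bgp_table                       # "exist" condition not met

    for prefix in more_specifics:              # the "inject" part
        bgp_table[prefix] = {
            "source": route["source"],
            "attributes": dict(route["attributes"]),   # copy the aggregate's attributes
        }
    return bgp_table

bgp_table = {
    "192.0.2.0/24": {"source": "198.51.100.1", "attributes": {"origin": "IGP"}},
}
conditionally_inject(bgp_table, "192.0.2.0/24", "198.51.100.1",
                     ["192.0.2.0/25", "192.0.2.128/25"])
print(sorted(bgp_table))   # aggregate plus the two injected more-specific prefixes
```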

Keep in mind: being a MacGyver is not a long-term strategy. Just because you can doesn’t mean that you should.

Read more ...

That’s It for 2014

A dozen webinars, tens of public presentations and on-site workshops, numerous highly interesting ExpertExpress sessions, three books and over 250 blog posts. That should be enough for a year; it’s time to go offline.

I hope your company has a New Year freeze (and not a let’s-upgrade-everything-over-New-Year policy), so you’ll be able to do the same and enjoy the rest of the year with your loved ones. See you in 2015!

VRF Lite on Nexus 5600

One of the networking engineers using my ExpertExpress service to validate his network design had an interesting problem: he was building a multi-tenant VLAN-based private cloud architecture with each tenant having multiple subnets, and wanted to route within the tenant network as close to the VMs as possible (in the ToR switch).

He was using Nexus 5600 as the ToR switch, and although there’s conflicting information on the number of VRFs supported by that switch (verified topology: 25 VRFs, verified maximum: 1000 VRFs, configuration guide: 64 VRFs), he thought 25 VRFs (tenant routing domains) might be enough.

Read more ...