Author Archives: Ivan Pepelnjak
In the last video from the Network Programmability webinar Matt Oswalt answered numerous questions from the audience.
Load sharing in MPLS networks is always an interesting topic, and we couldn’t possibly avoid it during our MPLS-focused Tech Talks – watch the video.
After discussing the load sharing intricacies we briefly dabbled with the concept of entropy labels.
For whatever reason (subliminal messages from vendor marketing departments?), I’m constantly brooding about vendor lock-in, its inevitability, and the way supposedly disruptive companies try to use the fear of lock-in to persuade naïve customers to buy their products.
Brocade VCS fabric has one of the most flexible multichassis link aggregation group (LAG) implementations – you can terminate member links of an individual LAG on any four switches in the VCS fabric. Using that flexibility is not always a good idea.
2015-01-23: Added a few caveats on load distribution
Every time I write about unequal traffic distribution across a link aggregation group (LAG, aka EtherChannel or Port Channel) or ECMP fabric, someone asks a simple question: “is there no way to reshuffle the traffic to make it more balanced?”
TL&DR summary: there are ways to do it, and some vendors already implemented them.
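To get a feeling for why hash-based load sharing is rarely perfectly balanced – and what “reshuffling” the traffic would actually mean – here’s a minimal Python sketch. It’s a toy model (made-up flows, CRC32 instead of whatever a real ASIC uses, and no concern for packet reordering), not any vendor’s actual implementation: each flow is pinned to a LAG member by hashing its 5-tuple, a few elephant flows skew the byte counts, and changing the hash seed moves flows onto different members.

```python
import zlib
from collections import Counter

MEMBERS = 4

# Fabricated flows: (5-tuple, byte count). Every tenth flow is an elephant flow.
flows = [((f"10.0.0.{i}", f"10.0.1.{i % 7}", 6, 1024 + i, 80),
          5_000_000 if i % 10 == 0 else 20_000)
         for i in range(50)]

def member(five_tuple, seed=0):
    """Deterministic per-flow hashing: the same 5-tuple always lands on the same member."""
    return zlib.crc32(f"{seed}|{five_tuple}".encode()) % MEMBERS

for seed in (0, 1):                     # a new seed 'reshuffles' the flows
    load = Counter()
    for five_tuple, nbytes in flows:
        load[member(five_tuple, seed)] += nbytes
    print(f"seed={seed}:", dict(sorted(load.items())))
```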
Imagine you need a data center WAN edge router with multiple 10GE uplinks. You’d probably go for an ASR or an MX-series router, right? How about using a 2 Tbps ToR switch and an SDN solution to make it work with the full Internet routing table?
If you happen to have iTunes on your computer, please spend 10 seconds rating the podcast before you start listening to it. Thank you!
I got a lengthy email from one of my readers a while ago, essentially asking a simple question: assuming I want to return to my studies and move beyond the CCIE I currently hold, should I go for CCDE or VMware’s new VCIX-NV?
Well, it’s almost like “do you believe in scale-up or scale-out?” ;) Both approaches have their merits.
Every major hypervisor and networking vendor has an overlay virtual networking solution. Obviously they’re not identical, and some of them work better than others in large-scale environments – an interesting challenge we tried to address in the Scaling Overlay Virtual Networks webinar. As always, we started by identifying the potential problems.
Olivier Hault sent me an interesting challenge:
I cannot find any simple network-layer solution that would allow me to use total available bandwidth between a Hypervisor with multiple uplinks and a Network Attached Storage (NAS) box.
TL&DR summary: you cannot find it because there’s none.
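The reason, in a nutshell: standard link aggregation and ECMP schemes hash each TCP session onto a single uplink, so a single storage session between the hypervisor and the NAS is stuck with one link’s worth of bandwidth regardless of how many uplinks there are. A minimal sketch of that per-flow pinning (toy hash function and made-up addresses, purely illustrative):

```python
import zlib

UPLINKS = 4
# One NFS session between hypervisor and NAS keeps the same 5-tuple for its whole lifetime.
session = ("192.168.10.11", "192.168.10.200", 6, 34567, 2049)

def uplink(five_tuple):
    return zlib.crc32(repr(five_tuple).encode()) % UPLINKS

# Every packet of the session hashes to the same uplink.
print([uplink(session) for _ in range(5)])
```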
What’s the difference between network programmability and SDN? Matt Oswalt explained his view on the topic in the Network Programmability 101 webinar.
Dmitri Kalintsev, one of the networking guys from VMware NSX team, has kindly agreed to do an NSX technical deep dive Software Gone Wild episode… and you have the opportunity to tell him what you’d like to hear. It’s as easy as writing a comment, and we’ll pick one of the most popular topics.
Do keep in mind that we plan to do a technical deep dive, and it has to fit within an hour or so or nobody will ever listen to it, so please keep your suggestions focused. “Troubleshooting NSX”, “NSX Design”, or “NSX versus ACI” is not what we’re looking for ;)
One of the interesting challenges in the Software-Defined Data Center world is the integration of network and security services with the compute infrastructure and network virtualization. Palo Alto claims to have tightly integrated their firewalls with VMware NSX and numerous cloud orchestration platforms - it was time to figure out how that’s done, so we decided to go on a field trip into the scary world of security.
A few months ago I described how bandwidth limitations shatter the dreams of spread-out application stacks with elements residing (or being dynamically migrated) between data centers. Today let’s focus on bandwidth’s ugly cousin: latency.
TL&DR Summary: Spreading the server components of an application across multiple locations (multiple data centers or hybrid cloud deployments) can easily result in dismal performance even when there’s plenty of bandwidth available.
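A back-of-the-envelope illustration (all numbers below are made-up assumptions, not measurements): an application that issues a couple hundred sequential calls between its tiers barely notices in-rack latency, while the same request pattern stretched across a long-distance link adds seconds of pure network wait.

```python
# Sequential round trips x RTT = time a single user transaction spends waiting on the network.
# All figures are illustrative assumptions, not measurements.
SEQUENTIAL_CALLS = 200   # e.g. database queries issued one after another per transaction

for scenario, rtt_ms in [("same rack", 0.2),
                         ("data centers ~100 km apart", 2.0),
                         ("data centers ~1000 km apart", 10.0)]:
    wait = SEQUENTIAL_CALLS * rtt_ms / 1000
    print(f"{scenario:>28}: ~{wait:.2f} s of network wait per transaction")
```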
MPLS Traffic Engineering is sometimes promoted as a QoS solution (it seems bandwidth calendaring is a permanent obsession of some networking engineers, and OpenFlow is no more a solution than MPLS-TE was ;), but in reality it’s pretty hard to make the two work together seamlessly (just ask anyone who had to implement auto-bandwidth MPLS-TE in a large network).
Not surprisingly, we addressed the topic during our MPLS Tech Talk.
One of the main complaints I kept getting about my free content was that there’s simply too much of it, and that it’s impossible to find what one is looking for.
New Year holidays gave me enough time to implement a project that has been on my to-do list for almost a year: total redesign of the free content web site. Feedback highly appreciated!
Whenever there’s a weird request to do something totally illogical with BGP, there’s a knob in Cisco IOS to get it done (and increase the heartburn of CCIE candidates). Conditional Route Injection (the ability to insert more specific prefixes into BGP without having them in the IP routing table) is one of them.
Keep in mind: being a MacGyver is not a long-term strategy. Just because you can doesn’t mean that you should.
A dozen webinars, tens of public presentations and on-site workshops, numerous highly interesting ExpertExpress sessions, three books and over 250 blog posts. That should be enough for a year; it’s time to go offline.
I hope your company has a New Year freeze (and not a “let’s upgrade everything over New Year” policy), so you’ll be able to do the same and enjoy some time during the rest of the year with your loved ones. See you in 2015!
After discussing the basics of MPLS, MPLS-TE and LDP, and the relationship between FECs, LDP and BGP, Seamus and I focused on another interesting topic: how the MPLS protocol stack uses RSVP to implement traffic engineering.
One of the networking engineers using my ExpertExpress to validate their network design had an interesting problem: he was building a multi-tenant VLAN-based private cloud architecture with each tenant having multiple subnets, and wanted to route within the tenant network as close to the VMs as possible (in the ToR switch).
He was using Nexus 5600 as the ToR switch, and although there’s conflicting information on the number of VRFs supported by that switch (verified topology: 25 VRFs, verified maximum: 1000 VRFs, configuration guide: 64 VRFs), he thought 25 VRFs (tenant routing domains) might be enough.
The edited videos for the Scaling Overlay Virtual Networks webinar are available on the ipSpace.net Content site. Nuage Networks sponsored the webinar; the videos are thus publicly available (without registration).