Recently there was a conversation in the Cumulus community (details in the debriefing below) about the best way to build a redundant backup IP link for multi-chassis link aggregation (MLAG). Like all good consulting-led blogs, we have a healthy dose of pragmatism that goes with our recommendations and this technology is no different. But if you’re looking for the short answer, let’s just say: it depends.
The MLAG backup IP feature goes by many names in the industry. In Cisco-land you might call this the “peer keepalive link,” in Arista-ville you might call this the “peer-address heartbeat” and in Dell VLTs it is known as the “backup destination.” No matter what you call it, the functionality offered is nearly the same.
Before we get into the meat of the recommendation, let’s talk about what the backup IP is designed to do. The backup IP link provides an additional value for MLAG to monitor, so a switch knows whether its peer is reachable. Most implementations use this backup IP link solely as a heartbeat, meaning that it is not used to synchronize MAC addresses between the two MLAG peers. This is also the case with Cumulus Linux.
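On Cumulus Linux, for example, the backup IP is set with the `clagd-backup-ip` option on the peer-link sub-interface. A minimal sketch of the relevant stanza in `/etc/network/interfaces` (interface names, addresses, and the system MAC are illustrative, not from the original post):

```
auto peerlink.4094
iface peerlink.4094
    # Primary peer address used for MLAG state and MAC sync (illustrative)
    clagd-peer-ip 169.254.1.2
    clagd-sys-mac 44:38:39:ff:40:94
    # Heartbeat-only address, reached over a path independent of the peer link
    clagd-backup-ip 192.0.2.50
```

The key design point is that the backup IP should be reachable over a path that does not traverse the peer link itself (for example, the management network), so a peer-link failure can be distinguished from a peer failure.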
Ekahau Hat (photo courtesy of Sam Clements)
You may have noticed quite a few high profile departures from Ekahau recently. A lot of very visible community members, including Joel Crane (@PotatoFi), Jerry Olla (@JOlla), and Jussi Kiviniemi (@JussiKiviniemi), have all decided to move on. This has generated quite a bit of discussion among the members of the wireless community as to what this really means for the company and the product that is so beloved by so many wireless engineers and architects.
Putting the people aside for a moment, I want to talk about the Ekahau product line specifically. There was an undercurrent of worry in the community about what would happen to Ekahau Site Survey (ESS) and other tools in the absence of the people we’ve seen working on them for so long. I think this tweet from Drew Lentz (@WirelessNerd) best exemplifies that perspective:
Intel revealed its first chipsets designed for artificial intelligence in large data centers.
VMware bought Intrinsic, an application security startup, in its fifth acquisition in three months,...
The Linux Foundation’s Confidential Computing Consortium is a who’s who of cloud providers,...
I've been publishing on Etherealmind for more than ten years and it's been quite a journey. I've changed, and it's time to move on.
The post Closing the Shutters on EtherealMind appeared first on EtherealMind.
MEF unveiled the first standardized definition for SD-WAN. The definition stands to help to...
I spent a lot of time during this summer figuring out the details of NSX-T, resulting in significantly updated and expanded VMware NSX Technical Deep Dive material… but before going into those details let’s do a brief walk down the memory lane ;)
You might remember a startup called Nicira that was acquired by VMware in mid-2012… supposedly resulting in the ever-continuing spat between Cisco and VMware (and maybe even triggering the creation of Cisco ACI).
I want to thank both Bhushan Pai and Matt Karnowski, who joined VMware from the Avi Networks acquisition, for helping with the Avi Networks setup in my VMware Cloud on AWS lab and helping with some of the details in this blog.
Humair Ahmed, Sr. Technical Product Manager, VMware NSBU
Bhushan Pai, Sr. Technical Product Manager, VMware NSBU
Matt Karnowski, Product Line Manager, VMware NSBU
With the recent acquisition of Avi Networks, VMware now offers a complete solution with advanced load balancing and Application Delivery Controller (ADC) capabilities. In addition to load balancing, these capabilities include global server load balancing, web application firewall (WAF), and advanced analytics and monitoring.
In this blog, we walk through an example of how the Avi Networks load balancer can be leveraged within a VMware Cloud on AWS software-defined data center (SDDC).
IBM's Steve Fields explained that software optimization can improve hardware performance by up to...
The tenth meeting of Africa Peering and Interconnection Forum (AfPIF) kicked off in Balaclava, Mauritius, with participants celebrating the achievements and looking forward to further collaboration.
Andrew Sullivan, the President and CEO of the Internet Society, opened by highlighting the importance of the meeting, which helps create a community that supports the growth of the Internet in Africa, identifies challenges, and ensures that understanding spreads.
In his speech, he noted that traffic exchanged inside Africa has expanded enormously as a result of the work done by AfPIF over the years. One of AfPIF's goals is to increase the level of local content exchanged locally to 80% by 2020.
Sullivan, who has extensive experience working with international Internet bodies, emphasized the need for a robust community in Africa, led by Af-IX, that will continue working together to ensure that the Internet is built in Africa, according to the needs of Africans and the African network experience.
The annual meeting brings together chief technology officers, peering coordinators and business development managers from the African region, Internet service providers and operators, telecommunications policymakers and regulators, content providers, Internet exchange point (IXP) operators, infrastructure providers, data center managers, and National Research and Education Networks (NRENs).
CenturyLink's network outage impacted as many as 22 million customers across 39 states, and at...
If you follow me on Twitter (https://twitter.com/danieldibswe), you know I have been doing a lot of SD-WAN lately and I recently built my own lab. In this lab, I wanted to try a feature known as service chaining. What is service chaining? It’s a method of sending traffic through one or more services, such as a firewall, before the traffic takes the “normal” path towards its destination.
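In Cisco SD-WAN (Viptela) terms, this involves two pieces: advertising the service from the site that hosts it, and a centralized control policy that redirects matching routes through that service. A hedged sketch of the CLI, with names, VPN numbers, and addresses purely illustrative (not taken from the lab in the post):

```
! On the vEdge at the site hosting the firewall: advertise the service
vpn 10
 service FW address 10.10.10.1

! On the vSmart controller: steer selected routes through the service
policy
 control-policy SERVICE-CHAIN
  sequence 10
   match route
    prefix-list BRANCH-PREFIXES
   action accept
    set
     service FW vpn 10
  default-action accept
```

With a policy like this applied, traffic toward the matched prefixes is tunneled to the service site and through the firewall before continuing to its destination, which is exactly why intermediate hops can show up unexpectedly in a traceroute.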
Before we dive deeper in, let me show the topology in use:
When I tested this feature, the data plane was working perfectly but my traceroute looked very strange. The traceroute was also not finishing.
root@B1-S1:/# traceroute 10.1.2.10
traceroute to 10.1.2.10 (10.1.2.10), 30 hops max, 60 byte packets
 1  10.1.1.1 (10.1.1.1)  6.951 ms  36.355 ms  39.604 ms
 2  10.1.0.2 (10.1.0.2)  11.775 ms  15.047 ms  15.535 ms
 3  10.0.0.18 (10.0.0.18)  28.540 ms  28.538 ms  28.532 ms
 4  10.1.2.10 (10.1.2.10)  41.748 ms  41.746 ms  41.736 ms
The 111 Cybersecurity Tech Accord companies compete daily but all agree on the big picture:...