Cloud-managed WLAN offers many benefits, but traditional controller-based WiFi still has its advantages.
One of the toughest challenges you can face as a networking engineer is trying to understand what the customer really needs (as opposed to what they think they’re telling you they want).
For example, the server team comes to you saying “we need 5 VLANs between these 3 data centers”. What do you do?
The 2017 CIGI-Ipsos Global Survey on Internet Security and Trust paints a bleak picture of the current state of trust online. A majority of those surveyed said they are more concerned about their privacy than they were the year before, almost evenly split between those “much more concerned” and those only “somewhat more concerned”. When asked whether they agreed with the statement “overall, I trust the Internet”, only 12% of respondents strongly agreed and a further 43% somewhat agreed. In other words, only a little more than half agreed that they trust the Internet, and many of those hedged by choosing “somewhat agree”.
In the previous post, I talked about OpenVPN TCP and UDP tunnels and why you should not be using TCP. In this post, I’m going to talk about optimizing those tunnels to get the most out of them.
Believe it or not, the default OpenVPN configuration is likely not optimized for your link. It will probably work, but its throughput can often be improved if you take the time to tune it.
A tunnel has two ends! Optimizing one end does not necessarily optimize the other. To properly optimize the link, both ends of the tunnel should be under your control. That means when you run OpenVPN in server mode, serving clients you do not control, the best you can do is optimize your own end of the tunnel and pick sensible defaults that suit most clients.
Below are some techniques you can use to optimize your OpenVPN tunnels.
In today’s world, where most traffic is either encrypted or pre-compressed (and often both), you should think twice before enabling compression on top of your VPN tunnel.
While it still could be an effective way …
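To make the tuning knobs concrete, here is a minimal sketch of a server-side OpenVPN configuration fragment touching the areas discussed above. The specific MTU and buffer values are illustrative assumptions, not recommendations; the right numbers depend on your link's path MTU and bandwidth-delay product.

```
# Illustrative OpenVPN fragment — values are examples, tune for your own link
proto udp          # prefer UDP over TCP (see the previous post)
tun-mtu 1500       # tunnel interface MTU; must match the other end
mssfix 1400        # clamp TCP MSS so packets fit after encapsulation overhead
sndbuf 393216      # larger socket buffers help on high bandwidth-delay links
rcvbuf 393216
# compression deliberately omitted — most traffic is already
# encrypted or pre-compressed, so it rarely helps and adds overhead
```

Remember that directives such as `tun-mtu` and `mssfix` affect both ends of the tunnel, so they only pay off when you control (or coordinate with) the remote side.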
In the previous articles, we covered the initial objections to LACP and took a deep dive into the effect on traffic patterns in an MLAG environment without LACP/static LAG. In this article, we’ll explore how LACP differs from all other available teaming techniques and show how it could have solved a problem in this particular deployment.
I originally set out to write this as a single article, but to explain the nuances it quickly spiraled beyond that. So I decided to split it up into a few parts.
• Part 1: Design choices – Which NIC teaming mode to select
• Part 2: How MLAG interacts with the host
• Part 3: “Ships in the night” – Sharing state between host and upstream network
An important element to consider: LACP is the only uplink protocol supported by VMware that directly exchanges network state information between the host and its upstream switches. An ESXi host is sort of a host, but also sort of a network switch (insofar as it forwards packets locally and makes path decisions for north/south traffic). Herein lies the problem: we effectively have network devices forwarding packets between each other, but …
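For context, the switch side of an LACP bundle toward an ESXi host is typically configured per member port. A minimal sketch on a Cisco IOS-style switch follows; the interface and channel numbers are hypothetical, and the equivalent LAG must also be enabled on the vSphere Distributed Switch side.

```
interface range GigabitEthernet1/0/1 - 2
 channel-group 10 mode active   ! actively send LACPDUs toward the ESXi host
!
interface Port-channel10
 switchport mode trunk          ! carry the VM VLANs over the aggregated link
```

With `mode active`, the switch originates LACPDUs, so the host and switch continuously exchange state about the bundle — the very property that distinguishes LACP from the other teaming modes discussed in this series.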