When I finally managed to get SR Linux running with netsim-tools, I wanted to test how it interacts with Cumulus VX and FRR in an OSPF+BGP lab… and failed. Jeroen Van Bemmel quickly identified the culprit: MTU. Yeah, it’s always the MTU (or DNS, or BGP).
I never experienced a similar problem, so of course I had to identify the root cause:
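Before diving in, a quick way to eyeball interface MTU across the Linux-based nodes (FRR, Cumulus VX) would be an Ansible play along these lines. This is a minimal sketch of mine, not taken from the original troubleshooting: it assumes the nodes are reachable over SSH, and SR Linux would need its own CLI or gNMI interface instead.

---
- name: Display interface MTU on Linux-based lab nodes
  hosts: all
  gather_facts: false
  tasks:
  - name: Read link details (one line per interface, MTU included)
    command: ip -o link show
    register: links
    changed_when: false
  - name: Show the collected link data
    debug:
      var: links.stdout_lines

An MTU mismatch between neighbors on the same segment is enough to keep OSPF adjacencies from ever getting past the database exchange.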
This article was originally posted on the Packet Pushers Ignition site on July 9, 2021. The ascendance of Software Defined Networking (SDN) has catalyzed a renaissance in specialized hardware designed to accelerate and offload workloads from general-purpose CPUs. Decoupling network transport and services via software-defined abstraction layers lets a new generation of programmable networking hardware […]
The post Marvell’s OCTEON 10 Challenges All Comers For DPU Supremacy appeared first on Packet Pushers.
The world is strange today. Despite the Covid-19 crisis all over the world, most ISPs are fighting a battle to deliver more bandwidth on a daily basis.
All of this pushed ISPs to their bandwidth limits, leaving them no option but to look for upgrades to meet ever-growing bandwidth demands. There they ran into another set of problems in this completely new and strange world: chip shortages, logistics and labor health issues drove prices up and left nothing in stock. Here at IP ArchiTechs, we spend a lot of our time finding good solutions for our customers and helping them get through these hard times. Whether it’s our regular team meeting or just a chat with colleagues, on almost any occasion someone mentions a new solution to improve capacity and performance for our customers.
Starting with the thought of what is available as a platform today, and of course ready to ship immediately after you check out and pay, one thing was obviously right in front of me: x86 servers, dozens of them, left over from the time when we were buying new hardware just because a new generation Continue reading
If you find smart NICs interesting, you’ll like the latest blog post by James Hamilton explaining how AWS emulated the Xen environment on Nitro hardware to keep old VM instances running on new hardware.
Hello my friend,
Network Function Virtualisation (NFV) is not a new topic. There are numerous blog posts and articles, including on our blog, that review this topic. Yet there is much more we can cover. Today we’ll share some insights on one of the very interesting products existing on the market today: the 6WIND vRouter Turbo Router. We have a limited number of days to write a few articles under our evaluation license; hence, we’ll focus only on the most critical elements.
No part of this blogpost could be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
It absolutely is. In fact, Linux is the real home for automation systems, as in many cases it hosts the tools you create in Ansible, Python, Bash, Go or any other language. At the same time, to work effectively with Linux, you need to know how to automate the management and operation of the Linux operating system itself. And you will be absolutely capable of doing that once you attend our Continue reading
A while ago my friend Nicola Modena sent me another intriguing curveball:
Imagine a CTO who has invested millions in a super-secure data center and wants to consolidate all compute workloads. If you were asked to run a BGP Route Reflector as a VM in that environment, and would like to bring OSPF or IS-IS to that box to enable BGP Optimal Route Reflection (ORR), would you use a GRE tunnel to avoid needing a dedicated VLAN or boring other hosts with routing protocol hello messages?
While there might be good reasons for doing that, my first knee-jerk reaction was:
Hello my friend,
In the previous blogpost we covered the installation of Proxmox as a core platform for building an open source virtualisation environment. Today we’ll continue this discussion and show how to create a multi-server cloud to better spread the load and provide resiliency for your applications.
No part of this blogpost could be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
In many cases, Linux is the major driving force behind modern clouds. In fact, if you look across all the current big clouds, such as Amazon Web Services, Google Cloud Platform and Microsoft Azure, you will see Linux everywhere: on servers and on network devices (e.g., data centre switches). Therefore, knowing how to deal with Linux and how to automate it is crucial to successfully automating current IT systems.
At our trainings, advanced network automation and automation with Nornir (the second step after advanced network automation), we give you detailed knowledge of all the relevant technologies:
Hello my friend,
Just last week we finished our Zero-to-Hero Network Automation Training, which was very intensive and very interesting. One could think: it’s time for vacation now!.. Not quite yet. We decided to use the time wisely and upgrade our lab so that customers can use it. A lab upgrade is a major infrastructure project, which involves bringing in new hardware, changing the topology, and introducing new software to simplify its management. Sounds interesting? Jump to the details!
No part of this blogpost could be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
Each and every element of your IT landscape requires two things: it shall be monitored, and it shall be managed. Being managed means the element shall be configured, and that is the first step for all sorts of automation. Configuration management is a perfect use case to start automating your infrastructure, which spans servers, network devices, VMs, containers and much more. And we are here to help you do Continue reading
A few weeks ago I got an excited tweet from someone working at Oracle Cloud Infrastructure: they launched full-blown layer-2 virtual networks in their public cloud to support customers migrating existing enterprise spaghetti mess into the cloud.
Let’s skip the usual “does everyone using the applications now have to pay for Oracle licenses” and “I wonder what the lock-in might be when I migrate my workloads into an Oracle cloud” jokes and focus on the technical aspects of what they claim they implemented. Here’s my immediate reaction (limited to the usual 280 characters, because that’s the absolute upper limit of consumable content these days):
VMware's next CEO has two tasks: to construct a narrative about VMware's role and value as a company in a post-hypervisor world, and to integrate its various fiefdoms into a cohesive set of products that can provide greater utility when used together than when used individually.
The post VMware After Gelsinger: Integrating Fiefdoms For A Post-Hypervisor World appeared first on Packet Pushers.
I always claimed that VMware Fault Tolerance makes no sense. After all, the only thing it does is protect a VM against a server hardware failure… in a world where software crashes are way more common, and fat fingers cause most of the outages.
But wait, it gets worse: the whole thing is incredibly complex – you might like the description Minh Ha left as a comment on my Fifty Shades of High Availability blog post.
Last week I described how I configured PVLAN on a Linux bridge. After checking the desired partial connectivity with ios_ping, I wanted to verify it with LLDP neighbors. The Ansible ios_facts module collects LLDP neighbor information, and it should be really easy to use those facts to check whether port isolation works as expected.
---
- name: Display LLDP neighbors on selected interface
  hosts: all
  gather_facts: true
  vars:
    target_interface: GigabitEthernet0/1
  tasks:
  - name: Display neighbors gathered with ios_facts
    debug:
      var: ansible_net_neighbors[target_interface]
Alas, none of the routers saw any neighbors on the target interface.
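When a lookup like this comes back empty, a quick sanity check (a minimal sketch of my own, not the troubleshooting from the original post) is to dump the whole neighbor dictionary and see whether it’s populated at all, and under which interface names:

- name: Dump everything ios_facts collected about LLDP neighbors
  debug:
    var: ansible_net_neighbors

If that variable is empty as well, the problem is in the fact collection (or in LLDP itself) rather than in the per-interface lookup.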
This is an article from the VMware from Scratch series. During preparation to install Tanzu Kubernetes Grid Integrated Edition (TKGI v1.8) on vSphere with NSX-T Data Center (v3.0.2), one of the steps is to use Ops Manager to deploy the Harbor Container Registry (in this case v2.1.0). The deployment ended with a Harbor error several times, so I’m sharing my solution here to make things easier for you, given that I didn’t come across any solution while googling around. In the process, the Harbor Registry product tile is downloaded from the VMware Tanzu network portal, imported
The post VMware TKGI – Deployment of Harbor Container Registry fails with error appeared first on How Does Internet Work.
I wanted to test routing protocol behavior (IS-IS in particular) on partially meshed multi-access layer-2 networks like private VLANs or Carrier Ethernet E-Tree service. I recently spent plenty of time creating a Vagrant/libvirt lab environment on my Intel NUC running Ubuntu 20.04, and I wanted to use that environment in my tests.
Challenge-of-the-day: How do you implement private VLAN functionality with Vagrant using libvirt plugin?
There might be interesting KVM/libvirt options I’ve missed, but so far I’ve figured out two ways of connecting Vagrant-controlled virtual machines in a libvirt environment:
Remember my rants about VMware and firewall vendors promoting crazy solutions that work best in PowerPoint and cause more headaches than anything else (excluding increased vendor margins and sales team bonuses, of course)?
Here’s another we-don’t-need-all-that-complexity real-life story coming from one of my long-term subscribers:
Every now and then I call someone’s baby ugly (or maybe it was their third cousin’s baby and they nonetheless feel offended). In such cases a common resort is to cite business or market needs to prove how ignorant and clueless I am. Here’s a sample LinkedIn comment talking about my ignorance about the need for smart NICs:
The rise of custom silicon by Presando [sic], Mellanox, Amazon, Intel and others confirms there is a real market need.
Now let’s get something straight: while there are good reasons to use tons of different things that might look inappropriate, irrelevant or plain stupid to an outsider, I don’t believe in the real market need argument being used to justify anything without supporting technical facts (tell me why you need that stuff, and prove to me that using it is the best way of solving the problem).
Several engineers formerly working for a large virtualization vendor were pretty upset with me when I claimed that the virtualization consultants promote “disaster recovery using stretched VLANs” designs instead of alternatives that would implement proper separation of failure domains.
Guess what… it’s even worse than I thought.
Here’s a sequence of comments I received after reposting one of my “disaster recovery doesn’t need stretched VLANs” blog posts on LinkedIn sometime in late 2019:
Got this question from one of ipSpace.net subscribers:
Do we really need those intelligent datacenter switches for underlay now that we have NSX in our datacenter? Now that we have taken a lot of the intelligence out of our underlying network, what must the underlying network really provide?
Reading the marketing white papers, the answer would be “IP connectivity”… but keep in mind that building your infrastructure based on information from vendor white papers usually gives you the results your gullibility deserves.