After initializing a Cisco DNA Appliance on version 126.96.36.199 and starting an upgrade towards 188.8.131.52 in order to get to 184.108.40.206, I hit a strange issue: the appliance system update went fine, but the switch to 220.127.116.11 was disabled until the Application Updates finished. The real issue was that the Application Update of Cloud Connectivity – Data Hub got stuck at 12% for 4 days without timing out or finishing. Several appliance reboots from CIMC didn't help. Below are the steps that helped sort out the Application Updates issue with container pods being stuck at image pulling
The post Cisco DNA Upgrade Issues – Application Update Stuck appeared first on How Does Internet Work.
Well… it will reboot your whole switch stack at once, in case you were wondering. But it has a neat feature of automatic rollback to the previous IOS XE version if something goes south with the newly upgraded switches. The same goes for non-stacked Cisco Catalyst C9200 and C9300 switches, but the question, whose answer is hard to find, was whether the stack would reload its members sequentially or reload all members at once. The answer is, of course, the worse option, which makes the upgrade impossible without a network outage even if other devices are connected
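The automatic-rollback behaviour mentioned above maps to the install-mode upgrade workflow on IOS XE, where an auto-abort timer reverts the switch unless the upgrade is explicitly committed. A hedged sketch (the image file name is illustrative; verify the syntax against your IOS XE release):

```
! Activate the new image with a 30-minute auto-abort timer;
! the whole stack reloads at this point
Switch# install add file flash:cat9k_iosxe.17.06.04.SPA.bin activate auto-abort-timer 30

! If "install commit" is not issued within the timer window,
! the switch rolls back to the previous image on its own
Switch# install commit
```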
It was a year of big changes in every way. I was fortunate enough to be surrounded by great professionals working on huge projects, and then even to get the chance to switch to some completely new technologies that I had never really worked with before. It was great, it is still very intense and, from my perspective, all the changes were for the better. But as with all periods with a lot of action, the draft articles in this blog's queue didn't yield as much new material as I wanted. It was a year of almost no writing but a
This article describes a strange workaround for switching a VMware NSX-T enabled cluster from a vSphere Enterprise Plus license to a vSphere Standard license with the vDS licensed through NSX-T. I really hope that you will not need to go through this, as it is quite like bringing the whole environment up from scratch. But if you have two clusters with enough resources, it will enable you to do it without downtime. The environment on which this was tested is vSphere 7.0.2 and NSX-T 3.1.2. NSX-T, as a network and security platform, enables network functions to be virtualised on your vSphere cluster. The way
The post Switch vSphere Enterprise Plus license to vSphere Standard on a NSX-T enabled cluster appeared first on How Does Internet Work.
Doing a lot on Nexus 9000 series datacenter boxes (N9K) lately? Surely you're missing the good old 'wr' command to save your running-config into startup-config. The NX-OS architecture guys decided that you should be really well concentrated when deciding to save your nice new configuration so it survives a device reboot, and type: copy running-config startup-config. Just typing 'wr' into the console would be too nice, right? Let's use the alias configuration and bring that command back to the box.
N9K_1(config)# copy running-config startup-config
100%
Copy complete, now saving to disk (please wait)...
Copy complete.
N9K_1(config)#
If you try 'wr':
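The alias itself is a one-liner on NX-OS, defined with the cli alias command:

```
N9K_1(config)# cli alias name wr copy running-config startup-config
N9K_1(config)# exit
N9K_1# wr
Copy complete, now saving to disk (please wait)...
Copy complete.
```

After that, 'wr' behaves like the full copy command anywhere in the exec shell.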
The post Missing good old ‘wr’ command on N9K? let’s bring it back! appeared first on How Does Internet Work.
NSX-T v3.0.1 and v3.1.3 were used to test the procedure described below. As always with network engineers, even when working with SDN/SDDC solutions, sooner or later you will be asked to troubleshoot connectivity across your hops. And when working with the VMware NSX-T platform, your next hop for North-South datacenter traffic will almost always be an NSX-T Edge Transport Node VM. It is really useful, then, to be able to get some packet traces out of that box in order to troubleshoot traffic issues in detail. One example would be simple routing or some sort of load-balancing traffic
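On the Edge node itself, captures are taken from its CLI rather than with plain tcpdump. A hedged sketch from memory (interface name and file name are illustrative; check the exact capture syntax against your NSX-T version's CLI reference):

```
nsx-edge> get interfaces
! identify the fastpath uplink, e.g. fp-eth0

! capture traffic on that interface into a pcap file,
! optionally filtered with a tcpdump-style expression
nsx-edge> start capture interface fp-eth0 file uplink.pcap expression host 10.10.10.5
```

The resulting .pcap can then be copied off the Edge and opened in Wireshark for detailed analysis.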
Intro It's a short list of things that you should probably know when installing VMware NSX-T. Of course, installing NSX-T should be done by following the official documentation; these are just a few additional points that could help. It's for your peace of mind afterward. This is an article from the VMware from Scratch series. NSX Manager is a Cluster of three VMs You should end up with three NSX Manager VMs in a cluster when you finish the NSX-T installation. The first one is deployed via an .ovf file from vCenter, the other two directly from the first NSX Manager's GUI
This is an article from the VMware from Scratch series. During preparation to install Tanzu Kubernetes Grid Integrated Edition (TKGI v1.8) on vSphere with NSX-T Data Center (v3.0.2), one of the steps is to use Ops Manager to deploy the Harbor Container Registry (in this case v2.1.0). The deployment ended with a Harbor error several times, so I'm sharing my solution here to ease things for you, given that I didn't come across any solution while googling around. In the process, the Harbor Registry product tile is downloaded from the VMware Tanzu network portal, imported
The post VMware TKGI – Deployment of Harbor Container Registry fails with error appeared first on How Does Internet Work.
It's one of those articles aimed at people with Cisco ACI experience who, like me, don't bother reading all the install and other guides again while building an ACI fabric for the n-th time. When it comes to Cisco ACI, you really should. There's a small change in the physical build of the third generation of APIC servers: the 10G SFP interfaces from the APIC towards the Leaf switches (used for fabric discovery and later for in-band controller-to-fabric communication) come on a 4x10G card built into the server, not like the 2x10G on M2/L2 and other
The post New ACI deployment? Watch out when connecting APICs to Leafs appeared first on How Does Internet Work.
SDDC – Software-Defined Data Centers The times of Software-Defined everything have long since arrived, and the need to implement many appliances, two or more for each network function, is not so popular anymore. The ability to manage packet forwarding, load balancing and security of network traffic inside the datacenter from one simple web console finally shows that things can be managed in a simpler way after all. All vendors in the networking world tried to come up with their own way of centralizing data center management; as it turns out, all of them did it, some better than others.
The post Software-defined data center and what’s the way to do it appeared first on How Does Internet Work.
I made it to the list of Cisco Champions for 2020, which makes it the third year in a row! The primary reason I could again be selected among the first 100 Cisco champs for 2020 in the early acceptance process is the stuff that I shared through this blog, and the contact with people who reached me directly via blog comments or e-mail. Again, 2019 was another year full of great projects and big challenges with new technologies. We finally broke the barrier of NFV and Automation and got some great stuff done using automation
This blog was selected as a finalist in the Cisco IT Blog Awards, in the Best Analysis category: the category for resources that provide insightful discussions and help for networking architects around the world. Fancy, right? Do you agree? Go and vote, it's the second one on the list: https://www.ciscofeedback.vovici.com/se/705E3ECD18791A68
The post Best Analysis Finalist – Cisco IT Blog Awards for 2019 appeared first on How Does Internet Work.
API Calls method The fancy way of configuring a Cisco ACI Fabric is by using a Python script to generate API calls. Those API calls are then pushed to the APIC controller using POSTMAN (or a similar tool). This method suits configuration changes that you make often and cannot afford to get wrong: you write a Python script, and that script takes your configuration variables and generates an API call that configures the system quickly and correctly every time. The thing is that you need to take the API call example
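As a minimal sketch of that workflow (the helper name is mine; the payload shape follows the common APIC `fvTenant` JSON format, and the target URL in the comment is the standard APIC REST endpoint):

```python
import json

def tenant_payload(tenant_name, description=""):
    """Generate the JSON body for an APIC API call that creates a tenant."""
    return {
        "fvTenant": {
            "attributes": {
                "name": tenant_name,
                "descr": description,
                "status": "created,modified",
            }
        }
    }

# The generated call would then be POSTed to the APIC,
# e.g. with POSTMAN or the requests library:
#   POST https://<apic>/api/mo/uni.json
payload = tenant_payload("Customer-A", "Tenant generated by script")
print(json.dumps(payload, indent=2))
```

Feeding different variables to the same script regenerates a correct call every time, which is the whole point of scripting the calls instead of clicking through the GUI.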
If you are configuring a Cisco ACI datacenter fabric, it will sooner or later get to the point where you need to configure multiple objects inside the GUI, which will, using the click-and-click method, take a huge amount of time. While using POSTMAN to create multiple objects of the same type is the preferred method that everybody speaks about (because you can generate REST API calls using Python or something similar), the quickest way is to POST a JSON configuration file directly through the GUI. POSTing JSON config example As described above, the POST of JSON
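A hedged illustration of such a JSON file (object names are made up), creating several Bridge Domains of the same type in one POST by listing them as children of the tenant object:

```
{
  "fvTenant": {
    "attributes": { "name": "Customer-A" },
    "children": [
      { "fvBD": { "attributes": { "name": "BD-WEB" } } },
      { "fvBD": { "attributes": { "name": "BD-APP" } } },
      { "fvBD": { "attributes": { "name": "BD-DB" } } }
    ]
  }
}
```

One file like this replaces three separate trips through the BD creation wizard.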
If you check the Juniper configuration guide for SRX firewall clustering, there is a default example of redundancy-group weight values, which is fine if you have one uplink towards the outside and multiple inside interfaces on that firewall.
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-5/0/5 weight 255
This article describes the simplest way to enable MACsec using a preconfigured static key-string. The example was tried on a Catalyst 3850 and should work on other switches too. There is another article that I wrote years ago which describes a more complex implementation with dot1x etc. MACsec (Media Access Control Security) is a way to secure point-to-point Ethernet links by implementing data-integrity checks and encryption of Ethernet frames. When you configure MACsec on a switch interface (and, of course, on the other switch connected to that interface), all traffic going through the link is secured using data-integrity checks and encryption.
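A hedged sketch of the static-key (MKA pre-shared key) configuration on IOS XE, to be mirrored on both ends of the link (key-chain name, policy name, interface and the hex key-string are illustrative):

```
! define the static MACsec key; the key-string is a hex value
key chain MKA-KEYS macsec
 key 01
  cryptographic-algorithm aes-128-cmac
  key-string 12345678901234567890123456789012
!
! MKA policy selecting the cipher suite
mka policy MKA-POLICY
 macsec-cipher-suite gcm-aes-128
!
! apply MACsec on the point-to-point link
interface GigabitEthernet1/0/1
 macsec network-link
 mka policy MKA-POLICY
 mka pre-shared-key key-chain MKA-KEYS
```

With the same key chain configured on the peer switch, the MKA session should come up and all frames on the link are encrypted.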
Sometimes you will have L2 domains (Bridge Domains – BD) in your datacenter that are used with hardware appliances such as an F5 NLB, an additional firewall, a WAF or something similar. That is the case where ACI will not route or bridge: the only L3 point of exit from that kind of segment is the actual hardware appliance outside the ACI Fabric, connected to a Leaf port. We will take an example and use it throughout the article, where a BIG-IP F5 NLB is used as the L3 termination of the L2 BD 10.10.10.0/24. F5
The post How to Advertise a Route from ACI Layer2 BD Outside the Fabric? appeared first on How Does Internet Work.
APIC Controller Cluster You need three APIC controller servers to get the cluster up and running in a complete and redundant ACI system. You can actually work with only two APICs: you will still have cluster quorum and will be able to change the ACI Fabric configuration. Losing One Site In a MultiPod, those three controllers need to be distributed so that one of them is placed in the secondary site. The idea is that you still keep your configuration on the one remaining APIC while completely losing the primary site with its two APICs. On the other
What is MultiPod? ACI MultiPod was first designed to enable spreading an ACI Fabric inside a building (into two or more Pods), let's say into two rooms on different floors, without the need to connect all the Leafs from one room to all the Spines in the other room. It was a way of simplifying the cabling and everything else that comes with a CLOS-topology fabric spread across a building. MultiPod also saves some Leaf ports, given that the Pod-to-Pod connection through the Multicast-enabled IPN network connects directly to the Spines. People soon realized that MultiPod would be a great solution
The post ACI MultiPod and how to build MultiDatacenter with Cisco ACI appeared first on How Does Internet Work.