It started with an interesting question tweeted by @pilgrimdave81:
I’ve seen on Cisco NX-OS that it’s preferring a (ospf->bgp) locally redistributed route over a learned EBGP route, until/unless you clear the route, then it correctly prefers the learned BGP one. Seems to be just ooo but don’t remember this being an issue?
Ignoring the “why would you get the same route over OSPF and EBGP, and why would you redistribute an alternate copy of a route you’re getting over EBGP into BGP” aspect, Peter Palúch wrote a detailed explanation of what’s going on and allowed me to copy it into a blog post to make it more permanent.
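To picture the scenario, here’s a minimal NX-OS-style sketch of a switch that both learns a prefix over EBGP and redistributes a local OSPF copy of the same prefix into BGP. The ASNs, neighbor address, process tag, prefix, and route-map name are illustrative placeholders, not taken from the thread:

    ! Hypothetical setup: the same prefix arrives from an EBGP neighbor
    ! and is also redistributed locally from OSPF into BGP
    router bgp 65000
      address-family ipv4 unicast
        redistribute ospf 1 route-map OSPF-TO-BGP
      neighbor 192.0.2.1
        remote-as 65001
        address-family ipv4 unicast
    !
    route-map OSPF-TO-BGP permit 10
    !
    ! Per the tweet, the locally redistributed path keeps winning
    ! until the route is cleared, for example with:
    !   clear ip route 198.51.100.0/24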
Today's Tech Bytes podcast is an interview with Fortinet customer Batteries Plus, a retailer that specializes in batteries, chargers, and lighting, about its SD-WAN and SD-Branch deployments. Fortinet is our sponsor for this episode.
The post Tech Bytes: Batteries Plus Powers Its Branches With Fortinet SD-WAN (Sponsored) appeared first on Packet Pushers.
Gigamon adds a human touch to a new SaaS NDR offering, the IEEE finalizes 802.3cu for faster speeds over single-mode optical fiber, US service providers roll out managed SASE services, and more IT news in this week's Network Break podcast.
The post Network Break 338: Breach In Progress? Gigamon Operators Are Standing By; IEEE Finalizes New Ethernet Standard appeared first on Packet Pushers.
No-go zone: U.S. President Joe Biden told Russian President Vladimir Putin that some types of cyberattacks are off-limits during their recent summit in Geneva, Switzerland, Reuters reports. Destructive attacks by Russian hackers on U.S. critical infrastructure must end, Biden said. It’s unclear if the talk will have much of an effect. Banned […]
The post The Week in Internet News: Biden Warns Putin About Some Cyberattacks appeared first on Internet Society.
Hello my friend,
We start the new year with a new topic: configuration analysis of multivendor networks. We have a passion both for creating our own open source tools and for using existing ones created by other teams and projects. Today we will start diving into one such tool.
No part of this blogpost could be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
In software development we have a concept called CI/CD (Continuous Integration/Continuous Delivery). In a nutshell, it’s a methodology that incorporates mandatory testing of changes (configuration, code, software versions, etc.) before bringing them to production. The main idea behind it is that automated testing and validation will make sure the code is stable and fit for purpose. Automated testing? That’s where automation comes onto the stage.
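As an illustration of the idea, here’s a minimal sketch of the kind of pre-deployment check such a pipeline could run against rendered device configurations. The configs/*.cfg layout and the two policy rules are made up for the example; a real pipeline would plug in a proper test framework and vendor-aware analysis instead of regex checks:

    # Minimal CI-style configuration check (illustrative rules only)
    import pathlib
    import re
    import sys

    def validate(config_text):
        """Return a list of policy violations found in one device config."""
        errors = []
        if "service password-encryption" not in config_text:
            errors.append("password encryption is not enabled")
        for line in config_text.splitlines():
            if re.match(r"\s*ip http server\b", line):
                errors.append("insecure HTTP server is enabled")
        return errors

    if __name__ == "__main__":
        failed = False
        for path in sorted(pathlib.Path("configs").glob("*.cfg")):
            for err in validate(path.read_text()):
                print(f"{path.name}: {err}")
                failed = True
        sys.exit(1 if failed else 0)  # a non-zero exit code fails the CI job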
And automation is something we are experts in. You can benefit from that expertise as well.
In our network automation training we follow a zero-to-hero approach, where we Continue reading
One of the ipSpace.net subscribers sent me this question after watching the EVPN Technical Deep Dive webinar:
Do you have a writeup that compares and contrasts the hardware resource utilization when one uses flood-and-learn or BGP EVPN in a leaf-and-spine network?
I don’t… so let’s fix that omission. In this blog post we’ll focus on pure layer-2 forwarding (aka bridging), a follow-up blog post will describe the implications of adding EVPN IP functionality.
This article is totally unrelated to networking, and describes how medical researchers misuse machine learning hype to publish two-column snake oil. Any correlation with AI/ML in networking is purely coincidental.
Stumbled upon a must-read article: Is Your Consultant a Parasite?
For an even more snarky take on the subject, enjoy the Ten basic rules for dealing with strategy consultants by Simon Wardley.
Have you ever tried to SSH to a network device and received the dreaded Unable to negotiate with <user> port 22: no matching key exchange method found. Their offer: <key-algorithm>? In this post I’ll cover how to work around this issue. Key Algorithms: Specify the key...continue reading
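As a preview of the fix: OpenSSH lets you explicitly offer the legacy algorithm the device lists after Their offer:. The algorithm name, username, and host below are examples; substitute whatever your device actually advertises:

    # One-off: offer an additional (legacy) key-exchange algorithm
    ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 admin@192.0.2.10

    # Or persistently, in ~/.ssh/config, scoped to just that device:
    Host old-switch
        HostName 192.0.2.10
        KexAlgorithms +diffie-hellman-group1-sha1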
Kubernetes workloads are highly dynamic, ephemeral, and are deployed on a distributed and agile infrastructure. Application developers, DevOps teams, and site reliability engineers (SREs) often require better visibility of their different microservices, what their dependencies are, how they are interconnected, and which other clients and applications access them. This makes Kubernetes observability challenges unique. While Kubernetes helps to meet the needs of deploying and managing distributed applications, its observability challenges require a Kubernetes-native approach.
Traditional monitoring and observability solutions create data silos by collecting data at different levels (e.g. infrastructure, cluster, and application levels), or from a large number of ephemeral objects that generate data across a distributed environment. These solutions then stitch the data together to provide a near real-time snapshot view. This approach is not scalable given the high volume of granular data generated at each level, as well as Kubernetes’ distributed nature. It also becomes expensive and budget unfriendly to run traditional monitoring solutions, as they require higher resource consumption (high-performance memory, more compute, and higher bandwidth).
In contrast, a Kubernetes-native observability solution can visualize all information with all relationship context intact and provide a high-fidelity view of the environment. This Continue reading
On today's Heavy Networking, sponsored by Palo Alto Networks, we hear from Salesforce about the evolution of its branch network to SD-WAN. Salesforce was able to trade MPLS for Internet broadband, get more bandwidth for less money, employ application-based steering and policy enforcement, and more. Our guests are Georgi Stoev, Sr. Network Architect at Salesforce; and Kumar Ramachandran, Senior Vice President at Palo Alto Networks.
The post Heavy Networking 583: How Salesforce Evolved Its Branch Network With Prisma SD-WAN (Sponsored) appeared first on Packet Pushers.
I read a piece on LifeHacker yesterday that made me shake my head a bit. I’m sure the title SMART Goals Are Overrated was designed to get people to click on it, so from that perspective it succeeded. Wading into the discourse, the piece outlined how SMART goals were originally designed for managers to assign tasks to employees, and how SMART doesn’t fit every goal you might want to set, especially personal aspirational ones. Since I have a lot of experience using SMART goals, both for myself and for others, I want to give some perspective on why SMART may not be the best approach for everything, but also why you’re a fool if you don’t at least use it as a measuring tool.
As a recap, SMART is an acronym for the five key things you need to apply to your goal: Specific, Measurable, Achievable, Relevant, and Time-bound.