Multicast PIM Dense mode vs PIM Sparse mode is one of the most important comparisons for every Network Engineer who deploys IP Multicast, because these two design options are completely different and the resulting impact can be very high. In this post, we will look at which one should be used in which situation, and why.
Although we will not explain PIM Dense or PIM Sparse mode in detail in this post, we will look at them very briefly and then compare them for clarity. First of all, you should know that both PIM Dense and PIM Sparse are PIM deployment models.
PIM Dense mode works based on push and prune: multicast traffic is sent everywhere in the network where you enable PIM Dense mode.
This is not necessarily bad.
In fact, as network designers, we don't think there is bad technology; every technology has its use cases.
If multicast receivers are everywhere, or in most places in the network, then pushing the traffic everywhere is not a bad thing.
Because when you push, you don't build a shared tree and you don't need to deal with the RP – Rendezvous Point, because Multicast Continue reading
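To make the push-and-prune idea concrete, here is a minimal, hypothetical Python sketch (not router configuration) that models how dense mode first floods a source's traffic to every branch and then prunes the branches with no receivers; the branch names and toy topology are illustrative assumptions only.

```python
# Toy model of PIM Dense mode flood-and-prune (illustrative only).
# Each branch of the tree either has interested receivers or it does not.
branches = {
    "branch_A": True,   # receivers present
    "branch_B": False,  # no receivers
    "branch_C": True,   # receivers present
}

def dense_mode_forwarding(branches):
    # Step 1: push (flood) multicast traffic to every branch.
    flooded = set(branches)
    # Step 2: branches without receivers send Prune messages upstream,
    # so traffic stops flowing toward them until the prune state expires.
    pruned = {b for b, has_receivers in branches.items() if not has_receivers}
    return flooded - pruned

print(dense_mode_forwarding(branches))  # {'branch_A', 'branch_C'}
```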
The orbiting satellite transmits and receives its information to and from a location on Earth called the Network Operations Center (NOC). The NOC is connected to the Internet, so all communications from the customer location (satellite dish) to the orbiting satellite flow through the NOC before they reach the Internet, and the return traffic from the Internet to the user follows the same path.
Data over satellite travels at the speed of light, which is roughly 186,300 miles per second. The orbiting satellite is about 22,300 miles above Earth (this is true for GEO-based satellites).
The data must travel this distance 4 times:
1. Computer to satellite
2. Satellite to NOC/Internet
3. NOC/Internet to satellite
4. Satellite to computer
This adds a lot of time to the communication. This time is called latency (or delay), and for GEO satellites it is almost 500 milliseconds. This may not seem like much, but some applications, such as financial trading and real-time gaming, don't tolerate latency well.
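As a rough back-of-the-envelope check of that 500 ms figure, the four legs of the path can simply be added up; the snippet below only reuses the approximate numbers quoted above.

```python
# Rough GEO satellite latency estimate using the figures quoted above.
speed_of_light_mps = 186_300   # miles per second (approximate)
orbit_altitude_mi = 22_300     # GEO altitude above Earth (approximate)
legs = 4                       # computer -> satellite -> NOC -> satellite -> computer

latency_s = legs * orbit_altitude_mi / speed_of_light_mps
print(f"{latency_s * 1000:.0f} ms")  # roughly 479 ms, i.e. almost half a second
```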
Who wants to pull a trigger and wait half a second for the gun to go off?
But, latency is related to which orbit the Continue reading
In this post, we will look at what CCIE Service Provider v5.0 is, what comes with it, which technologies you need to learn, what the difference is between CCIE SP v4 and CCIE SP v5, why you should study for CCIE Service Provider v5, when you should study for the CCIE SP exam, and after which certificate you should aim for it.
The CCIE Service Provider v5 lab exam tests skill sets related to Service Provider solutions: integration, interoperation, configuration, and troubleshooting in complex networks. CCIE SP v5 is the latest version of the CCIE Service Provider lab exam. When candidates pass this exam, they get their CCIE number.
This certification syllabus covers most, if not all, real-life Service Provider network technologies.
From the technology standpoint, the biggest difference between the CCIE SP v4.1 and CCIE SP v5.0 exams is the Network Programmability and Automation module. It accounts for 20% of the entire exam and is thus very important in the CCIE Service Provider exam. You can access Orhan Ergun's CCIE SP Network Continue reading
I see some people have been asking what others think about Orhan Ergun's CCIE Enterprise course, so starting today I will share what people write about us in their blog posts as well. Not just on social media, but in these blog posts, they are able to share more thoughts about us, so I think it is very valuable feedback for everyone.
I would like to start with the website 'samovergre.com'.
He is our CCIE Enterprise student, and you can find his CCIE study plan on this page. He shares feedback about our CCIE Enterprise training and the other study materials he uses for his CCIE Enterprise study.
One thing that was very important there was that he understands the uniqueness of our CCIE Enterprise training: the design part.
Anyone can teach you how to configure routers or routing protocols, but a design mindset is a completely different thing, and if you have been a Network Engineer for years, you have probably heard about our CCDE training and its success too.
Now, we continue delivering our design knowledge and experience to our CCIE students as well and Continue reading
Information Security is one of the fields in which Cisco Systems has long participated, and still participates heavily.
Nowadays, not just information security but cybersecurity as well is a field that Cisco is moving into, training many of its engineers to a professional level.
The main difference summarizes the concept of both domains: information security is mainly about securing the network components and assets from unauthorized access, starting from physical access up to control access, meaning access to the nodes that control the network and can affect it.
Cybersecurity, on the other hand, is about protecting the same components from attacks, both inside and outside attacks.
The attacks' aim is usually either stealing sensitive data or sabotaging network components, and sometimes both.
Information Security wise, or IT Security wise, Cisco has been there for years and has been famous for its IT Security programs, including the old, now-retired CCNA Security, and the CCNP/CCIE Security programs that are still valid and regularly refreshed to this day.
Cybersecurity wise, Cisco has evolved and developed its programs to present the CyberOps programs, which include the:
Why is a Core or Backbone used in networking? Before we start explaining this question, let's note that these two terms are used interchangeably. Usually, Service Providers use the term Backbone and Enterprise networks use the term Core, but they are the same thing.
The key characteristics of the Core, the Backbone part of the network, are:
Redundancy in this module is very important.
Most of the Core Network deployments in ISP networks are based on Full Mesh or Partial Mesh.
The reason for having full mesh physical connectivity in the Core network Continue reading
Multicast BIER – RFC 8279
Bit Index Explicit Replication – BIER is an architecture that provides optimal multicast forwarding through a “BIER domain” without requiring intermediate routers to maintain any multicast-related per-flow state. BIER also does not require any explicit tree-building protocol for its operation.
So, it removes the need for PIM, mLDP, RSVP-TE P2MP LSPs, etc.
A multicast data packet enters a BIER domain at a “Bit-Forwarding Ingress Router” (BFIR), and leaves the BIER domain at one or more “Bit-Forwarding Egress Routers” (BFERs).
The BFIR router adds a BIER header to the packet.
The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to.
The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.
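A minimal sketch, assuming a toy BitString and made-up BFER bit positions, of how an ingress router could express the set of egress routers and how any node could test whether a given BFER is addressed; this only illustrates the idea in RFC 8279, it is not an implementation of the protocol.

```python
# Toy BIER BitString: each bit position identifies exactly one BFER.
# Bit positions here are made-up examples, not real BFR-ids.
bfer_bit_position = {"BFER1": 0, "BFER2": 1, "BFER3": 2, "BFER4": 3}

def build_bitstring(target_bfers):
    """BFIR side: set the bit of every BFER that must receive the packet."""
    bitstring = 0
    for bfer in target_bfers:
        bitstring |= 1 << bfer_bit_position[bfer]
    return bitstring

def is_addressed(bitstring, bfer):
    """Any BFR can check whether a given BFER is still addressed."""
    return bool(bitstring & (1 << bfer_bit_position[bfer]))

header_bits = build_bitstring(["BFER1", "BFER3"])
print(bin(header_bits))                    # 0b101
print(is_addressed(header_bits, "BFER2"))  # False
```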
The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network and there is no tree building protocol that sets up trees on-demand based on users joining a multicast flow.
In that sense, BIER is potentially applicable to many services where Multicast is used.
Many Service Providers are currently investigating Continue reading
A challenge of large-scale telecom networks is the increasing variety of proprietary hardware, and launching new services often demands the installation of yet more hardware. This requires additional floor space, power, cooling, and more maintenance. With the virtualization technologies that have evolved in this decade, NFV addresses these telecom problems by implementing network functions as software that can run on server hardware or hypervisors.
Furthermore, by using NFV, the need to install new equipment is eliminated; service health becomes a matter of the health of the underlying servers, and the result is lower CAPEX and OPEX.
There are many benefits when operators use NFV in today's networks. One of them is reduced time-to-market when deploying new services, which supports changing business requirements and market opportunities.
Decoupling physical network equipment from the functions that run on it helps telecom companies consolidate network equipment onto servers, storage, and switches located in data centers. In the NFV architecture, the responsibility for handling specific network functions (e.g., IPsec/SSL VPN) that run in one Continue reading
Bilateral Peering is when two networks negotiate with each other and establish a direct BGP peering session. In one of the previous posts, Settlement Free Peering was explained; in this post, both Bilateral and Multilateral Peering will be explained, as both are deployment modes of Settlement Free Peering.
This is generally done when there is a large amount of traffic between two networks. Tier 1 operators do only Bilateral Peering, as they don't want to peer with anyone other than other Tier 1 operators. The rest of the companies are their potential customers, not their peers.
As mentioned above, Bilateral Peering offers the most control, but some networks with very open peering policies may wish to simplify the process, and simply “connect with everyone”. To help facilitate this, many Exchange Points offer “multilateral peering exchanges”, or an “MLPE”.
Content Delivery Network companies replicate content caches close to large user populations. They don't provide Internet access or transit service to customers or ISPs; instead, they distribute the content of the content providers. Today, many Internet Service Providers have started their own CDN businesses as well. An example is Level 3, which provides its CDN services from its POP locations spread all over the world.
Content distribution networks reduce latency and increase service resilience (content is replicated to more than one location). More popular content is cached locally, and the least popular content can be served from the origin.
Before CDNs, content was served from the source location, which increased latency and thus reduced throughput. Content was delivered from the central site, and user requests had to reach the central site where the source was located.
Figure 1 – Before CDN
Figure 2 – After CDN
Amazon, Akamai, Limelight, Fastly, and Cloudflare are among the largest CDN providers, serving different content providers all over the world. Also, some major content providers such Continue reading
Currently, in 2022, the CCDE exam is at version 3. There are many changes in CCDE v3 compared to CCDE v2; in this blog post, some of the new changes will be explained, and the things that stay the same will be highlighted as well. I will also share my take on these changes.
Before getting to the technical changes, let's start with the change in how exam results are announced.
CCDE v2 exam results were announced in 8-12 weeks. This effectively allowed CCDE exam candidates to schedule the exam at most two times a year.
Students wouldn't schedule the next attempt before knowing whether they had failed, because the result announcement date and the new exam date usually overlapped.
This has now changed.
With CCDE v3, exam results are announced within 48 hours, almost like the CCIE exams.
The CCDE v2 Lab/Practical exam was delivered in professional Pearson Vue centers. There were around 300 of them, located in many different countries.
Unfortunately, this change may not be good for many exam takers as Cisco CCIE Lab locations are not available in many countries and are not as common as Continue reading
The BGP Allowas-in feature needs to be understood well in order to understand BGP loop prevention behavior. Why the BGP Allowas-in configuration might create a dangerous situation, and what the alternatives to BGP Allowas-in are, will also be explained in this post.
The BGP Allowas-in feature is used to allow a BGP speaker to accept BGP updates even if its own BGP AS number is in the AS-Path attribute.
The default EBGP loop prevention is that if a BGP speaker sees its own AS number in the AS-Path of a BGP update, the update is rejected and the advertisement cannot be accepted. But there might be situations where the prefixes need to be accepted, so there are two options to overcome this behavior.
Either accept the BGP update even if the AS number is in the AS-Path, with the BGP Allowas-in feature, or change the behavior with the BGP AS Override feature.
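As a minimal illustration (plain Python pseudo-logic, not vendor CLI, and the AS numbers are only examples), the default EBGP check and the effect of Allowas-in could be sketched like this:

```python
# Default EBGP loop prevention vs. the Allowas-in relaxation (toy example).
def accept_ebgp_update(as_path, my_as, allowas_in=0):
    """Accept the update unless my own AS appears in the AS-Path more times
    than the configured allowas-in occurrence count permits."""
    occurrences = as_path.count(my_as)
    return occurrences <= allowas_in

# A customer site in AS 100 receiving its own AS back from the provider (AS 200):
as_path = [200, 100]
print(accept_ebgp_update(as_path, my_as=100))                # False: rejected by default
print(accept_ebgp_update(as_path, my_as=100, allowas_in=1))  # True: accepted with Allowas-in
```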
Without BGP Allowas-in, let's see what would happen.
In this topology, Customer BGP AS is AS 100. The customer has two locations.
The Service Provider in the middle, let's say, provides MPLS VPN service for the customer.
As you can understand from the topology, Service Provider Continue reading
BGP AS Override needs to be understood well in order to understand BGP loop prevention behavior. Why BGP AS Override might create a dangerous situation, and what the alternatives to BGP AS Override are, will also be explained in this post.
The BGP AS Override feature is used to change the AS number or numbers in the AS-Path attribute. Without BGP AS Override, let's see what would happen.
In this topology, Customer BGP AS is AS 100. The customer has two locations.
The Service Provider in the middle, let's say, provides MPLS VPN service for the customer.
As you can understand from the topology, the Service Provider is running EBGP with the customer, because they have different BGP Autonomous Systems.
The service provider in the above topology has BGP AS 200.
When the left customer router advertises a BGP update message to R2, R2 sends it to R3, and when R3 sends it to R4, R4 will not accept the BGP update.
When R4 receives that update, it checks the AS-Path attribute and sees its own BGP AS number in the AS-Path.
Thus the update is rejected by default, due to EBGP loop prevention.
If the router sees its Continue reading
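To contrast this with the Allowas-in sketch earlier, here is a hedged Python illustration (again only pseudo-logic with example AS numbers) of what AS Override does on the provider's PE router before the update is sent toward the other customer site: the customer AS in the AS-Path is rewritten with the provider's own AS, so the receiving router no longer sees its own AS and accepts the update.

```python
# Toy illustration of BGP AS Override on the provider edge (PE) router.
def as_override(as_path, customer_as, provider_as):
    """Replace every occurrence of the customer AS with the provider AS."""
    return [provider_as if asn == customer_as else asn for asn in as_path]

original_path = [200, 100]   # provider AS 200, customer AS 100 (example values)
rewritten_path = as_override(original_path, customer_as=100, provider_as=200)
print(rewritten_path)        # [200, 200]

# R4 (AS 100) no longer finds its own AS in the path, so the update is accepted.
print(100 in rewritten_path) # False
```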
BGP Route Reflector – RR vs Confederation is one of the first things Network Engineers would like to understand when they learn both of these Internal BGP scalability mechanisms. For those who don’t know the basics of these mechanisms, please read BGP Route Reflector in Plain English and BGP Confederation Blog posts from the website first.
There are many differences when we compare Confederation vs Route Reflector and in this post, some of the items in the comparison chart will be explained.
Both of these techniques are used in Internal BGP for scalability purposes. BGP RR changes the full-mesh IBGP topology to hub and spoke, while BGP Confederation divides the Autonomous System into sub-ASes; inside every sub-AS, normal IBGP rules are applied.
Inside a BGP sub-Autonomous System, either full-mesh IBGP or a Route Reflector is used. So, compared to Confederation, we consider BGP RR to be more scalable, because inside a sub-AS full-mesh IBGP might still be used.
If an RR is deployed inside a sub-AS, then configuration complexity increases.
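The scalability argument is easy to quantify: a full mesh needs n(n-1)/2 IBGP sessions, while a single route reflector needs only n-1 client sessions. A small sketch, with arbitrary example router counts:

```python
# IBGP session counts: full mesh vs. a single route reflector (illustrative).
def full_mesh_sessions(n):
    return n * (n - 1) // 2

def route_reflector_sessions(n):
    return n - 1  # every client peers only with the RR

for routers in (10, 50, 100):
    print(routers, full_mesh_sessions(routers), route_reflector_sessions(routers))
# 10 -> 45 vs 9, 50 -> 1225 vs 49, 100 -> 4950 vs 99
```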
BGP Route Reflector in order to prevent the routing loop Continue reading
In this post, we will compare BGP and EIGRP and look at some of the important aspects of the BGP vs EIGRP comparison. Although EIGRP is used as an IGP and BGP is used mainly as an external routing protocol, we will compare them from many different design aspects. BGP can also be used as an internal routing protocol, and we will take that into consideration as well.
We prepared the above comparison chart for the BGP vs EIGRP comparison. We will look at some of those important comparison criteria from a design point of view.
One of the biggest reasons we choose BGP over EIGRP is scalability. BGP is used as the global Internet routing protocol, and as of 2022, the global routing table size for IPv4 unicast prefixes is around 900,000. So we carry almost a million prefixes over BGP on the Internet.
So we can say BGP has proven scalability. EIGRP can usually carry only a couple of thousand prefixes; this is one of the reasons EIGRP is used as an internal dynamic routing protocol, not over the Internet.
AWS is known for its famous, highly demanded Solutions Architect Associate (SAA) certificate, and many think it is the first step with AWS and Cloud Computing. The question now is: is it? Or is there a step that should be taken before, like the AWS Cloud Practitioner (CLF) exam?
In this blog post, we will discover and compare the agenda and the main pillars each exam teaches you, and see whether it is worth skipping CLF and starting directly with SAA.
The very first chapter you start studying AWS CLF with is cloud concepts. It gives a general overview of the idea and concept of cloud computing, what AWS provides in that regard, and what you are about to experience.
Luckily, this part is shared between both the AWS CLF and SAA exams, and we will find parts of it here and there, helping us understand what we are about to start with these exams.
That makes them equal here: 1-1.
Having zero knowledge about cloud computing and the restrictions and differences that come with it, implementing a new network in the cloud for the first time will require Continue reading
OSPF Administrative Distance, or OSPF AD, is the key to electing OSPF among other routing protocols (if any exist) that lead to the same target within the same routing table. In this blog post, we will discover the basics and the Administrative Distance values for OSPF across multiple different platforms.
For Cisco operating systems, regardless of platform, IOS-XE, IOS-XR, and NX-OS all treat OSPF with an AD value of 110.
Now, the most important thing is not just to know the numerical value, which is useless on its own, but to know its order of preference among the other routing protocols' Administrative Distances.
The values will be as follows regarding the Static and Dynamic Routing Protocols:
This shows that OSPF routes to a specific target can be hidden if one of the dynamic routes with a lower AD (EIGRP or eBGP) is installed in the routing table; the same applies to connected and static routes as well.
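As a hedged illustration of that preference order, the sketch below compares route sources by AD using the well-known Cisco default values (connected 0, static 1, eBGP 20, internal EIGRP 90, OSPF 110); the candidate routes themselves are made-up examples.

```python
# Cisco default Administrative Distances (lower value wins).
DEFAULT_AD = {"connected": 0, "static": 1, "ebgp": 20, "eigrp": 90, "ospf": 110}

def best_source(candidates):
    """Pick the route source with the lowest AD for the same prefix."""
    return min(candidates, key=lambda source: DEFAULT_AD[source])

# The same prefix learned via OSPF and another source: the lower AD hides OSPF.
print(best_source(["ospf", "eigrp"]))   # eigrp
print(best_source(["ospf", "ebgp"]))    # ebgp
print(best_source(["ospf", "static"]))  # static
```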
Dealing with devices/platforms from Juniper Networks will get you to face and Continue reading
DMVPN – Dynamic Multipoint VPN – and MPLS VPN are two of the most popular VPN mechanisms. In this post, we will look at the DMVPN vs MPLS VPN comparison from many different aspects. At the end of this post, you will be more comfortable positioning these private VPN mechanisms.
When we compare the two mechanisms, we look at many different aspects. For this comparison, I think the very first thing to say is that DMVPN is a Cisco proprietary, tunnel-based VPN mechanism, while MPLS VPN is a standards-based (RFC 2547), non-tunnel-based VPN mechanism. Whether an MPLS LSP is a tunnel or not is an open discussion in the networking community, but we won't start that discussion here again.
Another important consideration for MPLS VPN vs DMVPN is that DMVPN can be set up over the Internet, while MPLS VPN works over private networks, whether Layer 2 or Layer 3 based. DMVPN tunnels can come up over the Internet, and routing protocols can run inside the tunnels to advertise the Local Area Network subnets.
But MPLS requires a private network underlay.
Figure – DMVPN Networks can run over Internet or Private Networks