Archive

Category Archives for "Networking"

IS-IS Design considerations on MPLS backbone

Using IS-IS with MPLS requires some important design considerations. IS-IS, a scalable link-state routing protocol, has been used in Service Provider networks for decades.

In fact, eight of the nine largest Service Providers use the IS-IS routing protocol on their networks today.

If LDP is used to set up an MPLS LSP, some important IS-IS design considerations should be carefully understood.

As you might know, the IS-IS routing protocol uses levels for hierarchy.

As with other routing protocols, synchronization is one of the considerations. IGP-LDP synchronization is required when the MPLS LSP is set up with the LDP protocol; otherwise, routing black holes occur.
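As a hedged sketch of what this looks like in practice (Cisco IOS-style syntax assumed; the process name, NET, and interface are illustrative, and command availability varies by platform and release), IGP-LDP synchronization for IS-IS can be enabled like this:

```
router isis CORE
 net 49.0001.0000.0000.0001.00
 metric-style wide
 mpls ldp sync                ! keep the link metric at maximum until its LDP session is up
!
interface GigabitEthernet0/0
 ip address 10.1.12.1 255.255.255.252
 ip router isis CORE
 mpls ip                      ! enable LDP label distribution on the link
```

With synchronization enabled, the router does not attract traffic onto a link until LDP is operational there, which prevents the black-holing of labeled traffic described above.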

One of the important IS-IS design considerations when it is used with MPLS is that, in a multi-level IS-IS design, PE devices' loopback IP addresses are not sent into the IS-IS Level 1 domain. This problem doesn't happen in a flat IS-IS design, since you cannot summarize prefixes in a flat/single-level IS-IS deployment.

In an IS-IS L1 domain, internal routers only receive the ATT (Attached) bit from the L1-L2 router. This bit is used to generate a default route.

Even if there is more than one L1-L2 router, still only a default route is sent into the Level 1 subdomain/level.

Internal IS-IS Level 1 routers don’t know Continue reading
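Because Level 1 routers see only a default route, LDP on those routers cannot find an exact match for the PE loopback /32s, and end-to-end LSPs break. A common remedy is route leaking on the L1-L2 routers. A hedged Cisco IOS-style sketch (the ACL number, process name, and PE loopback range are assumptions for illustration):

```
! Match only the PE loopback block (10.0.0.0/24 is an assumed range)
access-list 100 permit ip 10.0.0.0 0.0.0.255 any
!
router isis CORE
 net 49.0001.0000.0000.0002.00
 metric-style wide            ! wide metrics recommended when leaking routes
 redistribute isis ip level-2 into level-1 distribute-list 100
```

Leaked prefixes carry the up/down bit so they are not re-advertised back into Level 2, and the Level 1 routers regain the host routes LDP needs to build end-to-end LSPs.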

Your First Public Cloud Deployment Should Be Small

I’ve seen successful public (infrastructure) cloud deployments… but also spectacular failures. The difference between the two usually comes down to whether the team deploying into a public cloud environment realizes they’re dealing with an unfamiliar environment and acts accordingly.

Please note that I’m not talking about organizations migrating their email to Office 365. While that counts as public cloud deployment when an industry analyst tries to paint a rosy picture of public cloud acceptance, I’m more interested in organizations using compute, storage, security and networking public cloud infrastructure.

Read more ...

Tight Wi-Fi integration is key to successful SD-Branch

The promise of SD-Branch is that by collapsing network functionality in branch offices onto a unified platform, enterprises can reap benefits in speed of deployment, ease of operation, and cost. Since Wi-Fi is a critical piece of local-area communications for many branch sites, improved integration, security, and management of Wi-Fi is becoming increasingly important when evaluating the benefits of SD-Branch solutions.

In branch offices, connected LAN devices and applications must be linked to the Internet via SD-WAN services. By integrating LAN and WAN connectivity, SD-WAN helps to simplify network management with a unified platform, as compared to each function having its own unique management console.

AT&T Down on Low-Band 5G Speed

“The real speed boost typically comes with technologies like millimeter-wave [spectrum]," says...

Read More »

© SDxCentral, LLC. Use of this feed is limited to personal, non-commercial use and is governed by SDxCentral's Terms of Use (https://www.sdxcentral.com/legal/terms-of-service/). Publishing this feed for public or commercial use and/or misrepresentation by a third party is prohibited.

Cleaning up with apt-get

Running apt-get commands on a Debian-based system is routine. Packages are updated fairly frequently, and commands like apt-get update and apt-get upgrade make the process quite easy. On the other hand, how often do you use apt-get clean, apt-get autoclean or apt-get autoremove?

These commands clean up after apt-get's installation operations and remove files that are still on your system but are no longer needed – often because the application that required them is no longer installed.

The apt-get clean command clears the local repository of retrieved package files that are left in /var/cache. The directories it cleans out are /var/cache/apt/archives/ and /var/cache/apt/archives/partial/. The only files it leaves in /var/cache/apt/archives are the lock file and the partial subdirectory.
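As a quick illustration (a sketch: the paths are the ones the article names, and the cleanup commands themselves need root on a Debian-based system):

```shell
# Show how much space the local package cache is using, if it exists
du -sh /var/cache/apt/archives/ 2>/dev/null || echo "no apt cache on this system"

# The cleanup commands (run as root / with sudo on a Debian-based system):
#   apt-get clean       removes all cached .deb files from /var/cache/apt/archives/
#   apt-get autoclean   removes only cached .deb files that can no longer be downloaded
#   apt-get autoremove  removes packages installed as dependencies that nothing needs anymore
```

Checking the cache size before and after cleaning is a simple way to see how much disk space these commands actually reclaim.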


Space-sourced power could beam electricity where needed

Capturing solar energy in space and then beaming it down to Earth could provide consistent electricity supplies in places that have never seen it before. Should the as-yet untested idea work and be scalable, it has applications in IoT-sensor deployments, wireless mobile network mast installs, and remote edge data centers.

The radical idea is that super-efficient solar cells collect the sun's power in space, convert it to radio waves, and then squirt the energy down to Earth, where it is converted into usable power. The defense industry, which is championing the concept, wants to use the satellite-based tech to provide remote power for forward-operating bases that currently require difficult and sometimes dangerous-to-obtain, escorted fuel deliveries to power electricity generators.


How SD-Branch Enables Business Innovation

SD-branch can connect nearly any location, from a city office to a cabin in the woods. Here's how...

Read More »


Introduction to Disaster Recovery

Businesses want to choose reliable equipment, components, and technologies when designing a network. You may deploy the most reliable equipment from your trusted vendor, or deploy the most mature technologies with great care, but do not forget: eventually, every system fails!

Depending on where your datacenter is located, different disasters may happen. In the U.S., storms and tornadoes are not uncommon. I remember that just a couple of years ago, because of major flooding, Vodafone couldn't serve its customers in Turkey for at least a day.

Thus, resiliency is an important aspect of the design plan. In the simplest terms, resiliency means how fast you can react to a failure.

Disaster recovery is the response and remediation a company follows after a planned or unplanned failure. Businesses often have a secondary datacenter used mostly for backup. If the company has multiple datacenters, though, they can be used as active/active.

If it is used as a backup, the secondary datacenter can take over when the primary datacenter fails.

Recovery time will depend on business requirements. For mission-critical applications, the business may tolerate very short, if not zero, downtime. Then the cost of the required equipment in the primary and backup datacenters, the skilled engineers who can Continue reading

4 people passed CCDE Lab with my CCDE training recently!

I just realised that I hadn't shared the names of the people who used my CCDE resources and got their CCDE numbers recently.

I know all of them, their capabilities and technical strengths. I am happy to see that they are CCDEs now.

Congrats to Ken Young, Jaroslaw Dobkowski, Malcolm Booden, and Bryan Bartik.

Some of them used the Self Paced CCDE Course, and some joined the Instructor Led CCDE Training as well. I am honoured to hear good feedback from all of them and to share their feedback on the related pages of the website.

In 2017, around 10 people, including these four, passed the CCDE Lab exam. And there was one cancelled exam in May 2017.

CCIE vs. CCDE

CCIE vs. CCDE is probably one of the most frequently asked questions by networking experts.

To get more information on CCDE contents and syllabus, you can check my Instructor Led CCDE or Self Paced CCDE course webpages.

How many times have you asked yourself or discussed this topic with your friends? Many times, right?

I have the CCIE Routing & Switching and/or Service Provider; should I continue on to design certifications such as the CCDE, or should I study for another expert-level certification, perhaps a virtualization certification?

To illustrate my answer, let me give you an example.

Consider that you are going to build a Greenfield network. (Usually, the same applies to a Brownfield one as well.)

First, you need to understand the business: how many locations it has, where they are located, where the HQ or HQs, datacenters, and POP locations are, and so on.

After that, you try to understand how the business can assist its consumers.

It can be retail, airport, stadium, or service provider network.

All these businesses have both similar and different requirements.

For example, a stadium architecture requires ticketing systems, access control systems, and game streaming, all of which are connected to the network. So, you need to understand the business requirements, Continue reading

VRF-Lite+GRE/dot1q or MPLS L3 VPN

I am going to create a new category on the blog in which we will discuss different technologies, protocols, designs, and architectures together.

You can suggest discussion topics, and you are all welcome to join the discussions in the comment box of each topic.

I want to throw out a first topic for discussion!

Which Enterprise Architecture is more complex? (Did you read the network complexity article on the blog?)

MPLS VPN Technologies and Design are explained in detail in my Instructor Led CCDE and Self Paced CCDE course.

 

VRF-lite with GRE/dot1q, or MPLS L3VPN?

 

It is a very subjective topic; I think there are no absolutely correct answers, so please share your opinion. The collection of our answers will create a detailed article and provide a good resource for people before they decide on a particular technology, protocol, or architecture.

UPDATE: Let me provide a very brief overview of VRF-lite and MPLS VPNs.

How can VRFs be carried through an overlay to provide data-plane separation, and how can the same tasks be achieved with MPLS Layer 3 VPNs?

VRF-lite provides control- and data-plane separation without requiring MPLS in the control or data plane. You don't need an MPLS Continue reading
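To make the comparison concrete, here is a hedged Cisco IOS-style sketch of VRF-lite carried over GRE (the VRF name, RD, addresses, and interfaces are all assumptions for illustration). Each VRF needs its own tunnel (or dot1q subinterface) between every pair of sites, which is exactly the per-VRF provisioning burden that MPLS L3VPN removes:

```
ip vrf CUSTOMER-A
 rd 65000:1                        ! RD is only locally significant with VRF-lite
!
interface Tunnel100
 ip vrf forwarding CUSTOMER-A      ! bind the tunnel to the VRF
 ip address 172.16.100.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 192.0.2.2      ! assumed remote site router
```

An IGP or static routes then run per-VRF across each tunnel; with N VRFs and M sites the tunnel count grows quickly, whereas MPLS L3VPN carries all VRF routes over a single MP-BGP session between PEs.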

Red Hat Responds to Zombieload v2

Three Common Vulnerabilities and Exposures (CVEs) opened yesterday track three flaws in certain Intel processors, which, if exploited, can put sensitive data at risk.

Of the flaws reported, the newly discovered Intel processor flaw is a variant of the Zombieload attack discovered earlier this year and is only known to affect Intel's Cascade Lake chips.

Red Hat strongly suggests that all Red Hat systems be updated, even if their configuration does not appear to pose a direct threat, and it is providing resources to its customers and to the enterprise IT community.


Intel Reveals AI ASICs, Claims $3.5B in AI Revenue in 2019

Intel estimates that the AI community’s demand for compute will increase 64 times during the next...

Read More »


Day Two Cloud 023: Optimizing Multi-Cloud Connectivity With VeloCloud (Sponsored)

As more workloads get spread across private and public clouds, how do you operationalize and optimize connectivity, traffic design, and more? Can you stitch together services that reside in different clouds? What about the edge? On today's Day Two Cloud, sponsor VeloCloud, a VMware company, brings some answers. Our guest is Marco Murgia, Senior Director of Product Engineering.


The post Day Two Cloud 023: Optimizing Multi-Cloud Connectivity With VeloCloud (Sponsored) appeared first on Packet Pushers.

Welcome to Insider Pro

For more than 50 years, IDG has earned the trust of its readers with authoritative coverage of the technology industry. Insider Pro is the natural evolution of the insightful coverage our publications have produced for decades.