Cisco Routers Sample BGP Configurations: Quick and Easy

Today I am going to talk about BGP configurations on Cisco routers and explain some of the terms we are going to use in them. Please let me know if you need any specific BGP configuration, or share your design with us so that we can create the configurations accordingly.

Sometimes it is difficult to set up a BGP configuration in the lab or in a live environment, so in this article I am posting sample configurations that will help you configure BGP in your lab or in a live environment. These configurations have no relevance to any live network; all IP addresses used are samples only.

BGP is a wide-area routing protocol used to connect WAN links between two different autonomous systems (AS). Below is a sample BGP topology; it is not related to the sample configurations shared later in this article.

Fig 1.1- Sample BGP Topology


We have two kinds of BGP sessions: one is iBGP, which is internal BGP, and the other is eBGP, which Continue reading
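As a quick taste of the sample configurations, here is a minimal sketch of what both session types look like on a single Cisco router. The AS numbers (65001 and 65002) and all addresses below are hypothetical samples chosen for illustration, not taken from Fig 1.1:

router bgp 65001
 bgp router-id 1.1.1.1
 ! eBGP neighbor: sits in a different AS (65002)
 neighbor 192.0.2.2 remote-as 65002
 ! iBGP neighbor: shares our AS (65001), peered over loopbacks
 neighbor 10.0.0.3 remote-as 65001
 neighbor 10.0.0.3 update-source Loopback0
 ! Advertise a local prefix; it must exist in the routing table with this exact mask
 network 10.10.10.0 mask 255.255.255.0

You can verify the peerings with show ip bgp summary and show ip bgp neighbors; a state of Established confirms that the session has come up.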

Lenovo’s new workstation is indeed ‘Tiny’ but packs a punch

Windows users who work in tight spaces and are looking for a small-form-factor workstation with multiple display ports and solid processing power have a new contender to check out: the new ThinkStation P320 Tiny. The workstation lives up to its name: at 1.4 by 7.1 by 7.2 inches, it's the smallest workstation on the market that is ISV (independent software vendor) certified, according to Rob Herman, the general manager of Lenovo's workstation business unit. The ISV certification is important. "We don't consider a machine to be a workstation unless it has ISV certification," according to Lloyd Cohen, an analyst with IDC. The U.S. government uses the same definition for workstations, and for non-government users, software certifications mean that you can run CAD and CAM programs, for example, without worrying about crashing, Cohen noted. That's important if you're working on a complex design. To read this article in full or to leave a comment, please click here

IoT devices or humans?

A Swedish rail line can now collect fares by scanning its customers for embedded biometric chips. The primary benefit is the elimination of a physical ticket -- plus it’s harder to lose. It sounds futuristic, but my dogs have been sporting embedded chips for over a decade. If you think about it, physical tickets are kind of silly. They are a surrogate for the person. The practice of scanning a ticket, instead of a person, was likely established when there just weren’t many viable alternatives. Technology now offers a more direct approach. To read this article in full or to leave a comment, please click here

Cavium makes its ARM for data centers push with new servers

The initial efforts to bring ARM-based processors into the data center were not terribly successful. Calxeda crashed and burned spectacularly after it bet on a 32-bit processor when the rest of the world had moved on to 64 bits. And HPE initially wanted to base its Project Moonshot servers on ARM but now uses Intel Xeon and AMD Opteron. That’s because the initial uses for ARM processors were low-performance applications, like basic LAMP stacks, file and print, and storage. Instead, one company has been quietly building momentum for high-performance ARM processors, and it’s not Qualcomm. Cavium, a company steeped in MIPS-based embedded processors, is bringing its considerable experience and IP to the ARM processor with its ThunderX server ecosystem. ThunderX is the whole shootin’ match, an ARMv8-A 64-bit SoC plus motherboards, both single and dual socket. In addition to hardware, Cavium offers operating systems, development environments, tools, and applications. To read this article in full or to leave a comment, please click here

Reducing data among proposed techniques to speed-up computers

Future computer systems need to be significantly faster than the supercomputers around today, scientists believe. One reason is that analyzing complex problems properly, such as climate modeling, takes increasing work. Massive quantities of calculations, performed at high speed and delivered as mistake-free data analysis, are needed for the fresh insights and discoveries expected down the road. Limitations, though, exist in current storage, processing and software, among other components. The U.S. Department of Energy’s four-year, $48 million Exascale Computing Project (ECP), started at the end of last year for science and national security purposes, plans to overcome those challenges. It explains some of the potential hiccups it will be running into on its Argonne National Laboratory website. Part of the project is being studied at the lab. To read this article in full or to leave a comment, please click here

IDG Contributor Network: New day or déjà vu

In July of last year, I believe I became the first to publicly suggest that Avaya should divest the company’s data networking business. The one-year anniversary of my “Cajun redux?” post is approaching, and coincidentally, around this same time Avaya will complete the sale of that part of the company to Extreme Networks. With this confluence of milestones, in this post I will ask: does this signify a new day for Avaya, or will we, at some point down the road, again be struck with that strange feeling of déjà vu? To read this article in full or to leave a comment, please click here

Thinking Through The Cognitive HPC Nexus With Big Blue

While there are plenty of things that members of the high performance computing community do not agree on, there is a growing consensus that machine learning applications will at least in some way be part of the workflow at HPC centers that do traditional simulation and modeling.

Some HPC vendors think the HPC and AI systems have either already converged or will soon do so, and others think that the performance demands (both in terms of scale and in time to result) on both HPC and AI will necessitate radically different architectures and therefore distinct systems for these two workloads. IBM,

Thinking Through The Cognitive HPC Nexus With Big Blue was written by Timothy Prickett Morgan at The Next Platform.

Why Cisco’s new intent-based networking could be a big deal

Scentsy, a $500 million manufacturer and seller of wickless candles, got an early look at what Cisco and some analysts are saying could be the next big thing in the network industry: intent-based networking. “I think this could be a pretty big shift in terms of the paradigm of network management,” says Kevin Tompkins, network architect at the company. “We’re getting away from managing individual devices and into having a central, globally managed policy, all controlled from one place that pervades through the network.” +MORE AT NETWORK WORLD: Cisco brings intent based networking to the end-to-end network + To read this article in full or to leave a comment, please click here

Cisco Advanced Malware Protection (AMP) Threat Grid Sandboxing

Cisco AMP, short for Advanced Malware Protection, is the term used for Cisco's malware file detection technology. AMP provides threat intelligence and analytics, point-in-time detection, continuous analysis, and retrospective security for malware files.
 
AMP (Advanced Malware Protection) can be used at various levels of the network: as Threat Grid, for endpoints, and for the network. Together these products make up an architecture rather than just being separate products in the Cisco portfolio.

In my earlier post I wrote about the Cisco AMP product for endpoints only. If you want to read that article, please go through the link below:
Cisco AMP for Endpoints

AMP is available in the following variants, covering cloud, endpoint, network, web, and email. In this article I am only covering AMP Threat Grid.

  • AMP Threat Grid
  • AMP for Endpoints
  • AMP for Networks
  • AMP for Web
  • AMP for Email

AMP Threat Grid
AMP Threat Grid can be deployed as an on-premises appliance or used in the cloud. Large organizations with compliance and policy restrictions can analyze malware locally with AMP Threat Grid by submitting samples to the appliance. It helps you effectively defend against both targeted attacks and threats from advanced malware Continue reading