Dell and Nutanix execs shared their visions of a new normal; Equinix expanded into Canada;...
The company is targeting Amazon first with its Anthos platform because it has a similar operational...
When you are building a data center fabric, should you run a control plane all the way to the host? This is a question I encounter more often as operators deploy EVPN-based spine-and-leaf fabrics in their data centers (for those who are actually deploying scale-out spine-and-leaf; I see a lot of people deploying hybrid networks built as “mini-hierarchical” designs and just calling them spine-and-leaf fabrics, but this is probably a topic for another day). Three reasons are generally given for deploying the control plane all the way to the hosts attached to the fabric: faster down detection, load sharing, and traffic engineering. Let’s consider each of these in turn.
Faster Down Detection. There’s no simple way for ToR switches to determine when the connection to a host has failed, whether the host is single or dual-homed. Somehow the set of routes reachable through the host must be related to the interface state, or some underlying fast hello state (such as BFD), so that if a link fails the ToR knows to pull the correct set of routes from the routing table. It’s simpler to just let the host itself advertise the correct reachability information; when the link fails, the routing session will Continue reading
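One common way to implement that host-side advertisement on Linux is to run a small BGP speaker on the host and tie the announcement to link state. The sketch below illustrates the idea using ExaBGP's text-based process API; the service address, uplink interface name, and polling interval are placeholder assumptions, not anything from the original post.

import time

SERVICE_ROUTE = "192.0.2.10/32"   # hypothetical address this host advertises into the fabric
UPLINK = "eth0"                   # hypothetical uplink whose state gates the advertisement

def link_is_up(iface):
    # Linux exposes operational link state under /sys/class/net/<iface>/operstate
    with open(f"/sys/class/net/{iface}/operstate") as f:
        return f.read().strip() == "up"

advertised = False
while True:
    up = link_is_up(UPLINK)
    if up and not advertised:
        # ExaBGP's process API reads announce/withdraw commands from this script's stdout
        print(f"announce route {SERVICE_ROUTE} next-hop self", flush=True)
        advertised = True
    elif not up and advertised:
        print(f"withdraw route {SERVICE_ROUTE} next-hop self", flush=True)
        advertised = False
    time.sleep(1)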
I am put off by the mainstream media, the American president, and Twitter these days. We’re living in a media world that lacks nuance. Nearly all discussions are polarized. That polarization results in a mockery of clear thought. A polarized world views issues as binary. Good or evil. Red or blue. Masks or freedom. Shelter at home or open it all up.
No more anger, agendas, or simple-minded retweets for me. I want facts without bias and reflection on what that data might mean. I want difficult conversations with no clear answers today, in the hopes of progressing towards a decent answer eventually.
Thankfully, I’ve discovered a few folks having nuanced, engaging discussions that attempt to analyze the difficulties of our world honestly and thoroughly. If these sorts of conversations might be interesting to you, here’s what I’ve found so far.
On this long-form podcast, Eric interviews heterodox thinkers about both current events and goings-on in the scientific community, physics especially. Eric is a brutal interviewer at times, refusing to let folks go down obvious trains of thought, instead forcing them to get to the point with haste. This tactic, although often uncomfortable to listen Continue reading
Today's Tech Bytes podcast dives into using enriched flow--that is, flow records enhanced with logs and data from sources such as firewalls and directories--to improve your network performance monitoring and threat management. Our guest is Warren Caron, Sales Engineer at VIAVI Solutions. VIAVI is our sponsor for today's episode.
The post Tech Bytes: Improving NPM And Threat Management With Enriched Flow From VIAVI Solutions (Sponsored) appeared first on Packet Pushers.
According to the Gartner blog post, 2019 Network Resolution: Invest in Network Automation, the top network resolution of 2019 was network automation. This isn’t surprising, since traditional automation of networking and security has always been a challenge due to cumbersome processes, lack of governance, and limited or non-existent management tools.
Organizations that automate more than 70% of their network change activities will reduce the number of outages by at least 50% and deliver services to their business constituents 50% faster.
VMware NSX-T Data Center solves this by enabling rapid provisioning of network and security resources with layered security and governance. By using various network automation tools, you can quickly and effectively keep up with the demands of your developers and application owners who expect a quick turnaround on resource requests. In this blog post we’ll look at how NSX-T Policy APIs simplify network automation.
At the center of NSX network automation lies the single point of entry into NSX via REST APIs. Just like traditional REST APIs, NSX-T APIs support the following API verbs: GET, PATCH, POST, PUT, DELETE. The table below shows the usage:
Introduced in Continue reading
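As a rough illustration of that single point of entry, a declarative PATCH from Python might look like the sketch below. The manager address, credentials, segment name, and payload fields are assumptions based on the common /policy/api/v1/infra pattern; check the API guide for your NSX-T version.

import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder manager address
AUTH = ("admin", "password")                      # placeholder credentials

# Declaratively create or update a segment with a single PATCH to the Policy API tree.
segment = {
    "display_name": "web-segment",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json=segment,
    auth=AUTH,
    verify=False,   # lab-only: skip TLS certificate verification
)
resp.raise_for_status()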
The deal in Canada follows a flurry of activity for Equinix aimed at bolstering its hyperscale...
Dell and Nutanix executives say the future of work will be more flexible — and will further blur...
There was a need for a protocol that could send data over a lossy medium. In simple terms, a lossy medium is one where data can be lost or altered. If an error occurs, there are two ways it can be handled:
Resending the data is only worthwhile if the sender can answer two questions: first, whether the receiver has received the packet, and second, whether the packet it received was the same one the sender sent.
The method by which the receiver signals to the sender that a packet has been received is known as an acknowledgement (ACK). The sender sends a packet, then stops and waits until the ACK arrives from the receiver. Once the ACK is received, the sender sends another packet and waits for its ACK, and the process continues.
But this stop-and-wait process gives us two problems to take care of.
Let's take each problem one by one, starting with the second one: recognizing duplicate packets.
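To make the mechanics concrete, here is a minimal stop-and-wait sender sketch in Python over UDP; the destination address, timeout, and the alternating 0/1 sequence number scheme are illustrative assumptions, not part of the original post.

import socket

# Minimal stop-and-wait sender sketch over UDP (an unreliable transport).
def stop_and_wait_send(chunks, dest=("192.0.2.5", 9000), timeout=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    seq = 0
    for chunk in chunks:
        packet = bytes([seq]) + chunk       # prepend a one-bit sequence number
        while True:
            sock.sendto(packet, dest)       # send one packet...
            try:
                ack, _ = sock.recvfrom(16)  # ...then stop and wait for the ACK
            except socket.timeout:
                continue                    # no ACK in time: resend the same packet
            if ack and ack[0] == seq:
                break                       # matching ACK: move on to the next packet
        seq ^= 1                            # alternate 0/1 so the receiver can spot duplicates
    sock.close()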
IBM reportedly cut thousands of jobs; HPE slashed salaries; Microsoft revamped its Azure VMware...
One of the attendees in our Building Network Automation Solutions online course sent me this question:
While building an automation tool using Python for CLI provisioning, is it a good idea to use an SDK provided by the device vendor, or to use simple SSH libraries like Netmiko/Paramiko and build all the features (like rollback-on-failure, error handling, or bulk provisioning) yourself?
The golden rule of software development should be “don’t reinvent the wheel”… but then maybe you need tracks to navigate in the mud and all you can get are racing slicks, and it might not make sense to try to force-fit them into your use case, so we’re back to “it depends”.
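For a sense of what the do-it-yourself path looks like, here is a bare-bones Netmiko sketch with a naive pre-change snapshot; the device details and config lines are placeholders, and the rollback comment marks exactly the kind of feature a vendor SDK might already give you.

from netmiko import ConnectHandler

# Placeholder device details; in practice these come from your inventory system.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "admin",
    "password": "secret",
}

config_lines = [
    "interface Loopback100",
    " description provisioned-by-automation",
]

with ConnectHandler(**device) as conn:
    checkpoint = conn.send_command("show running-config")  # crude pre-change snapshot
    try:
        conn.send_config_set(config_lines)
        conn.save_config()
    except Exception:
        # Rollback-on-failure, retries, and bulk provisioning are exactly the
        # features you would have to build on top of this yourself.
        raise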
Juniper routers consider a directly configured IP as a “local” route, except when you use a /32 mask. Then it is a “direct” route. This caused me some confusion when creating a policy to redistribute loopback IP addresses into BGP.
A router learns routes from a variety of sources - networks configured on the box, those learned from IS-IS, rumors of prefixes from BGP or RIP, etc. You can see the full list here.
When routes are learned from different sources, Junos uses “Route Preference Values” to decide which route source to prefer. (Cisco refers to this as Administrative Distance). If routes are otherwise identical, the route with the lowest preference will be installed into the FIB.
If you’re looking at the route table, you can narrow down displayed routes to look at a specific type, e.g. show route protocol direct
to see locally connected networks:
vagrant@vqfx> show route protocol direct
inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, Continue reading
With 5 million CPU cores, 50,000 GPUs, and 483 petaFLOPs of cumulative performance, IBM claims it's...