Hello my friend,
For quite a while we were silent (literally, the last post was a month ago), as we were extremely busy completing an important customer engagement. It was successfully completed, and we are back on track writing new, interesting, and (we hope :-)) useful materials for you! And today we talk about our favourite topic – automation. To be precise, automation of Nokia SR OS devices with pySROS, a new Python library created by the folks at Nokia.
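To give a first taste of pySROS, here is a minimal sketch of connecting to an SR OS node over model-driven management and reading its configured system name. The host address and credentials below are placeholders for your own device, so this only runs against a reachable SR OS node:

```python
# Minimal pySROS sketch: connect to an SR OS node and read the system name.
# The host, username, and password are placeholders for your own device.
from pysros.management import connect

connection = connect(
    host="192.0.2.1",   # management IP of the SR OS node (placeholder)
    username="admin",
    password="admin",
)

# Retrieve the configured system name from the running datastore
sysname = connection.running.get("/nokia-conf:configure/system/name")
print(sysname)

connection.disconnect()
```

The same `connection` object also exposes the candidate datastore for configuration changes, which is where the automation workflows discussed in the post come in.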
No part of this blogpost may be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.
If you have ever been in a situation where there is much more work than there are hours in a day, then you know how important it is to be able to do the job quickly and accurately. It is even better if such a job is done not by you, but by someone else or something else. This is exactly where the automation of network and Continue reading
This week's Network Break talks about a new data center chassis from Juniper, why Akamai spent $600 million to buy security company Guardicore, what happened to Zoom's big acquisition of a contact center company, and more tech news.
The post Network Break 353: New Juniper Chassis Tops 400G; Akamai Spends Big Money On Microsegmentation appeared first on Packet Pushers.
This chapter explains the VPC Control-Plane operation when two EC2 instances within the same subnet initiate a TCP session between themselves. In our example, the EC2 instances are launched on two different physical servers. Both instances have an Elastic Network Interface (ENI). The left-hand side EC2’s ENI has MAC/IP addresses cafe:0001:0001/10.10.1.11 and the right-hand side EC2’s ENI has MAC/IP addresses beef:0001:0001/10.10.1.22. Each physical server hosting EC2 instances has a Nitro Card for VPC [NC4VPC]. It is responsible for routing, data packet encapsulation/decapsulation, and traffic limiting. In addition, Security Groups (SGs) are implemented in hardware on the Nitro Card for VPC. The AWS Control-Plane relies on the Mapping Service, a system decoupled from the network devices. This means that switches are unaware of the Overlay Networks, having no state information related to VPCs, subnets, EC2 instances, or any other Overlay Network components. From the Control-Plane perspective, physical network switches participate in the Underlay Network routing process by advertising the reachability information of physical hosts, the Mapping Service, and so on. From the Data-Plane point of view, they forward packets based on the outer IP header.
Starting an EC2 instance triggers the Control-Plane process on a host. Figure 2-1 illustrates that Host-1 and Host-2 store information about their local EC2 instances in the Mapping cache. Then they register these instances with the Mapping Service. You can think of the registration process as a routing update. We need to inform the Mapping Service about a) the MAC/IP addresses bound to the EC2 instance’s ENI, b) the Virtual Network Identifier (=VPC), c) the physical host IP, and d) the encapsulation mode (VPC tunnel header). If you are familiar with the Locator/ID Separation Protocol (LISP), you may notice that its Control-Plane process follows the same principles. The main difference is that switches in LISP-enabled networks have state information related to virtual/bare-metal servers running in a virtual network.
Figure 2-1: VPC Control-Plane Operation: Mapping Register.
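To make the register/lookup flow concrete, here is a toy Python model of the Mapping Service described above. The class names, keys, and underlay host IPs (192.0.2.x) are our own illustration of the concept, not AWS internals: each host registers its local ENIs with the VPC identifier, its underlay IP, and the encapsulation mode, and a sending host later queries the service to learn where to tunnel a packet.

```python
# Toy model of the VPC Mapping Service register/lookup flow.
# Illustration of the concept only -- not AWS's actual implementation.
from dataclasses import dataclass

@dataclass
class MappingEntry:
    mac: str       # ENI MAC address
    ip: str        # ENI IP address
    vni: str       # Virtual Network Identifier (the VPC)
    host_ip: str   # underlay IP of the physical host
    encap: str     # encapsulation mode (VPC tunnel header)

class MappingService:
    def __init__(self):
        self._entries = {}

    def register(self, entry: MappingEntry):
        # Keyed by (VPC, overlay IP) -- analogous to a routing update
        self._entries[(entry.vni, entry.ip)] = entry

    def lookup(self, vni: str, dst_ip: str) -> MappingEntry:
        # A sending host asks: which physical host owns this overlay IP?
        return self._entries[(vni, dst_ip)]

svc = MappingService()
# Host-1 and Host-2 register their local EC2 instances (addresses from Figure 2-1;
# underlay host IPs are made-up placeholders)
svc.register(MappingEntry("cafe:0001:0001", "10.10.1.11", "vpc-1", "192.0.2.11", "vpc-tunnel"))
svc.register(MappingEntry("beef:0001:0001", "10.10.1.22", "vpc-1", "192.0.2.22", "vpc-tunnel"))

# Host-1 resolves 10.10.1.22 before encapsulating the packet
target = svc.lookup("vpc-1", "10.10.1.22")
print(target.host_ip)  # underlay destination for the tunnel
```

In the real system this lookup result is what the Nitro Card caches locally, so subsequent packets to the same destination are encapsulated without consulting the Mapping Service again.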
Today on the Tech Bytes podcast we look at a global SD-WAN deployment for manufacturing company IMMI, which makes vehicle safety products, including products for school buses and military vehicles. Our guest is Tom Braden, VP of Enterprise Technology at IMMI, and our sponsor is HPE Aruba.
The post Tech Bytes: Global Manufacturer Taps Aruba EdgeConnect For SD-WAN, WAN Optimization appeared first on Packet Pushers.
Zero Trust rules by default block attempts to reach a resource. To obtain access, users need to prove they should be allowed to connect using signals like their identity, their device health, and other factors.
However, some workflows need a second opinion. Starting today, you can add new policies in Cloudflare Access that grant temporary access to specific users based on approvals for a set of predefined administrators. You can decide that some applications need second-party approval in addition to other Zero Trust signals. We’re excited to give your team another layer of Zero Trust control for any application — whether it’s a popular SaaS tool or you host it yourself.
Configuring appropriate user access is a challenge. Most companies start granting employee-specific application access based on username or email. This requires manual provisioning and deprovisioning when an employee joins or leaves.
When this becomes unwieldy, security teams generally use identity provider groups to set access levels by employee role. This allows better provisioning and deprovisioning, but it again starts to get clunky when application access requirements do not map cleanly to roles. If a specific support rep needs access, then they need to be added to an existing Continue reading
Most of the public cloud training seems focused on developers. No surprise there: developers are the usual beachhead that public cloud services need to get into large organizations. Unfortunately, once production applications start getting deployed into public cloud infrastructure, someone has to take over operations, and that’s where the fun starts.
For whatever reason, there aren’t that many resources helping infrastructure operations teams understand how to deal with this weird new world, at least according to the feedback Jawed left on the Azure Networking webinar:
We are 11! And we also may be a little bleary-eyed and giddy from a week of shipping.
Our Birthday Weeks are one of my favorite Cloudflare traditions — where we release innovations that help to build a better Internet. Just this week we tackled email security, expanded our network into office buildings, and entered into the Web3 world.
But these weeks also precipitate the most common questions I’m asked from my product and engineering peers across the industry: how do we do it? How do we get so much stuff out so quickly? That we are able to innovate — and innovate so quickly — is no happy accident. In fact, this capability has been very deliberately built into the DNA of Cloudflare. I want to touch on three of the reasons unique to us: one relates to our people, one relates to our technology, and one relates to our customers.
The seeds of innovative ideas start with our team. One of the core things we look for when hiring in every role at Cloudflare — be it engineering and product or sales or account — is curiosity. We seek people who approach a situation Continue reading
The inventory is at the core of Nornir, holding all the hosts that tasks will be run against and the variables that will be used by those tasks. Before any tasks can be run by Nornir, the inventory has to be initialised.
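As a minimal sketch of that initialisation step, the built-in SimpleInventory plugin is pointed at YAML files describing hosts, groups, and defaults; the file paths below are the conventional ones from the Nornir documentation, not requirements. Only after `InitNornir` returns can tasks be run against the inventory:

```python
# Minimal Nornir initialisation sketch. The inventory file paths are the
# conventional defaults from the Nornir docs; adjust them to your own layout.
from nornir import InitNornir

nr = InitNornir(
    inventory={
        "plugin": "SimpleInventory",
        "options": {
            "host_file": "inventory/hosts.yaml",
            "group_file": "inventory/groups.yaml",
            "defaults_file": "inventory/defaults.yaml",
        },
    }
)

# The initialised inventory now holds every host that tasks can target
print(list(nr.inventory.hosts))
```

The same options can instead live in a `config.yaml` passed as `InitNornir(config_file="config.yaml")`, which keeps the Python entry point free of inventory details.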
A little over two weeks ago, we shared extensive benchmarking results of edge networks all around the world. It showed that, on a range of tests (TCP connection time, time to first byte, time to last byte) and a range of measurements (p95, mean), Cloudflare had some impressive network performance. But we weren't the fastest everywhere. So we made a commitment: we would improve in at least 10% of networks where we were not #1.
Today, we’re happy to tell you that we’ve delivered as promised. Of the networks where our average latency was more than 100ms behind the leading provider during Speed Week, we’ve dramatically improved our performance. There were 61 such networks; now, we’re the fastest in 29 of them. Of course, we’re not done yet — but we wanted to share with you the latest results, and explain how we did it.
In the process of quantifying network performance, it became clear that we were not the fastest everywhere. There were 61 country/network pairs where we were more than 100ms behind the leading provider:
Once that was done, the fun began: we needed to go through the process of figuring out why we were slow — Continue reading