As we develop new products, we often push our operating system, Linux, beyond what is commonly possible. A common theme has been relying on eBPF to build technology that would otherwise have required modifying the kernel. For example, we’ve built DDoS mitigation and a load balancer with it, and we use it to monitor our fleet of servers.
This software usually consists of a smallish eBPF program written in C, executed in the context of the kernel, and a larger user-space component that loads the eBPF program into the kernel and manages its lifecycle. We’ve found that the user-space code typically outweighs the eBPF code by an order of magnitude or more. We want to shed some light on the issues a developer has to tackle when dealing with eBPF, and present our solutions for building rock-solid, production-ready applications that contain eBPF.
For this purpose we are open sourcing tubular, the production tooling we’ve built around the sk_lookup hook we contributed to the Linux kernel. Tubular exists because we’ve outgrown the BSD sockets API: to deliver some of our products, we need features that are simply not possible with the standard API.
DEVASC study resources and a study plan are available and detailed in the DEVASC 200-901 course on our website.
The exam is neither simple nor merely foundational. As is usual with Cisco, it starts with you from scratch and takes you up to a solid level, where you are capable of discussing and implementing a solution,
so studying and preparing should be equally careful and detailed.
Even though the exam is considered a written one, only about 30% of the preparation is written,
and by that I mean theoretical parts where you only pick up some concepts and move on, with no implementation.
The remaining 70% of the preparation should be practical: coding and validating a lot, and constructing and encoding requests
to communicate and work with Cisco platforms remotely.
Studying should consist of constructing and validating the code for every request and every Cisco platform mentioned in the exam agenda.
APIs and requests will be constructed and sent using:
The results will always be validated through the same construction-and-push platform mentioned above.
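As a taste of the kind of request construction the exam expects, here is a hedged sketch that builds an authenticated HTTP GET for a Cisco-style REST endpoint using only Python’s standard library. The URL and the credentials are made-up placeholders, not a real sandbox endpoint.

```python
# Sketch: construct (not send) a REST request with a Basic-auth header,
# the way you would when automating a Cisco platform's API remotely.
import base64
import urllib.request

def build_request(url, username, password):
    """Return a urllib Request carrying Basic Authorization and JSON Accept headers."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req

# Hypothetical sandbox endpoint and credentials, for illustration only.
req = build_request("https://sandbox.example.com/api/v1/devices",
                    "devnetuser", "devnetpass")
print(req.get_header("Authorization"))
```

Sending it would be a single `urllib.request.urlopen(req)` call; constructing and inspecting the request first is exactly the validate-every-request habit described above.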
How do you pass DEVASC? The first version of this new Cisco exam was released in 2019 under exam code 200-901.
The exam generally has six modules to study and focus on: it teaches data-encoding languages for the first time,
introduces the Cisco Sandbox for practice, and gets you started automating Cisco platforms over the Sandbox.
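The “data-encoding languages” module covers formats such as JSON and XML. A minimal round trip in Python’s standard library shows the core idea: a native structure is serialized to text and parsed back unchanged. The interface dictionary below is an invented example, not exam material.

```python
# JSON round trip: Python dict -> JSON text -> Python dict.
import json

interface = {"name": "GigabitEthernet1", "enabled": True, "mtu": 1500}
encoded = json.dumps(interface)   # serialize to a JSON string
decoded = json.loads(encoded)     # parse the string back into a dict
print(decoded == interface)       # True: the round trip is lossless
```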
Skills learned with DEVASC
Many encoding, programming, and automation skills, including:
Cisco’s aim here is not just to teach you DevNet/DevOps,
but to let you implement and practice most of the tools and techniques on their platform
using the free new Sandbox service.
The first and current version of the exam has the code 200-901.
It is kind of a written exam. Why “kind of”? Because the exam questions can be:
What is DEVASC? A new question, actually. DevNet Associate is Cisco Systems’ first DevOps-derived DevNet certification, announced on June 9th, 2019.
The first version of the DEVASC exam, which grants the Cisco Certified DevNet Associate certificate,
has exam number 200-901.
DEVASC was not the only DevNet exam Cisco announced; an entirely new domain of knowledge, with its own hierarchy, arrived as well.
DEVASC is your first step in that hierarchy. Next comes DevNet Professional, which contains many exams:
one of them is mandatory, and one elective from the others is required to become a CCDevP. That will be covered in another blog.
The highest peak is the recently and officially announced CCDevE, an 8-hour lab exam that validates how expert you are with Cisco DevNet.
Not just because it is a fresh branch, or something not generally provided by other vendors, but because the DEVASC agenda is very useful.
As always, Cisco starts from scratch, telling you what DevOps, DevNet, and DEVASC are. Continue reading
It’s an established fact on the internet that we have run out of IP(v4) addresses, and we are st
Sander Steffann sent me an intriguing question a long while ago:
I was wondering if there are any downsides to setting “system mtu jumbo 9198” by default on every switch? I mean, if all connected devices have MTU 1500 they won’t notice that the switch could support longer frames, right?
That’s absolutely correct, and unless the end hosts get into UDP fights things will always work out (aka TCP MSS saves the day)… but there must be a reason switching vendors don’t use maximum frame sizes larger than 1514 by default (Cumulus Linux seems to be an exception, and according to Sébastien Keller Arista’s default maximum frame size is between 9214 and 10178 depending on the platform).
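The “TCP MSS saves the day” point is simple arithmetic: the MSS a host advertises is derived from its own interface MTU, so a 1500-byte host behind a jumbo-capable switch never receives TCP segments it cannot handle. A small sketch with the standard IPv4/TCP header sizes (no options) makes the numbers concrete, including where the classic 1514-byte frame size comes from.

```python
# Standard header sizes, in bytes, without options.
IPV4_HEADER = 20
TCP_HEADER = 20
ETH_HEADER = 14  # Ethernet header; 1500-byte MTU + 14 = the classic 1514-byte frame

def mss_for_mtu(mtu):
    """TCP MSS a host advertises for a given interface MTU (IPv4, no options)."""
    return mtu - IPV4_HEADER - TCP_HEADER

print(mss_for_mtu(1500))   # 1460
print(mss_for_mtu(9198))   # 9158
print(1500 + ETH_HEADER)   # 1514
```

UDP gets no such negotiation, which is why mismatched MTUs only bite when “the end hosts get into UDP fights.”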
I pass access tokens, authentication keys, and other secrets to Python scripts via environment variables rather than encode these values into the scripts themselves. If I were a real boy, I’d use a solution like HashiCorp Vault or another secrets management tool (there’s a bunch of them), but I haven’t yet found the motivation to learn such a tool.
I’m not sure I’d want to build and maintain such a tool if I did find the motivation. I’m sort of lazy sometimes is what I’m saying. So for now, environment variables it is.
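The pattern in miniature: read the secret from the environment instead of hard-coding it, and fail fast when the variable is missing. `API_TOKEN` is a made-up variable name for illustration; in practice the value would come from PyCharm’s run configuration or the shell rather than being set in the script.

```python
# Read a required secret from the environment rather than embedding it.
import os

def get_secret(name):
    """Fetch a required secret from the environment, or raise a clear error."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"required environment variable {name} is not set")
    return value

# Simulate what PyCharm or the shell would normally do for us.
os.environ["API_TOKEN"] = "example-token"
print(get_secret("API_TOKEN"))  # example-token
```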
PyCharm allows for the passing of environment variables from the IDE to a script, whether that script is running locally or in a remote SSH deployment you’ve configured for your project.
To set the environment variables, select Edit Configurations from the Run menu.
Or in the project bar above the code window, click the dropdown with your script name, and select Edit Configurations.
Either way brings up the following configuration window for the scripts in your project. In the Environment variables: field, click the icon.
That will bring up the following window you can use to configure the environment variables.
Fantastic. But how do we assign the… Continue reading
When vendors build something new—or when you decide to go a different direction in your network—you have to figure out how to integrate these new things. Integration of this type often includes cultural, as well as technical, changes. William Collins joins Tom Ammon and Russ White to discuss his experience in integrating new technologies on Hedge 118.
The release of VMware NSX-T 3.2 and VMware Container Networking with Antrea 1.3.1-1.2.3 delivers on VMware’s vision to have heterogeneous Antrea clusters running anywhere integrate with NSX-T for centralized container policy management and visibility.
NSX-T becomes the single pane of glass for policy management when connected to Antrea clusters. The Antrea clusters could be running on the VMware Tanzu platform, Red Hat OpenShift, or any upstream Kubernetes cluster. Inventory management, tagging, dynamic grouping, and troubleshooting can be extended to Antrea clusters, and both native Kubernetes network policies and Antrea network policies can be centrally managed by NSX-T.
Antrea-to-NSX-T Interworking Architecture
Antrea NSX Adapter is a new component introduced into the standard Antrea cluster to make the integration possible. This component communicates with the K8s API and the Antrea Controller, and connects to the NSX-T APIs. When an NSX-T admin defines a new policy via the NSX APIs or UI, the policy is replicated to all applicable clusters. These policies are received by the adapter, which in turn creates the appropriate CRDs using the K8s APIs. The Antrea Controller, which watches these policies, runs the relevant computation and sends the results… Continue reading
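To make the adapter’s job concrete, here is an illustrative sketch (not the actual adapter code) of the kind of custom-resource body such a component might build before submitting it to the Kubernetes API. The field layout follows the general shape of an Antrea NetworkPolicy manifest, but the policy contents and names are invented assumptions.

```python
# Illustrative only: build a dict mirroring an Antrea-style NetworkPolicy
# custom resource, as an adapter might before POSTing it via the K8s API.
def build_policy_crd(name, namespace, allowed_app, port):
    """Return a manifest-shaped dict for an Antrea NetworkPolicy CRD."""
    return {
        "apiVersion": "crd.antrea.io/v1alpha1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "priority": 5,
            "appliedTo": [{"podSelector": {"matchLabels": {"app": name}}}],
            "ingress": [{
                "action": "Allow",
                "from": [{"podSelector": {"matchLabels": {"app": allowed_app}}}],
                "ports": [{"protocol": "TCP", "port": port}],
            }],
        },
    }

policy = build_policy_crd("web", "demo", "frontend", 8080)
print(policy["kind"])  # NetworkPolicy
```

In a real cluster this dict would be handed to a Kubernetes client (for example, the `CustomObjectsApi` in the official Python client) rather than printed.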
Today's Day Two Cloud is a sponsored episode with StrongDM, which helps engineers and IT professionals get access to databases, servers, Kubernetes clusters, switches, Web apps, and more from a desktop or laptop. We dive into StrongDM's proxy model, integrations with directories and ID stores, audit features, and more.
The post Day Two Cloud 134: Simplifying Infrastructure Access With StrongDM (Sponsored) appeared first on Packet Pushers.
I’m teaching a three-hour webinar on privacy over at Safari Books on Friday. From the description there—
Privacy is important to every IT professional, including network engineers—but there is very little training oriented towards anyone other than privacy professionals. This training aims to provide a high-level overview of privacy and how privacy impacts network engineers. Information technology professionals are often perceived as “experts” on “all things IT,” and hence are bound to face questions about the importance of privacy, and how individual users can protect their privacy in more public settings.
Please join me for this—it’s a very important topic largely ignored in the infrastructure space.
Got this question from one of my readers:
When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere or KVM or Openstack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?
Short answer: you don’t.
Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.