On today’s Heavy Networking episode we talk with sponsor Console Connect, which provides software-defined interconnections for enterprises and service providers. Guests Paul Gampe and Jay Turner dig into the Console Connect catalog, including Internet On-Demand, CloudRouter, and some of the interesting partner integrations that provide unique connectivity options.
The post Heavy Networking 618: Building Virtual Networks With Console Connect (Sponsored) appeared first on Packet Pushers.
In this week’s issue of the Packet Pushers Human Infrastructure newsletter, there was an excellent blog post from Kam Lasater about how talking about technical debt makes us sound silly. I recommend you read the whole thing, because he brings up some very valid points about how differently other departments of the organization can perceive our issues. He also breaks down debt in a very simple way that strips away the negative connotation and shows how debt can be an instrument of leverage.
To that end, I want to make a modest proposal to help the organization understand the challenges that IT faces with older systems and integrations. Except we need some new branding. So, I propose we start referring to technical debt as “underperforming technical investments”.
Technical debt is just a clever way to refer to the layered challenges we face from past decisions that were made to accomplish tasks. It’s a burden we carry throughout the execution of our job because it adds extra time to the process. We express it as debt because it’s a price that must be paid every time we need Continue reading
During CIO Week we announced the general availability of our client-side security product, Page Shield. Page Shield protects websites’ end users from client-side attacks that target vulnerable JavaScript dependencies in order to run malicious code in the victim’s browser. One of the biggest client-side threats is the exfiltration of sensitive user data to an attacker-controlled domain (known as a Magecart-style attack). This kind of attack has impacted large organizations like British Airways and Ticketmaster, resulting in substantial GDPR fines in both cases. Today we are sharing details of how we detect these types of attacks and how we plan to develop the product going forward.
Magecart-style attacks are generally quite simple, involving just two stages. First, an attacker finds a way to compromise one of the JavaScript files running on the victim’s website. The attacker then inserts malicious code that reads personally identifiable information (PII) as it is entered by the site’s users and exfiltrates it to an attacker-controlled domain.
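To make the exfiltration pattern concrete, here is a deliberately naive Python sketch of the kind of signal a client-side security product can start from: flagging domains referenced on a checkout page that fall outside a first-party allowlist. The page URL and allowlist are hypothetical, and Page Shield’s actual detection is far more sophisticated than this.

```python
import re
import requests

# Hypothetical checkout page and first-party allowlist.
PAGE_URL = "https://shop.example.com/checkout"
ALLOWED_DOMAINS = {"shop.example.com", "cdn.example.com"}

html = requests.get(PAGE_URL, timeout=10).text

# Crude pass: collect every domain referenced in the page's markup and
# inline scripts, then report anything that isn't on the allowlist.
referenced = set(re.findall(r"https?://([A-Za-z0-9.-]+)", html))

for domain in sorted(referenced - ALLOWED_DOMAINS):
    print(f"unexpected third-party domain: {domain}")
```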
Magecart-style attacks are of particular concern to online retailers with users entering credit card details on the checkout page. Forms for online banking are also high-value Continue reading
A few weeks ago, Nick Buraglio and Chris Cummings invited me for an hour-long chat about netlab on the Modem Podcast [1].
We talked about why one might want to use netlab instead of another lab orchestration solution and the high-level functionality offered by the tool. Nick particularly loved its IPAM features, which have since become so extensive that I had to write a full-blown addressing tutorial. But there’s so much more: you can also get a fully configured OSPFv2, OSPFv3, EIGRP, IS-IS, SRv6, or BGP lab built from more than a dozen different devices. In short (as Nick and Chris said): you can use netlab to make labbing less miserable.
[1] netlab was known as netsim-tools when we were recording that podcast.
We were complaining a few weeks ago that Intel had not put out a server processor roadmap of any substance in a long time, and instead of just leaving it at that, we created our own Xeon SP roadmap based on rumors, speculation, hunches, and desires. …
Intel Unfolds Xeon Roadmap With More Cores, Denser Transistors was written by Timothy Prickett Morgan at The Next Platform.
Calico Cloud has just celebrated its 1-year anniversary! And what better way to celebrate than to launch new features and capabilities that help users address their most urgent cloud security needs.
Over the past year, the Tigera team has seen rapid adoption of Calico Cloud for security and observability of cloud-native applications. With this new release, Calico Cloud becomes the first in the industry to offer comprehensive, active cloud-native application security that goes beyond detecting threats to limit exposure and automatically mitigate risks in real time.
With news of new zero-day threats emerging almost every day (e.g. Argo CD, Chrome Browser), the current security approach needs to evolve. We need active build, deploy, and runtime security together, instead of a siloed approach. Security threats, vulnerabilities, and risks in all three areas should be addressed by the same security platform rather than by multiple disjointed tools. Calico Cloud does just that!
With Calico Cloud, you can reduce your cloud-native application’s attack surface, harness machine learning to combat runtime security risks from known and unknown zero-day threats, enable continuous compliance, and prioritize and mitigate the risks from vulnerabilities and attacks.
Let’s take a look Continue reading
Next year, with the launch of the “Grace” Arm server processors, Nvidia will have all of the compute and networking bases it cares about in the datacenter covered, and it will be selling its technology at a rapid pace. …
Can Nvidia Be The Biggest Chip Maker In The Datacenter? was written by Timothy Prickett Morgan at The Next Platform.
As we develop new products, we often push our operating system, Linux, beyond what is commonly possible. A common theme has been relying on eBPF to build technology that would otherwise have required modifying the kernel. For example, we’ve built DDoS mitigation and a load balancer with it, and we use it to monitor our fleet of servers.
This software usually consists of a small-ish eBPF program written in C, executed in the context of the kernel, and a larger user space component that loads the eBPF into the kernel and manages its lifecycle. We’ve found that the user space code typically outweighs the eBPF code by an order of magnitude or more. We want to shed some light on the issues a developer has to tackle when dealing with eBPF and present our solutions for building rock-solid, production-ready applications that contain eBPF.
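For a feel of that split between a small kernel-side program and the user space code that loads and manages it, here is a minimal sketch using the bcc Python bindings (not Cloudflare’s tooling): a tiny eBPF program attached to the clone() syscall, with user space handling compile, load, attach, and output.

```python
from bcc import BPF  # requires the bcc toolchain and root privileges

# Kernel side: a small eBPF program written in C, compiled at load time.
prog = r"""
int trace_clone(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

# User space side: load the program into the kernel and manage its lifecycle.
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="trace_clone")

print("Tracing clone()... Ctrl-C to exit")
b.trace_print()  # stream bpf_trace_printk output from the kernel
```

Even in this toy case, the lifecycle management lives entirely in user space; in a production system that side grows to cover configuration, upgrades, and monitoring, which is where most of the code ends up.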
For this purpose we are open sourcing the production tooling we’ve built for the sk_lookup hook we contributed to the Linux kernel, called tubular. It exists because we’ve outgrown the BSD sockets API. To deliver some products we need features that are just not possible using the standard API.
When configuring mutual TLS (mTLS) on the open source Kuma service mesh, users have a couple of different options. They can use a “builtin” certificate authority (CA), in which Kuma itself will generate a CA certificate and key for use in creating service-specific mTLS certificates. Users also have the option of using a “provided” CA, in which they must supply a CA certificate and key for Kuma to use when creating service-specific mTLS certificates. Both of these options are described on this page in the Kuma documentation. In this post, I’d like to explore the use of cert-manager as a “provided” CA for mTLS on Kuma.
Currently, Kuma lacks direct integration with cert-manager, so the process is a bit more manual than I’d prefer. If direct cert-manager integration is something you’d find useful, please consider opening an issue to that effect on the Kuma GitHub repository.
Assuming you have cert-manager installed already, the process for using cert-manager as the CA for a “provided” CA mTLS backend looks like this: create the CA certificate and key with cert-manager, supply them to Kuma, and then configure the “mesh” object for mTLS. I know these steps are really too high level to be useful Continue reading
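For reference, a “provided” backend ends up expressed on the mesh object roughly like the following sketch, which uses the Kubernetes Python client to create a Mesh resource. The backend and secret names are hypothetical, and the exact spec fields should be checked against the Kuma documentation for your version.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# A Mesh with a "provided" mTLS backend; cert-manager's CA certificate and
# key are assumed to have been copied into the referenced Kuma secrets.
mesh = {
    "apiVersion": "kuma.io/v1alpha1",
    "kind": "Mesh",
    "metadata": {"name": "default"},
    "spec": {
        "mtls": {
            "enabledBackend": "cert-manager-ca",
            "backends": [
                {
                    "name": "cert-manager-ca",
                    "type": "provided",
                    "conf": {
                        "cert": {"secret": "kuma-ca-cert"},
                        "key": {"secret": "kuma-ca-key"},
                    },
                }
            ],
        }
    },
}

# Mesh is a cluster-scoped custom resource in Kuma's Kubernetes mode.
api.create_cluster_custom_object(
    group="kuma.io", version="v1alpha1", plural="meshes", body=mesh
)
```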
DEVASC study resources and a study plan are available and detailed in the DEVASC 200-901 course on our website.
The exam is not simple or merely foundational. As is always the case with Cisco, it starts with you from scratch and takes you up to a solid level where you are capable of discussing and implementing a solution, so studying and preparing should be careful and detailed as well.
Even though the exam is a written one, only about 30% of the preparation is written-only, by which I mean theoretical parts where you pick up some concepts and move on, with no implementation.
The other 70% of the preparation should be practical: coding and validating a lot, and constructing and encoding requests to communicate and work with Cisco platforms remotely.
Studying should consist of constructing and validating the code for every request and every Cisco platform mentioned in the exam agenda.
Constructing and sending API requests will be done using:
Validating the results will always be done through the same construction and sending platform mentioned above.
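As one example of the kind of request construction and validation meant here, below is a minimal Python sketch against Cisco DNA Center’s REST API, one of the platforms on the DEVASC blueprint. The host and credentials are placeholders for a lab or DevNet sandbox instance.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder host and credentials; point these at a lab or DevNet sandbox.
BASE_URL = "https://sandboxdnac.cisco.com"
USERNAME, PASSWORD = "devnetuser", "secret"

# Step 1: construct and send the authentication request to obtain a token.
token = requests.post(
    f"{BASE_URL}/dna/system/api/v1/auth/token",
    auth=HTTPBasicAuth(USERNAME, PASSWORD),
).json()["Token"]

# Step 2: use the token to request the network device inventory.
devices = requests.get(
    f"{BASE_URL}/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": token},
).json()

# Step 3: validate the response by inspecting what came back.
for device in devices["response"]:
    print(device["hostname"], device["managementIpAddress"])
```

Repeating this construct-send-validate loop for each platform and API on the exam agenda is exactly the practical preparation described above.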