Heavy Networking 618: Building Virtual Networks With Console Connect (Sponsored)

On today’s Heavy Networking episode we talk with sponsor Console Connect, which provides software-defined interconnections for enterprises and service providers. Guests Paul Gampe and Jay Turner dig into the Console Connect catalog, including Internet On-Demand, CloudRouter, and some of the interesting partner integrations that provide unique connectivity options.

The post Heavy Networking 618: Building Virtual Networks With Console Connect (Sponsored) appeared first on Packet Pushers.

Intel’s custom Bitcoin processor could lead to chips for a supercomputing edge

Here’s one none of us saw coming: Intel is planning to launch a chip specifically designed for blockchain acceleration, including Bitcoin mining, and much more. Intel has also announced the formation of a new custom compute group within its graphics business unit to develop the chip.

In the blog post, Raja Koduri, senior vice president and general manager of the Accelerated Computing Systems and Graphics Group, announced the ASIC, without using the Bonanza Mine name, saying it would accelerate the algorithm used in Bitcoin mining and in blockchain workloads more generally.

Technical Debt or Underperforming Investment?

In this week’s issue of the Packet Pushers Human Infrastructure newsletter, there was an excellent blog post from Kam Lasater about how talking about technical debt makes us sound silly. I recommend you read the whole thing, because he makes some very valid points about how other departments in the organization can perceive our issues differently. He also breaks debt down into a very simple framework that strips away the negative connotation and shows how debt can be an instrument of leverage.

To that end, I want to make a modest proposal to help the organization understand the challenges that IT faces with older systems and integration challenges. Except we need some new branding. So, I propose we start referring to technical debt as “underperforming technical investments”.

I’d Buy That For A Dollar

Technical debt is just a clever way to refer to the layered challenges we face from decisions that were made to accomplish past tasks. It’s a burden we carry throughout our jobs because it adds extra time to every process. We express it as debt because it’s a price that must be paid every time we need Continue reading

Detecting Magecart-Style Attacks With Page Shield

During CIO Week we announced the general availability of our client-side security product, Page Shield. Page Shield protects websites’ end users from client-side attacks that target vulnerable JavaScript dependencies in order to run malicious code in the victim’s browser. One of the biggest client-side threats is the exfiltration of sensitive user data to an attacker-controlled domain (known as a Magecart-style attack). This kind of attack has impacted large organizations like British Airways and Ticketmaster, resulting in substantial GDPR fines in both cases. Today we are sharing details of how we detect these types of attacks and how we plan to develop the product in the future.

How does a Magecart-style attack work?

Magecart-style attacks are generally quite simple, involving just two stages. First, an attacker finds a way to compromise one of the JavaScript files running on the victim’s website. The attacker then inserts malicious code which reads personally identifiable information (PII) being entered by the site’s users, and exfiltrates it to an attacker-controlled domain. This is illustrated in the diagram below.

Magecart-style attacks are of particular concern to online retailers with users entering credit card details on the checkout page. Forms for online banking are also high-value Continue reading
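Page Shield’s detection is far more sophisticated than this, but the core idea of flagging scripts that load from unexpected domains can be sketched in a few lines. The allowlist, domains, and HTML below are invented for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist: domains we expect this site's scripts to load from.
ALLOWED = {"shop.example.com", "cdn.example.com"}

class ScriptAudit(HTMLParser):
    """Collect external <script src> URLs whose domain is not allowlisted."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src:
            domain = urlparse(src).netloc
            if domain and domain not in ALLOWED:
                self.suspicious.append(src)

# A checkout page where an attacker has injected a third-party skimmer.
page = """
<script src="https://cdn.example.com/app.js"></script>
<script src="https://evil.example.net/skimmer.js"></script>
"""

audit = ScriptAudit()
audit.feed(page)
print(audit.suspicious)
```

A real detection pipeline also has to handle inline scripts, dynamically injected tags, and changed file contents, which is exactly where a managed product earns its keep.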

netsim-tools (now netlab) on the Modem Podcast

A few weeks ago, Nick Buraglio and Chris Cummings invited me for an hour-long chat about netlab on the Modem Podcast1.

We talked about why one might want to use netlab instead of another lab orchestration solution and the high-level functionality offered by the tool. Nick particularly loved its IPAM features which got so extensive in the meantime that I had to write a full-blown addressing tutorial. But there’s so much more: you can also get a fully configured OSPFv2, OSPFv3, EIGRP, IS-IS, SRv6, or BGP lab built from more than a dozen different devices. In short (as Nick and Chris said): you can use netlab to make labbing less miserable.
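For a flavour of how little input netlab needs, here is a minimal topology sketch for a two-router OSPF lab (the device type and node names are illustrative, not from the podcast):

```yaml
# Minimal netlab topology: two routers running OSPF on one link.
module: [ ospf ]
defaults.device: iosv
nodes: [ r1, r2 ]
links: [ r1-r2 ]
```

From a file like this, netlab builds the virtual machines or containers and pushes complete addressing and routing configurations onto them.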


  1. netlab was known as netsim-tools when we were recording that podcast. ↩︎

Using the Linux host command to dig out DNS details

The host command on Linux systems can look up a variety of information available through the Domain Name System (DNS). It can find a host name given an IP address, or an IP address given a host name, plus many other interesting details about systems and internet domains.

The first query below tells us that the system at 192.168.0.18 is named “dragonfly”. The second tells us that 192.168.0.1 is the default router.

$ host 192.168.0.18
18.0.168.192.in-addr.arpa domain name pointer dragonfly.
$ host 192.168.0.1
1.0.168.192.in-addr.arpa domain name pointer router.

To do the reverse, you can use commands like these:
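The same forward and reverse lookups can also be scripted. Here is a minimal Python sketch using only the standard library; it resolves localhost rather than the LAN addresses above so it works on any machine:

```python
import socket

# Forward lookup: name -> IPv4 address (what `host dragonfly` does).
addr = socket.gethostbyname("localhost")
print(addr)

# Reverse lookup: address -> name (what `host 192.168.0.18` does).
# Wrapped in try/except because a PTR record isn't always defined.
try:
    name, aliases, addresses = socket.gethostbyaddr(addr)
    print(name)
except socket.herror:
    print("no PTR record")
```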

Intel Unfolds Xeon Roadmap With More Cores, Denser Transistors

We were complaining a few weeks ago that Intel had not put out a server processor roadmap of any substance in a long time, and instead of just leaving it at that, we created our own Xeon SP roadmap based on rumors, speculation, hunches, and desires.

Intel Unfolds Xeon Roadmap With More Cores, Denser Transistors was written by Timothy Prickett Morgan at The Next Platform.

Calico Cloud: Active Build and Runtime Security for Cloud-Native Applications

Calico Cloud has just celebrated its 1-year anniversary! And what better way to celebrate than to launch new features and capabilities that help users address their most urgent cloud security needs.

Over the past year, the Tigera team has seen rapid adoption of Calico Cloud for security and observability of cloud-native applications. With this new release, Calico Cloud becomes the first in the industry to offer the most comprehensive active cloud-native application security that goes beyond detecting threats to limit exposure and automatically mitigate risks in real time.

With news of new zero-day threats emerging almost every day (e.g. Argo CD, Chrome Browser), the current security approach needs to evolve. We need active build, deploy, and runtime security, all together, instead of using a siloed approach. Security threats, vulnerabilities, and risks for all three areas should be addressed together, by the same security platform, rather than using multiple disjointed tools. Calico Cloud does just that!

With Calico Cloud, you can reduce your cloud-native application’s attack surface, harness machine learning to combat runtime security risks from known and unknown zero-day threats, enable continuous compliance, and prioritize and mitigate the risks from vulnerabilities and attacks.

Let’s take a look Continue reading

Production ready eBPF, or how we fixed the BSD socket API

As we develop new products, we often push our operating system - Linux - beyond what is commonly possible. A common theme has been relying on eBPF to build technology that would otherwise have required modifying the kernel. For example, we’ve built DDoS mitigation and a load balancer and use it to monitor our fleet of servers.

This software usually consists of a small-ish eBPF program written in C, executed in the context of the kernel, and a larger user space component that loads the eBPF into the kernel and manages its lifecycle. We’ve found that the ratio of eBPF code to userspace code differs by an order of magnitude or more. We want to shed some light on the issues that a developer has to tackle when dealing with eBPF and present our solutions for building rock-solid production ready applications which contain eBPF.

For this purpose we are open sourcing the production tooling we’ve built for the sk_lookup hook we contributed to the Linux kernel, called tubular. It exists because we’ve outgrown the BSD sockets API. To deliver some products we need features that are just not possible using the standard API.

Using cert-manager with Kuma for mTLS

When configuring mutual TLS (mTLS) on the open source Kuma service mesh, users have a couple of different options. They can use a “builtin” certificate authority (CA), in which Kuma itself will generate a CA certificate and key for use in creating service-specific mTLS certificates. Users also have the option of using a “provided” CA, in which they must supply a CA certificate and key for Kuma to use when creating service-specific mTLS certificates. Both of these options are described on this page in the Kuma documentation. In this post, I’d like to explore the use of cert-manager as a “provided” CA for mTLS on Kuma.

Currently, Kuma lacks direct integration with cert-manager, so the process is a bit more manual than I’d prefer. If direct cert-manager integration is something you’d find useful, please consider opening an issue to that effect on the Kuma GitHub repository.

Assuming you have cert-manager installed already, the process for using cert-manager as the CA for a “provided” CA mTLS backend looks like this:

  1. Define the root CA in cert-manager.
  2. Prepare the secrets for Kuma.
  3. Configure the Kuma mesh object for mTLS.

I know these steps are really too high level to be useful Continue reading
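As a sketch of step 1, a self-signed root CA defined in cert-manager might look like the following (names and namespace are illustrative; the resulting secret’s tls.crt and tls.key are what you would then hand to Kuma’s “provided” backend):

```yaml
# Step 1: define a root CA in cert-manager (illustrative names).
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: cert-manager
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kuma-root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: kuma-root-ca
  secretName: kuma-root-ca   # secret will hold tls.crt and tls.key
  issuerRef:
    name: selfsigned
    kind: Issuer
```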

DEVASC Study Resources and Plan

DEVASC Exam for DevNet Associate

The DEVASC study resources and plan are available and detailed in the DEVASC 200-901 course on our website.

DEVASC Course and study plan

The exam is not simple or foundational-level. As always with Cisco, it starts with you from scratch and takes you up to a solid level where you are capable of discussing and implementing a solution, so studying and preparing should be careful and detailed as well.

Even though the exam is considered a written one, only about 30% of the preparation is purely written, meaning theoretical parts where you only pick up some concepts without any implementation.

The other 70% of the preparation should be practical: coding and validating a lot, and constructing and encoding requests to communicate and work with Cisco platforms remotely.

DEVASC and how to Study

Studying should be done by constructing and validating the code for every request and every Cisco platform mentioned in the exam agenda.

Constructing and sending API requests will be done using:

  • Postman with XML, JSON, and YAML
  • cURL requests using the Git Bash CLI
  • Python Scripts from Python IDLE

Validating the results will always be done through the same tools mentioned above.
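As an example of constructing and encoding such a request, here is a sketch in Python using only the standard library. The URL and token are hypothetical placeholders, not a real Cisco endpoint; the point is the shape of an authenticated JSON REST call:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- substitute a real platform's values.
url = "https://sandbox.example.com/api/v1/interfaces"
token = "my-api-token"

# Encode the payload as JSON, as most Cisco REST APIs expect.
body = json.dumps({"name": "GigabitEthernet1", "enabled": True}).encode()

req = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {token}",
    },
)

# The request is now fully constructed; sending it would be:
#   urllib.request.urlopen(req)
print(req.get_method(), req.get_full_url())
```

The same request can be expressed one-for-one in Postman or as a cURL command line, which is exactly the kind of cross-tool validation the study plan calls for.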

Continue reading