Raspberry Pi 4 real-time network analytics

This article describes how to build an inexpensive Raspberry Pi 4-based server for real-time flow analytics of industry-standard sFlow streaming telemetry. Support for sFlow is widely implemented in data center equipment from vendors including A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE.

In this example, we will use an 8GB Raspberry Pi 4 running Raspberry Pi OS Lite (64-bit). The easiest way to format a memory card and install the operating system is to use the Raspberry Pi Imager.
Click on the gear icon to set a user and password and enable ssh access. These initial settings allow the Raspberry Pi to be accessed over the network without having to attach a screen, keyboard, and mouse.

Next, follow the instructions for installing Docker Engine (Raspberry Pi OS Lite is based on Debian 11).
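The Docker documentation covers the full procedure; a condensed, hedged sketch for a lab machine uses Docker's convenience script (for production, follow the official apt repository instructions instead):

# Install Docker Engine with Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: allow the current user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER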

The diagram shows how the sFlow-RT real-time analytics engine receives a continuous telemetry stream from industry-standard sFlow instrumentation built into network, server, and application infrastructure, delivers analytics through APIs, and can easily be integrated with a wide variety of on-site and cloud orchestration, DevOps, and Software Defined Networking Continue reading
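With Docker installed, the analytics engine itself runs as a container. A minimal sketch, assuming the image name and port numbers documented on the sFlow-RT site (UDP 6343 for incoming sFlow, TCP 8008 for the REST API and web UI); verify both against the current documentation:

# Run sFlow-RT and restart it automatically with the Pi
sudo docker run -d --restart=unless-stopped -p 6343:6343/udp -p 8008:8008 sflow/sflow-rt

Point the sFlow collector address on your switches at the Pi and query http://<pi-address>:8008 to confirm telemetry is arriving.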

Setting up your own Cloud-GPU Server, Jupyter and Anaconda — Easy and complete walkthrough

< MEDIUM: https://medium.com/@raaki-88/setting-up-your-own-cloud-gpu-server-jupyter-and-anaconda-easy-and-complete-walkthrough-2b3db94b6bf6 >

Note: One of the important tips for lab environments is to set an auto-shutdown timer; GCP offers one such setting when you configure the instance.

I have been working with a few hosted environments, including AWS SageMaker notebook instances, Google Colab, and Gradient (Paperspace). All of them are really good, but they need monthly subscriptions, so I decided to have my own GPU server instance that can be personalized and billed on a granular basis.

Setting this up is not easy. First, you need to find a cloud compute instance with GPU support enabled; AWS and GCP are straightforward here, as the selection is really easy.

Let’s break this into 3 stages

  1. Selecting a GPU server-based instance for ML practice.
  2. Installing Jupyter Server — pain point: making it accessible from the internet.
  3. Installing package managers like Anaconda — pain point: having the conda kernel show up in JupyterLab (a sketch follows this list).
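Pain point 3 is covered later in the full walkthrough; as a hedged preview (the environment name ml is just an example), registering a conda environment as a Jupyter kernel typically looks like this:

conda create -n ml python=3.10 -y
conda activate ml
pip install ipykernel
python -m ipykernel install --user --name ml --display-name "Python (ml)"
# Restart Jupyter and "Python (ml)" should appear as a selectable kernel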

Stage-1

For a change, I will be using GCP here instead of my usual choice, AWS.

Choose a GPU alongside the instance.

Generic Guidelines — https://cloud.google.com/deep-learning-vm/docs/cloud-marketplace
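The linked guide walks through the marketplace UI; an equivalent sketch from the gcloud CLI might look like the following, where the instance name, zone, machine type, accelerator, and image family are illustrative choices only, so check them against the guide and your GPU quota:

gcloud compute instances create ml-gpu-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=pytorch-latest-gpu \
    --image-project=deeplearning-platform-release \
    --metadata=install-nvidia-driver=True \
    --boot-disk-size=100GB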

rakesh@instance-1:~$ sudo apt install jupyter-notebook

# Step 1: generate the config file by typing this line in the console

jupyter notebook Continue reading
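The excerpt is cut off here; the usual continuation, sketched below under the assumption of a classic notebook server with its default config path, generates the config file, sets a password, and allows remote connections:

jupyter notebook --generate-config
jupyter notebook password

# Append remote-access settings to the generated config file
cat >> ~/.jupyter/jupyter_notebook_config.py <<'EOF'
c.NotebookApp.ip = '0.0.0.0'        # listen on all interfaces, not just localhost
c.NotebookApp.open_browser = False  # headless server, no local browser
c.NotebookApp.port = 8888           # remember to open this port in the cloud firewall
EOF

jupyter notebook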

Worth Reading: Building Stuff with Large Language Models Is Hard

Large language models (LLMs) – ChatGPT and friends – are one of those technologies with a crazy learning curve. They look simple and friendly (resulting in plenty of useless demoware) but become devilishly hard to work with once you try to squeeze consistent value out of them.

Most people don’t want to talk about the hard stuff (sexy demoware results in more page views), but there’s an occasional exception, for example All the Hard Stuff Nobody Talks About when Building Products with LLMs, describing all the gotchas Honeycomb engineers discovered when creating an LLM-based user interface.

Spoofing ICMP Redirects for Fun and Profit

Security researchers found another ICMP redirect SNAFU: a malicious wireless client can send redirects on behalf of the access point, redirecting another client’s traffic to itself.

I’m pretty sure the same trick works on any layer-2 technology; the sad part of this particular story is that the spoofed ICMP packet traverses the access point, which could figure out what’s going on and drop the packet. Unfortunately, most of the access points the researchers tested were unable to do that due to limitations in the NPUs (a fancier word for SmartNIC) they were using.
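The research focuses on what access points should filter, but on Linux endpoints you can at least stop honoring redirects. A hedged sketch (whether this is appropriate depends on your routing setup):

# Ignore ICMP redirects at runtime
sudo sysctl -w net.ipv4.conf.all.accept_redirects=0
sudo sysctl -w net.ipv4.conf.default.accept_redirects=0

# Persist the setting across reboots
echo 'net.ipv4.conf.all.accept_redirects = 0' | sudo tee /etc/sysctl.d/99-no-icmp-redirects.conf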

Prisma Access Outperforms Against Cobalt Strike Attacks

The following sponsored blog post was written by Anupam Upadhyaya at Palo Alto Networks. We thank Palo Alto Networks for being a sponsor. Palo Alto Networks is the leading vendor in preventing Cobalt Strike C2 communication and blocked 99.2% of tested attacks, with the next leading vendor blocking only 17% of attacks, as cited in a […]

The post Prisma Access Outperforms Against Cobalt Strike Attacks appeared first on Packet Pushers.

No Server Recession At Lenovo And Supermicro So Far

We think that server spending is a leading indicator of economic growth or decline, so we are tracking the public companies that peddle systems to get a sense of how they are doing, and thereby a better sense of what enterprises, governments, academic institutions, and other organizations are buying separate from the hyperscalers and cloud builders, which comprise around half of server shipments and slightly less than half of server spending.

No Server Recession At Lenovo And Supermicro So Far was written by Timothy Prickett Morgan at The Next Platform.

Heavy Networking 684: What To Do With Your E-Waste?

By some estimates, 50 to 70 million tons of e-waste is generated every year, and that number is growing. When sent to landfills to be buried or burned, e-waste can leach toxic chemicals into soil and air. On today’s Heavy Networking, we’ll look at options for responsible disposal of IT gear, including repurposing it on site, reselling or donating it, and working with e-cycling companies.

The post Heavy Networking 684: What To Do With Your E-Waste? appeared first on Packet Pushers.

Data center colocation provider Cyxtera files for bankruptcy

Colocation provider Cyxtera Technologies has filed for Chapter 11 bankruptcy after spending the last few months trying to find a buyer or reduce its debt load. The company will now attempt to restructure through bankruptcy, or perhaps a suitor will come along to buy out the company. Meanwhile, the company says it will be business as usual for its customers, but with the reorganization that comes with Chapter 11, it’s hard to say whether that will last, according to Bill Kleyman, an independent consultant to data-center companies.

When Making Bets on SASE, Don’t Count on Native SD-WAN Monitoring Tools for Help

The following post is by Jeremy Rossbach, Chief Technical Evangelist at Broadcom. We thank Broadcom for being a sponsor. I’ve been preaching the same thing for years: To overcome the challenges of modern network complexity and successfully transform your networks, you need modern network monitoring data. Monitor the user experience and the health of every […]

The post When Making Bets on SASE, Don’t Count on Native SD-WAN Monitoring Tools for Help appeared first on Packet Pushers.

Case study: Calico enables zero-trust security and policy automation at scale in a multi-cluster environment for Box

Box is a content cloud that helps organizations securely manage their entire content lifecycle from anywhere in the world, powering over 67% of Fortune 500 businesses. As a cloud-first SaaS, the company provides customers with an all-in-one content solution within a highly secure infrastructure, where organizations can work on any content, from projects and contracts to Federal Risk and Authorization Management Program (FedRAMP)-related content.

Box has two types of operations: cloud-managed Kubernetes clusters in hybrid, multi-cloud, and public cloud environments, and self-managed Kubernetes clusters in co-located data centers. The company runs multiple clusters of 1,000 nodes and larger. As one of the early adopters of Kubernetes, Box began using Kubernetes well before Google Kubernetes Engine (GKE) or Amazon’s Elastic Kubernetes Service (EKS) existed, and has been on the leading edge of innovation for Kubernetes in areas such as security, observability, and automation.

In collaboration with Tigera, Box shares how Calico helped the company achieve zero-trust security and policy automation at scale in a multi-cluster environment.
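The case study itself stays at a high level, but for context, zero-trust segmentation with Calico typically starts from a default-deny posture that per-application policies then selectively open up. A minimal sketch, adapted from the commonly documented pattern (illustrative only; real rollouts scope the selector carefully so cluster-system traffic is not cut off):

calicoctl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace not in {'kube-system', 'calico-system'}
  types:
  - Ingress
  - Egress
EOF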

ICYMI: Watch this recording from the 2022 CalicoCon Cloud Native Security Summit, where Tapas Kumar Mohapatra of Box shares how Box moved into automated dependency mapping and policy generation with API Continue reading

Migration Coordinator: Approaches and Modes

Migration Coordinator is a fully supported free tool that is built into NSX Data Center to help migrate from NSX for vSphere to NSX (aka NSX-T). Migration Coordinator was first introduced in NSX-T 2.4 with a couple of modes to enable migrations. Through customer conversations over the years, we’ve worked to expand what can be done with Migration Coordinator. Today, Migration Coordinator supports over 10 different ways to migrate from NSX for vSphere to NSX.

In this blog series, we will look at the available approaches and the prep work involved with each of them. The series should help you choose, from multiple angles, the right mode for migrating from NSX for vSphere to NSX.

  • 3 Standard Migration Modes
  • 3 Advanced Migration Modes
  • 3 More Modes Available Under User Defined Topology
  • 2 More Modes Dedicated to Cross-VC to Federation Migration, Available on the NSX Global Manager UI

Some of these modes take a cookie-cutter approach and require very little prep work, while others allow you to customize the migration to suit your needs. In this blog, we will take a high-level look at these modes.

Migration Coordinator Approaches

At a high Continue reading