Cato boasts 5Gbps encrypted tunnel throughput

Cato Networks said today that it has successfully created an encrypted tunnel capable of 5Gbps of throughput, offering reassurance to network administrators worried about traffic overhead created by Secure Access Service Edge (SASE) platforms.

The company's announcement said that increasing uptake of SASE, particularly by large enterprises, has created a need for faster encrypted connections that still support the full array of security technologies present in SASE. The speed boost, Cato said, was made possible by improved performance in the company's Single Pass Processing Engine, which is the umbrella of services that runs in its various points of presence.

Using aliases on Linux

Using aliases on Linux systems can save you a lot of trouble and help you work faster and smarter. This post examines the ways and reasons that many Linux users take advantage of aliases, shows how to set them up and use them, and provides a number of examples of how they can help you get your tasks done with less trouble.

What are aliases? Aliases are simply one-line commands that are assigned names and generally stored in a startup file (e.g., .bashrc) that is run when you log in using a tool like PuTTY or open a terminal window on your desktop. The syntax is easy and follows this pattern (note that there must be no spaces around the equals sign):

$ alias NAME='COMMAND'

As a simple example, typing a command like the one shown below enables you to clear your screen simply by typing "c".
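
The example the excerpt refers to was cut off by the feed; it is almost certainly the classic clear-screen alias, reconstructed here as a minimal sketch:

$ alias c='clear'
$ c
# The screen clears. To make the alias permanent, append it to ~/.bashrc and reload it:
$ echo "alias c='clear'" >> ~/.bashrc
$ source ~/.bashrc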

Network spending priorities for second-half 2023

OK, it's not been a great first half for many companies, from end users to vendors and providers. The good news is that users sort of believe that many of the economic and political issues that have contributed to the problem have been at least held at bay.

There's still uncertainty in the tech world, but it's a bit less than before. Most of the companies I've talked with this year have stayed guardedly optimistic that things were going to improve. Over the last month, of the nearly 200 companies I've emailed with, only 21 were "pessimistic" about the outlook for their tech spending in the second half.

Lack of pessimism doesn't translate to optimism, though, and optimism is a bit non-specific for network and IT planners to build on. What are the user priorities for tech for the rest of the year? Do they think their budgets will shift, and if so, from what to what? Are they looking to make major changes in their networks, change their vendors, be more or less open? I thought I knew some of the answers to these questions, but for some I was wrong.

Protecting GraphQL APIs from malicious queries

Starting today, Cloudflare’s API Gateway can protect GraphQL APIs against malicious requests that may cause a denial of service to the origin. In particular, API Gateway will now protect against two of the most common GraphQL abuse vectors: deeply nested queries and queries that request more information than they should.

Typical RESTful HTTP APIs contain tens or hundreds of endpoints. GraphQL APIs differ by typically providing only a single endpoint for clients to communicate with and offering highly flexible queries that can return variable amounts of data. While GraphQL's power and usefulness rest on the flexibility to query an API for only the specific data you need, that same flexibility adds an increased risk of abuse. Abusive requests to a single GraphQL API can place disproportionate load on the origin, abuse the N+1 problem, or exploit a recursive relationship between data dimensions. In order to add GraphQL security features to API Gateway, we needed visibility inside the requests so that we could apply different security settings based on request parameters. To achieve that visibility, we built our own GraphQL query parser. Read on to learn how we built the parser and the security features it enabled.
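
To illustrate the first abuse vector, a query only a few lines long can recurse through related types and fan out into an enormous amount of work at the origin. The endpoint and field names below are hypothetical, but the shape is typical of the deeply nested queries that depth limits are designed to block:

# Hypothetical GraphQL endpoint and schema, shown only to illustrate nesting depth.
# Each level of posts -> comments -> author -> posts multiplies the rows the origin
# must fetch, which is exactly the recursive/N+1 pattern described above.
curl -s https://api.example.com/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ posts { comments { author { posts { comments { author { name } } } } } } }"}'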

Network Break 434: Cisco Licensing To Get Simpler, Bluecat Buys Again, Hashicorp Money Problems, and Itential Pops A Release

Take a Network Break! Drew is on holiday (again) and Ethan shows up; who knew he was still around? We start with follow-up (FU), then get into Cisco Live, which was underwhelming: a new focus on simplicity and an admission that customers hate Cisco's licensing. Bluecat spends again, HashiCorp gets a financial slapping, Itential ships a new version, and we finish with quantum space networking.

The post Network Break 434: Cisco Licensing To Get Simpler, Bluecat Buys Again, Hashicorp Money Problems, and Itential Pops A Release appeared first on Packet Pushers.

Raspberry Pi 4 real-time network analytics

CanaKit Raspberry Pi 4 EXTREME Kit - Aluminum
This article describes how to build an inexpensive Raspberry Pi 4-based server for real-time flow analytics of industry-standard sFlow streaming telemetry. Support for sFlow is widely implemented in datacenter equipment from vendors including A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE.

In this example, we will use an 8GB Raspberry Pi 4 running Raspberry Pi OS Lite (64-bit). The easiest way to format a memory card and install the operating system is to use the Raspberry Pi Imager. Click on the gear icon to set a user and password and to enable ssh access. These initial settings allow the Raspberry Pi to be accessed over the network without having to attach a screen, keyboard, and mouse.

Next, follow the instructions for installing Docker Engine (Raspberry Pi OS Lite is based on Debian 11).
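
If you prefer a shortcut to the full step-by-step instructions, Docker's convenience script works on Raspberry Pi OS / Debian. A quick sketch (the script URL is Docker's official one, but review the script before piping it into a shell):

# Install Docker Engine using Docker's convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: allow the current user to run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker $USER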

The diagram shows how the sFlow-RT real-time analytics engine receives a continuous telemetry stream from industry-standard sFlow instrumentation built into network, server, and application infrastructure, delivers analytics through APIs, and can easily be integrated with a wide variety of on-site and cloud orchestration, DevOps, and Software Defined Networking tools.
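
With Docker installed, running the analytics engine amounts to starting its published container image. A minimal sketch follows; the sflow/sflow-rt image name, the UDP 6343 sFlow port, and the 8008 web/REST port match sFlow-RT's published defaults, but verify them against the current sFlow-RT documentation:

# Run sFlow-RT: listen for sFlow datagrams on UDP 6343 and serve the web UI
# and REST API on TCP 8008. --restart=always keeps it running across reboots.
sudo docker run -d --restart=always \
  -p 6343:6343/udp -p 8008:8008 \
  sflow/sflow-rt

# Quick check: query the REST API for the running version.
curl http://localhost:8008/version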

Setting up your own Cloud-GPU Server, Jupyter and Anaconda — Easy and complete walkthrough

< MEDIUM: https://medium.com/@raaki-88/setting-up-your-own-cloud-gpu-server-jupyter-and-anaconda-easy-and-complete-walkthrough-2b3db94b6bf6 >

Note: One important tip for lab environments is to set an auto-shutdown timer; the original post includes a screenshot of one such setting in GCP.
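
Since the screenshot does not carry over here, a rough command-line stand-in (an assumption on my part, not the GCP console setting itself) is to schedule a halt from inside the instance:

# Power the VM off after 8 hours (480 minutes) so a forgotten lab instance
# stops accruing GPU charges; cancel with "sudo shutdown -c" if needed.
sudo shutdown -h +480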

I have been working with a few hosted environments, including AWS SageMaker notebook instances, Google Colab, and Gradient (Paperspace). All of them are really good, but they need monthly subscriptions, so I decided to have my own GPU server instance that can be personalized and where I get charged on a granular basis.

Setting this up is not easy. First, you need to find a cloud compute instance that has GPU support enabled; AWS and GCP are straightforward here, as the selection is really easy.

Let's break this into three stages:

  1. Selecting a GPU server-based instance for ML practice.
  2. Installing the Jupyter server (pain point: making it accessible from the internet).
  3. Installing a package manager like Anaconda (pain point: getting the conda kernel to show up in JupyterLab).

Stage-1

For a change, I will be using GCP in this case instead of my usual choice of AWS.

Choose a GPU alongside the instance

Generic Guidelines — https://cloud.google.com/deep-learning-vm/docs/cloud-marketplace
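
For reference, Stage 1 can also be done from the command line. The sketch below is a hypothetical gcloud invocation, not taken from the post: the instance name, zone, machine type, GPU model, and image family are placeholders, so check the guidelines linked above for values that are current in your project:

# Create an n1-standard-4 with one NVIDIA T4, using a GCP Deep Learning VM image
# that installs the NVIDIA driver on first boot. GPU instances require
# --maintenance-policy=TERMINATE.
gcloud compute instances create ml-lab-1 \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=common-cu113 \
  --image-project=deeplearning-platform-release \
  --metadata=install-nvidia-driver=True \
  --boot-disk-size=100GB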

rakesh@instance-1:~$ sudo apt install jupyter-notebook

# Step 1: generate the Jupyter config file by typing this line in the console
jupyter notebook --generate-config
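
The excerpt is cut off at this point. What follows is my assumed continuation rather than the post's own steps: the usual way to tackle pain point 2 is to edit the generated ~/.jupyter/jupyter_notebook_config.py so the server listens on all interfaces (and to open the port in the GCP firewall and set a password before exposing it), and pain point 3 is normally solved by registering the conda environment's kernel with Jupyter:

# Pain point 2 (assumed approach): make the notebook server reachable from outside.
echo "c.NotebookApp.ip = '0.0.0.0'" >> ~/.jupyter/jupyter_notebook_config.py
echo "c.NotebookApp.open_browser = False" >> ~/.jupyter/jupyter_notebook_config.py
echo "c.NotebookApp.port = 8888" >> ~/.jupyter/jupyter_notebook_config.py

# Pain point 3 (assumed approach): register the conda environment's kernel so it
# appears in Jupyter / JupyterLab.
conda install -y ipykernel
python -m ipykernel install --user --name ml-env --display-name "Python (ml-env)"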

Worth Reading: Building Stuff with Large Language Models Is Hard

Large language models (LLMs) – ChatGPT and friends – are one of those technologies with a crazy learning curve. They look simple and friendly (resulting in plenty of useless demoware) but become devilishly hard to work with once you try to squeeze consistent value out of them.

Most people don't want to talk about the hard stuff (sexy demoware results in more page views), but there's an occasional exception, for example All the Hard Stuff Nobody Talks About when Building Products with LLMs, which describes all the gotchas Honeycomb engineers discovered when creating an LLM-based user interface.
