Author Archives: Rakesh M
Disclaimer: All writings and opinions are my own and are interpreted solely from my understanding. Please contact the concerned support teams for a professional opinion, as technology and features change rapidly.
This series of blog posts will focus on one feature at a time to simplify understanding.
At this point, ChatGPT—or any Large Language Model (LLM)—needs no introduction. I’ve been exploring GPTs with relative success, and I’ve found that API interaction makes them even more effective.
But how can we turn this into a workflow, even a simple one? What are our use cases and advantages? For simplicity, we’ll use the OpenAI API rather than open-source, self-hosted LLMs like Meta’s Llama.
Let’s consider an example: searching the web for all OSPF-related RFCs. Technically we’ll still use a popular search engine, but to do it programmatically I’ll use Serper. You can find more details at https://serper.dev. Serper is a search API that returns structured data from search queries, making it easy to integrate search functionality into applications and workflows.
Let’s build the first building block and try to fetch results using Serper. When you sign Continue reading
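As a sketch of that first building block, here is a small stdlib-only Python helper that builds and sends a Serper search query. The query string is an illustrative example, and the API key is a placeholder you obtain after signing up:

```python
import json
import urllib.request

SERPER_URL = "https://google.serper.dev/search"

def build_serper_request(query, api_key):
    """Build an authenticated Serper search request (not yet sent)."""
    payload = json.dumps({"q": query}).encode()
    return urllib.request.Request(
        SERPER_URL,
        data=payload,  # presence of data makes this a POST
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
    )

def search(query, api_key):
    """POST the query to Serper and return the parsed 'organic' results."""
    with urllib.request.urlopen(build_serper_request(query, api_key)) as resp:
        return json.load(resp).get("organic", [])

# Example (requires a real key):
# results = search("OSPF RFC", "YOUR_API_KEY")
```

Separating request construction from execution keeps the building block easy to test and easy to reuse in a larger workflow later.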
I stumbled across this tool. I have always been a fan of vrnetlab (https://github.com/vrnetlab/vrnetlab), which runs network operating systems in Docker containers, but I could never get it to bridge properly with the local network, meaning Internet reachability was something I never got working.
Containerlab is a tool that builds virtualized network labs out of containers. It offers a flexible and efficient way to simulate complex network topologies, test new features, and perform various network experiments.
One striking feature I really like about containerlab is that the topology is represented in plain YAML, which most network engineers are familiar with these days, so it is easy to edit.
Other advantages
Host mappings after spinning up the lab
Slide explaining the capture process – Courtesy Petr Ankudinov (https://arista-netdevops-community.github.io/building-containerlab-with-ceos/#1)
https://containerlab.dev/quickstart/ – shows how to quick-start and install containerlab.
https://github.com/topics/clab-topo – Topologies contributed by community
https://github.com/arista-netdevops-community/building-containerlab-with-ceos/tree/main?tab=readme-ov-file – Amazing Repo
https://arista-netdevops-community.github.io/building-containerlab-with-ceos/ -> This presentation includes an EVPN topology and also explains how to spin up a quick EVPN lab with cEOS Continue reading
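To give a feel for how readable the YAML representation is, here is a minimal two-node cEOS topology sketch. The lab name, node names, and image tag are illustrative assumptions; the cEOS image has to be imported into Docker beforehand:

```yaml
# two cEOS nodes connected back to back on eth1
name: ceos-mini
topology:
  nodes:
    ceos1:
      kind: ceos
      image: ceos:4.30.1M   # placeholder tag; use whatever you imported
    ceos2:
      kind: ceos
      image: ceos:4.30.1M
  links:
    - endpoints: ["ceos1:eth1", "ceos2:eth1"]
```

A `containerlab deploy -t <file>.yaml` against a file like this is all it takes to spin the lab up.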
And no! This can’t replace the accuracy of statically templated configurations; it helps us better understand and develop the templates. This was almost rocket science to me when I first learned about it.
Most modern-day deployments have some sort of variable files and template files (YAML and Jinja2). These can be intimidating; when I first looked at them years ago, I found them confusing. Today, with an LLM, you don’t really have to worry about how to generate them. The model can come up with templates on the fly for popular networking gear and, more than that, is happy to take in your data and spit out whatever configuration is needed.
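To show the variables-plus-template split in miniature, here is a stdlib-only stand-in for Jinja2’s `{{ var }}` substitution (real deployments would use the Jinja2 library itself; the interface name and addresses below are made-up example data):

```python
import re

def render(template, variables):
    """Minimal stand-in for Jinja2-style {{ var }} substitution."""
    def sub(match):
        return str(variables[match.group(1)])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

# A tiny config template with placeholders...
template = """interface {{ interface }}
 ip address {{ ip }} {{ mask }}
 ip ospf {{ process_id }} area {{ area }}"""

# ...and the variable file contents that fill them in.
variables = {
    "interface": "GigabitEthernet0/1",
    "ip": "10.0.0.1",
    "mask": "255.255.255.0",
    "process_id": 1,
    "area": 0,
}
```

The point of the split is that the template stays fixed while only the variable data changes per device, which is exactly the structure an LLM can generate for you on the fly.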
Let’s say I just appreciated the way configuration files are generated today and wanted to quickly see whether an LLM could generate the config, do the deployment for me, and then help me with some pre-checks, all without writing code.
Let’s not go too far Continue reading
In a world where even your toaster might soon have a PhD in quantum physics, LLMs are taking over faster than a cat video going viral! LLMs are becoming increasingly powerful and are being integrated into various business and personal use cases. Networking is no different. Due to reasons like privacy, connectivity, and cost, deploying smaller form factor models or larger ones (if you can afford in-house compute) is becoming more feasible for faster inference and lower cost.
The availability and cost of model inference are improving rapidly. While OpenAI’s ChatGPT-4 is well-known, Meta and other firms are also developing LLMs that can be deployed in-house and fine-tuned for various scenarios.
Let’s explore how to deploy an open-source model in the context of coding. For beginners, ease of deployment is crucial; nothing is more off-putting than a complicated setup.
Reference : Ollama.com (https://github.com/ollama/ollama?tab=readme-ov-file) simplifies fetching a model and starting work immediately.
Visit ollama.com to understand what a codellama model looks like and what Continue reading
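Once a model like codellama is pulled with `ollama pull codellama`, the local server exposes a simple REST API. Here is a hedged stdlib-only sketch against Ollama’s default local endpoint; the prompt is an example, and the call will only succeed if an Ollama server is actually running:

```python
import json
import urllib.request

# Ollama's default local API endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(prompt, model="codellama"):
    """Build a generate request for a locally running Ollama server (not yet sent)."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def generate(prompt, model="codellama"):
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_ollama_request(prompt, model)) as resp:
        return json.load(resp)["response"]

# Example (requires a running Ollama server):
# print(generate("Write a Python function that parses 'show ip route' output"))
```

Setting `"stream": False` asks for one complete JSON response instead of a token-by-token stream, which keeps a first experiment simple.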
My name is Stephen King, and you are reading my novel. Absolutely not! He is the most incredible author of all time, and you are reading my blog! One of my many, many, many interests is traffic mirroring and monitoring in public clouds, especially inter-VCN/VPC traffic: traffic from an instance is mirrored and sent off for analysis, whether regulatory or for troubleshooting. I quickly set something up in my OCI tenancy; the results and learnings are fascinating.
TLDR: Traffic Mirroring and Monitoring in Oracle OCI using VTAPs
Topology and a refresher
IGW helps us connect to the Internet, NLB helps us send traffic to VTAP-HOST mirrored from VTAP, and a DRG helps us communicate with other VCNs.
What is the end goal? Mirror and send all the traffic from Host-1 with IP 192.168.1.6 to VTAP-Host for further analysis.
The diagram below was generated by the OCI Network Visualiser, which is very cool.
A few things Continue reading
Topics Discussed: DRCCs Intro and OCI Cloud Regional Subnet
Spoiler alert: This involves an on-premises cloud for customers offered by Oracle. All offerings for cloud@customer by Oracle can be viewed at https://www.oracle.com/cloud/cloud-at-customer/
Dedicated Region Cloud@Customer – DRCC
Ref: https://www.oracle.com/cloud/cloud-at-customer/dedicated-region/#rc30category-documentation
Before understanding DRCC, let’s consider the scope of improvement on customer premises with their own data center and what DRCC tries to solve. Fair? Let’s imagine I have my own data center. Great! My infrastructure supports some of my workloads, but how do I get the latest technology stack, support, and services that the public cloud offers?
For example, how can someone leverage all these services without having to invest in development cycles yet meet latency/integration/data security and multiple other requirements?
Ref: https://www.oracle.com/cloud/
| Scope of improvement on customer premises | Oracle Dedicated Region Cloud at Customer solutions |
| --- | --- |
| Data security and compliance concerns | Provides a fully managed cloud region in the customer’s datacenter, ensuring data sovereignty and compliance with local regulations. |
| Latency Continue reading | |
This post is a bit different from my usual networking content. I won’t be discussing power grid inefficiencies or pollution; it’s more of a personal reflection after learning some principles from permaculture, an ethical design science focused on life. I’m concerned about our society’s reliance on non-producing, ever-consuming habits and our mindset of depending on external producers. Even simple things that could be eliminated, or produced with little effort, have become needlessly complex, creating unnecessary consumption, erasing the human spark to make things or to say no to mindless consumerism, and filling vast dump yards. We used to be producers but have become ultimate consumers of life’s simple things.
TLDR: It’s not about affordability or money; I think everyone here can afford spinach or can happily pay a service provider to charge their devices. The question is why we depend on external grids and superstores for simple things in life that were once free in our society, over-consume, isolate ourselves, and end up creating our own silo kingdoms.
Why did it make sense to me?
At a very high level, we all know that the climatic changes and food production, in general, have affected the topsoil so much that Continue reading
It has been quite some time since my last blog post. The past few months have been busy, leaving little time to write. I am happy to share that I have started working as a site-reliability developer at Oracle. While it has only been a short while, I am enjoying the work.
Reflecting on the past year, I am happy to say it has been productive overall. I have had the opportunity to collaborate with various networking and software teams, which has taught me a lot about high-scale traffic patterns and migrations. Working in an operations role has been beneficial since it requires constant fire-fighting and documentation for improvements. This has made me more aware of traffic patterns and ways to improve reliability.
What can be improved?
However, I could benefit from more study of software architecture and distributed-system design parameters from a business perspective. I want to write more this year; the first half was better, but I lacked in the second half, where most of my writing was private.
From a personal standpoint, 2024 absolutely needs more travel, and some improvements in garden automation for summer 2024.
Other updates
Finally, I have been working on some JNCIA-Devops Continue reading
< MEDIUM :https://towardsaws.com/aws-advanced-networking-speciality-1-3-considerations-402e0d057dfb >
List of blogs on AWS Advanced Networking Speciality Exam — https://medium.com/@raaki-88/list/aws-advanced-network-speciality-24009c3d8474
The AWS Shared Responsibility Model defines how data protection applies to ELBs. It boils down to AWS protecting the global infrastructure, while the service consumer is responsible for preserving and controlling the hosted content.
A few important suggestions for access and security:
Encryption at rest: Server-side encryption for S3 (SSE-S3) is used for ELB access logs. ELB automatically encrypts each log file before storing it in the S3 bucket and decrypts the access log files when you access them. Each log file is encrypted with a unique key, which is encrypted with a master key that is regularly rotated.
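Enabling those access logs is a one-call change. Below is a hedged boto3 sketch, assuming an existing S3 bucket with the right bucket policy; the bucket name, prefix, and ARN are placeholders. Once logging is on, ELB applies SSE-S3 to each log file automatically as described above:

```python
def access_log_attributes(bucket, prefix="elb-logs"):
    """Attribute list for modify_load_balancer_attributes that turns on
    access logging to S3 (ELB then encrypts the log files with SSE-S3)."""
    return [
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": bucket},
        {"Key": "access_logs.s3.prefix", "Value": prefix},
    ]

def enable_access_logs(lb_arn, bucket, prefix="elb-logs"):
    """Apply the attributes to a load balancer (requires AWS credentials)."""
    import boto3  # third-party AWS SDK, imported lazily
    elbv2 = boto3.client("elbv2")
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn=lb_arn,
        Attributes=access_log_attributes(bucket, prefix),
    )

# Example (placeholder ARN and bucket):
# enable_access_logs("arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/abc",
#                    "my-log-bucket")
```

Keeping the attribute list in its own function makes the intended change easy to review before anything touches a real load balancer.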
Encryption in Transit:
HTTPS/TLS traffic can be terminated at the ELB. ELB can encrypt and decrypt the traffic instead of additional EC2 instances or current EC2 backend instances doing this TLS termination. Using ACM (AWS Certificate Continue reading
List of blogs on AWS Advanced Networking Speciality Exam — https://medium.com/@raaki-88/list/aws-advanced-network-speciality-24009c3d8474
Before understanding LoadBalancer Service, it’s worth understanding a few things about NodePort service.
A NodePort service opens a port on each node, and external agents can connect directly to a node on that NodePort. If not specified, a random port from the NodePort range is chosen. Load balancing here is managed by the front-end service, which listens on a port and load-balances across the Pods that serve the requests.
The LoadBalancer service extends the NodePort functionality by adding a load balancer in front of all the nodes. Kubernetes requests an ELB and registers all the nodes; it’s worth noting that the load balancer will not detect where the pods are running. Worker nodes are added as backend instances in the load balancer. The Classic Load Balancer is the default LB the service chooses and can be changed to an NLB (Network Load Balancer). The CLB routes requests to the front end, then to internal service ports Continue reading
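A minimal Service manifest of type LoadBalancer might look like the sketch below. The name, selector, and ports are illustrative, and the annotation (used by the in-tree AWS provider to request an NLB instead of the default CLB) is an assumption that depends on your cluster’s cloud-provider setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # ask the AWS cloud provider for an NLB instead of the default CLB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: web          # pods labeled app=web receive the traffic
  ports:
    - port: 80        # port the load balancer exposes
      targetPort: 8080  # container port behind the service
```

Applying this makes Kubernetes provision the ELB and register the worker nodes, exactly as described above.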
<MEDIUM : https://towardsaws.com/aws-advanced-networking-speciality-1-3-5484de6c8da >
A target group routes requests to one or more registered targets, which can be EC2 instances, IP addresses, Kubernetes clusters, Lambda functions, etc. Target groups are specified when you create a listener rule, and you can define various health checks and associate them with each target group.
What is GENEVE, and what is its context with ELB? GENEVE stands for Generic Network Virtualisation Encapsulation.
In the context of the Gateway Load Balancer, a flow can be associated with either a 5-tuple or a 3-tuple.
A 5-tuple flow includes the source IP address, destination IP address, source port, destination port, and protocol number. This is used for TCP, UDP, and SCTP protocols.
A 3-tuple flow includes the source IP address, destination IP address, and protocol number. This is used for ICMP and ICMPv6 protocols.
Gateway Load Balancers and their registered virtual appliances use the GENEVE protocol to exchange application traffic on port 6081.
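The tuple distinction above can be sketched in a few lines of Python. This is an illustrative model of flow stickiness, not AWS’s actual hashing algorithm; the function names and the use of SHA-256 are my own assumptions:

```python
import hashlib

def flow_key(src_ip, dst_ip, proto, src_port=None, dst_port=None):
    """Return a 5-tuple key for TCP/UDP/SCTP, or a 3-tuple key for ICMP/ICMPv6."""
    if proto in (6, 17, 132):  # TCP, UDP, SCTP -> 5-tuple
        return (src_ip, dst_ip, src_port, dst_port, proto)
    return (src_ip, dst_ip, proto)  # ICMP (1) / ICMPv6 (58) -> 3-tuple

def appliance_for(flow, appliances):
    """Hash the flow key so the same flow always lands on the same appliance."""
    digest = hashlib.sha256(repr(flow).encode()).hexdigest()
    return appliances[int(digest, 16) % len(appliances)]
```

The key property to notice is that every packet of a given flow produces the same key, so it is consistently steered to one appliance for the life of the flow.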
References :
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
https://datatracker.ietf.org/doc/html/rfc8926 Continue reading
< MEDIUM : https://raaki-88.medium.com/aws-advanced-networking-speciality-1-3-deedc0217ea6 >
Advanced Network Speciality Exam — Blogs
https://medium.com/@raaki-88/list/aws-advanced-network-speciality-24009c3d8474
Global Accelerator — a service that provides static IP addresses for your accelerator. These IP addresses are anycast from the AWS edge network, meaning the Global Accelerator steers your application’s traffic to the region nearest the client.
Two types of Global Accelerators — Standard Accelerators and Custom Routing accelerators.
Standard accelerators use the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and user-configured policies, increasing availability and decreasing latency for end users. In a load-balancing context, standard accelerator endpoints can be Network Load Balancers or Application Load Balancers; custom routing accelerators do not support load balancer endpoints as of today.
When using accelerators with load balancers, update DNS records so that application traffic uses the accelerator endpoint, which redirects the traffic to the load balancer endpoints.
When using an Application Load Balancer in ELB, CloudFront, meant to cache objects, can reduce the load on ALBs and improve performance. CloudFront can also protect the ALB and internal services from DDoS attacks, along with AWS WAF. But for this to succeed, administrators Continue reading
< MEDIUM: https://raaki-88.medium.com/aws-advanced-networking-speciality-1-3-23eb011b74df >
Previous posts :
https://towardsaws.com/aws-advanced-networking-task-statement-1-3-c457fa0ed893
https://raaki-88.medium.com/aws-advanced-networking-speciality-1-3-3ffe2a43e2f3
Internal ELB — an internal load balancer is not exposed to the internet and is deployed in a private subnet. A DNS record is created containing the private IP address of the load balancer; it’s worth noting that the DNS record is still publicly resolvable. The main intention is to distribute traffic to EC2 instances across availability zones, provided all of them have access to the VPCs.
External ELB — also called an internet-facing load balancer, deployed in a public subnet. Like the internal ELB, it can be used to distribute and balance traffic across availability zones.
Example Architecture Reference — https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html?did=pg_card&trk=pg_card
– Rakesh
< MEDIUM: https://raaki-88.medium.com/aws-advanced-networking-speciality-1-3-3ffe2a43e2f3 >
https://medium.com/towards-aws/aws-advanced-networking-task-statement-1-3-c457fa0ed893 — Has intro details for the speciality exam topic
Different types of Load-Balancers
High Availability Aspect: ELB (any of the load balancer types) can distribute traffic across various targets, including EC2 instances, containers, and IP addresses, in either a single AZ or multiple AZs within a region.
Health Checks: An additional health check can be included to ensure that the end host serving the application is healthy. This is typically done through HTTP status codes, with a return value of 200 indicating a healthy host. If discrepancies are found during the health check, the ELB can gracefully drain traffic and divert it to another host in the target group. If auto-scaling is set up, it can also scale out as needed.
Network Design: The choice of load balancer will differ depending on the type of traffic, the application traffic pattern, the load, and the burst rate.
Various Features — High-Availability, High-Throughput, Health-Checks, Sticky-Sessions, Operational-Monitoring and Logging, Delete-Protection.
TLS Termination — You can also have integrated certificate management and SSL decryption which offloads end-host CPU load and acts as a central Continue reading
< MEDIUM: https://towardsaws.com/aws-advanced-networking-task-statement-1-3-c457fa0ed893 >
Ref: https://aws.amazon.com/elasticloadbalancing/
Why Elastic Load Balancer?
CLB — Classic Load Balancer
– AWS Recommends ALB today instead of CLB
– Intended for EC2 instances which are built in EC2-Classic Network
– Layer 4 or Layer 7 Load Balancing
– Provides SSL Offloading and IPv6 support for Classic Networks
ALB — Application Load Balancer
Note: This requires the purchase of a wireless router capable of running a WireGuard package; in this case it’s the Slate Plus GL-A1300. I have no affiliate or ads arrangement with them; I simply liked it for its effectiveness and low cost.
For one reason or another, many of us want a VPN server that does decent encryption but won’t cost a lot of money; in some cases it can even be free. We also don’t want to install a variety of software that messes with internal client routing, or that goes against IT policies, even if it’s a browser-based plugin.
Wireguard: https://www.wireguard.com/ — VPN Software, Software-based encryption, extremely fast and light-weight.
GL-A1300 Slate Plus — a wireless router with support for WireGuard, which is not a feature in many current market routers; it ships with OpenWrt installed.
The GL-A1300 Slate Plus wireless VPN encrypted travel router comes packed with features that will make your life easier while travelling. Here are just a few of the most important:
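For context on what the router is managing under the hood, a WireGuard server config looks roughly like the sketch below. The keys are placeholders (generated with `wg genkey`), and the subnet and port are illustrative defaults, not values from my actual setup:

```ini
# /etc/wireguard/wg0.conf on the router (server side)
# All keys and addresses below are placeholders.
[Interface]
PrivateKey = <router-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# A travelling laptop or phone client
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

The GL.iNet web UI generates and manages configs like this for you, which is a large part of why the router makes the setup so painless.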
A few weeks ago, I set up a bird feeder and used it to capture bird images. The classifier itself was not that accurate but did a decent job. What I have realised is that we don’t always end up with highly accurate on-board edge classification, especially while learning how to implement it.
So after a few weeks there were a lot of images: some of them sure enough had birds, while some were taken in pitch dark, and I am not even sure what made the classifier find a birdie in those camera snapshots.
Now, to curate the images, I have to rely on a re-classifier doing the job for me. Initially I thought I would write a Lambda-based classifier as a learning experiment, but since it is a one-time process every six months or so, I went with a managed service option instead: AWS Rekognition, which is quite amazing.
https://aws.amazon.com/rekognition/
Code Snippet used for Classification
import os
import concurrent.futures
import boto3
def detect_birds(image_path):
# Configure AWS credentials and region
session Continue reading
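The snippet above is cut off, so here is a hedged sketch of what such a classifier can look like end to end. The label name "Bird" and the confidence threshold are my assumptions about the response filtering, and the boto3 call requires AWS credentials:

```python
def has_bird(labels, min_confidence=80.0):
    """Return True if any Rekognition label is 'Bird' above the threshold."""
    return any(
        l["Name"] == "Bird" and l["Confidence"] >= min_confidence
        for l in labels
    )

def detect_birds(image_path, min_confidence=80.0):
    """Send an image to AWS Rekognition and report whether a bird was found.

    boto3 (the AWS SDK) is imported lazily so has_bird stays usable
    without AWS access.
    """
    import boto3
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        resp = client.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=10,
            MinConfidence=min_confidence,
        )
    return has_bird(resp["Labels"], min_confidence)
```

Keeping the label filtering in its own pure function makes it easy to test locally and to fan out over many images with `concurrent.futures`, as the original snippet’s imports suggest.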
Note: One of the important tips for lab environments is to set an auto-shutdown timer, below is one such setting in GCP
I have been working on a few hosted environments, including AWS SageMaker notebook instances, Google Colab, Gradient (Paperspace), etc. All of them are really good but need monthly subscriptions, so I decided to have my own GPU server instance that can be personalized and that charges me on a granular basis.
Installing it is not easy. First, you need to find a cloud computing instance with GPU support enabled; AWS and GCP are straightforward in this respect, as the selection is really easy.
Let’s break this into 3 stages
For a change, I will be using GCP in this case from my usual choice of AWS here.
Choose GPU alongside the Instance
Generic Guidelines — https://cloud.google.com/deep-learning-vm/docs/cloud-marketplace
rakesh@instance-1:~$ sudo apt install jupyter-notebook
# Step 1: generate the file by typing this line in the console
rakesh@instance-1:~$ jupyter notebook Continue reading
While browsing through various ways to get my AI-enabled bird camera set up, I came across the Nvidia Jetson Nano. There are varied versions of this product and availability is limited; I am in Europe and ordered the Nano Developer Kit from Amazon US. Shipping was fast, with a good amount of inbound tax as well.
https://developer.nvidia.com/embedded/jetson-nano-developer-kit — is the one I have purchased while both new and old versions are available.
Unboxing Video :
Initial Impressions and disadvantages:
< MEDIUM: https://aws.plainenglish.io/image-search-with-bing-ml-ai-fast-ai-and-aws-sage-maker-61fae1647c >
If you have heard about the awesome AI course that fast.ai offers free of charge, you should definitely check it out. https://course.fast.ai/ is where you will find all the details.
The course takes a very hands-on approach, and anyone can write and bring up their own ML model within the first two hours. One of the basic requirements of any image classification task is a number of images, often referred to as a dataset, which is split into test/train/validation sets for model training.
For some of the easier examples, you can rely on a search engine to fetch those images for you. Previously fast.ai used Microsoft’s bing.com for image search but later replaced it with DDG (DuckDuckGo). While DDG is really nice, I had throttling issues, and some of the packages were outdated and hard to read.
So, I have re-written the same image-search Python function to use Microsoft’s bing.com search engine.
Prerequisites:
Procedure to generate keys
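Once you have a key, the core of such a function is small. This is my stdlib-only sketch against the Bing Image Search v7 endpoint; the query is an example, the key is a placeholder, and a real dataset builder would add paging and download logic on top:

```python
import json
import urllib.parse
import urllib.request

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"

def build_bing_request(query, api_key, count=50):
    """Build an authenticated Bing Image Search request (not yet sent)."""
    params = urllib.parse.urlencode({"q": query, "count": count})
    return urllib.request.Request(
        f"{BING_ENDPOINT}?{params}",
        headers={"Ocp-Apim-Subscription-Key": api_key},
    )

def search_images(query, api_key, count=50):
    """Execute the search and return the image content URLs."""
    with urllib.request.urlopen(build_bing_request(query, api_key, count)) as resp:
        results = json.load(resp)
    return [item["contentUrl"] for item in results.get("value", [])]

# Example (requires a real key):
# urls = search_images("grizzly bear", "YOUR_API_KEY", count=150)
```

The returned URLs can then be downloaded into per-class folders, which is exactly the dataset shape the fast.ai notebooks expect.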