If Intel hopes to survive the next few years as a freestanding company and return to its role as an innovator, it cannot afford to waste time, and it cannot afford to make any more mistakes. …
Intel Pushes Out “Clearwater Forest” Xeon 7, Sidelines “Falcon Shores” Accelerator was written by Timothy Prickett Morgan at The Next Platform.
Every recruiter and hiring manager wants people with five years of experience, but you cannot get experience without being hired into a position. How do you break out of this conundrum? Daniel Dib joins Tom and Russ to talk about how folks just coming into IT, or even those with lots of experience who are trying to shift their focus, can gain experience.
During the last three weeks, we were busy squashing bugs, from device configuration fixes to other assorted defects. Some were recent; others were ancient pests uncovered by better integration tests. The end result: netlab release 1.9.4.
netlab release 1.9.4 passed hundreds of integration tests and should be a better choice than the previous 1.9 releases. To upgrade, execute pip3 install --upgrade networklab.
We still missed a few quirks :( Release 1.9.4-post1 addresses those (and, unfortunately, I’m pretty sure there will be more).
sudo tcpdump -i any -s 0 -w sflow.pcap udp port 6343
Run the command above on the system you are using to collect sFlow data (if you aren't yet collecting sFlow, see Agents for suggested configuration settings). Type Control-C to end the capture after 5 to 10 minutes. Copy the resulting sflow.pcap file to your laptop.
docker run --rm -it -v $PWD/sflow.pcap:/sflow.pcap sflow/sflowtool \
  -r /sflow.pcap -P 1
Either compile the latest version of sflowtool or, as shown above, use Docker to run the pre-built sflow/sflowtool image. The -P (Playback) option replays the trace in real time and displays the contents of each sFlow message. Running sflowtool using Docker provides additional examples, including converting the sFlow messages into JSON format for processing by a Python script.
docker run --rm -it -v $PWD/sflow.pcap:/sflow.pcap sflow/sflowtool \
  -r /sflow.pcap -f 192.168.4.198/6343 -P 1
The -f (forwarding) option takes an IP address and UDP port number as arguments, in this Continue reading
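As an aside, since the post mentions feeding sflowtool output to a Python script, here is a minimal sketch (my illustration, not from the article) that tallies the sample types in sflowtool's default key-value text output; the decoded.txt file name is an assumption, produced by redirecting sflowtool's output.

from collections import Counter

# Count sFlow sample types in sflowtool's default text output.
# Produce the input first, e.g.:
#   docker run --rm -v $PWD/sflow.pcap:/sflow.pcap sflow/sflowtool -r /sflow.pcap > decoded.txt
counts = Counter()
with open("decoded.txt") as f:
    for line in f:
        parts = line.split(None, 1)  # sflowtool prints one "key value" pair per line
        if len(parts) == 2 and parts[0] == "sampleType":
            counts[parts[1].strip()] += 1  # e.g. FLOWSAMPLE, COUNTERSSAMPLE

for sample_type, n in counts.most_common():
    print(sample_type, n)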

Welcome back to another post on local LLMs. In this post, we’ll look at setting up a fully local coding assistant inside VSCode using the Continue extension and Ollama. Let’s get started.
We’ve covered Ollama and Local LLMs in previous blog posts (linked below), but here’s a quick summary.
Ollama is a tool that lets you run large language models (LLMs) directly on your local machine. Local LLMs are language models that run on your computer instead of relying on cloud-based services like ChatGPT. This means you can use them without sending your data to external servers, which is great for privacy. They also work offline, so you’re not dependent on an Internet connection.
That said, it’s important to note that local models, especially on smaller setups, won’t match the speed or performance of cloud-based models like ChatGPT. These cloud models are powered by massive infrastructure, so they’re faster and often more accurate. However, the trade-off is privacy and offline access, which local LLMs provide.

When I first started using local LLMs with Ollama, I quickly realised it relies on a command-line interface to interact with the models. It also comes with an API, but let’s be honest, most of us, myself included, prefer a GUI, much like the one ChatGPT provides. There are plenty of options available, but I decided to try Open WebUI. In this blog post, we’ll explore what Open WebUI is and how simple it is to set up a web-based interface for your local LLMs.
Ollama is a tool for running local LLMs, offering privacy and control over your data. Out of the box, it lets you interact with models via the terminal or through its API. Installing Ollama is straightforward, and if you’d like a detailed guide, check out my other blog post, which is linked below.
This blog post assumes you already have Ollama set up and running. For reference, I’m running this on my MacBook (M3 Pro with 18GB of RAM).
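Since Ollama exposes an API out of the box (as mentioned above), here is a quick sketch of querying it from Python. This is my illustration, not from the post: it assumes Ollama is listening on its default port 11434 and that the llama3 model has already been pulled.

import requests  # third-party library: pip install requests

# Send a one-off prompt to the local Ollama API and print the reply.
# The model name "llama3" is an assumption; use any model you have pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Explain local LLMs in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])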
Open Continue reading

I’m going to start by saying I’m totally new to LLMs and running them locally, so I’m not going to pretend that I know what I’m doing. I’ve been learning about Ollama for some time now and thought I would share it with my readers, as always. This is such an interesting topic, and I’m ready to go down the rabbit hole.
So, let's get started.
LLMs, or Large Language Models, are a type of artificial intelligence designed to process and generate natural language. They are trained on vast amounts of text data, enabling them to understand context, identify patterns, and produce human-like responses. These models can perform various tasks such as answering questions, translating languages, summarising text, generating creative content, and assisting with coding. LLMs have gained significant attention in recent years due to their impressive performance and versatility.
While the hyperscalers and cloud builders provide the best indicator of what it takes to create state-of-the-art GenAI models and the infrastructure to train them, as well as to put them into production for practical use through an API interface, IBM is perhaps one of the best leading indicators of how GenAI will slowly be adopted by the enterprises of the world within their own organizations. …
IBM Takes The Patient Path To Future GenAI Profits was written by Timothy Prickett Morgan at The Next Platform.
COMMISSIONED: The new year has arrived, bringing with it the usual resolutions: get fitter, read more books, maybe finally tackle that ever-growing email backlog. …
New Year, New Data Strategy: How AI And Scalable Storage Shape 2025’s Resolutions was written by Timothy Prickett Morgan at The Next Platform.
I got this question from Paul:
Have you ever seen a BGP peer in the “Connect” state? In 20 years, I have never been able to see or reproduce this state, nor any mention in a debug/log. I am starting to believe that all the documentation is BS, and this does not exist.
The BGP Finite State Machine (FSM) (at least the one defined in RFC 4271 and amended in RFC 9687) is “a bit” hard to grasp but the basics haven’t changed from the ancient days of RFC 1771:
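To make those basics concrete, here is a small sketch (my illustration, not from the post) of the core RFC 4271 states and a handful of transitions. It shows why Connect is so hard to catch: the state lasts only as long as the TCP three-way handshake is in flight, after which the session immediately moves to OpenSent (success) or Active (failure).

from enum import Enum, auto

class BGPState(Enum):
    IDLE = auto()
    CONNECT = auto()       # TCP connection attempt in progress
    ACTIVE = auto()        # waiting to retry the TCP connection
    OPEN_SENT = auto()
    OPEN_CONFIRM = auto()
    ESTABLISHED = auto()

# A few of the RFC 4271 transitions, keyed by (state, event).
TRANSITIONS = {
    (BGPState.IDLE, "ManualStart"): BGPState.CONNECT,
    (BGPState.CONNECT, "TcpConnectionConfirmed"): BGPState.OPEN_SENT,
    (BGPState.CONNECT, "TcpConnectionFails"): BGPState.ACTIVE,
    (BGPState.ACTIVE, "ConnectRetryTimer_Expires"): BGPState.CONNECT,
    (BGPState.OPEN_SENT, "BGPOpen"): BGPState.OPEN_CONFIRM,
    (BGPState.OPEN_CONFIRM, "KeepAliveMsg"): BGPState.ESTABLISHED,
}

# Connect is exited as soon as the TCP handshake succeeds or fails,
# which is why you practically never see a session sitting in it.
print(TRANSITIONS[(BGPState.CONNECT, "TcpConnectionConfirmed")])  # BGPState.OPEN_SENT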
It is often said that companies – particularly large companies with enormous IT budgets – do not buy products; they buy roadmaps. …
The Road Ahead For Datacenter Compute Engines: The CPUs was written by Timothy Prickett Morgan at The Next Platform.
If you think it might be difficult to sell companies general-purpose servers when they are frenzied about GenAI and trying to figure out how to get GPU-accelerated systems, you ought to try to convince the same companies to upgrade to Windows Server 2025, which launched last November. …
Azure Can’t Make Up For On Premise Profit Decline At Microsoft was written by Timothy Prickett Morgan at The Next Platform.
This Colab notebook should summarise it all: https://colab.research.google.com/drive/1WV6J8IqEfYbn__H2g9-hOoHqfx-YD5iA?usp=sharing
Dalton Ortega, Cisco Modeling Labs Product Manager, sent me the following email as a response to my Configuring IP Addresses Won't Make You an Expert blog post:
First, your statement on Autonetkit is indeed correct. We had removed that from the product due to lack of popularity. That being said, in our roadmap we are looking at methods to reintroduce on-the-fly configuration as well as enhancing our sample labs library to make getting started with CML easier.
Secondly, CML can be run in full IaC mode because of the API-first build. In fact, many of our customers are using CML as an automated test/validation bed for their CI/CD pipelines. Tools like Ansible and Terraform are available to facilitate this inside CML too. For more details, read:
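To give a flavour of that API-first workflow, here is a rough sketch (my addition, not part of Dalton's email) using the virl2_client Python library to build, start, and tear down a lab; the controller URL, credentials, and node definition are placeholders.

from virl2_client import ClientLibrary  # pip install virl2-client

# Placeholder controller URL and credentials.
client = ClientLibrary("https://cml-controller.example", "admin", "password", ssl_verify=False)

lab = client.create_lab("pipeline-smoke-test")
r1 = lab.create_node("r1", "iosv", x=0, y=0)
r2 = lab.create_node("r2", "iosv", x=200, y=0)
lab.create_link(r1.create_interface(), r2.create_interface())

lab.start(wait=True)   # boot the lab and block until the nodes are up
print(lab.state())

# Clean up after the pipeline run.
lab.stop(wait=True)
lab.wipe()
lab.remove()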