AI agents promise to autonomously solve the complex tasks put before them, from finding and analyzing the necessary data, choosing tools, and making decisions without human intervention to learning from their mistakes and adapting to change. …
Cisco’s Outshift Incubator Sends Agentic AI Protocol To The Linux Foundation was written by Jeffrey Burt at The Next Platform.
Both the global economy and spending on information technology are so vast that it is hard to really grasp the numbers sometimes. …
How To Cash In On Massive Datacenter Spending was written by Timothy Prickett Morgan at The Next Platform.
Linux 6.11+ kernels provide TCX attachment points for eBPF programs to efficiently examine packets as they ingress and egress the host. The latest version of the open source Host sFlow agent includes support for TCX packet sampling, streaming industry-standard sFlow telemetry to a central collector for network-wide visibility; see, for example, Deploy real-time network dashboards using Docker compose, which describes how to quickly set up a Prometheus database and use Grafana to build network dashboards. The core sampling logic from sample.bpf.c is shown below:
// Sample a packet at the TCX hook: look up the per-interface sampling
// rate, select roughly 1-in-N packets at random, and push the sample
// metadata plus initial header bytes to user space via a perf event ring.
static __always_inline void sample_packet(struct __sk_buff *skb, __u8 direction) {
    __u32 key = skb->ifindex;
    __u32 *rate = bpf_map_lookup_elem(&sampling, &key);

    // No sampling entry for this interface, or this packet was not selected.
    if (!rate || (*rate > 0 && bpf_get_prandom_u32() % *rate != 0))
        return;

    struct packet_event_t pkt = {};
    pkt.timestamp = bpf_ktime_get_ns();
    pkt.ifindex = skb->ifindex;
    pkt.sampling_rate = *rate;
    pkt.ingress_ifindex = skb->ingress_ifindex;
    // Resolve the routed output interface for ingress samples only.
    pkt.routed_ifindex = direction ? 0 : get_route(skb);
    pkt.pkt_len = skb->len;
    pkt.direction = direction;

    // Copy up to MAX_PKT_HDR_LEN bytes of the packet header into the event.
    __u32 hdr_len = skb->len < MAX_PKT_HDR_LEN ? skb->len : MAX_PKT_HDR_LEN;
    if (hdr_len > 0 && bpf_skb_load_bytes(skb, 0, pkt.hdr, hdr_len) < 0)
        return;

    // Stream the sample to the user-space agent.
    bpf_perf_event_output(skb, &events, BPF_F_CURRENT_CPU, &pkt, sizeof(pkt));
}

// Attach points: sample on ingress (direction 0) and egress (direction 1),
// then let the packet continue to the next TCX program.
SEC("tcx/ingress")
int tcx_ingress(struct __sk_buff *skb) {
    sample_packet(skb, 0);
    return TCX_NEXT;
}

SEC("tcx/egress")
int tcx_egress(struct __sk_buff *skb) {
    sample_packet(skb, 1);
    return TCX_NEXT;
}
The sample.bpf.c file Continue reading
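As a complement to the excerpt above, here is a minimal user-space loader sketch showing how such TCX programs could be attached with libbpf. It is not from the original post (the Host sFlow agent handles attachment and perf-buffer consumption itself) and assumes the code has been compiled into a hypothetical sample.bpf.o and that libbpf 1.3 or later, which provides bpf_program__attach_tcx(), is available.
/* Hypothetical loader sketch, not part of the Host sFlow agent. */
#include <bpf/libbpf.h>
#include <net/if.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <interface>\n", argv[0]);
        return 1;
    }
    int ifindex = if_nametoindex(argv[1]);
    if (!ifindex) {
        fprintf(stderr, "unknown interface %s\n", argv[1]);
        return 1;
    }

    /* Open and load the compiled BPF object (assumed name: sample.bpf.o). */
    struct bpf_object *obj = bpf_object__open_file("sample.bpf.o", NULL);
    if (!obj || bpf_object__load(obj)) {
        fprintf(stderr, "failed to open/load sample.bpf.o\n");
        return 1;
    }

    struct bpf_program *ing = bpf_object__find_program_by_name(obj, "tcx_ingress");
    struct bpf_program *egr = bpf_object__find_program_by_name(obj, "tcx_egress");
    if (!ing || !egr)
        return 1;

    /* Attach both programs to the interface's TCX ingress and egress hooks. */
    if (!bpf_program__attach_tcx(ing, ifindex, NULL) ||
        !bpf_program__attach_tcx(egr, ifindex, NULL)) {
        fprintf(stderr, "failed to attach TCX programs\n");
        return 1;
    }

    /* Keep the links alive; a real agent would poll the perf buffer for
       packet_event_t samples here. */
    pause();
    return 0;
}
The sketch only illustrates the TCX attachment step; in practice the Host sFlow agent loads the programs, sets the per-interface sampling rates, and converts the perf events into sFlow datagrams.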
Is an LLM a stubborn donkey, a genie, or a slot machine (and why)? Find out in the Who is LLM? article by Martin Fowler.
It is beginning to look like Intel plans to milk the impending 18A manufacturing process for a long time. …
Intel Puts The Process Horse Back In Front Of The Foundry Cart was written by Timothy Prickett Morgan at The Next Platform.
Changing an existing BGP routing policy is always tricky on platforms that apply line-by-line changes to device configurations (Cisco IOS and most other platforms claiming to have industry-standard CLI, with the notable exception of Arista EOS). The safest approach seems to be:
On July 23, 2025, the White House unveiled its AI Action Plan (Plan), a significant policy document outlining the current administration's priorities and deliverables in Artificial Intelligence. This plan emerged after the White House received over 10,000 public comments in response to a February 2025 Request for Information (RFI). Cloudflare’s comments urged the White House to foster conditions for U.S. leadership in AI and support open-source AI, among other recommendations.
There is a lot packed into the three-pillar, 28-page Plan.
Pillar I: Accelerate AI Innovation. Focuses on removing regulations, enabling AI adoption, and developing and ensuring the availability of open-source and open-weight AI models.
Pillar II: Build American AI Infrastructure. Prioritizes the construction of high-security data centers, bolstering critical infrastructure cybersecurity, and promoting Secure-by-Design AI technologies.
Pillar III: Lead in International AI Diplomacy and Security. Centers on providing America’s allies and partners with access to AI, as well as strengthening AI compute export control enforcement.
Each of these pillars outlines policy recommendations for various federal agencies to advance the plan’s overarching goals. There’s much that the Plan gets right. Below we cover a few parts of the Plan that we think are particularly important. Continue reading
Arista AVD (Arista Validated Designs) – https://avd.arista.com – is a powerful tool that brings network architecture into the world of Infrastructure-as-Code. I wanted to try it out in a lab setting and see how it works in a non-standard environment. Since my go-to lab tool is GNS3 with Arista cEOS images — while the AVD […]
The post Testing Arista AVD with GNS3 and EOS first appeared on IPNET.
Kubernetes has transformed how we deploy and manage applications. It gives us the ability to spin up a virtual data center in minutes, scaling infrastructure with ease. But with great power comes great complexity, and in the case of Kubernetes, that complexity is security.
By default, Kubernetes permits all traffic between workloads in a cluster. This “allow by default” stance is convenient during development and testing, but it’s dangerous in production. It’s up to DevOps, DevSecOps, and cloud platform teams to lock things down.
To improve the security posture of a Kubernetes cluster, we can use microsegmentation, a practice that limits each workload’s network reach so it can only talk to the specific resources it needs. This is an essential security method in today’s cloud-native environments.
We all understand that network policies can achieve microsegmentation; in other words, they can divide our Kubernetes network model into isolated pieces. This is important since Kubernetes is usually used to provide multiple teams with their infrastructural needs or to host workloads for different tenants. With that, you would think network policies would be first-class citizens of clusters. However, when we dig into implementing them, three operational challenges Continue reading
Every company in every industry in every geography on Earth is trying to figure out how they are going to train AI models and tune them to help with their particular workloads. …
Financial Services Firms Will Bank On Homegrown AI Training was written by Timothy Prickett Morgan at The Next Platform.
While the hyperscalers and clouds and their AI model builder customers are setting the pace in compute, networking, and storage during the GenAI revolution, that does not mean that they will necessarily provide the only systems that will be used by the largest enterprises in the world. …
For Now, AI Helps IBM’s Bottom Line More Than Its Top Line was written by Timothy Prickett Morgan at The Next Platform.
Businesses have always relied on data, but they were never able to get full value out of it when it was siloed by structure, system, or storage. …
Google’s Open Lakehouse: The Foundation For Enterprise AI Data was written by Timothy Prickett Morgan at The Next Platform.
What is the Jevons Paradox? Tom, Eyvonne, and Russ discuss how this famous paradox impacts network engineering.