Software giant Oracle has a vast installed base of enterprise customers, agglomerated over the decades, that gives it the cash flow to do many things. …
Oracle’s Financing Primes The OpenAI Pump was written by Timothy Prickett Morgan at The Next Platform.
Artificial intelligence can do more than just code or write; it can also create music. […]
The post How to create AI generated song for Youtube first appeared on Brezular's Blog.
After creating the infrastructure that generates the device configuration files within netlab (not in an Ansible playbook), it was time to try to apply it to something else, not just Linux containers. FRR containers were the obvious next target.
netlab uses two different mechanisms to configure FRR containers:
I wanted to replace both with Linux scripts that could be started with the docker exec command.
Figure 6-14 depicts an illustrative event in which Rank 4 receives seven simultaneous flows (1). As these flows are processed by their respective PDCs and handed over to the Semantic Sublayer (2), the High-Bandwidth Memory (HBM) Controller becomes congested. Because HBM must arbitrate multiple fi_write RMA operations requiring concurrent memory bank access and state updates, the incoming packet rate quickly exceeds HBM’s transactional retirement rate.
This causes internal buffers at the memory interface to fill, creating a local congestion event (3). To prevent buffer overflow, which would lead to dropped packets and expensive RMA retries, the receiver uses NSCC to move the queuing "pain" back to the source. It does so via the pds.rcv_cwnd_pend parameter of the ACK_CC header (4). The parameter operates on a scale of 0 to 127: a value of zero is ignored, while a value of 127 triggers the maximum possible rate decrement. In this scenario, a value of 64 is used, resulting in a 50% penalty relative to the newly acknowledged data.
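Assuming the 7-bit field scales linearly — so 64/128 yields the 50% penalty described above and 127/128 the maximum decrement — the receiver-signaled reduction can be sketched as follows. The function name and the linear scaling are inferences from this example, not the normative Ultra Ethernet formula:

```typescript
// Sketch only: cwndDecrement and the linear value/128 scaling are assumptions
// inferred from the 64 -> 50% example, not the spec's exact arithmetic.
function cwndDecrement(rcvCwndPend: number, newlyAckedBytes: number): number {
  if (!Number.isInteger(rcvCwndPend) || rcvCwndPend < 0 || rcvCwndPend > 127) {
    throw new RangeError("pds.rcv_cwnd_pend is a 7-bit value (0..127)");
  }
  if (rcvCwndPend === 0) return 0; // zero is ignored: no penalty applied
  // Penalty is a fraction of the newly acknowledged data: value/128.
  return Math.floor((rcvCwndPend / 128) * newlyAckedBytes);
}
```

With `rcvCwndPend = 64` and 4096 newly acknowledged bytes, this sketch yields a 2048-byte reduction, matching the 50% penalty in the scenario.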
Rather than directly computing a new transport rate, the mechanism uses a three-phase process to derive a restricted Congestion Window (CWND). This reduction in CWND inherently forces the source to drain its inflight bucket to Continue reading
The market researchers at Gartner have extended their forecast out to 2027 and dropped 2024 from the view since it is now more than a year past. …
Gartner Takes Another Stab At Forecasting AI Spending was written by Timothy Prickett Morgan at The Next Platform.
Today, we are excited to share a refresh of the Tigera and Calico visual identity!
This update better reflects who we are, who we serve, and where we are headed next.

If you have been part of the Calico community for a while, you know that change at Tigera is always driven by substance, not style alone. Since the early days of Project Calico, our focus has always been clear: Build powerful, scalable networking and security for Kubernetes, and do it in the open with the community.
Tigera was founded by the original Project Calico engineering team and remains deeply committed to maintaining Calico Open Source as the leading standard for container networking and network security.
“Tigera’s story began in 2016 with Project Calico, an open-source container networking and security project. Calico Open Source has since become the most widely adopted solution for containers and Kubernetes. We remain committed to maintaining Calico Open Source as the leading standard, while also delivering advanced capabilities through our commercial editions.”
—Ratan Tipirneni, President & CEO, Tigera
This refresh is an evolution, not a reinvention. You Continue reading
Earlier this week, the UK’s Competition and Markets Authority (CMA) opened its consultation on a package of proposed conduct requirements for Google. The consultation invites comments on the proposed requirements before the CMA imposes any final measures. These new rules aim to address the lack of choice and transparency that publishers (broadly defined as “any party that makes content available on the web”) face over how Google uses search to fuel its generative AI services and features. These are the first consultations on conduct requirements launched under the digital markets competition regime in the UK.
We welcome the CMA’s recognition that publishers need a fairer deal and believe the proposed rules are a step in the right direction. Publishers should have access to tools that let them control the inclusion of their content in generative AI services, and AI companies should have a level playing field on which to compete.
But we believe the CMA has not gone far enough and should do more to safeguard the UK’s creative sector and foster healthy competition in the market for generative and agentic AI.
In January Continue reading
Updated at 6:55 a.m. PT
Today, we’re introducing a new Worker template for Vertical Microfrontends (VMFE). This template allows you to map multiple independent Cloudflare Workers to a single domain, enabling teams to work in complete silos — shipping marketing, docs, and dashboards independently — while presenting a single, seamless application to the user.
Most microfrontend architectures are "horizontal", meaning different parts of a single page are fetched from different services. Vertical microfrontends take a different approach by splitting the application by URL path. In this model, a team owning the `/blog` path doesn't just own a component; they own the entire vertical stack for that route – framework, library choice, CI/CD and more. Owning the entire stack of a path, or set of paths, allows teams to have true ownership of their work and ship with confidence.
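The path-splitting idea above can be sketched as a thin dispatch layer that picks the owning Worker per request. The route table and Worker names here are illustrative, not part of the template:

```typescript
// Hypothetical route table: each path prefix is owned by an independently
// deployed Worker, with a catch-all vertical for everything else.
const routes: Array<[prefix: string, worker: string]> = [
  ["/blog", "blog-worker"],
  ["/docs", "docs-worker"],
  ["/dashboard", "dashboard-worker"],
  ["/", "marketing-worker"], // catch-all vertical
];

// Longest-prefix match, so "/dashboard/settings" goes to the dashboard team
// and unclaimed paths fall through to the marketing Worker.
function ownerOf(path: string): string {
  const sorted = [...routes].sort((a, b) => b[0].length - a[0].length);
  for (const [prefix, worker] of sorted) {
    if (prefix === "/" || path === prefix || path.startsWith(prefix + "/")) {
      return worker;
    }
  }
  return "marketing-worker";
}
```

Because each prefix maps to a whole Worker rather than a page fragment, the team behind `/blog` can swap its framework or deploy on its own schedule without touching the other verticals.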
Teams run into problems as they grow, because different frameworks suit different use cases. A marketing website might be better served by Astro, for example, while a dashboard might be better built with React. Or say you have a monolithic codebase where many teams ship as a collective. An update adding new features from several teams can get frustratingly rolled back because Continue reading

The videos from the Network Observability webinar with Dinesh Dutt are now available without a valid ipSpace.net account. Enjoy!
Everyone is jumpy about how much Microsoft has on the books in capital expenses in 2025 and what it expects to spend on datacenters and their hardware in 2026. …
Microsoft Is More Dependent On OpenAI Than The Converse was written by Timothy Prickett Morgan at The Next Platform.