Sanil Nambiar, client engagement lead for AI for networks at IBM, is focused on assembling the infrastructure organizations will need for AI.
“The strategy, obviously, is hybrid cloud, data and AI and automation working together as an architecture,” Nambiar told me in this episode of The New Stack Makers.
IBM has invested in what he calls “three foundational platforms” because each offers capabilities essential to AI infrastructure.
Red Hat, a hybrid cloud platform, is needed “for that consistent runtime across on-prem and cloud,” he said. HashiCorp offers “life cycle control and policy-driven automation.”
And Confluent is for “real-time, contextual, trustworthy data access for AI.”
All of these platforms are needed, Nambiar said, because “AI does not sit on top of chaos and magically fix it. You really need environments which are consistent, infrastructure that is programmable, data that moves in real time.”
The Core Challenges of Modern Network Operations
The new complexity AI introduces has added to the challenges networking Continue reading
SEATTLE — Blockchain may no longer be at the peak of its hype cycle, but the technology is still sparking innovation as real-life use cases emerge. Distributed ledger technologies (DLTs), for instance, which allow for the secure recording and transfer of digital assets without reliance on a centralized authority, have obvious advantages for financial organizations.
DLTs are at the core of an emerging ecosystem built on open source. In this On the Road episode of The New Stack Makers, recorded at Open Source Summit North America (OSSNA), Baird of Hedera discussed his OSSNA keynote talk on DLTs with Alex Williams, founder and publisher of TNS.
For DLTs, Baird said, “We have an open source ledger, the blockchain is open source, you can think of it like an operating system that’s open source. You can run programs on top of it that are open source, you can run programs on top of it that are not open source.”
The layer built on top of all this is also open source. “We had to come up with an algorithm for how they’re going to talk Continue reading
CHICAGO — Incoming traffic looking to access your network and platform probably uses the network's ingress. But the ingress carries with it scaling, availability and security issues.
That's the case Kate Osborn, a software engineer at NGINX, made in this episode of The New Stack Makers, recorded On the Road at KubeCon + CloudNativeCon North America.
“One of the biggest issues is it’s not extensible,” Osborn said. “So it’s a very simple resource. But there’s a bunch of complex routing that people want to do. And in Continue reading
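The extensibility gap Osborn describes shows up in practice when routing needs go beyond simple host/path matching: the stock Ingress resource has no fields for rewrites or traffic splitting, so users fall back on controller-specific annotations. A minimal sketch of that pattern, using a hypothetical app and the community ingress-nginx controller's rewrite annotation (names here are illustrative, not from the episode):

```yaml
# Sketch: a plain Ingress can only express host/path routing.
# Anything richer leaks into controller-specific annotations,
# which is the extensibility problem described above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                 # hypothetical name
  annotations:
    # ingress-nginx-specific escape hatch; other controllers
    # use different (incompatible) annotations for the same idea
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: app-service       # hypothetical backend Service
                port:
                  number: 80
```

Because annotations are unstructured strings, they are neither portable across controllers nor validated by the API server, which is part of the motivation for richer, typed APIs for ingress traffic.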
As the internet fills every nook and cranny of our lives, it runs into greater complexity for developers, operations engineers, and the organizations that employ them. How do you reduce latency? How do you comply with the regulations of each region or country where you have a virtual presence? How do you keep data near where it’s actually used?
For a growing number of organizations, the answer is to use the edge.
In this episode of The New Stack Makers podcast, Sheraline Barthelmy, head of product, marketing and customer success for Cox Edge, joined the hosts to discuss the advantages and challenges of going "edge native."
The edge is composed of servers that are physically located close to the customers who will use them — the “last Continue reading