Category Archives for "Networking"

How Prisma saved 98% on distribution costs with Cloudflare R2

The following is a guest post written by Pierre-Antoine Mills, Miguel Fernández, and Petra Donka of Prisma. Prisma provides a server-side library that helps developers read and write data to the database in an intuitive, efficient and safe way.

Prisma’s mission is to redefine how developers build data-driven applications. At its core, Prisma provides an open-source, next-generation TypeScript Object-Relational Mapping (ORM) library that unlocks a new level of developer experience thanks to its intuitive data model, migrations, type-safety, and auto-completion.
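
For readers who haven't used the ORM, the data model and type-safety mentioned above are easiest to see in code. The following is a minimal sketch, not taken from the post, against a hypothetical User/Post schema; the model names and fields are assumptions for illustration only.

```typescript
// Hypothetical schema.prisma for this sketch:
//
//   model User {
//     id    Int    @id @default(autoincrement())
//     email String @unique
//     posts Post[]
//   }
//   model Post {
//     id       Int  @id @default(autoincrement())
//     title    String
//     author   User @relation(fields: [authorId], references: [id])
//     authorId Int
//   }

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // The query is fully typed: the result type includes the `posts` relation
  // because of `include`, and a typo in a field name fails at compile time.
  const usersWithPosts = await prisma.user.findMany({
    where: { email: { endsWith: '@example.com' } },
    include: { posts: true },
  })
  console.log(usersWithPosts)
}

main().finally(() => prisma.$disconnect())
```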

Prisma ORM has experienced remarkable growth, engaging a vibrant community of developers. And while it was a great problem to have, this growth was causing an explosion in our AWS infrastructure costs. After investigating a wide range of alternatives, we went with Cloudflare’s R2 storage — and as a result are thrilled that our engine distribution costs have decreased by 98%, while delivering top-notch performance.

It was a natural fit: Prisma is already a proud technology partner of Cloudflare’s, offering deep database integration with Cloudflare Workers. And Cloudflare products provide much of the underlying infrastructure for Prisma Accelerate and Prisma Pulse, empowering user-focused product development. In this post, we’ll dig into how we decided to extend our ongoing…

Day Two Cloud 215: Highlights From The Edge

Today's Day Two Cloud covers highlights from a recent Edge Field Day event. Ned Bellavance was a delegate at the event and will share perceptions and insights based on presentations from the event. Topics include a working definition of edge, the constraints of hosting infrastructure in edge locations (power, space, network connectivity and others), and operational models for running software and services in these environments.

The post Day Two Cloud 215: Highlights From The Edge appeared first on Packet Pushers.

The Era of Ultra-Low Latency 25G Ethernet

Back in the early 2000s, store-and-forward networking was used by market data providers, exchanges, and customers executing electronic trading applications, where the lowest-latency execution can make the difference between a profitable strategy and a losing one. Moving closer to the exchange to reduce link latency, eliminating unnecessary network hops, placing all feed-handler and trading-execution servers on the same switch to minimize transit time, and leveraging high-performance 10Gb NICs with embedded FPGAs all contributed to the ongoing effort to squeeze out every last microsecond and gain a performance edge.

HS057: Technical Debt

In this podcast episode, Johna and I discuss the concept of technical debt. We provide different definitions of technical debt, with me focusing on the inability to switch solutions easily and Johna emphasizing the trade-off between immediate speed and long-term efficiency. We give examples of technical debt, such as outdated systems and insecure infrastructure, and […]

The post HS057 Technical Debt appeared first on Packet Pushers.

AMD to acquire Nod.ai to boost open source AI software capabilities

Chipmaker AMD has announced plans to acquire open source machine learning and AI software provider Nod.ai as it looks to expand its AI capabilities and shore up its competitive position against AI chip market leader Nvidia. The acquisition, whose financial details have not been disclosed, is expected to bring AMD a team that can help accelerate the deployment of AI-based offerings optimized for the company’s Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs, and Radeon GPUs, AMD said in a statement.

Top 5 best paid survey apps

Today's fast-paced world has seen smartphones seamlessly weave themselves into the fabric of our daily lives. The exciting part? They've also opened up new avenues for us to boost our income effortlessly. Enter the world of online moneymaking, where participating in paid surveys through mobile apps has become a game-changer. These nifty apps not only provide you with a platform to share your thoughts but also offer tangible rewards in return for your valuable time and insights.

IBM: Treat generative AI like a burning platform and secure it now

In the rush to deploy generative AI, many organizations are sacrificing security in favor of innovation, IBM warns. Among 200 executives surveyed by IBM, 94% said it’s important to secure generative AI applications and services before deployment, yet only 24% of respondents’ generative AI projects will include a cybersecurity component within the next six months. In addition, 69% said innovation takes precedence over security for generative AI, according to the IBM Institute for Business Value report, The CEO's Guide to Generative AI: Cybersecurity.

Tech Bytes: Why Retail Branches Need Next-Gen SD-WAN And SASE (Sponsored)

Today on the Tech Bytes podcast, we talk with sponsor Palo Alto Networks about SD-WAN for retail locations. From securing payment card data to supporting customer Wi-Fi to connecting a multitude of IoT devices, a secure, reliable WAN is a must for retail. We talk with Palo Alto Networks about how SD-WAN can help retail locations get and keep shoppers in stores.

The post Tech Bytes: Why Retail Branches Need Next-Gen SD-WAN And SASE (Sponsored) appeared first on Packet Pushers.

Victims of Success

It feels like the cybersecurity space is getting more and more crowded with breaches in the modern era. I joke on our weekly Gestalt IT Rundown news show that we could include a breach story every week and still not cover them all. Even Risky Business can’t keep up. However, the defenders seem to be gaining on the attackers, and that means the battle lines are shifting again.

Don’t Dwell

A recent article from The Register noted that dwell times for detection of ransomware and malware have dropped by almost a full day over the last year. Dwell time is especially important because detecting the ransomware early means you can take preventative measures before it can be deployed. I’ve seen all manner of early detection systems, such as data protection companies measuring the entropy of data-at-rest to determine when it can no longer be compressed, meaning it has likely been encrypted and should be restored.
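
Detection approaches like that entropy check are straightforward to prototype. The sketch below is my own illustration, not from the article or any particular vendor: it computes the Shannon entropy of a byte buffer, and the 7.9 bits/byte "possibly encrypted" threshold is an arbitrary assumption.

```typescript
// Shannon entropy of a byte buffer as a rough "is this still compressible?"
// signal. Plain text and compressible data usually score well below 8 bits
// per byte; ciphertext sits close to the 8 bits/byte maximum.
function shannonEntropy(data: Uint8Array): number {
  const counts = new Array<number>(256).fill(0)
  for (const byte of data) counts[byte]++

  let entropy = 0
  for (const count of counts) {
    if (count === 0) continue
    const p = count / data.length
    entropy -= p * Math.log2(p)
  }
  return entropy // bits per byte, in the range [0, 8]
}

// Example: repetitive text scores low, random bytes score near the maximum.
const text = new TextEncoder().encode('hello hello hello hello')
const random = crypto.getRandomValues(new Uint8Array(4096))
console.log(shannonEntropy(text).toFixed(2))   // roughly 2.2 bits/byte
console.log(shannonEntropy(random).toFixed(2)) // close to 8: flag as possibly encrypted
```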

Likewise, XDR companies are starting to reduce the time it takes to catch behaviors on the network that are out of the ordinary. When a user starts scanning for open file shares and doing recon on the network, you can almost guarantee they’ve…

Why are OpenAI, Microsoft and others looking to make their own chips?

As demand for generative AI grows, cloud service providers such as Microsoft, Google, and AWS, along with large language model (LLM) providers such as OpenAI, have all reportedly considered developing their own custom chips for AI workloads. Speculation that some of these companies, notably OpenAI and Microsoft, have been working on their own custom chips for generative AI workloads because of chip shortages has dominated headlines for the last few weeks.

LiquidStack launches modular liquid cooling solutions for the edge

Immersion cooling specialist LiquidStack has introduced a pair of modular data center units that use immersion cooling for edge deployments and advanced cloud computing applications. The units are called the MicroModular and MegaModular. The former contains a single 48U DataTank immersion cooling system (the size of a standard server rack), and the latter comes with up to six 48U DataTanks. The products offer between 250 kW and 1.5 MW of IT capacity with a PUE of 1.02. (Power usage effectiveness, or PUE, is a metric for data center efficiency: the ratio of the total energy used by a data center facility to the energy delivered to the computing equipment.)
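
As a quick back-of-the-envelope reading of that PUE figure, the sketch below assumes steady-state power draw at the top end of the quoted capacity; it is an illustration of the ratio, not vendor data.

```typescript
// PUE = total facility energy / energy delivered to IT equipment.
// Sanity check on the quoted numbers (assumed steady-state draw):
const itLoadKw = 1500                  // 1.5 MW of IT capacity
const pue = 1.02                       // claimed power usage effectiveness
const totalFacilityKw = itLoadKw * pue // total facility draw implied by the PUE
const overheadKw = totalFacilityKw - itLoadKw
console.log(`${totalFacilityKw} kW total, ${overheadKw} kW of cooling and other overhead`)
// Prints: 1530 kW total, 30 kW of cooling and other overhead
```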