Archive

Category Archives for "Networking"

Cilium CNCF Graduation Could Mean Better Observability, Security with eBPF

eBPF (extended Berkeley Packet Filter) is a powerful technology that operates directly within the Linux kernel, offering robust hooks for extending runtime observability, security, and networking capabilities across various deployment environments. While eBPF has gained widespread adoption, organizations are encouraged to leverage tools and layers built on eBPF rather than program it directly; Gartner, for instance, advises that most enterprises lack the expertise to use eBPF itself. Cilium offers such capabilities, using eBPF to help secure the network connectivity between runtimes deployed on Docker and Kubernetes, as well as other environments, including bare metal and virtual machines. Isovalent, which created Cilium and donated it to the CNCF, and the project’s contributors are also developing network observability and network security functionality through the Cilium sub-projects Hubble and Tetragon, respectively. This graduation certifies that Cilium — created by Continue reading
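
To make the network-security piece concrete, here is a minimal sketch of a Cilium L4 policy applied with kubectl. The policy name and app labels are hypothetical; the cilium.io/v2 CRD and its fields are the documented ones:

    $ kubectl apply -f - <<'EOF'
    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: allow-frontend-to-backend    # hypothetical example name
    spec:
      endpointSelector:
        matchLabels:
          app: backend                   # pods this policy protects
      ingress:
        - fromEndpoints:
            - matchLabels:
                app: frontend            # only frontend pods may connect
          toPorts:
            - ports:
                - port: "8080"
                  protocol: TCP
    EOF

Hubble, the observability sub-project, can then surface the verdicts such a policy produces, e.g. with `hubble observe --verdict DROPPED`.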

Downloading web resources

Last time I went to the dentist, they offered to use a fancy scanner to better show me my teeth.

Who can say no to that? I’d already gotten a 3D scan of my brain for fun, so why not teeth too?

I requested the data, and got a link to a web UI. Unfortunately it was just a user-friendly 3D viewer, without any download button.

Here’s how I extracted the 3D data:

  1. Open the Chrome developer console, e.g. by pressing Ctrl-Shift-C. (I hate that Chrome hijacked this shortcut. Every single day I press Ctrl-Shift-C to copy, and it throws up this thing.)
  2. Close the stupid “what’s new” spam that nobody in the history of ever has wanted to see.
  3. Go to the ‘Network’ tab.
  4. Reload the page.
  5. Right-click on any item in the list and choose “Save all as HAR with content”. No, I don’t know why I can’t just save that one resource.
  6. A HAR file is essentially a JSON archive (see the extraction sketch after this list).
    $ jq '.log | keys' foo.har
    [
      "creator",
      "entries",
      "pages",
      "version"
    ]
    $ jq '.log | .entries[0].request | keys' foo.har
    [
      "bodySize",
      "cookies",
      "headers",
      "headersSize",

Continue reading
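
The listing above is cut off, but the rest of the recovery is mechanical: find the entry whose URL matches the 3D asset and decode its response body. A sketch with jq and base64 — the `.glb` suffix is a guess at the asset format, and HAR stores binary bodies base64-encoded, flagged by `.response.content.encoding`:

    $ jq '.log.entries[].response.content.encoding' foo.har   # confirm bodies are base64
    $ jq -r '.log.entries[]
            | select(.request.url | endswith(".glb"))
            | .response.content.text' foo.har | base64 -d > model.glb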

Using command options and arguments to get just the right output on Linux

This post covers some well-known Linux commands that, when used with particular options and arguments, can save you some time or ensure that what you are doing is what you intended. The first “trick” involves how you exit vi or vim and the difference that can make.

Using :x instead of :wq when saving files with vi or vim

The vi and vim editors are the most commonly used text editors on Linux systems. When you exit either with :x instead of the more common :wq, the file will only be saved if you have just made changes to it. This can be helpful if you want to ensure that the file’s timestamp reflects its most recent changes. Just keep in mind that if you make a change to a file and then undo it – like deleting a word or a line and then replacing it with the same content – vi or vim will still see this as a change and save the file, updating the timestamp whether you use :x or :wq. To read this article in full, please click here
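
A quick way to see the difference for yourself — a sketch assuming GNU coreutils, where `stat -c '%y'` prints a file’s modification time:

    $ echo hello > demo.txt
    $ stat -c '%y' demo.txt    # note the mtime
    $ vim demo.txt             # change nothing, exit with :wq
    $ stat -c '%y' demo.txt    # mtime updated even though nothing changed
    $ vim demo.txt             # change nothing, exit with :x
    $ stat -c '%y' demo.txt    # mtime unchanged this time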

Arista switches target ultra-low latency networking demands

Arista Networks has unveiled a portfolio of 25G Ethernet switches aimed at supporting data center, financial, and industrial control applications that demand high performance and extremely low latency. The new 7130 25G Series boxes are a significant power and feature upgrade over the vendor’s current 7130 10G Ethernet line and promise to reduce link latency 2.5-fold by cutting queuing and serialization delays and by eliminating the need for the latency-inducing Forward Error Correction (FEC) typically required by 25G Ethernet, according to the vendor. In addition, the new switches eliminate the need for the multiple cables and switches required to set up and support current low-latency networks, according to Martin Hull, vice president of Cloud Titans and Platform Product Management at Arista Networks, in a blog post about the new switches. To read this article in full, please click here

BrandPost: Five reasons to adopt a single-vendor SASE approach

By: Gabriel Gomane, Senior Product Marketing Manager at HPE Aruba Networking

As organizations move to a cloud-centric architecture, where most applications reside in the cloud and demand for hybrid work environments increases, security must evolve in parallel:

  - Legacy VPNs have often provided a poor user experience, and VPNs without granular controls can over-extend network privilege, granting users access to more resources than necessary and increasing security risk.
  - Traditional network architectures routed application traffic to the data center for security inspection, which is no longer practical and hurts application performance now that most applications reside in the cloud.
  - With data increasingly hosted in SaaS applications, organizations need to take extra steps to protect it. Sensitive data can be stored in both sanctioned and unsanctioned cloud applications (shadow IT) and may travel over unsecured links, creating a risk of data loss.
  - Employees are vulnerable to web-based threats such as phishing attacks and ransomware when browsing the internet or simply accessing email.
  - The explosion of IoT devices in recent years has significantly increased the attack surface, yet IoT devices are often built on simple designs and lack sophisticated security mechanisms.
  - Finally, organizations must comply with Continue reading

Kubernetes Unpacked 037: Improving The Developer Experience With Continuous Deployment (Sponsored)

In this sponsored episode of the Kubernetes Unpacked podcast, Kristina and Michael are joined by Adam Frank, SVP of Product and Marketing at Armory, to discuss the role of continuous deployment in the software development lifecycle. They highlight the challenges organizations face in implementing effective continuous integration and continuous deployment (CI/CD) processes and the importance of prioritizing the developer experience.

The post Kubernetes Unpacked 037: Improving The Developer Experience With Continuous Deployment (Sponsored) appeared first on Packet Pushers.

How Prisma saved 98% on distribution costs with Cloudflare R2

The following is a guest post written by Pierre-Antoine Mills, Miguel Fernández, and Petra Donka of Prisma. Prisma provides a server-side library that helps developers read and write data to the database in an intuitive, efficient and safe way.

Prisma’s mission is to redefine how developers build data-driven applications. At its core, Prisma provides an open-source, next-generation TypeScript Object-Relational Mapping (ORM) library that unlocks a new level of developer experience thanks to its intuitive data model, migrations, type-safety, and auto-completion.
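
For readers new to the library, the standard Prisma workflow looks roughly like this — a sketch using the stock Prisma CLI commands, with project specifics omitted:

    $ npm install prisma --save-dev       # install the Prisma CLI
    $ npx prisma init                     # scaffolds prisma/schema.prisma and .env
    $ npx prisma migrate dev --name init  # creates and applies a migration
    $ npx prisma generate                 # emits the type-safe client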

Prisma ORM has experienced remarkable growth, engaging a vibrant community of developers. And while it was a great problem to have, this growth was causing an explosion in our AWS infrastructure costs. After investigating a wide range of alternatives, we went with Cloudflare’s R2 storage — and as a result are thrilled that our engine distribution costs have decreased by 98%, while delivering top-notch performance.

It was a natural fit: Prisma is already a proud technology partner of Cloudflare’s, offering deep database integration with Cloudflare Workers. And Cloudflare products provide much of the underlying infrastructure for Prisma Accelerate and Prisma Pulse, empowering user-focused product development. In this post, we’ll dig into how we decided to extend our ongoing Continue reading

Day Two Cloud 215: Highlights From The Edge

Today's Day Two Cloud covers highlights from a recent Edge Field Day event. Ned Bellavance was a delegate at the event and will share perceptions and insights based on presentations from the event. Topics include a working definition of edge, the constraints of hosting infrastructure in edge locations (power, space, network connectivity and others), and operational models for running software and services in these environments.

The post Day Two Cloud 215: Highlights From The Edge appeared first on Packet Pushers.

The Era of Ultra-Low Latency 25G Ethernet

Back in the early 2000s, store-and-forward networking was used by market data providers, exchanges, and customers running electronic trading applications, where the lowest-latency execution can make the difference between a profitable strategy and a losing one. Moving closer to the exchange to reduce link latency, eliminating any unnecessary network hops, placing all feed handler and trading execution servers on the same switch to minimize transit time, and leveraging high-performance 10Gb NICs with embedded FPGAs all contributed to the ongoing effort to squeeze out every last microsecond to execute trades and gain a performance edge.
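
The serialization piece of that latency budget is simple arithmetic: on-the-wire time is frame size divided by line rate, so 25G cuts it 2.5-fold versus 10G. A quick back-of-the-envelope check (a sketch using a full-size 1500-byte frame, ignoring queuing and FEC):

    $ awk 'BEGIN {
        bits = 1500 * 8                              # full-size Ethernet frame
        printf "10G: %4.0f ns\n", bits / 10e9 * 1e9  # ~1200 ns
        printf "25G: %4.0f ns\n", bits / 25e9 * 1e9  # ~480 ns
      }'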

HS057: Technical Debt

In this podcast episode, Johna and I discuss the concept of technical debt. We provide different definitions of technical debt, with me focusing on the inability to switch solutions easily and Johna emphasizing the trade-off between immediate speed and long-term efficiency. We give examples of technical debt, such as outdated systems and insecure infrastructure, and […]

The post HS057 Technical Debt appeared first on Packet Pushers.
