In this blog post, I will talk about audit and compliance and how to implement them with Calico. Most IT organizations are asked to meet some standard of compliance, whether internal or industry-specific. However, organizations are not always given guidance on how to implement it. Furthermore, when guidance is provided, it usually applies to a more traditional, static environment and doesn’t address the dynamic nature of Kubernetes. Existing compliance tools that rely on periodic snapshots do not provide accurate assessments of Kubernetes workloads against your compliance standards.
A good starting point is understanding what types of compliance requirements need to be enforced and confirming that the enforcement is successful. The next step is finding a way to easily report on the current state of your environment so you can proactively ensure you are complying with the standards you have defined. You should also be prepared to provide a report on demand when an audit team is investigating.
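To make “enforcement” concrete: in Kubernetes, a compliance requirement such as “only approved workloads may reach the cardholder data environment” typically ends up expressed as a network policy. Below is a minimal sketch using a standard Kubernetes NetworkPolicy; the namespace, labels, and port are hypothetical placeholders, not values drawn from any specific standard:

```yaml
# Hypothetical sketch: only pods labeled pci=true may reach the
# cardholder database pods, and only on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-cardholder-db
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: cardholder-db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              pci: "true"
      ports:
        - protocol: TCP
          port: 5432
```

Reporting then becomes a matter of demonstrating that policies like this exist, cover the right workloads, and are actually being enforced.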
This blog is not meant to be a how-to guide to meeting HIPAA, PCI-DSS, or SOC requirements. However, it will provide you with guidance regarding these regulations so you can apply it and understand …
Computers have a history stretching back only some 60 or 70 years, and yet much of that history has already been lost in the mists of time. Are we focusing so deeply on the future that we have forgotten our past? What might we learn from the past, even the recent past, and how does forgetting our past impact the future? Federico Lucifredi joins Tom Ammon and Russ White to discuss some of his projects finding, repairing, and operating old personal computers.
The transcript will be linked in a few days.
If you are interested in retrocomputing, you might want to start with this Stack Exchange, the Retrocomputing Forum, or this Reddit forum.
Here’s a fun question (and don’t cheat by asking ChatGPT): what is more valuable, an ounce of gold or an ounce of an Nvidia “Hopper” H100 GPU accelerator? …
The post AI To The Rescue For Server And Storage Spending In Q1 first appeared on The Next Platform.
If you want to get the attention of server makers and compute engine providers and especially if you are going to be building GPU-laden clusters with shiny new gear to drive AI training and possibly AI inference for large language models and recommendation engines, the first thing you need is $1 billion. …
The post The $1 Billion And Higher Ante To Play The AI Game first appeared on The Next Platform.
Today we talk about Large Language Models (LLMs) and writing products and applications that use LLMs. Our guest is Phillip Carter, Principal PM at Honeycomb.io. Honeycomb makes an observability tool for site reliability engineers, and Carter worked on a project called Query Assistant that helps Honeycomb users get answers to questions about how to use the product and get insights from it. We discuss taking natural language input and turning it into outputs to help SREs do their jobs.
The post Day Two Cloud 201: Building A Product That Uses LLMs appeared first on Packet Pushers.
In Linux, network-based applications rely on the kernel’s networking stack to communicate with other systems. While this process is generally efficient and has been optimized over the years, in some cases it introduces unnecessary overhead that can hurt overall system performance for network-intensive workloads such as web servers and databases.
XDP (eXpress Data Path) is an eBPF-based high-performance datapath inside the Linux kernel that allows you to bypass the kernel’s networking stack and handle packets directly at the network driver level. XDP achieves this by executing a custom program on each packet as it is received by the kernel. This can greatly reduce overhead and improve the performance of network-based applications by short-circuiting the normal networking path that ordinary traffic takes. However, using raw XDP can be challenging due to its programming complexity and the steep learning curve involved. Solutions like Calico Open Source offer an easier way to tame these technologies.
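To give a sense of what “a custom program at the driver level” means, here is a minimal raw XDP program in C. It drops inbound UDP packets and passes everything else; this is a generic sketch of the raw approach described above, not Calico’s code, and the build command and interface name below are assumptions:

```c
// Minimal XDP sketch: drop inbound UDP, pass all other traffic.
// Assumed build: clang -O2 -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    // Bounds-check every header before touching it; the eBPF
    // verifier rejects programs that could read past data_end.
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    // Drop UDP at the driver, before the kernel stack sees it.
    if (ip->protocol == IPPROTO_UDP)
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

Attaching it with iproute2 looks something like `ip link set dev eth0 xdp obj xdp_drop_udp.o sec xdp`.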
Calico Open Source is a networking and security solution that seamlessly integrates with Kubernetes and other cloud orchestration platforms. While best known for its policy engine and security capabilities, it has many other features that can be used in an environment by installing …
AskJJX: “What’s the best way to find and disable rogue APs on the network? We had an audit finding and got our hand slapped.” Ahhh, I love this question for so many reasons. First, because my answer to this today, in 2023, is very different than my answer would have been years ago. You may […]
The post AskJJX: How To Handle Rogue APs Without Getting Arrested appeared first on Packet Pushers.
This blog is co-authored by Zack Kayyali and Hicham (he-sham) Mourad.
The steps below detail how to install Red Hat Ansible Automation Platform on Google Cloud from the Marketplace. Before starting the deployment process, please ensure the Google Cloud account you are using to deploy has the required permissions: the IAM roles needed to deploy the Google Cloud foundation stack offering. The foundation stack offering here refers to the base Ansible Automation Platform 2 deployment.
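For orientation, granting one of those roles to the deploying account looks like the following; the project ID, account, and specific role shown are placeholders, and the authoritative role list is the one in the offering’s documentation:

```
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:deployer@example.com" \
  --role="roles/compute.admin"
```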
This blog details how to deploy Ansible Automation Platform on Google Cloud and then access the application. The deployment process sets up Ansible Automation Platform in its own Virtual Private Cloud (VPC), which it creates and manages. Deploying into an existing VPC is also supported.
To begin, log into your Google Cloud account. If you have a private offer, ensure it has been accepted for both the foundation and extension node offerings.
In some scenarios it is very useful to be able to simulate a WAN with respect to latency, jitter, and packet loss, especially for those of us who work with SD-WAN and want to test our policies in a controlled environment. In this post I will describe how I built a WAN impairment device in Linux for a VMware vSphere environment and how I use it to simulate different conditions.
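For context, the standard Linux building block for this is the netem queueing discipline, configured with tc from iproute2. A minimal sketch, assuming the WAN-facing interface is eth1 and using example values:

```
# Add 50 ms delay with 10 ms jitter and 1% packet loss on eth1
tc qdisc add dev eth1 root netem delay 50ms 10ms loss 1%

# Adjust the impairment in place, or remove it entirely
tc qdisc change dev eth1 root netem delay 200ms 30ms loss 5%
tc qdisc del dev eth1 root
```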
My SD-WAN lab is built on VMware vSphere using Catalyst SD-WAN with Catalyst8000v as virtual routers and on-premises controllers. The goal with the WAN impairment device is to be able to manipulate each internet connection to a router individually. That way I can simulate that a particular connection or router is having issues while other connections/routers are not. I don’t want to impose the same conditions on all connections/devices simultaneously. To do this, I have built a physical topology that looks like this:
All devices are connected to a management network that I can access via a VPN. This way I have “out of band” access to all devices and can use SSH to configure my routers with a bootstrap configuration. To avoid having to create many unique VLANs in the vSwitch, …