Networking, security initiatives dominate IT spending priorities

Network connectivity and security are key areas for IT investment as well as potential barriers to global success, according to new research. Nearly half of CIOs claim that establishing and managing connectivity in new markets is the single most critical factor when it comes to ensuring successful global expansion, according to Expereo, which surveyed 650 large enterprise and mid-market CIOs across Asia, Europe and North America for its research. In addition, 49% of CIOs report that their board views global connectivity as “a business-critical asset to growth.” To read this article in full, please click here

Audit and Compliance with Calico

In this blog post, I will be talking about audit and compliance and how to implement it with Calico. Most IT organizations are asked to meet some standard of compliance, whether internal or industry-specific. However, organizations are not always provided with the guidance to implement it. Furthermore, when guidance has been provided, it is usually applicable to a more traditional and static environment and doesn’t address the dynamic nature of Kubernetes. Existing compliance tools that rely on periodic snapshots do not provide accurate assessments of Kubernetes workloads against your compliance standards.

Getting started with audit and compliance for Kubernetes clusters

A good starting point is understanding what types of compliance requirements need to be enforced and confirming that the enforcement is successful. The next step is finding a way to easily report on the current state of your environment so you can proactively ensure you are complying with the standards defined. You should also be prepared to provide a report on demand when an audit team is investigating.

This blog is not meant to be a how-to guide for meeting HIPAA, PCI-DSS or SOC. However, it will provide guidance regarding these regulations so you can apply it and understand Continue reading
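
As a minimal illustration of the kind of enforcement requirement discussed above, a default-deny Kubernetes NetworkPolicy is a common baseline for segmentation-oriented standards such as PCI-DSS. This is a sketch, not from the original post; the `payments` namespace is a hypothetical example:

```shell
# Hypothetical example: apply a default-deny ingress policy as a
# compliance baseline. The "payments" namespace is an assumed name.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# Confirm the policy is in place before reporting on it:
kubectl get networkpolicy -n payments
```

Calico extends this model with its own policy resources and reporting, but even the native NetworkPolicy above gives auditors a concrete object to check against.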

Hedge 185: Retrocomputing

Computers only have a history stretching back some 60 or 70 years—and yet much of that history has already been lost in the mists of time. Are we focusing so deeply on the future that we have forgotten our past? What might we learn from the past, even the recent past, and how does forgetting our past impact the future? Federico Lucifredi joins Tom Ammon and Russ White to discuss some of his projects finding, repairing, and operating old personal computers.

download

transcript will be linked in a few days

If you are interested in retrocomputing, you might want to start with this Stack Exchange, the Retrocomputing Forum, or this Reddit forum.

The $1 Billion And Higher Ante To Play The AI Game

If you want to get the attention of server makers and compute engine providers and especially if you are going to be building GPU-laden clusters with shiny new gear to drive AI training and possibly AI inference for large language models and recommendation engines, the first thing you need is $1 billion.

The post The $1 Billion And Higher Ante To Play The AI Game first appeared on The Next Platform.

The $1 Billion And Higher Ante To Play The AI Game was written by Timothy Prickett Morgan at The Next Platform.

Day Two Cloud 201: Building A Product That Uses LLMs

Today we talk about Large Language Models (LLMs) and writing products and applications that use LLMs. Our guest is Phillip Carter, Principal PM at Honeycomb.io. Honeycomb makes an observability tool for site reliability engineers, and Carter worked on a project called Query Assistant that helps Honeycomb users get answers to questions about how to use the product and get insights from it. We discuss taking natural language input and turning it into outputs to help SREs do their jobs.

The post Day Two Cloud 201: Building A Product That Uses LLMs appeared first on Packet Pushers.

CIOs, Heed On-Premises App and Infrastructure Performance

Although legacy applications and infrastructure may not be a popular topic, their significance to organizations is crucial. As cloud native technologies are poised to become a dominant part of computing, certain applications and infrastructure must remain on premises, particularly in regulated and other industries. Amid the buzz surrounding no-code and low-code platforms, technologists must prioritize acquiring the appropriate tools and insights to manage on-premises environments’ availability and performance. Consumer expectations for flawless digital experiences continue to rise, so companies must optimize their on-premises customer-facing applications to accommodate.

For Some, On-Premises Infrastructure Will Remain Essential

Much of the recent digital transformation across multiple industries can be attributed to a substantial shift to the cloud. Cloud native technologies are in high demand due to their ability to expedite release velocity and optimize operations with speed, agility, scale and resilience. Nevertheless, it’s easy to overlook the fact that many organizations, especially larger enterprises, still run their applications and infrastructure on premises. While this may seem surprising, it’s partially due to the time-consuming process of seamlessly and securely migrating highly intricate, legacy applications to the cloud. Often, only a portion of an application may be migrated to the cloud while major components will remain Continue reading

Turbocharging host workloads with Calico eBPF and XDP

In Linux, network-based applications rely on the kernel’s networking stack to establish communication with other systems. While this process is generally efficient and has been optimized over the years, in some cases it can create unnecessary overhead that can impact the overall performance of the system for network-intensive workloads such as web servers and databases.

XDP (eXpress Data Path) is an eBPF-based high-performance datapath inside the Linux kernel that allows you to bypass the kernel’s networking stack and directly handle packets at the network driver level. XDP can achieve this by executing a custom program to handle packets as they are received by the kernel. This can greatly reduce overhead, improve overall system performance, and improve network-based applications by shortcutting the normal networking path of ordinary traffic. However, using raw XDP can be challenging due to its programming complexity and the high learning curve involved. Solutions like Calico Open Source offer an easier way to tame these technologies.
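
For illustration, attaching a precompiled XDP object with iproute2 looks like the sketch below. `xdp_pass.o` (assumed to contain a program that returns XDP_PASS for every packet) and `eth0` are hypothetical names, and generic mode is used since driver-native XDP depends on NIC support:

```shell
# Attach a hypothetical XDP object in generic mode (requires root
# and a clang-compiled BPF object file):
ip link set dev eth0 xdpgeneric obj xdp_pass.o sec xdp

# Inspect the attached program:
ip -details link show dev eth0

# Detach when done:
ip link set dev eth0 xdpgeneric off
```

This raw workflow hints at the complexity mentioned above: you still have to write, compile, and maintain the BPF program itself, which is where higher-level tooling earns its keep.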

Calico Open Source is a networking and security solution that seamlessly integrates with Kubernetes and other cloud orchestration platforms. While best known for its policy engine and security capabilities, it has many other features that can be used in an environment by installing Continue reading

China seeks to improve reliability of its chip manufacturing sector

China’s Ministry for Industry and Information Technology has said it wants to improve the country’s manufacturing capabilities, singling out the production of advanced semiconductor materials and automotive chips as areas that are in need of improvement. In its recently released Opinions on Manufacturing Reliability Improvement report, the department said it was putting forward a plan that would make up for the shortcomings of basic product reliability and improve the quality of core components in three industries: machinery, electronics, and automotive. To read this article in full, please click here

Getting help on Linux

If you’re fairly new to Linux, you might need some help getting started on the command line. But you made it here, so let’s run through a number of ways that you can get comfortable and up to speed fairly quickly.

Man pages

Every Linux command should have a "man page" (i.e., manual page) – an explanation of what the command does, how it works and the options that you can use to specify what you want the command to show you. For example, if you wanted to see the options for formatting the output of the date command, you should look at the man page with the command “man date”. It should, among other things, show you the format of the date command like this: To read this article in full, please click here
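
As a quick sketch of what “man date” leads to, the FORMAT sequences it documents can be combined freely on the command line (GNU date assumed):

```shell
# Combine format sequences documented in "man date" (GNU date assumed).
# LC_ALL=C forces the C locale so names come out in English.
LC_ALL=C date +"%Y-%m-%d %H:%M:%S"

# GNU date can also format an arbitrary date given with -d:
LC_ALL=C date -d "2023-07-14" +"%A"   # prints: Friday
```

Every sequence used here (%Y, %m, %d, %H, %M, %S, %A) is listed in the FORMAT section of the man page.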

5 ways to boost server efficiency

Servers can consume more than half of the energy in modern data centers, which makes server efficiency attractive to companies looking to hit carbon-neutral sustainability targets. Plus, reducing energy usage can save money. To help reach that goal, here are five ways to boost server efficiency, according to recent research from the Uptime Institute, which is focused on improving the performance, efficiency, and reliability of business-critical infrastructure.

  • Upgrade to a newer server generation. For decades, server energy efficiency has consistently improved thanks to the improved efficiency of the processors that power them.
  • Pick servers with high compute capacity as measured in number of transactions per second. Those are the most energy efficient.
  • Go for high core count. In general, efficiency improves with the number of cores, although there is some tapering off at the highest end.
  • Be aware that while a server can be more energy efficient, its actual overall power consumed (Watts) can increase even as its efficiency (transactions per second per Watt) increases.
  • Embrace power-management features in two ways: by reducing core CPU voltage and frequency as utilization decreases, and by moving unneeded cores to an idle state.

For its analysis, Uptime focused on servers that use AMD EPYC or Intel Xeon Continue reading
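
The power-management point can be explored from a shell. This is a sketch using the standard Linux cpufreq sysfs interface, which may be absent on virtual machines or platforms without driver support:

```shell
# Inspect the CPU frequency governor, one of the power-management
# knobs mentioned above. Paths are the standard Linux cpufreq sysfs
# interface; availability depends on the platform and driver.
for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
         /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors; do
  if [ -r "$f" ]; then
    printf '%s: %s\n' "$f" "$(cat "$f")"
  else
    printf '%s: not available on this system\n' "$f"
  fi
done
```

With root, `cpupower frequency-set -g powersave` (from the kernel tools packages) switches governors without writing to sysfs directly.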

AskJJX: How To Handle Rogue APs Without Getting Arrested

AskJJX: “What’s the best way to find and disable rogue APs on the network? We had an audit finding and got our hand slapped.” Ahhh, I love this question for so many reasons. First, because my answer to this today, in 2023, is very different than my answer would have been years ago. You may […]

The post AskJJX: How To Handle Rogue APs Without Getting Arrested appeared first on Packet Pushers.

How to deploy Red Hat Ansible Automation Platform on Google Cloud

This blog is co-authored by Zack Kayyali and Hicham (he-sham) Mourad

Deploying Red Hat Ansible Automation Platform

The steps below detail how to install Red Hat Ansible Automation Platform on Google Cloud from the marketplace. Before starting the deployment process, please ensure the Google Cloud account you are using to deploy has the following permissions. These IAM roles are required to deploy the Google Cloud foundation stack offering.  The foundation stack offering here refers to the base Ansible Automation Platform 2 deployment.
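
Granting those permissions can be done with the gcloud CLI. This is a hypothetical sketch only: the project ID, user, and role shown are placeholders, and the actual roles required are the ones listed for the marketplace offering, not necessarily this one:

```shell
# Hypothetical example of granting a deploying user an IAM role.
# PROJECT_ID, the user email, and the role are placeholders --
# substitute the roles the foundation stack offering actually requires.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:deployer@example.com" \
    --role="roles/compute.admin"
```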

This blog details how to deploy Ansible Automation Platform on Google Cloud, and then access the application. This deployment process will be configured to set up Ansible Automation Platform on its own Virtual Private Cloud (VPC) that it creates and manages. We also support deploying into an existing VPC.

To begin, first log into your Google Cloud account. If you have private offers, ensure that they are accepted for both the foundation and extension node offerings.

Note: 

  • The foundation offer refers to the “Red Hat Ansible Automation Platform 2 - Up to 100 Managed Nodes” marketplace item. 
  • The extension node offer refers to the “Extension Node - Ansible Automation Platform 2 - 100 Managed Continue reading

Building a WAN Impairment Device in Linux on VMware vSphere

In some scenarios it is really useful to be able to simulate a WAN with regard to latency, jitter, and packet loss, especially for those of us who work with SD-WAN and want to test our policies in a controlled environment. In this post I will describe how I built a WAN impairment device in Linux for a VMware vSphere environment and how I can simulate different conditions.

My SD-WAN lab is built on VMware vSphere using Catalyst SD-WAN with Catalyst8000v as virtual routers and on-premises controllers. The goal with the WAN impairment device is to be able to manipulate each internet connection to a router individually. That way I can simulate that a particular connection or router is having issues while other connections/routers are not. I don’t want to impose the same conditions on all connections/devices simultaneously. To do this, I have built a physical topology that looks like this:

All devices are connected to a management network that I can access via a VPN. This way I have “out of band” access to all devices and can use SSH to configure my routers with a bootstrap configuration. To avoid having to create many unique VLANs in the vSwitch, Continue reading
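
The impairment itself is typically implemented with the Linux tc netem qdisc. A minimal sketch, assuming root privileges and that `eth1` is the WAN-facing interface on the impairment VM (the interface name is an assumption, not from the post):

```shell
# Add 50ms latency with 10ms jitter and 1% packet loss (requires root;
# "eth1" is an assumed interface name):
tc qdisc add dev eth1 root netem delay 50ms 10ms loss 1%

# Inspect the current qdisc:
tc qdisc show dev eth1

# Change conditions in place rather than deleting and re-adding:
tc qdisc change dev eth1 root netem delay 200ms 50ms loss 5%

# Remove the impairment entirely:
tc qdisc del dev eth1 root
```

Because each router connection gets its own interface on the impairment device, a qdisc like this can be applied per interface to degrade one path while leaving the others clean.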