Archive

Category Archives for "Networking"

AI is changing server sales but paying off for enterprises

The adoption of artificial intelligence is changing the way servers are procured while having a quick and positive impact on firms that deploy AI technologies, according to a pair of research reports from Omdia. In its upcoming cloud and data center market report, the research firm predicts a reduction in server shipments for the first time since 2007. However, the 2007 drop was due to a global economic crisis; the current shift in server buying has a more positive spin. Omdia found that demand for compute resources remains high, but demand for more expensive servers with specialized hardware for AI model training (translation: GPUs) is being prioritized over the typical enterprise server with just a CPU.

Catalyst SD-WAN – Bootstrapping a Catalyst 8000v in a Virtual Lab

I’m rebuilding my Catalyst SD-WAN lab and thought I would give some quick pointers on how to bootstrap a Catalyst 8000v in your virtual lab. When the router first boots up, it will be in autonomous mode (non-SD-WAN mode):

Router#show version | i operating
Router operating mode: Autonomous

Configure the router to be in controller mode, which will cause it to reboot:

Router#controller-mode enable
Enabling controller mode will erase the nvram filesystem, remove all configuration files, and reload the box! 
Ensure the BOOT variable points to a valid image 
Continue? [confirm]
% Warning: Bootstrap config file needed for Day-0 boot is missing
Do you want to abort? (yes/[no]): no

To bootstrap the router, the following is needed:

  • System IP
  • Site ID
  • Organization name
  • vBond name/IP
  • IP address of tunnel interface (if not using DHCP)
  • Tunnel interface name
  • DNS server (if using name resolution)
  • On-premises root cert (if using your own certificates)
  • Certificate
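A quick way to keep these parameters straight is to template the core system stanza before pasting it into the router. Below is a minimal Python sketch of that idea; all names and values are hypothetical placeholders, not a working lab configuration.

```python
# Minimal sketch: render the system stanza of a Catalyst 8000v
# Day-0 bootstrap config from the required parameters.
# All values passed in below are hypothetical placeholders.

BOOTSTRAP_TEMPLATE = """\
system
 system-ip {system_ip}
 site-id {site_id}
 organization-name "{org_name}"
 vbond {vbond}
"""

def render_bootstrap(system_ip, site_id, org_name, vbond):
    """Return the system stanza as a string, ready to paste into config-transaction."""
    return BOOTSTRAP_TEMPLATE.format(
        system_ip=system_ip, site_id=site_id, org_name=org_name, vbond=vbond
    )

print(render_bootstrap("10.0.0.1", "100", "sd-wan-lab", "vbond.example.com"))
```

Templating like this makes it easy to stamp out consistent bootstrap configs for several lab routers that differ only in system IP and site ID.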

First, verify that the router is now in controller mode:

Router#show version | i operating
Router operating mode: Controller-Managed

Create a small bootstrap configuration with all the required parameters. Mine is below (some information redacted):

config-transaction
system
system-ip x.x.x.x
site-id xxxxxxxxxx
organization-name "sd-wan-lab-daniel"
vbond 192. Continue reading

WISP/FISP Design: Switch Centric (SWC) Topology

IP ArchiTechs switch centric core being built in the Denver DC. Dec 2018

Overview

This is an article I’ve wanted to write for a long time. In the last decade, the work we have done at iparchitechs.com on WISP/FISP network design using commodity equipment like MikroTik and FiberStore has yielded quite a few best practices and lessons learned.

While the idea of “router on a stick” isn’t new, when we first started working with WISPs/FISPs and MikroTik routers 10+ years ago, we immediately noticed a few common elements in the requests we’d get for consulting:

“I’m out of ports on my router…how do I add more?”

“I started with a single router, how do I make it redundant and keep NAT/peering working properly?”

“I have high CPU on my router and I don’t know how to add capacity and split the traffic.”

“I can’t afford Cisco or Juniper but I need a network that’s highly available and resilient.”

Coming from a telco background where a large chassis was used pretty much everywhere for redundancy, relying on links split across multiple line cards with LACP was one of my first inclinations to solve the Continue reading

NXDOMAIN

The DNS is a strange and at times surprising environment. One could take a simple perspective and claim that the aim of the DNS is to translate DNS names into IP addresses. You wouldn’t be wrong, but it’s also so much more. Most of the time when we analyse the behaviour of the DNS we look at the way in which names are resolved by the DNS infrastructure, but there is also another view of the DNS. What do we see when we look at DNS queries for names that do not exist in the DNS?
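You can observe this failure mode from any host. A minimal Python sketch using the standard library is below; note that `socket.gaierror` lumps NXDOMAIN together with other resolution failures, so distinguishing the exact DNS response code would require a dedicated DNS library.

```python
import socket

def resolves(name):
    """Return True if the name resolves to at least one address."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        # Raised for NXDOMAIN as well as other resolution failures.
        return False

# RFC 6761 reserves the .invalid TLD: queries for it are guaranteed to fail.
print(resolves("localhost"))             # usually True via /etc/hosts
print(resolves("no-such-host.invalid"))  # False
```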

Networking, security initiatives dominate IT spending priorities

Network connectivity and security are key areas for IT investment as well as potential barriers to global success, according to new research. Nearly half of CIOs claim that establishing and managing connectivity in new markets is the single most critical factor in ensuring successful global expansion, according to Expereo, which surveyed 650 large enterprise and mid-market CIOs across Asia, Europe and North America. In addition, 49% of CIOs report that their board views global connectivity as “a business-critical asset to growth.”

Audit and Compliance with Calico

In this blog post, I will be talking about audit and compliance and how to implement it with Calico. Most IT organizations are asked to meet some standard of compliance, whether internal or industry-specific. However, organizations are not always provided with the guidance to implement it. Furthermore, when guidance has been provided, it is usually applicable to a more traditional and static environment and doesn’t address the dynamic nature of Kubernetes. Existing compliance tools that rely on periodic snapshots do not provide accurate assessments of Kubernetes workloads against your compliance standards.

Getting started with audit and compliance for Kubernetes clusters

A good starting point is understanding what type of compliance requirements need to be enforced and confirming that the enforcement is successful. The next step is finding a way to easily report on the current state of your environment so you can proactively ensure you are complying with the defined standards. You should also be prepared to provide a report on demand when an audit team is investigating.

This blog is not meant to be a how-to guide for meeting HIPAA, PCI-DSS or SOC. However, it will provide you with guidance regarding these regulations so you can apply it and understand Continue reading

Hedge 185: Retrocomputing

Computers only have a history stretching back some 60 or 70 years—and yet much of that history has already been lost in the mists of time. Are we focusing so deeply on the future that we have forgotten our past? What might we learn from the past, even the recent past, and how does forgetting it impact the future? Federico Lucifredi joins Tom Ammon and Russ White to discuss some of his projects finding, repairing, and operating old personal computers.

download

transcript will be linked in a few days

If you are interested in retrocomputing, you might want to start with this Stack Exchange, the Retrocomputing Forum, or this Reddit forum.

Day Two Cloud 201: Building A Product That Uses LLMs

Today we talk about Large Language Models (LLMs) and writing products and applications that use LLMs. Our guest is Phillip Carter, Principal PM at Honeycomb.io. Honeycomb makes an observability tool for site reliability engineers, and Carter worked on a project called Query Assistant that helps Honeycomb users get answers to questions about how to use the product and get insights from it. We discuss taking natural language input and turning it into outputs to help SREs do their jobs.

The post Day Two Cloud 201: Building A Product That Uses LLMs appeared first on Packet Pushers.

CIOs, Heed On-Premises App and Infrastructure Performance

Although legacy applications and infrastructure may not be a popular topic, their significance to organizations is crucial. As cloud native technologies are poised to become a dominant part of computing, certain applications and infrastructure must remain on premises, particularly in regulated industries. Amid the buzz surrounding no-code and low-code platforms, technologists must prioritize acquiring the appropriate tools and insights to manage the availability and performance of on-premises environments. Consumer expectations for flawless digital experiences continue to rise, so companies must optimize their on-premises customer-facing applications to keep pace.

For Some, On-Premises Infrastructure Will Remain Essential

Much of the recent digital transformation across multiple industries can be attributed to a substantial shift to the cloud. Cloud native technologies are in high demand due to their ability to expedite release velocity and optimize operations with speed, agility, scale and resilience. Nevertheless, it’s easy to overlook the fact that many organizations, especially larger enterprises, still run their applications and infrastructure on premises. While this may seem surprising, it’s partially due to the time-consuming process of seamlessly and securely migrating highly intricate legacy applications to the cloud. Often, only a portion of an application may be migrated to the cloud while major components will remain Continue reading

Turbocharging host workloads with Calico eBPF and XDP

In Linux, network-based applications rely on the kernel’s networking stack to establish communication with other systems. While this process is generally efficient and has been optimized over the years, in some cases it creates unnecessary overhead that can impact overall system performance for network-intensive workloads such as web servers and databases.

XDP (eXpress Data Path) is an eBPF-based high-performance datapath inside the Linux kernel that allows you to bypass the kernel’s networking stack and directly handle packets at the network driver level. XDP can achieve this by executing a custom program to handle packets as they are received by the kernel. This can greatly reduce overhead, improve overall system performance, and improve network-based applications by shortcutting the normal networking path of ordinary traffic. However, using raw XDP can be challenging due to its programming complexity and the high learning curve involved. Solutions like Calico Open Source offer an easier way to tame these technologies.

Calico Open Source is a networking and security solution that seamlessly integrates with Kubernetes and other cloud orchestration platforms. While best known for its policy engine and security capabilities, there are many other features that can be used in an environment by installing Continue reading

China seeks to improve reliability of its chip manufacturing sector

China’s Ministry for Industry and Information Technology has said it wants to improve the country’s manufacturing capabilities, singling out the production of advanced semiconductor materials and automotive chips as areas in need of improvement. In its recently released Opinions on Manufacturing Reliability Improvement report, the department said it was putting forward a plan that would make up for the shortcomings of basic product reliability and improve the quality of core components in three industries: machinery, electronics, and automotive.

Getting help on Linux

If you’re fairly new to Linux, you might need some help getting started on the command line. But you made it here, so let’s run through a number of ways that you can get comfortable and up to speed fairly quickly.

Man pages

Every Linux command should have a "man page" (i.e., manual page) – an explanation of what the command does, how it works and the options that you can use to specify what you want the command to show you. For example, if you wanted to see the options for formatting the output of the date command, you should look at the man page with the command “man date”. It should, among other things, show you the format of the date command.

5 ways to boost server efficiency

Servers can consume more than half of the energy in modern data centers, which makes server efficiency attractive to companies looking to hit carbon-neutral sustainability targets. Plus, reducing energy usage can save money. To help reach that goal, here are five ways to boost server efficiency, according to recent research from the Uptime Institute, which is focused on improving the performance, efficiency, and reliability of business-critical infrastructure.

  • Upgrade to a newer server generation. For decades, server energy efficiency has consistently improved thanks to the improved efficiency of the processors that power them.
  • Pick servers with high compute capacity as measured in number of transactions per second. Those are the most energy efficient.
  • Go for high core count. In general, efficiency improves with the number of cores, although there is some tapering off at the highest end.
  • Be aware that while a server can be more energy efficient, its actual overall power consumed (Watts) can increase even as its efficiency (transactions per second per Watt) increases.
  • Embrace power-management features in two ways: by reducing core CPU voltage and frequency as utilization decreases, and by moving unneeded cores to an idle state.

For its analysis, Uptime focused on servers that use AMD EPYC or Intel Xeon Continue reading
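The efficiency-versus-absolute-power point is easy to miss, so here is a short sketch of the metric. The numbers are illustrative placeholders, not measured figures from the Uptime research.

```python
# Sketch of the efficiency metric: transactions per second per Watt.
# All figures below are hypothetical, for illustration only.

def efficiency(transactions_per_sec, watts):
    """Server energy efficiency in transactions per second per Watt."""
    return transactions_per_sec / watts

old_server = efficiency(50_000, 400)    # 125.0 tps/W
new_server = efficiency(150_000, 600)   # 250.0 tps/W

# The newer server draws more absolute power (600 W vs. 400 W),
# yet is twice as efficient per Watt delivered.
print(old_server, new_server)
```

This is why a fleet refresh can raise per-rack power draw even while cutting the energy cost of each transaction.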