OpenVPN has been a dominant player in the VPN space since its release in 2001. With a 23-year history, OpenVPN has proven to be a reliable and secure protocol. However, it has some downsides, particularly regarding performance and ease of use.
OpenVPN creates a secure tunnel between two endpoints using SSL/TLS for encryption. While robust, the protocol is complex and requires considerable resources to run efficiently. Setting up and managing OpenVPN can be cumbersome, especially for DevOps teams juggling multiple environments and configurations. It wouldn’t be the first time an OpenVPN server stopped working because the TLS certificates expired.
WireGuard, on the other hand, is the new kid on the block, first appearing in the mid-2010s and merged into the Linux kernel in 2020. What sets WireGuard apart from OpenVPN is its simplicity and efficiency. While OpenVPN relies on older, more complex cryptographic machinery, WireGuard uses a small set of modern primitives, such as ChaCha20-Poly1305 and Curve25519, that are both faster and more secure.
Unlike OpenVPN, WireGuard is integrated directly into the Linux kernel, meaning it operates at a lower level and with less overhead. This results in faster connection times and lower resource usage. One of the significant benefits of WireGuard is its minimal codebase — about 10% the size of OpenVPN’s — which reduces the attack surface and makes the code easier to audit.
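To see that simplicity in action, here is a minimal sketch of bringing up a WireGuard tunnel by hand on Linux. The interface name, addresses, port, and peer details are illustrative placeholders, not a production configuration:

# Generate a key pair for this machine
wg genkey | tee privatekey | wg pubkey > publickey

# Create the interface and assign it an address (values are examples)
sudo ip link add dev wg0 type wireguard
sudo ip addr add 10.0.0.1/24 dev wg0

# Configure the tunnel; <PEER_PUBLIC_KEY> and the endpoint are placeholders
sudo wg set wg0 private-key ./privatekey listen-port 51820
sudo wg set wg0 peer <PEER_PUBLIC_KEY> allowed-ips 10.0.0.2/32 endpoint vpn.example.com:51820
sudo ip link set wg0 up

Compare that handful of commands with the certificate authority, server configuration, and per-client profiles a comparable OpenVPN deployment requires.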
Unless you’ve studied for a network cert, the Open Systems Interconnection (OSI) model is probably somewhat of a mystery to you. Maybe you heard of it from a coworker, or maybe you saw it in a marketing campaign for something on AWS.
Maybe you thought “Layer 3” was just some new buzzword. Such shorthand references to the OSI model, however, can be useful if you can decode them, as they can help you understand where in your network stack a tool could fit or where to look for a problem during an incident call.
Before we get too far, let me address a point of contention. Many people will say the theoretical OSI model is outdated. The model is theoretical, true, and the real world is certainly more complex than it may lead you to believe. Its layers don’t neatly map to specific devices, and other models exist that more accurately reflect the real world, such as the Transmission Control Protocol/Internet Protocol (TCP/IP) model.
It’s useful to think of the OSI model as an abstraction that allows us to reason about the separation of concerns on a network. We use it to think through troubleshooting steps should a problem arise.
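As a rough sketch of that reasoning in practice, here is one way to walk up the layers during an incident on a Linux box; the interface, address, and URL are placeholders:

ip link show eth0                    # Layers 1-2: is the link even up?
ping -c 3 192.0.2.10                 # Layer 3: can we reach the host's IP?
ss -t '( dport = :443 )'             # Layer 4: is there an established TCP session?
curl -v https://example.com/health   # Layer 7: does the application answer?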
We’ve all been frustrated by latency, either as users of an application, or as developers building such apps.
At ScyllaDB’s annual P99 CONF, Pekka Enberg, founder and CTO of Turso, shared his favorite tips for spotting and removing latency from systems.
“Latency lurks everywhere,” said Enberg, who has also authored a book on the subject. Amazon once estimated that it loses 1% of sales for every 100ms of latency.
Enberg has thought plenty about ways of reducing latency and has boiled down his solutions into three different approaches:
Reduce data movement
The traditional way to enable things hosted in different clouds to talk to each other has been to order a physical cross-connect (a literal cable) from a colo provider that hosts the clouds’ “onramps” (private connections to the clouds’ networks) to link them.
Various software-defined alternatives have emerged in recent years. Some do essentially the same thing but virtually, while others, like typical SD-WANs, rely on things like IPsec or other encryption protocols to secure connections traversing the internet.
There are other, less mature approaches, including some cloud providers’ own intercloud connectivity services. And there’s always the option to send your cloud-to-cloud API requests over the public internet and hope for the best.
The traditional ways tend to be costly and complex to set up, requiring specialized networking knowledge. A little more than a year ago, we decided to build a cloud-to-cloud private connectivity service that would be easier to use. The goal was to give developers who are proficient in cloud but not in networking a way to create multicloud connections and manage them with familiar tools, like Terraform or Pulumi, without hosting any infrastructure in our data centers.
It was an interesting challenge, not least from…
Network Attached Storage (NAS) is a great way to build out storage for your business. Instead of relying solely on external drives, shared directories or expensive cloud storage, why not deploy a tool that was created specifically for scalable storage?
That’s where TrueNAS comes into play.
TrueNAS is a take on Linux that is purpose-built for storage and comes with all the NAS capabilities you can imagine. TrueNAS can be installed on off-the-shelf hardware (even small form-factor PCs or virtual machines), so your storage server can be tucked out of the way.
This storage solution includes features like:
User/group management
Alerts
SSH connectivity
2-Factor authentication
Storage pools
Snapshots
Disks (and disk importing)
Support for directory services such as Active Directory, LDAP, NIS, and Kerberos
Sharing via Apple Shares, Block Shares, UNIX Shares, WebDAV, and SMB
Service management
Plugins
Jails
Virtual Machines
Shell access
The installation of TrueNAS is all text-based but is incredibly simple and takes very little time. With minimal configuration work, I had an instance of TrueNAS up and running within about two minutes. The only thing you need to do is set a root password during the installation…
The .io domain was originally created for the British Indian Ocean Territory but eventually became popular with the tech sector, for obvious reasons.
Part of the reason is that ‘io’ resembles I/O (input/output), which is why the tech sector started gobbling up .io domains. Soon after the domain’s creation, there were disputes over how the profits from it were distributed. A lot of app developers use the .io domain. The New Stack uses the .io domain.
It’s everywhere.
But there’s a problem, and it’s one that could have a cascading effect within the realm of the tech sector.
What has happened is that the United Kingdom has agreed to hand the Chagos Islands, home of the British Indian Ocean Territory, over to Mauritius. If the territory ceases to exist, the .io country-code domain could eventually be retired along with it.
Back in the 1970s, if you wanted to be online, you had to be a college student, a researcher, or in the military. That was it. Joe or Jane User? Forget about it. Then, during a Chicago blizzard in 1978, a young computer scientist named Ward Christensen, along with Randy Suess, built the first bulletin board system (BBS). Commercial online services such as CompuServe had started as early as 1969, but unlike the free BBSs, those services could cost as much as $30 an hour in 1970s dollars, or $130 an hour in today’s money. Christensen had already created the XMODEM file transfer protocol in 1977. This innovative method broke binary files into packets, ensuring reliable delivery over unstable analog telephone lines. XMODEM became a cornerstone of early online file sharing and inspired numerous subsequent file transfer protocols.
While considered inefficient by today’s standards, XMODEM established key concepts that are still used in file transfers. These include breaking data into packets for transmission, using checksums or CRCs for error detection, and implementing handshaking between sender and receiver.
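As a rough present-day sketch of those concepts (not XMODEM itself), the following shell loop splits a file into 128-byte packets and computes the XMODEM-style 8-bit additive checksum for each; the filename is a placeholder:

# Break the file into 128-byte packets, as XMODEM did
split -b 128 -d firmware.bin pkt_

# Sum every byte in each packet, modulo 256 (XMODEM's original checksum)
for p in pkt_*; do
  sum=$(od -An -tu1 "$p" | awk '{ for (i = 1; i <= NF; i++) s += $i } END { print s % 256 }')
  echo "$p checksum=$sum"
done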
Thanks to XMODEM, people began sharing files with one another. This, in turn, helped create…
Kubernetes recently turned 10. Expect Kubernetes to enter its rebellious phase.
It will experience awkward growth spurts (as new use cases force Kubernetes to adapt); it might go through an identity crisis (is it a platform or is it an API?); it will ask for less supervision and more independence (and rely on AI-driven tooling to require less direct human oversight).
As Kubernetes matures into adolescence, let’s consider how its networking and security circulatory systems grow and adapt. Innovation around eBPF, the technology that lets you run custom programs within the Linux (and, soon, Windows) kernel, is not stopping. Beyond networking and security (and the Tetragon project I work on), more use cases are emerging, as you will learn during KubeCon:
The keepers of the internet standards, the Internet Architecture Board (IAB), a group within the Internet Engineering Task Force (IETF), recently held the Next Era of Network Management Operations (NEMOPS) workshop to compile a list of technologies that might be useful for an internet of the future.
They did this once before; that earlier workshop produced NETCONF (RFC 6241), the Network Configuration Protocol, now widely used to install, manipulate, and delete the configuration of network devices.
RESTCONF (RFC 8040), a programmatic interface for YANG.
CORECONF (…)
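For a flavor of what this family of interfaces looks like in practice, here is a hypothetical RESTCONF request that reads an interface’s configuration as JSON. The device address, credentials, and interface name are placeholders; ietf-interfaces is the standard YANG module for interface data:

# Fetch one interface's configuration from a RESTCONF-capable device
curl -k -u admin:admin \
  -H "Accept: application/yang-data+json" \
  "https://192.0.2.1/restconf/data/ietf-interfaces:interfaces/interface=eth0"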
A Git repository simplifies sharing code with a team. Many teams opt to go the GitHub route, but there might be an occasion when you need to spin up a quick repository that is only available to those team members working on your LAN.
When you need to deploy a Git repository on your LAN and give other team members access to it, the goal is to do it quickly and securely. Thanks to Git and Secure Shell (SSH), this isn’t nearly as challenging as you might think. And although this setup might not be an option for team members who work outside of your LAN, it’s great for a temporary repository offered to those within your company network.
How does it work? Let me show you.
What You’ll Need
To make this work, you’ll need the following:
A Linux machine with Git installed.
An SSH key pair.
A user with sudo privileges (if the minimum requirements aren’t installed).
That’s it. Let’s make some Git magic.
Installing Git
On the off-chance Git isn’t installed, here’s how you can take care of that:
Ubuntu-based distributions – sudo apt-get install git -y
Fedora-based distributions – sudo dnf install git -y
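From there, the remaining steps this tutorial builds toward amount to creating a bare repository on the LAN host and cloning it over SSH. A minimal sketch, with a hypothetical user, host address, and path:

# On the machine that will host the repository
sudo mkdir -p /srv/git
sudo git init --bare /srv/git/project.git
sudo chown -R jack:jack /srv/git/project.git   # 'jack' is a placeholder user

# On a teammate's machine on the LAN
git clone jack@192.168.1.50:/srv/git/project.git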
It was in 2022 that Meta engineers started to see the first clouds of an incoming storm, namely how much AI would change the nature — and volume — of the company’s network traffic.
“Starting 2022, we started seeing a whole other picture,” said Meta’s Sundaresan at the company’s Networking @Scale 2024 conference, held this week both virtually and at the Santa Clara Convention Center in California.
Mind you, Meta owns one of the world’s largest private backbones, a global network physically connecting 25 data centers and 85 points of presence with millions of miles of fiber optic cable, buried under both land and sea. Its reach and throughput allow someone on an Australian beach to see videos being posted by their friend in Greece nearly instantaneously.
And for the past five years, this global capacity has grown consistently by 30% a year.
Yet the growth in AI demands on the backbone is bumpy and difficult to predict.
“The impact of large clusters, GenAI, and AGI is yet to be learned,” Sundaresan said. “We haven’t yet fully flushed out what that means for the backend.”
Nonetheless, the networking team has gotten creative…
Tired of dealing with cloud providers and mulling a move to a private cloud instead? Broadcom wants you to take a look at its private cloud platform.
This week, Paul Turner, Broadcom vice president of products for VMware Cloud Foundation (VCF), made that pitch in a press briefing.
Broadcom is positioning VCF as a lower-cost, more secure alternative to public cloud computing.
Overall, the goal is to help the organization create an infrastructure that works together as a single, unified whole while supporting modern application architectures.
VMware Cloud Foundation architecture (VMware)
Big Results Moving to a Private Cloud
According to the company, a private cloud approach can result in:
Louis Ryan, CTO, Solo.io
The Istio service mesh software offers a potentially big change in how to handle Kubernetes traffic, with the introduction of an ambient mesh option.
Although the technology has been offered as an experimental feature for several releases, with the core development team taking feedback from users, this is the first release to offer the feature as a production-grade capability.
It’s an entirely new architecture, explained Idit Levine, founder and CEO of Solo.io, a core contributor to the Istio project. Once applications are decomposed into individual services, these services require a way to communicate. Hence it made sense to festoon each service with its own sidecar proxy…
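For those who want to try it, enabling ambient mode is a short exercise, assuming a cluster and a recent istioctl. This sketch follows the upstream ambient-mode documentation, so verify the flags against your Istio version:

# Install Istio with the ambient profile (no sidecar injection)
istioctl install --set profile=ambient

# Enroll a namespace in the ambient mesh by labeling it
kubectl label namespace default istio.io/dataplane-mode=ambient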
Modern computers and their users rely on network connectivity for nearly everything, including cloud-based applications, software access, data access and communication. It seems that every aspect of computing relies on networking. Linux workstations and servers are no different in this necessity than Windows or macOS systems.
One of a Linux sysadmin’s primary responsibilities is ensuring network connectivity. This requires understanding the system’s identity on the network and configuring it to participate in network data exchanges.
Linux systems have three identities on a network. Various network devices use each identity differently.
Here are the three identities with a summary of their use:
Hostname: A human-friendly name providing users and administrators with an easy way to identify a node.
IP address: A logical address routers and network configuration tools use to identify the system.
MAC address: A physical address on the network interface card (NIC) that uniquely identifies it to switches and other Layer 2 devices.
For example, a computer’s three identities might look like this:
Hostname: computer27
IP address: 192.168.2.200
MAC address: 00:1c:42:73:8d:f2
The use and function of these three network identities are assumed knowledge for this article, so be sure to review basic networking concepts if you need a refresher.
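As a quick sketch, here is one way to read each identity off a running Linux system; the interface name eth0 is a placeholder for whatever your machine reports:

hostname                           # the human-friendly name
ip -4 addr show dev eth0           # the logical IP address
cat /sys/class/net/eth0/address    # the NIC's physical MAC address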
The Secure Shell (SSH) isn’t just about allowing you to remote into servers to tackle admin tasks. Thanks to this secure networking protocol, you can also mount remote directories with the help of the SSH File System (SSHFS).
SSHFS uses SFTP (the SSH File Transfer Protocol) to mount remote directories on a local machine over an encrypted connection, which makes it far more secure than standard FTP. And once a remote directory is mounted, it can be used as if it were on the local machine.
Consider SSHFS a more secure way of creating network shares. The only difference is that SSHFS must be installed on every machine that needs to connect to the share (whereas with Samba, it only needs to be installed on the machine hosting the share).
Let’s walk through the process of getting SSHFS up and running, so you can securely mount remote directories to your local machine.
What You’ll Need
To make this work, you’ll need at least two Linux machines. These can be Ubuntu- or Fedora-based, because SSHFS is found in the standard repositories of most Linux distributions. You’ll also need a user with sudo privileges.
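To preview where this is going, here is a minimal sketch of the whole round trip, using a hypothetical user, host, and directory; package names vary slightly by distribution:

# Install SSHFS (Ubuntu-based; on Fedora try: sudo dnf install fuse-sshfs -y)
sudo apt-get install sshfs -y

# Mount a remote directory over SSH, then use it like a local folder
mkdir -p ~/remote-projects
sshfs jack@192.168.1.50:/home/jack/projects ~/remote-projects

# Unmount when finished
fusermount -u ~/remote-projects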
The vast majority of mobile applications rely on making network requests to deliver a successful user experience. However, many engineering teams do not have client-side network monitoring. Instead, they rely exclusively on backend monitoring, which means their view of network performance is from a backend perspective.
Not All Requests Make It to Your Backend Servers
Your backend can only measure the behavior of network requests that actually reach your servers. Below are a few reasons why requests would fail to make it there.
No Internet Connection
There are scenarios where it is not obvious to mobile users that they don’t have a connection. For example, a user can be connected to a WiFi access point, but the upstream connection from the access point is down or has intermittent connectivity.
Interrupted Connection
Even if you initially make a successful connection to a backend server, there’s no guarantee that the request will complete successfully. This is more common with mobile…