Starting today, we’re thrilled to announce you can run the same tunnel from multiple instances of cloudflared simultaneously. This enables graceful restarts, elastic auto-scaling, easier Kubernetes integration, and more reliable tunnels.
What is Cloudflare Tunnel?
I work on Cloudflare Tunnel, a product our customers use to connect their services and private networks to Cloudflare without poking holes in their firewall. Tunnel connections are managed by cloudflared, a tool that runs in your environment and connects your services to the Internet while ensuring that all its traffic goes through Cloudflare.
Say you have some local service (a website, an API, or a TCP server), and you want to securely expose it to the Internet using a Cloudflare Tunnel. First, download cloudflared, which is a “connector” that connects your local service to the Internet through Cloudflare. You can then connect that service to Cloudflare and generate a DNS entry with a single command:
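The excerpt doesn't show the command itself, so here is a rough sketch of the cloudflared flow; the tunnel name, hostname, and local port are placeholders:

```
# One-time setup: authenticate and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create mytunnel

# Single command that maps example.com to the tunnel in DNS
cloudflared tunnel route dns mytunnel example.com

# Run the tunnel, proxying traffic to a local service
cloudflared tunnel run --url http://localhost:8000 mytunnel
```

With the change announced here, that last command can now be run from several machines at once against the same tunnel.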
This creates a tunnel called “mytunnel”, and configures your DNS to map example.com to that tunnel. Then cloudflared connects to the Cloudflare network. When the Cloudflare network receives an incoming request for example.com, it looks up Continue reading
Application initiatives are driving better business outcomes, an elevated customer experience, innovative digital services, and the anywhere workforce. Organizations surveyed by VMware report that 90% of app initiatives are focused on modernization(1). Using a container-based microservices architecture and Kubernetes, app modernization enables rapid feature releases, higher resiliency, and on-demand scalability. This approach can break apps into thousands of microservices deployed across a heterogeneous and often distributed environment. VMware research also shows 80% of surveyed customers today deploy applications in a distributed model across data center, cloud, and edge(2).
Enterprises are deploying their applications across multiple clusters in the data center and across multiple public or private clouds (as an extension of on-premises infrastructure) to support disaster avoidance, cost reduction, regulatory compliance, and more.
Fig 1: Drivers for Multi-Cloud Transformation
The Challenges in Transitioning to Modern Apps
While app teams can quickly develop and validate Kubernetes applications in dev environments, a very different set of security, connectivity, and operational considerations awaits networking and operations teams deploying applications to production environments. These teams face new challenges as they transition to production with existing applications — even more so when applications are distributed across multiple infrastructures, clusters, and clouds. Continue reading
Linux commands offer a lot of flexibility. This post details some ways to make them even more convenient to use with a few clever tricks.

Using file-name completion
You can avoid typing a full file name by typing the beginning of its name and pressing the tab key. If the string uniquely identifies a file, doing this will complete the filename. Otherwise, you can enter another letter in the name and press tab again. However, you can also get a list of all files that begin with a particular string by typing the string and then hitting the tab key twice. In this example, we do both:

$ ls di<tab><tab>
diff-commands dig.1 directory
dig.2 dimensions disk-usage-commands
$ cd dir<tab>
$ pwd
directory
[Find out how MINIX was used as the inspiration for Linux.]
Reusing commands and changing them
Reissuing recently used commands is easy in bash. To rerun the previous command, all you have to do is type !! on the command line. You can also reissue a command with changes. If you issued the first command shown below only to find that sshd wasn't running, you could issue the second command to start it. Continue reading
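The excerpt cuts off before showing the two commands, so the following transcript is an illustrative guess at the sshd scenario, using bash's history-expansion syntax:

```
$ service sshd status      # turns out sshd isn't running
$ ^status^start^           # rerun the previous command with "status" replaced by "start"
service sshd start
```

The `^old^new^` shorthand rewrites and reruns the most recent command; plain `!!` reruns it verbatim, and `sudo !!` reruns it with elevated privileges.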
Overall U.S. employment figures for April may have been dismal, but not in the tech sector, which has grown steadily all year, adding 16,000 new jobs in April for a total of 60,900 so far this year. That's according to CompTIA's analysis of the U.S. Bureau of Labor Statistics' (BLS) latest Employment Situation Summary. The overall jobs numbers, which came out last week, fell far short of expectations: the U.S. created just 266,000 new jobs in April, when economists surveyed by Dow Jones and The Wall Street Journal had estimated 1 million.
However, there are signs of tech hiring slowing. Employers across all sectors of the economy reduced their hiring of IT workers by an estimated 234,000 positions, the first decline after four consecutive months of employment gains. For the year, IT hires have increased by 72,000 positions.
Today I'm going to lab one of my favorite topics: RSVP-TE resiliency. It's a real pleasure to see how different methods of network resiliency can make your network more stable and reliable.
create LSP to_R6 on R1 (parameters: "totally loose" path, cspf)
create LSP to_R1 on R6 (parameters: "totally loose" path, cspf)
create an Epipe service between CPE1 and CPE6. It helps to compare convergence times in the different cases. I use Virtual PC nodes as CPE devices and a simple ping as the measurement tool. It's not a production-ready tool suite, but it's adequate for an education lab. (Depending on your CPE devices, I recommend using rapid ping or adjusting the send/receive timers; the results will be clearer.)
configure BFD (I use TX/RX intervals of 100 ms):
on L3 interfaces
on OSPF interfaces
on RSVP interfaces
configure IP addresses on CPE
check IP connectivity between CPE
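On Nokia SR OS (which the Epipe terminology suggests this lab uses), the BFD steps above might look roughly like the following sketch; the interface name "to-R2" is a placeholder, and the syntax should be checked against your release:

```
configure router interface "to-R2" bfd 100 receive 100 multiplier 3
configure router ospf area 0.0.0.0 interface "to-R2" bfd-enable
configure router rsvp interface "to-R2" bfd-enable
```

The first line sets 100 ms transmit/receive intervals on the L3 interface; the other two enable BFD-triggered failure detection for the OSPF adjacency and the RSVP session.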
Secondary paths
The first method is a non-standby secondary path. Add a secondary path to the existing LSPs
check the secondary path's operational status
our goal is to investigate the reconvergence process
run ping on the CPE and break the primary path of any LSP (you can shut Continue reading
In the previous blog post in this series, we figured out that you might not need link-layer addresses on point-to-point links. We also started exploring whether you need network-layer addresses on individual interfaces but didn’t get very far. We’ll fix that today and discover the secrets behind IP address-per-interface design.
In the early days of computer networking, there were three common addressing paradigms:
As TSMC charges toward 5nm transistor designs and Intel struggles with 7nm, IBM has topped them all with the world's first 2-nanometer node chip. OK, it won't come to market for four years, according to IBM, and IBM might not be the first name that comes to mind when you think of processor design, but it is the quiet power in the semiconductor world.
As far as commercial chips go, IBM makes two: the Power series for its Power line of Unix and Linux servers, and the zArchitecture chips used in its z series of mainframes. But IBM also runs the IBM Joint Development Alliance, which partners with just about every semiconductor vendor out there: Intel, AMD, Nvidia, TSMC, Samsung, you name it.
As the enterprise edge expands to include semi-permanent remote workforces, IoT, and a range of applications like AI and M2M, these edge sources generate torrents of nonstop data that must be stored indefinitely and be available in near-real-time to users and applications. Legacy storage architectures are failing to keep up with both data growth and user/application demand. While storage innovation is pushing more workloads into the cloud, many startups have found that the average enterprise is not yet ready for cloud-only storage. Legacy architectures and applications are experiencing extended shelf lives due to tight IT budgets, and many enterprises still prefer to keep certain workloads on-premises.
History doesn’t really repeat itself, but it surely does use a lot of synonyms and rhymes, and sometimes, if you listen very closely, you can catch it muttering to itself. …
In line with our promise last year to continue publishing incident reviews for Docker Hub, we have two to discuss from April. While many users were unaffected, it is important for us to be transparent with our community, and we hope it is both informative and instructive.
April 3rd 2021
Starting at about 07:30 UTC, a small proportion of registry requests (under 3%) against Docker Hub began failing. Initial investigation pointed towards several causes, including overloaded internal DNS services and significant and unusual load from several users and IPs. Changes were made to address all of these (scaling, blocking, etc.), and while the issue seemed resolved for several hours at a time, it kept coming back.
The issue recurred intermittently into the next day, at which point the actual root cause was determined to be under-scaled load balancers doing service discovery and routing for our applications.
In the past, the bottleneck for the load balancing system was network bandwidth on the nodes, and auto scaling rules were thus tied to bandwidth metrics. Over time and across some significant changes to this system, the load balancing application had become more CPU intensive, and thus the current auto scaling setup Continue reading
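The root cause amounts to autoscaling on a signal (bandwidth) that no longer tracked the real bottleneck (CPU). As a toy illustration of the fix, not Docker's actual system, here is the classic proportional scaling rule driven by CPU utilization instead:

```python
import math

def desired_replicas(current: int, cpu_pct: int, target_pct: int = 60,
                     min_replicas: int = 4, max_replicas: int = 40) -> int:
    """Proportional autoscaling: grow the replica count by the ratio of
    observed CPU utilization to the target utilization, then clamp."""
    raw = math.ceil(current * cpu_pct / target_pct)
    return max(min_replicas, min(max_replicas, raw))

# Ten load balancers at 90% CPU against a 60% target -> scale out to 15.
print(desired_replicas(10, 90))
```

Had the scaler watched bandwidth here, it would have seen a healthy metric and left the fleet at ten nodes even as CPU saturated, which is exactly the failure mode described above.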
We’re excited to announce Calico v3.19.0! This release includes a number of cool new features as well as bug fixes. Thank you to each one of the contributors to this release! For detailed release notes, please go here. Here are some highlights from the release…
VPP Data Plane (tech-preview)
We’re very excited to announce that Calico v3.19 includes tech-preview support for FD.io’s Vector Packet Processing (VPP) data plane, joining Calico’s existing iptables, eBPF, and Windows data planes.
The VPP data plane promises high-performance Kubernetes networking with support for network policy, encryption via WireGuard or IPsec, and Maglev service load balancing.
In previous versions of Calico, the “calicoctl” command line tool was required to properly manage Calico API resources. In Calico v3.19, we’ve introduced a new tech-preview feature that allows you to manage all projectcalico.org API resources directly with kubectl using an optional API server add-on.
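With the API server add-on installed, managing Calico resources might look like the following sketch; IPPool and GlobalNetworkPolicy are standard Calico resource kinds, and the manifest file name is a placeholder:

```
# List Calico IP pools through the Kubernetes API, no calicoctl needed
kubectl get ippools.projectcalico.org

# Apply a Calico resource from an ordinary manifest
kubectl apply -f my-global-network-policy.yaml
```

The fully qualified `<resource>.projectcalico.org` form avoids ambiguity with any similarly named CRDs in the cluster.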