Archive

Category Archives for "Networking"

Day Two Cloud 097: Azure Cloud Networking Essentials

Today's Day Two Cloud podcast explores essential networking capabilities in Azure, including Virtual WAN, VPN gateways, availability zones, SSL termination options, connecting premises and branch offices to the cloud, and more. Our guest is Pierre Roman, Sr Cloud Ops Advocate at Microsoft. This is not a sponsored episode.

The post Day Two Cloud 097: Azure Cloud Networking Essentials appeared first on Packet Pushers.

Highly available and highly scalable Cloudflare tunnels

Starting today, we’re thrilled to announce you can run the same tunnel from multiple instances of cloudflared simultaneously. This enables graceful restarts, elastic auto-scaling, easier Kubernetes integration, and more reliable tunnels.

What is Cloudflare Tunnel?

I work on Cloudflare Tunnel, a product our customers use to connect their services and private networks to Cloudflare without poking holes in their firewall. Tunnel connections are managed by cloudflared, a tool that runs in your environment and connects your services to the Internet while ensuring that all its traffic goes through Cloudflare.

Say you have some local service (a website, an API, or a TCP server), and you want to securely expose it to the Internet using a Cloudflare Tunnel. First, download cloudflared, which is a “connector” that connects your local service to the Internet through Cloudflare. You can then connect that service to Cloudflare and generate a DNS entry with a single command:

cloudflared tunnel create --name mytunnel --url http://localhost:8080 --hostname example.com

This creates a tunnel called “mytunnel”, and configures your DNS to map example.com to that tunnel. Then cloudflared connects to the Cloudflare network. When the Cloudflare network receives an incoming request for example.com, it looks up Continue reading
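As a sketch of what this announcement enables, the named-tunnel workflow lets several machines serve one tunnel at once. The commands below follow cloudflared's documented named-tunnel flow (tunnel and hostname names are the example's; check the current docs for exact syntax):

```
# One-time setup (assumes cloudflared is installed and you have logged in):
cloudflared tunnel create mytunnel                 # create the tunnel and its credentials file
cloudflared tunnel route dns mytunnel example.com  # map example.com to the tunnel

# Then, on each of N machines or replicas, run the SAME tunnel:
cloudflared tunnel run mytunnel
```

Because every replica runs the same named tunnel, any instance can be restarted or scaled out without changing DNS, which is what makes graceful restarts and autoscaling possible.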

Multi-Cloud Connectivity and Security Needs of Kubernetes Applications

Application initiatives are driving better business outcomes, an elevated customer experience, innovative digital services, and the anywhere workforce. Organizations surveyed by VMware report that 90% of app initiatives are focused on modernization(1). Using a container-based microservices architecture and Kubernetes, app modernization enables rapid feature releases, higher resiliency, and on-demand scalability. This approach can break apps into thousands of microservices deployed across a heterogeneous and often distributed environment. VMware research also shows 80% of surveyed customers today deploy applications in a distributed model across data center, cloud, and edge(2).

Enterprises are deploying their applications across multiple clusters in the data center and across multiple public or private clouds (as an extension of on-premises infrastructure) to support disaster avoidance, cost reduction, regulatory compliance, and more.

Applications Deployed in a Distributed Model

Fig 1: Drivers for Multi-Cloud Transformation 

The Challenges in Transitioning to Modern Apps 

While app teams can quickly develop and validate Kubernetes applications in dev environments, a very different set of security, connectivity, and operational considerations awaits networking and operations teams deploying applications to production environments. These teams face new challenges as they transition to production with existing applications — even more so when applications are distributed across multiple infrastructures, clusters, and clouds. Continue reading

6 clever command-line tricks for fewer keystrokes

Linux commands offer a lot of flexibility. This post details some clever tricks that make them even more convenient to use.

Using file-name completion

You can avoid typing a full file name by typing the beginning of its name and pressing the tab key. If the string uniquely identifies a file, this completes the file name. Otherwise, you can enter another letter in the name and press tab again. You can also get a list of all files that begin with a particular string by typing the string and then hitting the tab key twice. In this example, we do both:

$ ls di<tab><tab>
diff-commands  dig.1  directory
dig.2  dimensions  disk-usage-commands
$ cd dir<tab>
$ pwd
directory

Reusing commands and changing them

Reissuing recently used commands is easy in bash. To rerun the previous command, all you have to do is type !! on the command line. You can also reissue a command with changes. If you issued the first command shown below only to find that sshd wasn't running, you could issue the second command to start it. Continue reading
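A transcript-style sketch of the history tricks described above, using the post's sshd scenario (the systemctl commands are an assumed way of checking and starting the service; output lines are illustrative):

```
$ systemctl status sshd      # check whether sshd is running
$ sudo !!                    # !! expands to the previous command, here run under sudo
sudo systemctl status sshd
$ ^status^start              # quick substitution: rerun it with "status" replaced by "start"
sudo systemctl start sshd
```

Both !! and the ^old^new quick-substitution form are standard bash history expansion, so they work in any interactive bash session with history enabled.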

Tech industry remains on a hiring spree

Overall U.S. employment figures for April may have been dismal, but not in the tech sector, which has grown steadily all year, adding 16,000 new jobs in April for a total of 60,900 so far this year. That's according to CompTIA's analysis of the U.S. Bureau of Labor Statistics' (BLS) latest Employment Situation Summary, released last week. The U.S. created just 266,000 new jobs in April, when economists surveyed by Dow Jones and The Wall Street Journal had estimated 1 million. However, there are signs of tech hiring slowing. Employers across all sectors of the economy reduced their hiring of IT workers by an estimated 234,000 positions, the first decline after four consecutive months of employment gains. For the year, IT hires have increased by 72,000 positions.

Nokia Lab | LAB 8: RSVP-TE Resiliency


Hello!

Today I'm going to lab one of my favorite topics: RSVP-TE resiliency. It's a real pleasure to see how different methods of network resiliency can make your network more stable and reliable.

Please check my first lab for the background information.

Topology example

Lab tasks and questions:
  • Preparing
  • create LSP to_R6 on R1 (parameters: "totally loose" path, cspf)
  • create LSP to_R1 on R6 (parameters: "totally loose" path, cspf)
  • create an Epipe service between CPE1 and CPE6. It helps to compare convergence times in the different cases. I use Virtual PC nodes as CPE devices and simple ping as a tool. It's not a production-ready tool suite, but it's adequate for an education lab. (Depending on your CPE devices, I recommend using rapid ping or adjusting the send/receive timers; the results will be clearer.)
  • configure BFD (I use TX/RX equal to 100 ms):
  • on L3 interfaces
  • on OSPF interfaces
  • on RSVP interfaces
  • configure IP addresses on the CPEs
  • check IP connectivity between the CPEs
  • Secondary paths
    • The first method is a non-standby secondary path. Add a secondary path to the existing LSPs
    • check the secondary path's operational status
    • our goal is to investigate the reconvergence process
    • run ping on a CPE and break the primary path of any LSP (you can shut Continue reading
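For reference, the BFD tasks above might look roughly like this in SR OS classic CLI. This is a hedged sketch, not taken from the post: the interface name "to-R2" and the OSPF area are assumptions for illustration, and the 100 ms timers match the lab's stated TX/RX values:

```
# Hypothetical SR OS classic CLI sketch; interface name "to-R2" is an assumption.
# BFD session parameters on the L3 interface (100 ms TX, 100 ms RX, multiplier 3):
configure router interface "to-R2" bfd 100 receive 100 multiplier 3
# Have OSPF register as a client of that BFD session:
configure router ospf area 0.0.0.0 interface "to-R2" bfd-enable
# Have RSVP register as a client of the same session:
configure router rsvp interface "to-R2" bfd-enable
```

The idea is that one BFD session per interface is shared by its client protocols, so a link failure tears down OSPF adjacency and RSVP sessions within a few hundred milliseconds instead of waiting for protocol timers.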

Back to Basics: The History of IP Interface Addresses

In the previous blog post in this series, we figured out that you might not need link-layer addresses on point-to-point links. We also started exploring whether you need network-layer addresses on individual interfaces but didn't get very far. We'll fix that today and discover the secrets behind the IP address-per-interface design.

In the early days of computer networking, there were three common addressing paradigms:

IBM leapfrogs everyone with its 2nm chips

As TSMC charges to 5nm transistor designs and Intel struggles with 7nm, IBM has topped them all with the world's first 2-nanometer node chip. It won't come to market for four years, according to IBM, and IBM might not be the first name that comes to mind when you think of processor design, but it is the quiet power in the semiconductor world. As far as commercial chips go, IBM makes two: the Power series for its Power line of Unix and Linux servers, and the zArchitecture used in the z Series of mainframes. But through its Joint Development Alliance, IBM is partnered with just about every semiconductor vendor out there: Intel, AMD, Nvidia, TSMC, Samsung, you name it.

9 enterprise-storage startups to watch

As the enterprise edge expands to include semi-permanent remote workforces, IoT, and a range of applications such as AI and M2M, these sources generate torrents of nonstop data that must be stored indefinitely and be available in near-real-time to users and applications. Legacy storage architectures are failing to keep up with both data growth and user/application demand. While storage innovation is pushing more workloads into the cloud, many startups have found that the average enterprise is not yet ready for cloud-only storage. Legacy architectures and applications are seeing extended shelf lives due to tight IT budgets, and many enterprises still prefer to keep certain workloads on-premises.

What’s New in Calico v3.19

We’re excited to announce Calico v3.19.0! This release includes a number of cool new features as well as bug fixes. Thank you to each one of the contributors to this release! For detailed release notes, please go here. Here are some highlights from the release…

VPP Data Plane (tech-preview)

We’re very excited to announce that Calico v3.19 includes tech-preview support for FD.io’s Vector Packet Processing (VPP) data plane, joining Calico’s existing iptables, eBPF, and Windows data planes.

The VPP data plane promises high-performance Kubernetes networking with support for network policy, encryption via WireGuard or IPsec, and Maglev service load balancing.

Interested? Try it out by following the tech-preview getting started guide!

Resource Management with kubectl (tech-preview)

In previous versions of Calico, the “calicoctl” command-line tool was required to properly manage Calico API resources. In Calico v3.19, we’ve introduced a new tech-preview feature that allows you to manage all projectcalico.org API resources directly with kubectl, using an optional API server add-on.

Try it out on your cluster by following the guide!
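With the tech-preview API server add-on installed, working with Calico resources through kubectl looks roughly like this (a hedged sketch; the file name is illustrative, and resource availability depends on the add-on being deployed):

```
# List Calico resources via the Kubernetes API, using the API group to disambiguate
# Calico's NetworkPolicy from the built-in Kubernetes kind:
kubectl get networkpolicies.projectcalico.org --all-namespaces

# Create or update a Calico resource with kubectl instead of calicoctl:
kubectl apply -f my-calico-policy.yaml
```

The benefit is that Calico resources then flow through the same RBAC, audit, and GitOps tooling as every other Kubernetes object, rather than requiring a separate CLI.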

Windows Data Plane Support for containerd

Calico v3.19 introduces support for Calico for Windows users to deploy containers using containerd Continue reading