Archive

Category Archives for "Networking"

Migrating to Cloudflare Email Routing

A few days ago Google announced that users of the "G Suite legacy free edition" would need to switch to a paid edition before May 1, 2022, to keep their services and accounts working. Because of this, many people are now considering alternatives.

One use case for G Suite legacy was handling email for custom domains.

In September, during Birthday Week, we announced Cloudflare Email Routing. This service lets you create any number of custom email addresses on top of the domains you already have with Cloudflare and automatically forward incoming messages to whatever destination inboxes you wish.

Email Routing was designed to be privacy-first, secure, powerful, and very simple to use. Also, importantly, it’s available to all our customers for free.

The closed beta allowed us to keep improving the service, making it more robust, scalable, and compliant with the many technical nuances of email. Today we're pleased to report that we have over two hundred thousand zones testing Email Routing in production, and we've started the countdown to open beta and global availability.

With Email Routing, you can effectively start receiving email on any of your domains for any number of Continue reading
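
The excerpt describes creating custom addresses and forwarding rules. For a sense of what such a rule looks like programmatically, here is a hedged sketch against Cloudflare's Email Routing API (the endpoint and field names reflect the current public API, which may differ from the beta described above; the zone ID, token, and addresses are placeholders):

$zoneId = 'YOUR_ZONE_ID'        # placeholder
$token  = 'YOUR_API_TOKEN'      # placeholder

# One rule: match mail sent to hello@example.com and forward it to a
# destination inbox (the destination must already be verified in Email Routing).
$rule = @{
    matchers = @(@{ type = 'literal'; field = 'to'; value = 'hello@example.com' })
    actions  = @(@{ type = 'forward'; value = @('you@destination-inbox.example') })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/email/routing/rules" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' `
    -Body $rule

In practice, the Cloudflare dashboard creates the same kind of rule with a couple of clicks.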

Confluent’s Q1 Updates: ‘Data Mesh vs. Data Mess’

Confluent says it will release a series of updates to its data streaming platform every quarter. This quarter's updates consist of a number of new features built on the Apache Kafka open source distributed event streaming platform: Schema Linking, new controls to shrink cluster capacity on demand, and new fully managed Kafka connectors. The new capabilities "can make a huge difference in creating a data mesh versus a data mess," Rosanova told The New Stack.

Schema Linking gives organizations the freedom to develop without the risk of damaging production. "Dev and prod generally don't talk to one another — because production environments are so sensitive, you don't want to give everyone access," Rosanova said. With Schema Linking, which is built on top of Cluster Linking, schemas can be shared and kept in sync in real time across teams, organizations and environments, such as hybrid and multicloud environments. "This is far more scalable and efficient compared to workarounds I've seen where people are literally sharing schemas through spreadsheets," Rosanova said.

Much attention goes to scaling up, but scaling capacity back down to avoid paying for resources that sit idle is often overlooked. As Rosanova noted, organizations maintain high availability by beefing up their capacity to handle spikes in traffic and avoid downtime. "We added a simple, self-service way to scale back capacity so customers no longer have to worry about wasting resources on capacity they don't use. These clusters also automatically rebalance your data every time you scale up or down," Rosanova said. "This solves the really hard challenge of rebalancing workloads while they are running. It's like changing the tires on a moving car. Now you can optimize data placement without disrupting the real-time flow of information."

New Connectors

Confluent's new release now features over 50 managed connectors for Confluent Cloud. The idea behind Confluent's Apache Kafka connectors is to facilitate data-streaming connections with the data sources and sinks that organizations select. In the last six months, Confluent more than doubled the number of managed connectors it offers, Rosanova said. "Once one system is connected, two more need to be added, and so on," he said. "We are bringing real-time data to traditional, non-real-time places to quickly modernize companies' applications. This is a significant need that continues to grow."

Kafka has emerged as a leading data streaming platform, and Confluent continues to evolve with it, Rosanova said. "We are improving what businesses can accomplish with Kafka through these new capabilities. Real-time data streaming continues to play an important role in the services and experiences that set organizations apart," Rosanova said. "We want to make real-time data streaming within reach for any organization and are continuing to build a platform that is cloud native, complete, and available everywhere."

Confluent's connector list now includes:

Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift
Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google BigTable
Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake

Additionally, Confluent has improved access to popular monitoring tools. The platform now offers integrations with Datadog and Prometheus. "With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use," the company said in a blog post. The post Confluent's Q1 Updates: 'Data Mesh vs. Data Mess' appeared first on The New Stack.
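
As an aside on the schema-sharing point: even without Schema Linking, a schema registry exposes schemas over a plain REST API, so nobody needs spreadsheets. Here is a minimal sketch using the standard Schema Registry REST API (the registry URL and subject name are assumptions for illustration; this is not the Schema Linking API itself):

# Register a toy Avro schema under a subject so any team or environment
# can fetch it from the registry instead of passing files around.
$registry = 'http://localhost:8081'   # assumed local Schema Registry
$subject  = 'orders-value'            # hypothetical subject name

$schema = @{
    type   = 'record'
    name   = 'Order'
    fields = @(
        @{ name = 'id';     type = 'string' },
        @{ name = 'amount'; type = 'double' }
    )
} | ConvertTo-Json -Depth 5 -Compress

# The registry expects the schema as an escaped JSON string in a "schema" field.
$body = @{ schema = $schema } | ConvertTo-Json -Compress

Invoke-RestMethod -Method Post `
    -Uri "$registry/subjects/$subject/versions" `
    -ContentType 'application/vnd.schemaregistry.v1+json' `
    -Body $body

Schema Linking then keeps such subjects in sync across clusters automatically, rather than requiring a manual POST to each environment.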

The Next Frontier in AI Networking

The rapid arrival of real-time gaming, virtual reality and metaverse applications is changing the way network, compute, memory and interconnect I/O interact for the next decade. As metaverse applications evolve, the network needs to adapt to roughly tenfold growth in traffic, connecting hundreds of processors with trillions of transactions and gigabits of throughput. AI is becoming more meaningful as distributed applications push the envelope of predictable scale and performance of the network. A common characteristic of these AI workloads is that they are both data- and compute-intensive. A typical AI workload involves a large sparse matrix computation distributed across tens or hundreds of processors (CPU, GPU, TPU, etc.), with bursts of intense computation followed by an exchange of results among all peers. Once the data from all peers is received, it can be reduced or merged with the local data, and then another cycle of processing begins.

Raspberry Pi bluetooth console

Sometimes you want to connect to a machine on the console. Likely because you screwed something up with the network or firewall settings.

You could plug in a screen and keyboard, but that’s a hassle. And maybe you didn’t prepare the Pi to force the monitor to be on even if it’s not connected at boot. Then it just doesn’t work.

Even more of a hassle is to plug in a serial console cable into the GPIO pins.

But modern Raspberry Pis have bluetooth. So let's use that!

Setting up the service on the raspberry pi

Create /etc/systemd/system/bluetooth-console.service with this content:

[Unit]
Description=Bluetooth console
After=bluetooth.service
Requires=bluetooth.service

[Service]
ExecStart=/usr/bin/rfcomm watch hci0 1 getty rfcomm0 115200 vt100
Restart=always
RestartSec=10
StartLimitIntervalSec=0

[Install]
WantedBy=multi-user.target

This sets up a console on bluetooth channel 1 with a login prompt. But it doesn't work yet. Apparently setting After, Requires, and even Requisite doesn't prevent systemd from running this before bluetooth is actually set up (timestamps in the logs don't lie). Hence the restart stuff.
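
Once the file is saved, reload systemd and turn the service on. This is standard systemctl usage; the unit name matches the file created above:

sudo systemctl daemon-reload
sudo systemctl enable --now bluetooth-console.service

The --now flag both enables the unit at boot and starts it immediately.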

I also tried setting ExecStartPre / ExecStartPost there to enable Bluetooth discoverability, since something else in the boot process seems to turn it back off if I set it Continue reading

How to inventory server storage with PowerShell

Making inventories of computer storage, particularly on servers, is complex due to the number of factors involved. There might be multiple physical media devices, each of which contains multiple logical volumes. Volumes could span multiple disks with hardware- or software-based RAID configurations. Each volume could be configured with its own drive letter, and folders throughout the file system could be shared on the network. Those inventories are important because gathering data on physical storage media can identify what type of storage is available and what physical storage capacity servers have. PowerShell can help with those inventories, particularly the Get-PhysicalDisk cmdlet, which uses Windows Management Instrumentation (WMI) under the covers. Get-PhysicalDisk uses WMI to query the MSFT_PhysicalDisk class; the WMI class provides numeric values for things like MediaType and BusType, while Get-PhysicalDisk returns descriptive text values. To read this article in full, please click here
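
As a quick illustration of the cmdlet the article centers on, here is a minimal sketch that lists each physical disk with its media type, bus type, and capacity (the properties come straight from Get-PhysicalDisk; only the GB rounding is added):

# Summarize physical media on the local server.
Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, BusType,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 2) } } |
    Sort-Object FriendlyName |
    Format-Table -AutoSize

Pairing this with Get-Disk and Get-Volume covers the logical side of the same inventory: partitions, volumes, and drive letters.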

Incorrect proxying of 24 hostnames on January 24, 2022

On January 24, 2022, as a result of an internal Cloudflare product migration, 24 hostnames (including www.cloudflare.com) that were actively proxied through the Cloudflare global network were mistakenly redirected to the wrong origin. During this incident, traffic destined for these hostnames was passed through to the clickfunnels.com origin and may have resulted in a clickfunnels.com page being displayed instead of the intended website. This was our doing and clickfunnels.com was unaware of our error until traffic started to reach their origin.

API calls or other expected responses to and from these hostnames may not have responded properly, or may have failed completely. For example, if you were making an API call to api.example.com, and api.example.com was an impacted hostname, you likely would not have received the response you would have expected.

Here is what happened:

At 2022-01-24 22:24 UTC we started a migration of hundreds of thousands of custom hostnames to the Cloudflare for SaaS product. Cloudflare for SaaS allows SaaS providers to manage their customers’ websites and SSL certificates at scale - more information is available here. This migration was intended to be completely seamless, with the outcome being enhanced Continue reading

Tech Bytes: Singtel And The Cloud-Ready Network (Sponsored)

Today on the Tech Bytes podcast we talk with sponsor Singtel, a global provider of network services. We dive into the services Singtel provides, including Internet, MPLS, IP transit, and 4G/5G, and why you might want to consider Singtel for cloud connectivity. Our guest is Mark Seabrook, Global Solutions Manager at Singtel.

The post Tech Bytes: Singtel And The Cloud-Ready Network (Sponsored) appeared first on Packet Pushers.

Day Two Cloud 131: Monitoring The Cloud From The Cloud

Today's Day Two Cloud podcast delves into issues about monitoring all the things, including the notion of monitoring the cloud...from the cloud. Ned Bellavance and Ethan Banks discuss the pros and cons of DIY vs. using a service, differences between monitoring infrastructure stacks and applications, what to monitor and why, how to deal with all that data, the necessity of alerting, constructing meaningful dashboards, and more.

The post Day Two Cloud 131: Monitoring The Cloud From The Cloud appeared first on Packet Pushers.

NSX-T 3.2 Introduces Migration Coordinator’s User Defined Topology Mode

VMware NSX-T 3.2 is one of our largest releases — and it’s packed full of innovative features that address multi-cloud security, scale-out networking, and simplified operations. Check out the release blog for an overview of the new features introduced with this release.

Among those new features, let's look at one of the highlights. With this release, Migration Coordinator now supports user-defined topologies, giving customers flexibility beyond the previously supported fixed topologies. In this blog post, we'll look at the workflow for this new feature — starting with a high-level overview and then digging into the details of User Defined Topology. For more information on Migration Coordinator, check out the resource links at the end of this blog.

Migration Coordinator

Migration Coordinator is a tool that was introduced about three years ago, with NSX-T 2.4, to enable customers to migrate from NSX for vSphere to NSX-T Data Center. It's a free and fully supported tool built into NSX-T Data Center, and it is flexible, offering multiple ways to migrate based on customer requirements.

Prior to NSX-T 3.2, Migration Coordinator offered two primary options:

  1. Migrate Everything: migrate everything, from edges to compute to workloads, in an automated fashion, with a workflow that resembles an in-place upgrade on existing Continue reading