Will Nvidia give up on the Arm deal?

Nvidia may be about to do something it never does: give up. The chip giant is finally ready to throw in the towel on its proposed acquisition of Arm Holdings after vociferous opposition from UK regulators, according to a report from Bloomberg (paywalled). First announced in September 2020, the deal has dragged on amid almost immediate opposition from UK entities. Arm Holdings is a British company, but it is owned by Japanese tech giant SoftBank. Laden with debt, SoftBank wanted to unload Arm to someone better suited to manage the company, and Nvidia stepped forward.

Hedge 116: Schofield’s Laws of Computing

Jack Schofield, a prolific journalist covering computers and computing, developed three “laws” across his thirty years of reporting that have come to be known as Schofield’s Laws of Computing. What are these laws, and how do they apply to the modern computing landscape—especially for the network engineer? Join Tom Ammon and Russ White as they discuss Schofield’s Laws of Computing.

Protecting Holocaust educational websites

Today is International Holocaust Remembrance Day, when we commemorate the victims who were murdered by the Nazis and their accomplices.

During the Holocaust, and in the events that led up to it, the Nazis exterminated one third of the world's Jewish population. Six million Jews, along with countless other members of minority and disability groups, were murdered because the Nazis believed they were inferior.

Cloudflare's Project Galileo provides free protection to at-risk groups across the world, including Holocaust educational and remembrance websites. During the past year alone, Cloudflare mitigated over a quarter of a million cyber threats launched against Holocaust-related websites.

Antisemitism and the Final Solution

In the Second World War and the years leading up to it, antisemitism served as the foundation of racist laws and fueled violent pogroms against Jews. A tipping point was a night of violence known as Kristallnacht (the "Night of Broken Glass"). Jews and other minority groups were outlawed, dehumanized, persecuted and killed. Jewish businesses were boycotted, Jewish books were burned and synagogues were destroyed. Jews, Roma and other "enemies of the Reich" were forced into closed ghettos and concentration camps. Finally, as part of the Final Solution to the Jewish Question, Continue reading

Migrating to Cloudflare Email Routing

A few days ago, Google announced that users of the "G Suite legacy free edition" will need to switch to a paid edition before May 1, 2022, to keep their services and accounts working. Because of this, many people are now considering alternatives.

One use case for G Suite legacy was handling email for custom domains.

In September, during Birthday Week, we announced Cloudflare Email Routing. This service allows you to create any number of custom email addresses on top of the domains you already have with Cloudflare and automatically forward incoming messages to any destination inboxes you wish.

Email Routing was designed to be privacy-first, secure, powerful, and very simple to use. Also, importantly, it’s available to all our customers for free.

The closed beta allowed us to keep improving the service and make it even more robust, compliant with all the technical nuances of email, and scalable. Today we're pleased to report that we have over two hundred thousand zones testing Email Routing in production, and we started the countdown to open beta and global availability.

With Email Routing, you can effectively start receiving email on any of your domains for any number of Continue reading
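
For anyone scripting such a migration, below is a hedged sketch of creating a forwarding rule programmatically. The endpoint and payload shape follow Cloudflare's publicly documented Email Routing API rather than anything described in this post, and the zone ID, token and addresses are placeholders.

# Hedged sketch: create an Email Routing rule that forwards one custom
# address to a destination inbox. The endpoint and payload are assumptions
# based on Cloudflare's public API docs; ZONE_ID and API_TOKEN are
# placeholders, and the destination address must already be verified.
import requests

ZONE_ID = "your-zone-id"
API_TOKEN = "your-api-token"

rule = {
    "name": "forward info@ to a personal inbox",
    "enabled": True,
    "matchers": [{"type": "literal", "field": "to", "value": "info@example.com"}],
    "actions": [{"type": "forward", "value": ["destination@example.net"]}],
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/email/routing/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("success"))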

Confluent’s Q1 Updates: ‘Data Mesh vs. Data Mess’

Confluent says it will release a series of updates to its data streaming platform every quarter. This quarter, the updates consist of a number of new features built on the Apache Kafka open source distributed event streaming platform: Schema Linking, new controls to shrink cluster capacity on demand, and new fully managed Kafka connectors. The new capabilities "can make a huge difference in creating a data mesh versus a data mess," Rosanova told The New Stack.

Schema Linking gives organizations the freedom to develop without the risk of damaging production. "Dev and prod generally don’t talk to one another — because production environments are so sensitive, you don’t want to give everyone access," Rosanova said. With Schema Linking, which is built on top of Cluster Linking, schemas can be shared and kept in sync in real time across teams, organizations and environments, such as hybrid and multicloud environments. "This is far more scalable and efficient compared to workarounds I’ve seen where people are literally sharing schemas through spreadsheets," Rosanova said.

Much verbiage is devoted to scaling up, but dynamically scaling back down to save resources is often not addressed. As Rosanova noted, organizations maintain high availability by beefing up their capacity to handle spikes in traffic and avoid downtime. "We added a simple, self-service way to scale back capacity so customers no longer have to worry about wasting resources on capacity they don’t use. These clusters also automatically rebalance your data every time you scale up or down," Rosanova said. "This solves the really hard challenge of rebalancing workloads while they are running. It’s like changing the tires on a moving car. Now you can optimize data placement without disrupting the real-time flow of information."

New Connectors

Confluent's new release features more than 50 managed connectors for Confluent Cloud. The idea behind Confluent's Apache Kafka connectors is to facilitate data streaming between Kafka and whatever data sources and sinks an organization selects. In the last six months, Confluent more than doubled the number of managed connectors it offers, Rosanova said. "Once one system is connected, two more need to be added, and so on," he said. "We are bringing real-time data to traditional, non-real-time places to quickly modernize companies’ applications. This is a significant need that continues to grow."

Kafka has emerged as a leading data streaming platform, and Confluent continues to evolve with it, Rosanova said. "We are improving what businesses can accomplish with Kafka through these new capabilities. Real-time data streaming continues to play an important role in the services and experiences that set organizations apart," Rosanova said. "We want to make real-time data streaming within reach for any organization and are continuing to build a platform that is cloud native, complete, and available everywhere."

Confluent's connector list now includes:

Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift
Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google BigTable
Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake

Additionally, Confluent has improved access to popular tools for network monitoring. The platform now offers integrations with Datadog and Prometheus. "With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use," according to a blog post.
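
Rosanova's spreadsheet remark is about keeping schemas in sync by hand. As a rough sketch of that manual chore (this is not the Schema Linking feature itself, which runs server-side on top of Cluster Linking; the registry URLs are placeholders), here is what one-shot schema copying looks like with the confluent-kafka Python client:

from confluent_kafka.schema_registry import SchemaRegistryClient

# Placeholder endpoints for a dev and a prod Schema Registry.
dev = SchemaRegistryClient({"url": "https://dev-sr.example.com"})
prod = SchemaRegistryClient({"url": "https://prod-sr.example.com"})

# Copy the latest version of every dev subject into prod. Schema Linking
# does this kind of propagation continuously and server-side; a script
# like this has to be re-run (or scheduled) to stay in sync.
for subject in dev.get_subjects():
    latest = dev.get_latest_version(subject)
    schema_id = prod.register_schema(subject, latest.schema)
    print(f"synced {subject} v{latest.version} -> id {schema_id}")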

The Next Frontier in AI Networking

The rapid arrival of real-time gaming, virtual reality and metaverse applications is changing the way networks, compute, memory and interconnect I/O interact for the next decade. As metaverse applications evolve, the network needs to adapt to ten times the growth in traffic, connecting hundreds of processors with trillions of transactions and gigabits of throughput. AI is becoming more meaningful as distributed applications push the envelope of predictable scale and performance of the network. A common characteristic of these AI workloads is that they are both data- and compute-intensive. A typical AI workload involves a large sparse matrix computation distributed across tens or hundreds of processors (CPUs, GPUs, TPUs, etc.), with intense computation for a period of time. Once the data from all peers is received, it can be reduced or merged with the local data, and then another cycle of processing begins.
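
As a minimal sketch of that compute-then-merge cycle, here is the pattern expressed with mpi4py, using a dense matrix as a stand-in for the sparse computation; sizes and iteration counts are illustrative only:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Each peer holds a local shard of the problem (dense here for brevity).
local = np.random.rand(1024, 1024)

for cycle in range(10):
    # Compute phase: intense local work on this processor.
    result = local @ local.T
    merged = np.empty_like(result)
    # Communication phase: every peer contributes its result and receives
    # the element-wise sum. This collective exchange is what drives the
    # traffic growth and interconnect demands described above.
    comm.Allreduce(result, merged, op=MPI.SUM)
    # Merge with the local data, then begin another cycle of processing.
    local = merged / comm.Get_size()

Launched under an MPI runner (for example, mpirun -np 8 python cycle.py), the collective exchange repeats every cycle, which is why interconnect performance, not just raw compute, bounds these workloads.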

Raspberry Pi bluetooth console

Sometimes you want to connect to a Raspberry Pi on the console over bluetooth. Likely because you screwed something up with the network or firewall settings.

You could plug in a screen and keyboard, but that’s a hassle. And maybe you didn’t prepare the Pi to force the monitor to be on even if it’s not connected at boot. Then it just doesn’t work.

Even more of a hassle is to plug in a serial console cable into the GPIO pins.

But modern Raspberry Pis have bluetooth. So let's use that!

Setting up the service on the raspberry pi

Create /etc/systemd/system/bluetooth-console.service with this content:

[Unit]
Description=Bluetooth console
# Depend on and order after bluetooth.service (but see the caveat below:
# systemd can still start this too early).
After=bluetooth.service
Requires=bluetooth.service

[Service]
# Wait for a connection on RFCOMM channel 1, then run a login prompt
# (getty) on /dev/rfcomm0 at 115200 baud.
ExecStart=/usr/bin/rfcomm watch hci0 1 getty rfcomm0 115200 vt100
# Retry forever: early attempts may fail before bluetooth is ready.
Restart=always
RestartSec=10
StartLimitIntervalSec=0

[Install]
WantedBy=multi-user.target

This sets up a console on bluetooth channel 1 with a login prompt. But it doesn't work yet. Apparently setting After, Requires, and even Requisite doesn't prevent systemd from running this before bluetooth is set up (timestamps in the logs don't lie). Hence the restart stuff.

I also tried setting ExecStartPre / ExecStartPost there to enable Bluetooth discoverability, since something else in the boot process seems to turn it back off if I set it Continue reading
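
On the client side, here is a rough sketch of connecting from a Linux laptop, assuming bluez is installed, the service above is enabled (e.g. with systemctl enable --now bluetooth-console.service), and the Pi has already been paired (in bluetoothctl: "scan on" to find it, then "pair"). The address is a placeholder for your Pi's bluetooth MAC.

sudo rfcomm connect 0 B8:27:EB:XX:XX:XX 1 &   # binds /dev/rfcomm0 to channel 1
screen /dev/rfcomm0 115200                    # attach; a login prompt appears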

How to inventory server storage with PowerShell

Making inventories of computer storage, particularly on servers, is complex due to the number of factors involved. There might be multiple physical media devices, each of which contains multiple logical volumes. Volumes could span multiple disks with hardware- or software-based RAID configurations. Each volume could be configured with its own drive letter, and folders throughout the file system could be shared on the network.

Those inventories are important because gathering data on physical storage media can identify what type of storage is available and what physical storage capacity servers have. PowerShell can help with those inventories, particularly the Get-PhysicalDisk cmdlet, which uses Windows Management Instrumentation (WMI) under the covers. Get-PhysicalDisk queries the MSFT_PhysicalDisk WMI class; the class itself provides numeric values for things like MediaType and BusType, while the cmdlet returns descriptive text values.
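
As a short sketch of that difference, the following compares the cmdlet's friendly output with the raw class; the SizeGB column is just an illustrative convenience, not part of the cmdlet's default output.

# Friendly values from the cmdlet: MediaType and BusType come back as text.
Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, BusType,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 1) } } |
    Format-Table -AutoSize

# The same data straight from the MSFT_PhysicalDisk class: MediaType and
# BusType are numeric codes here (for MediaType, 3 = HDD and 4 = SSD).
Get-CimInstance -Namespace root\Microsoft\Windows\Storage -ClassName MSFT_PhysicalDisk |
    Select-Object FriendlyName, MediaType, BusType, Size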

Can A Leaner IBM Be Mean Enough To Grow In The Datacenter?

The company was named International Business Machines for a reason, and over the several decades that IBM concentrated on peddling managed services and consulting services to the largest corporations on Earth, with its Global Services behemoth representing two-thirds of its revenues, the company lost touch with, and took for granted, the machine part of its rich and long heritage.

Can A Leaner IBM Be Mean Enough To Grow In The Datacenter? was written by Timothy Prickett Morgan at The Next Platform.