In this Linux tip, we’re going to look at an easy way to avoid saving commands in your command history. The first thing you need to do is set your HISTCONTROL variable to the “ignorespace” setting, which tells bash to ignore any command that you begin with a space.
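Here’s a minimal sketch (type these in a bash session, or add the first line to your ~/.bashrc to make the setting permanent):

$ HISTCONTROL=ignorespace
$  echo testing          # note the leading space; this command won't be saved
$ history | tail -2      # the echo command won't appear in the output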
When preparing scripts that will run in bash, it’s often critical to be able to set up numeric variables that you can then increment or decrement as the script proceeds. The only surprising part of this is how many options you have to choose from to make the increment or decrement operation happen.

Incrementing numeric variables
First, to increment a variable, you need to set it up. While the example below sets the variable $count to 1, there is no need to start at 1.

$ count=1
This would also work:

$ count=111
Regardless of the initial setting, you can then increment your variable using any of the following commands. Just replace $count with your variable name.
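Here is a sketch of common bash arithmetic idioms for doing this (the variable name count is carried over from the examples above):

$ ((count++))            # arithmetic evaluation
$ ((count+=1))           # += works as well
$ count=$((count+1))     # arithmetic expansion
$ let count++            # the let built-in

Decrementing works the same way; just swap ++ for -- or + for -.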
A traditional backup starts with an initial full backup, followed by a series of incremental or cumulative incremental backups (also known as differential backups). After some period of time, you perform another full backup and more incremental backups. The advent of disk-based backup systems, however, has given rise to the incremental forever approach, where only one full backup is performed, followed by a series of incremental backups. Let’s take a look at the different ways to do this.

File-level incremental forever
The first type of incremental forever backup is a file-level incremental forever backup product. This approach has actually been around for quite some time, with early versions of it available in the ‘90s. It is called a file-level incremental because the decision to back up an item happens at the file level. If anything within a file changes, its modification date (or archive bit in Windows) changes, and the entire file will be backed up. Even if only one byte of data was changed within the file, the entire file will be included in the backup.
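As a rough sketch of the idea (the paths here are hypothetical, and real backup products track this state themselves), a timestamp-based file-level incremental can be approximated on Linux with find:

$ find /data -type f -newer /var/backups/.last-run   # list whole files changed since the last run
$ touch /var/backups/.last-run                       # record the time of this run for the next one

Any file touched since the previous run is selected in full, no matter how small the change.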
Microsoft has blamed insufficient staffing and failed automation for a data center outage in Australia on August 30 that prevented users from accessing Azure, Microsoft 365, and Power Platform services for over 24 hours.

In a post-incident analysis report, Microsoft said the outage occurred due to a utility power sag in the Australia East region, which in turn “tripped a subset of the cooling units offline in one data center, within one of the Availability Zones.”

Because the cooling units were not working properly, the rise in temperature forced an automated shutdown of the data center in order to preserve data and infrastructure health, affecting compute, network, and storage services.
Arm Holdings unveiled a program that it says will simplify and accelerate the adoption of Arm Neoverse-based technology in new compute solutions. The program, called Arm Neoverse Compute Subsystems (CSS), was introduced at the Hot Chips 2023 technical conference held at Stanford University.

Neoverse is Arm’s server-side technology, meant for high performance while still offering the power efficiency that Arm’s mobile parts are known for. CSS enables partners to build specialized silicon more affordably and quickly than previous discrete IP solutions.

The first-generation CSS product, Arm CSS N2, is based on the Neoverse N2 platform first introduced in 2020. CSS N2 provides partners with a customizable compute subsystem, allowing them to focus on features like memory, I/O, and acceleration.
Intel used the Hot Chips 2023 show to introduce the next generation of its Xeon processors, codenamed Sierra Forest and Granite Rapids. This will be the first generation of Xeon processors with different core designs: the new Efficient-core (E-core) architecture and existing Performance-core (P-core) architecture. The new processors will be considerably beefier than the previous generation, codenamed Sapphire Rapids. They will feature up to 144 cores and emphasize greater memory and I/O bandwidth performance, two areas where Xeon has lagged behind AMD’s Epyc processors.
There were some cloud announcements this week at VMware Explore in Las Vegas, but AI was the star, as it has been at nearly every tech company lately. Vendors have been rushing to add generative AI to their platforms, and VMware is no exception.

The biggest AI features to emerge from the conference – VMware Private AI Foundation and Intelligent Assist – won't be fully available for months. VMware Private AI Foundation is a joint development with Nvidia that will enable enterprises to customize models and run generative AI applications on their own infrastructure. Intelligent Assist is a family of generative AI-based solutions trained on VMware’s proprietary data to automate IT tasks.
Nvidia exceeded all expectations for its second fiscal quarter of 2024 with revenue of $13.51 billion, a 101% jump from the same quarter last year. Net income came in at $6.74 billion, or $2.48 per diluted share, up 854% from a year ago and up 202% from the previous quarter.

Analysts had expected revenue of $11.04 billion with earnings per share of $2.07, according to data from Bloomberg.

And it’s all thanks to enterprise sales. Last quarter, enterprise sales accounted for 60% of total revenue. This quarter, $10.3 billion of the $13.5 billion in total revenue – 76% – came from data center sales.

“A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “Nvidia GPUs connected by our Mellanox networking and switch technologies and running our CUDA AI software stack make up the computing infrastructure of generative AI.”
A dramatic disagreement in the enterprise Linux community has some distributions scrambling to keep their code compatible with Red Hat, as the acknowledged biggest player in the space cracks down on source code distribution.

The core issue is the existence of several “downstream” Linux distributions based on Red Hat Enterprise Linux. Those distributions were historically based on CentOS, a free RHEL clone developed originally for the purposes of testing and development. The downstream distributions in question, however, are supported by companies like CIQ and Oracle, which sell support services for their “clones” of RHEL. This has led to a long-running tension between those companies and Red Hat, whose supporters argue that the downstream companies are simply repackaging Red Hat’s work for profit, while detractors say that Red Hat is violating the spirit – if not, technically, the law – of open source.
Bugs emerged earlier this month in Intel and AMD processors that affect both client and server processors across multiple generations. Fortunately, the bugs were found some time ago, and researchers kept quiet about them while fixes were developed.

Google researchers found the Intel bug, known as Downfall (CVE-2022-40982), and reported it to Intel more than a year ago, so both parties had plenty of time to work things out. The Downfall bug exploits a flaw in the "Gather" instruction that affected Intel CPUs use to grab information from multiple places in a system's memory. A Google researcher created a proof-of-concept exploit that could steal encryption keys and other kinds of data from other users on a given server.
One of the first things Linux users need to learn is how to move around the Linux file system and, eventually, how to make it even easier to move around the file system. This post describes both the basic commands you need and some smart moves to make navigating easier.

Absolute and relative paths
Before we get moving, it’s important to understand the difference between absolute paths (like /home/jdoe) and relative paths (like images/photos and ..). Absolute paths always begin with a / that, of course, represents the base of the file system. If the specified path doesn’t start with a /, it’s relative. Here are some examples of both relative and absolute paths:
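$ cd /home/jdoe          # absolute: starts at the root of the file system
$ cd images/photos       # relative: starts from your current location
$ cd ..                  # relative: moves up one level
$ pwd                    # displays your current (absolute) location

(These particular directories are just examples; substitute paths that exist on your system.)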
The Linux more command is a fairly obvious command to use when you want to scan through a text file a screen at a time, but there still might be quite a few things you don’t know about this command. For one thing, you don’t have to start at the top of the file if you don’t want to. Add an argument like +20 and you will start with the 20th line in the file, using a command like the one shown below.

$ more +20 myfile
Note that the more command automatically adjusts itself to the number of lines in your terminal window. In addition, the last line displayed will not be a line from the file by default, but an indication of what percentage of the text has been displayed thus far – at least if there’s more text to follow. It will look like this:
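--More--(16%)

(The 16% here is just an example; the number reflects how much of the file you’ve seen so far. Press the space bar to display the next screenful or q to quit.)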
Micron has introduced memory expansion modules that support the 2.0 generation of Compute Express Link (CXL) and come with up to 256GB of DRAM running over a PCIe x8 interface.

CXL is an open interconnect standard with wide industry support, meant to serve as a connection between machines that allows the direct sharing of memory contents. It is built on top of PCI Express for coherent memory access between a CPU and a device, such as a hardware accelerator, or a CPU and memory.

PCIe is normally used in point-to-point communications, such as SSD to memory, while CXL will eventually support one-to-many communication. So far, CXL is capable of simple point-to-point communication only.
Data replication has stood the test of time, providing organizations with a reliable means of safeguarding critical information for decades. Replication creates redundant copies of vital data, ensuring its availability and resiliency in case of disasters or system failures. In this article, I will explore the intricacies of data replication, examining its fundamental components, types, and potential limitations.

Data replication starts with the selection of a source volume or filesystem that needs protection. This source volume might be a virtual disk, often referred to as a LUN (logical unit number), sourced from a storage array or volume manager. It may also take the form of a filesystem. Replication can occur either at the block level, a common practice due to its efficiency, or at the filesystem level, although the latter tends to be less favored because of its relatively inferior performance.
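As a rough illustration (a minimal sketch; the host standby.example.com and the /data paths are hypothetical), filesystem-level replication can be approximated with rsync:

$ rsync -a --delete /data/ standby.example.com:/data/   # mirror the tree, removing files deleted at the source

Block-level replication, by contrast, is typically performed by the storage array or volume manager beneath the filesystem rather than by a file-copying tool.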
Linux shells like bash have a convenient way of remembering commands that you type, making it easy to run them again without having to retype them. Just use the history command (which is a bash built-in), then type an exclamation point followed by the number shown in front of the command you want to rerun in the history output. Alternatively, you can back up to that command by pressing the up arrow key as many times as needed to reach it and then press return.

Don’t forget, though, that you can also set up commands you are likely to use often as aliases by adding a line like this to your ~/.bashrc file so that you don’t need to search for them in your command history. Here’s an example:
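alias update='sudo apt update && sudo apt upgrade'

(The alias name and command here are just an illustration; any name and command will do. After adding a line like this, run source ~/.bashrc or open a new terminal, and typing update will run both commands.)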
ECL has announced what it says will be the world’s first modular, sustainable, off-grid data center that uses hydrogen as its primary power source, promising carbon-neutral performance and 99.9999% uptime.

Modular data centers are designed to go together like building blocks, allowing companies to start small and grow as their capacity needs increase. The ECL data centers will come in 1 megawatt blocks.

ECL's data-center-as-a-service offering is geared primarily to mid-sized data center operators, as well as large companies with a mix of cloud and on-premises IT environments. The company claims its data centers will have a total cost of ownership that's two-thirds that of a traditional colocation data center environment when measured over five years.
Three of Red Hat’s chief enterprise Linux competitors are banding together to create an alternative to Red Hat-based software, after the company made changes to its terms of use earlier this summer that make it more difficult to access its source code.

Oracle, SUSE, and CIQ, in a joint statement issued Thursday, said that the new Open Enterprise Linux Association will “encourage the development” of Linux distributions compatible with Red Hat Enterprise Linux by providing free access to source code.

“With OpenELA, CIQ, Oracle and SUSE join forces with the open source community to ensure a stable and resilient future for both upstream and downstream communities to leverage Enterprise Linux,” said CIQ CEO Gregory Kurtzer in the statement.
As it previewed in March, IBM is set to deliver an AI-infused, hybrid-cloud-oriented version of its z/OS mainframe operating system.

Set for delivery on Sept. 29, z/OS 3.1 grows IBM’s AI portfolio, letting customers securely deploy AI applications co-located with z/OS applications and data. It also adds a variety of new features, such as container extensions for Red Hat and Linux applications, that better support hybrid cloud applications on the Big Iron.

In this release of the mainframe’s OS, AI support is implemented in a feature package called AI System Services for IBM z/OS version 1.1, which lets customers build an AI framework that IBM says is designed to support initial and future intelligent z/OS management capabilities.