Author Archives: Sandra Henry-Stocker

How log rotation works with logrotate

Log rotation on Linux systems is more complicated than you might expect. Which log files are rotated, when and how often, whether or not the rotated log files are compressed, and how many instances of the log files are retained all depend on settings in configuration files.

Rotating log files is important for several reasons. First, you probably don't want older log files eating up too much of your disk space. Second, when you need to analyze log data, you probably don't want those log files to be extremely large and cumbersome. And last, organizing log files by date probably makes spotting and analyzing changes quite a bit easier (e.g., comparing last week's log data to this week's).
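
To give a sense of what those settings look like, here is a minimal sketch of a logrotate configuration stanza; the path /var/log/myapp/*.log and the particular choices are illustrative assumptions, not settings from the article:

/var/log/myapp/*.log {
    # rotate weekly, keeping four old copies before the oldest is removed
    weekly
    rotate 4
    # compress rotated logs; tolerate a missing or empty log file
    compress
    missingok
    notifempty
}

Stanzas like this typically live under /etc/logrotate.d, where the scheduled logrotate run picks them up.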

All you need to know about Unix environment variables

Simply put, environment variables are variables that are set up in your shell when you log in. They are called “environment variables” because most of them affect the way your Unix shell works for you. One points to your home directory and another to your history file. One identifies your mail file, while another controls the colors that you see when you ask for a file listing. Still another sets up your default search path.

If you haven’t examined your environment variables in a while, you might be surprised by how many of them are configured. An easy way to see how many have been established in your account is to run this command:

$ env | wc -l
25

The env command (or printenv) will list all of the environment variables and their values. Here’s a sampling:
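
The teaser ends before the sampling itself; here is a plausible sketch of what env output looks like (the variable names are standard, but the values and ordering shown are made up):

$ env | head -4
SHELL=/bin/bash
USER=jdoe
PATH=/usr/local/bin:/usr/bin:/bin
HOME=/home/jdoe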

How to use GNU’s aspell to fix spelling errors in files

GNU's aspell is a very useful tool for fixing potential typos in files. It not only picks out your misspellings and displays them to you, but it offers you a list of potential corrections and applies your changes as instructed. And it often remembers the fixes that you've applied.

Hopefully, you’ve spotted the typo in this post’s image. If you had a file containing the word “appertizers,” this clever utility would help you to spot and replace it. Say you had a file named "oops" that contained this typo:

$ cat oops
Please list the appertizers in alphabeticle order.

If you asked aspell to check this file with the command “aspell check oops”, it would present the file contents with the word “appertizers” highlighted and offer a list of suggested replacements.
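
A quick, non-interactive way to see the same misspellings is aspell's list mode, which reads from standard input and prints only the words it doesn't recognize. For the file above, you would expect something like:

$ aspell list < oops
appertizers
alphabeticle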

Unix: Dealing with signals

On Unix systems, there are several ways to send signals to processes: with a kill command, with a keyboard sequence (like control-C), or from your own program (e.g., using the kill() system call in C). Signals are also generated by hardware exceptions such as segmentation faults and illegal instructions, by timers, and by child process termination.

But how do you know what signals a process will react to? After all, what a process is programmed to catch and what it is able to ignore is another issue.

Fortunately, the /proc file system makes information about how processes handle signals (and which they block or ignore) accessible with commands like the one shown below. In this command, we’re looking at information related to the login shell for the current user, the "$$" representing the current process.
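
The teaser cuts off before the command itself; a likely form of it pulls the signal-mask lines out of /proc/$$/status (the hexadecimal values below are illustrative and will differ on your system):

$ grep '^Sig' /proc/$$/status
SigQ:   0/15234
SigPnd: 0000000000000000
SigBlk: 0000000000010000
SigIgn: 0000000000384004
SigCgt: 000000004b813efb

Each value is a bitmask with one bit per signal: SigPnd shows pending signals, SigBlk blocked ones, SigIgn ignored ones, and SigCgt the signals the process has installed handlers for.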

True random numbers are here — what that means for data centers

For many decades, the term “random numbers” meant “pseudo-random numbers” to anyone who thought much about the issue and understood that computers simply were not equipped to produce anything that was truly random. Manufacturers did what they could, grabbing some signals from the likes of mouse movement, keyboard activity, system interrupts, and packet collisions just to get a modest sampling of random data to improve the security of their cryptographic processes.

And the bad guys worked at breaking the encryption. We used longer keys and better algorithms. And the bad guys kept at it. And life went on.

But something recently changed all that. No, not yesterday or last week. But it was only back in November of last year that something called the Entropy Engine won an Oscar of Innovation award for collaborators Los Alamos National Laboratory and Whitewood Security. This Entropy Engine is capable of delivering as much as 350 Mbps of true random numbers—sufficient to feed an entire data center with enough random data to dramatically improve all cryptographic processes.

Where is IoT headed?

A recent survey of IEEE engineers reveals some interesting insights into the Internet of Things (IoT)—both challenges and expectations. Commissioned by Northeastern University Silicon Valley, the survey asked the engineers to answer nine questions about IoT development and deployment. Some of the answers might surprise you.

While still in its infancy, the IoT is poised to change our lives in very personal and meaningful ways. The visionaries are already asking if robots will someday replace soldiers, if guns will be traded for cyber-bots, and if artificial intelligence (AI) will change the way we live our daily lives.

The audience for the survey was a group of 500 IEEE members—all in fields associated with IoT or working in IoT itself. Their areas of expertise varied from manufacturing (nearly 40% of the participants) to project management (only 0.19%).

Unix: How random is random?

On Unix systems, random numbers are generated in a number of ways, and random data can serve many purposes. From simple commands to fairly complex processes, the question “How random is random?” is worth asking.

EZ random numbers

If all you need is a casual list of random numbers, the RANDOM variable is an easy choice. Type "echo $RANDOM" and you'll get a number between 0 and 32,767 (the largest value a signed two-byte integer can hold).

$ echo $RANDOM
29366

Of course, this process is actually providing a "pseudo-random" number. As anyone who thinks about random numbers very often might tell you, numbers generated by a program have a limitation. Programs follow carefully crafted steps, and those steps aren’t even close to being truly random. You can increase the randomness of RANDOM's value by seeding it (i.e., setting the variable to some initial value). Some just use the current process ID (via $$) for that. Note that for any particular starting point, the subsequent values that $RANDOM provides are quite predictable.
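
A minimal sketch of that seeding in bash; the numbers shown are illustrative, but the repetition is not: reusing a fixed seed replays the same sequence, which is exactly the predictability being described.

$ RANDOM=$$           # seed with the shell's process ID
$ echo $RANDOM $RANDOM $RANDOM
20048 28552 17861
$ RANDOM=42           # any fixed seed gives a repeatable sequence
$ echo $RANDOM $RANDOM
17766 11151
$ RANDOM=42
$ echo $RANDOM $RANDOM
17766 11151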

Unix’s mysterious && and ||

The Unix shell’s && and || operators provide some very useful functionality, but they can be a bit mysterious, especially considering the number of options for how they are used. The most common use of these Boolean operators is in the construction of multi-conditional tests—when you want two or more conditions to be true (or any one in a set of conditions to be true) before some command is run. The && serves as a logical AND operation (requiring all conditions to be true), while the || provides a logical OR (requiring only one to be true).

Combining tests

In the script below, we’re using && to combine two very simple conditions. We won’t get output unless both conditions are true. This particular script runs through the tests twice, but only to demonstrate the two “flavors” of brackets that can be used. Note that && doesn’t work inside square brackets unless they’re doubled.
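
The script itself isn't included in the teaser; a minimal sketch of what such a two-flavor test might look like (the variable and bounds are illustrative):

#!/bin/bash
num=11
# single brackets: && joins two separate test commands
if [ $num -gt 10 ] && [ $num -lt 20 ]; then
    echo "$num is between 10 and 20"
fi
# double brackets: && works inside the brackets themselves
if [[ $num -gt 10 && $num -lt 20 ]]; then
    echo "$num is between 10 and 20"
fi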

Viewing Linux output in columns

The Linux column command makes it easy to display data in a columnar format -- often making it easier to view, digest, or incorporate into a report. While column is a command that's simple to use, it has some very useful options that are worth considering. In the examples in this post, you will get a feel for how the command works and how you can get it to format data in the most useful ways.

By default, the column command will ignore blank lines in the input data. When displaying data in multiple columns, it will organize the content by filling the left column first and then moving to the right. For example, a file containing the numbers 1 to 12 might be displayed in this order:
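
A sketch of that default, column-first fill; the exact layout depends on your terminal width, but with room for four columns you might see:

$ seq 12 | column
1     4     7     10
2     5     8     11
3     6     9     12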

How to keep Linux from hanging up on you

When you run a command in the background on a Linux system and then log out, the process you were running will stop abruptly. If, on the other hand, you start the command with nohup (short for "no hangup"), it will continue running and will store its output in a file.

Here's how this works. The nohup command instructs your process to ignore the SIGHUP signal that would normally shut it down. That allows you to leave time-consuming processes to complete on their own without you having to remain logged in. By default, the output of the command you are running will be left in a file named nohup.out so that you can find your data the next time you log in.
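
A minimal sketch; long_job.sh stands in for whatever time-consuming command you want to leave running, and the job number and PID shown are illustrative:

$ nohup ./long_job.sh &
[1] 12345
nohup: ignoring input and appending output to 'nohup.out'
$ exit

After logging back in, the accumulated output is waiting for you:

$ cat nohup.out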

Improving on history—the Linux history command, that is

The Linux history command allows users to repeat commands without retyping them and to look over a list of commands they've recently used, but that's just the obvious stuff. It is also highly configurable, allows you to pick and choose what you reuse (e.g., complete commands or portions of commands), and lets you control which commands are recorded.

In today's post, we're going to run through the basics and then explore some of the more interesting behaviors of the history command.

The basics of the Linux history command

Typing "history" and getting a list of previously entered commands is the command's most obvious use. Pressing the up arrow until you reach a command that you want to repeat and hitting enter to rerun it is next. And, as you probably know, you can also use the down arrow. In fact, you can scroll up and down your list of previously entered commands to review them or rerun them.
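
A quick sketch of those basics; the command numbers and entries below are illustrative:

$ history | tail -3
  501  df -h
  502  nohup ./long_job.sh &
  503  history | tail -3
$ !501                # rerun history entry 501
df -h

The !number syntax reruns the numbered entry, !! reruns the most recent command, and !string reruns the most recent command beginning with string.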