Author Archives: Sandra Henry-Stocker
The /proc file system first made its way into some Unix operating systems (such as Solaris) in the mid-1990s, promising users easier access to the kernel and to running processes. It was a very welcome enhancement -- looking and acting like a regular file system, but delivering hooks into the kernel and the ability to treat processes as files. It went well beyond what we could do with ps and other common commands for examining processes and the systems they run on.

When it first appeared, /proc took a lot of us by surprise. We were used to devices as files, but access to processes as files was new and exciting. In the years since, /proc has become more of a go-to source for process information, but it retains an element of mystery because of the incredible detail that it provides.
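As a quick taste of what /proc offers, here's a minimal look at the current shell's own entry ($$ holds the shell's process ID; the exact entries and output vary by kernel and shell):

$ ls /proc/$$
cmdline  cwd  environ  exe  fd  maps  root  stat  status  ...
$ grep Name /proc/$$/status
Name:   bash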
There are many ways to search for files on Linux systems, and the commands can be very easy or very specific -- narrowing down your search criteria to find just what you're looking for and nothing else. In today's post, we're going to examine some of the most useful commands and options for your file searches. We're going to look into:

- quick finds
- more complex search criteria
- combining conditions
- reversing criteria
- simple vs. detailed responses
- looking for duplicate files

There are actually several useful commands for searching for files. The find command may be the most obvious, but it's not the only command or always the fastest way to find what you're looking for.
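As a small sketch of the kinds of searches covered here (the paths and criteria are just examples, not from the article):

$ find ~ -name '*.log' -size +1M        # *.log files under your home directory larger than 1 MB
$ find . -type f -mtime -7 ! -user root # files changed in the last week that root doesn't own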
The Linux umask setting plays a big role in determining the permissions that are assigned to files that you create. But what's behind this variable, and how do the numbers relate to settings like rwxr-xr-x?

First, umask is a setting that directly controls the permissions assigned when you create files or directories. Create a new file using a text editor or simply with the touch command, and its permissions will be derived from your umask setting. You can look at your umask setting simply by typing umask on the command line.

$ umask
0022

Where the umask setting comes from

The umask setting for all users is generally set up in a system-wide file like /etc/profile, /etc/bashrc or /etc/login.defs -- a file that's used every time someone logs into the system. The setting can be overridden in user-specific files like ~/.bashrc or ~/.profile since these files are read later in the login process. It can also be reset on a temporary basis at any time with the umask command.
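To see the relationship in action, here's a minimal sketch: new files start from 666 (777 for directories) and the umask bits are masked off, so a 0022 umask yields rw-r--r-- files. The file names, owner and dates below are placeholders:

$ umask 022
$ touch newfile && ls -l newfile
-rw-r--r-- 1 jdoe jdoe 0 Jan 10 10:00 newfile
$ mkdir newdir && ls -ld newdir
drwxr-xr-x 2 jdoe jdoe 4096 Jan 10 10:00 newdir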
How much do you need to know about disks to successfully manage a Linux system? What commands do what? How do you make good decisions about partitioning? What kind of troubleshooting tools are available? What kind of problems might you run into? This article covers a lot of territory -- from looking into the basics of Linux file systems to sampling some very useful commands.

Disk technology

In the early days of Unix and later Linux, disks were physically large, but very small in terms of storage capacity. A 300-megabyte disk in the mid-1990s was the size of a shoebox. Today, you can get multi-terabyte disks that are the size of a slice of toast.
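For a first look at the disks and partitions on a system, commands like these are a good starting point (device names and sizes below are examples; yours will differ):

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  500G  0 disk
└─sda1   8:1    0  500G  0 part /
$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       500G  212G  288G  43% /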
On Linux systems, run levels are operational levels that describe the state of the system with respect to what services are available. One run level is restrictive and used only for maintenance; network connections will not be operational, but admins can log in through a console connection. Others allow anyone to log in and work, but maybe with some differences in the available services. This post examines how run levels are configured and how you can change the run level interactively or modify what services are available.

The default run state on Linux systems -- the one that will be used when the system starts up (unless instructed otherwise) -- is usually configured in the /etc/inittab file, which generally looks something like this:
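The sample was trimmed from this excerpt; on a typical SysV-style system, the key line names the default run level (run level 3, full multi-user, is a common choice):

id:3:initdefault:

You can check the current run level with the runlevel command (or who -r) and switch levels with telinit:

$ runlevel
N 3
$ sudo telinit 1    # drop to single-user (maintenance) mode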
It seems only a couple of decades ago that I was commonly sending out notices to my users by editing the /etc/motd file on the servers I managed. I would tell them about planned outages, system upgrades, new tools, and who would be covering for me during my very rare vacations.

Somewhere along the stretch of time since, message of the day files seem to have faded from common usage -- maybe overwhelmed by the excess of system messages, emailed alerts, texts, and other notices that have taken over. Or maybe not.

The truth is the /etc/motd file on quite a number of Linux systems has simply become part of a larger configuration of messages that are fed to users when they log in. And even if your /etc/motd file is empty or doesn't exist at all, login messages are being delivered when someone logs into a server via a terminal window -- and you have more control over what those messages are telling your users than you might realize.
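On many current distributions (Ubuntu's update-motd framework, for instance), that larger configuration assembles the login message from a directory of scripts rather than a single static file. A sketch of where to look, assuming such a setup (the message text is a placeholder):

$ cat /etc/motd
Planned outage: Saturday 02:00-04:00
$ ls /etc/update-motd.d/
00-header  10-help-text  50-motd-news  ...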
Working on the Linux command line can be a lot of fun, but it can be even more fun when you use commands that take less work on your part or display information in interesting and useful ways. In today's post, we're going to look at half a dozen commands that might make your time on the command line more profitable.

watch

The watch command will repeatedly run whatever command you give it and show you the output. By default, it runs the command every two seconds. Each successive running of the command overwrites what it displayed on the previous run, so you're always looking at the latest data.

You might use it when you're waiting for someone to log in. In this case, you would use the command "watch who" or maybe "watch -n 15 who" to have the command run every 15 seconds instead of every two seconds. The date and time will appear in the upper right-hand corner of your terminal window.
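Here's roughly what that looks like in practice (the host, user and times shown are just examples):

$ watch -n 15 who
Every 15.0s: who                     host1: Wed Jan 10 10:00:00 2024

jdoe   pts/0        2024-01-10 09:12 (192.168.1.20)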
Log rotation on Linux systems is more complicated than you might expect. Which log files are rotated, when and how often, whether or not the rotated log files are compressed, and how many instances of the log files are retained all depend on settings in configuration files.

Rotating log files is important for several reasons. First, you probably don't want older log files eating up too much of your disk space. Second, when you need to analyze log data, you probably don't want those log files to be extremely large and cumbersome. And last, organizing log files by date probably makes spotting and analyzing changes quite a bit easier (e.g., comparing last week's log data to this week's).
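Those settings live in /etc/logrotate.conf and in per-service files under /etc/logrotate.d/. A minimal sketch of such a file (the log path and values are illustrative, not from the article):

$ cat /etc/logrotate.d/example
# rotate weekly, keep four compressed copies, tolerate a missing log
/var/log/example.log {
    weekly
    rotate 4
    compress
    missingok
}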
Simply put, environment variables are variables that are set up in your shell when you log in. They are called "environment variables" because most of them affect the way your Unix shell works for you. One points to your home directory and another to your history file. One identifies your mail file, while another controls the colors that you see when you ask for a file listing. Still another sets up your default search path.

If you haven't examined your environment variables in a while, you might be surprised by how many of them are configured. An easy way to see how many have been established in your account is to run this command:

$ env | wc -l
25

The env command (or printenv) will list all of the environment variables and their values. Here's a sampling:
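The sample output was trimmed from this excerpt; typical entries look like this (the user name and values are placeholders):

$ env | head -5
SHELL=/bin/bash
HOME=/home/jdoe
HISTFILE=/home/jdoe/.bash_history
MAIL=/var/mail/jdoe
PATH=/usr/local/bin:/usr/bin:/bin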
GNU's aspell is a very useful tool for fixing potential typos in files. It not only picks out your misspellings and displays them to you, but it offers you a list of potential corrections and applies your changes as instructed. And it often remembers the fixes that you've applied.

Hopefully, you've spotted the typo in this post's image. If you had a file containing the word "appertizers," this clever utility would help you to spot and replace it.

Say you had a file named "oops" that contained this typo:

$ cat oops
Please list the appertizers in alphabeticle order.

If you asked aspell to check this file with the command "aspell check oops", it would present the file contents with the word "appertizers" highlighted and offer the list below as options for correcting the error.
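That list was also trimmed from this excerpt; aspell's interactive display looks roughly like this (the exact suggestions depend on your dictionary):

$ aspell check oops
Please list the appertizers in alphabeticle order.

1) appetizers                  2) appraisers
i) Ignore                      r) Replace
...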
On Unix systems, there are several ways to send signals to processes -- with a kill command, with a keyboard sequence (like control-C), or through your own program (e.g., using a kill call in C). Signals are also generated by hardware exceptions such as segmentation faults and illegal instructions, by timers, and by child process termination.

But how do you know what signals a process will react to? After all, what a process is programmed to do and able to ignore is another issue.

Fortunately, the /proc file system makes information about how processes handle signals (and which they block or ignore) accessible with commands like the one shown below. In this command, we're looking at information related to the login shell for the current user, the "$$" representing the current process.
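The command itself was cut from this excerpt, but the signal masks live in /proc/$$/status, so it's along these lines (the bitmask values shown are examples that vary by shell; the annotations are added here for clarity):

$ grep '^Sig' /proc/$$/status
SigPnd: 0000000000000000    <- signals pending delivery
SigBlk: 0000000000010000    <- signals currently blocked
SigIgn: 0000000000384004    <- signals the process ignores
SigCgt: 000000004b813efb    <- signals the process catches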
For many decades, the term "random numbers" meant "pseudo-random numbers" to anyone who thought much about the issue and understood that computers simply were not equipped to produce anything that was truly random.

Manufacturers did what they could, grabbing some signals from the likes of mouse movement, keyboard activity, system interrupts, and packet collisions just to get a modest sampling of random data to improve the security of their cryptographic processes. And the bad guys worked at breaking the encryption. We used longer keys and better algorithms. And the bad guys kept at it. And life went on.

But something recently changed all that. No, not yesterday or last week. But it was only back in November of last year that something called the Entropy Engine won an Oscar of Innovation award for collaborators Los Alamos National Laboratory and Whitewood Security. This Entropy Engine is capable of delivering as much as 350 Mbps of true random numbers -- sufficient to feed an entire data center with enough random data to dramatically improve all cryptographic processes.
A recent survey of IEEE engineers reveals some interesting insights into the Internet of Things (IoT) -- both challenges and expectations. Commissioned by Northeastern University Silicon Valley, the survey asked the engineers to answer nine questions about IoT development and deployment. Some of the answers might surprise you.

While still in its infancy, the IoT is poised to change our lives in very personal and meaningful ways. The visionaries are already asking if robots will someday replace soldiers, if guns will be traded for cyber-bots, and if artificial intelligence (AI) will change the way we live our daily lives.

The audience for the survey was a group of 500 IEEE members -- all in fields associated with IoT or working in IoT itself. Their areas of expertise varied from manufacturing (nearly 40% of the participants) to project management (only 0.19%).
On Unix systems, random numbers are generated in a number of ways, and random data can serve many purposes. From simple commands to fairly complex processes, the question "How random is random?" is worth asking.

EZ random numbers

If all you need is a casual list of random numbers, the RANDOM variable is an easy choice. Type "echo $RANDOM" and you'll get a number between 0 and 32,767 (the largest number that two bytes can hold).

$ echo $RANDOM
29366

Of course, this process is actually providing a "pseudo-random" number. As anyone who thinks about random numbers very often might tell you, numbers generated by a program have a limitation. Programs follow carefully crafted steps, and those steps aren't even close to being truly random. You can increase the randomness of RANDOM's value by seeding it (i.e., setting the variable to some initial value). Some just use the current process ID (via $$) for that. Note that for any particular starting point, the subsequent values that $RANDOM provides are quite predictable.
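Here's a quick illustration of that predictability (the numbers shown are examples; yours will differ, but reusing a seed always replays the same sequence):

$ RANDOM=42; echo $RANDOM $RANDOM $RANDOM
17766 11151 23481
$ RANDOM=42; echo $RANDOM $RANDOM $RANDOM
17766 11151 23481
$ RANDOM=$$; echo $RANDOM    # seeding with the process ID varies from shell to shell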