Author Archives: Sandra Henry-Stocker
Linux provides a wide variety of commands for working with files — commands that can save you time and make your work a lot less tedious.

Finding files

When you're looking for files, the find command is probably the first command to come to mind, but sometimes a well-crafted ls command works even better. Want to remember what you called that script you were working on last night before you fled the office and drove home? Easy! Use an ls command with the -ltr options. The last files listed will be the ones most recently created or updated.

$ ls -ltr ~/bin | tail -3
-rwx------ 1 shs shs  229 Sep 22 19:37 checkCPU
-rwx------ 1 shs shs  285 Sep 22 19:37 ff
-rwxrw-r-- 1 shs shs 1629 Sep 22 19:37 test2

A command like this one will list only the files that were updated today:
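The article's own command is cut off at this point. One plausible way to do it (an assumption on my part) is with GNU find's -daystart option, which makes -mtime 0 mean "modified since midnight":

```shell
# List only the regular files in a directory that were updated today.
# -daystart measures times from the start of today rather than from
# 24 hours ago, so -mtime 0 means "modified since midnight".
today_files() {
    find "$1" -daystart -maxdepth 1 -type f -mtime 0
}
# e.g.: today_files ~/bin
```

A grep of ls output against today's date would work too, but find avoids any dependence on ls date formats.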
Carriage returns go back a long way – as far back as typewriters, on which a mechanism or a lever swung the carriage that held a sheet of paper to the right so that letters were suddenly being typed on the left again. They have persevered in text files on Windows but were never used on Linux systems. This incompatibility sometimes causes problems when you're trying to process files on Linux that were created on Windows, but it's an issue that is very easily resolved.

The carriage return character, also referred to as Ctrl+M, would show up as an octal 15 if you were looking at the file with an od (octal dump) command. The characters CRLF are often used to represent the carriage return and linefeed sequence that ends lines in Windows text files. Those who like to gaze at octal dumps will spot the \r \n. Linux text files, by comparison, end with just linefeeds.
How you freeze and "thaw out" a screen on a Linux system depends a lot on what you mean by these terms. Sometimes "freezing a screen" means freezing a terminal window so that activity within that window comes to a halt. Sometimes it means locking your screen so that no one can walk up to your system and type commands on your behalf while you're fetching another cup of coffee. In this post, we'll examine how you can use and control these actions. [ Two-Minute Linux Tips: Learn how to master a host of Linux commands in these 2-minute video tutorials ]

How to freeze a terminal window on Linux

You can freeze a terminal window on a Linux system by typing Ctrl+S (hold the control key and press "s"). Think of the "s" as meaning "start the freeze". If you continue typing commands after doing this, you won't see the commands you type or the output you would expect to see. In fact, the commands will pile up in a queue and will be run only when you reverse the freeze by typing Ctrl+Q. Think of this as "quit the freeze".
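Ctrl+S and Ctrl+Q are the classic XON/XOFF flow-control characters. If you'd rather never freeze a terminal by accident, you can turn flow control off entirely with stty (a sketch; the check matters because stty only makes sense when stdin is a terminal):

```shell
# Disable XON/XOFF flow control so Ctrl+S no longer freezes the
# terminal; harmless no-op when stdin is not a terminal (e.g. a script).
if [ -t 0 ]; then
    stty -ixon
    echo "flow control disabled for this terminal"
else
    echo "stdin is not a terminal; nothing to change"
fi
```

Running stty ixon later turns it back on.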
If you've ever wished that you could line up multiple terminal windows and organize them in a single window frame, we may have some good news for you. The Linux Terminator can do this for you. No problemo!

Splitting windows

Terminator will initially open as a terminal with a single window. Once you mouse click within that window, however, it will bring up an options menu that gives you the flexibility to make changes. You can choose "split horizontally" or "split vertically" to split the window you are currently positioned in into two smaller windows. In fact, with these menu choices, complete with tiny illustrations of the resultant split (resembling = and ||), you can split windows repeatedly if you like. Of course, if you split the overall window into six or nine or more sections, you might just find that they're too small to be used effectively.
Linux just turned 28 years old. From its modest beginnings as an interesting project to the OS that now powers all 500 of the top 500 supercomputers, along with a huge variety of tiny embedded devices, its place in today's computing world is unparalleled.

I was still working with SunOS at the time that Linux was announced — a couple of years before it evolved into the System V based Solaris. The full ramifications of what it would mean to be "open source" weren't clear at the time. I was in love with Unix, and this clearly related newborn was of some interest, but not enough to draw me away from the servers I was managing and the articles I was writing in those days.
For decades, Linux users have been renaming files with the mv command. It's easy, and the command does just what you expect. Yet sometimes you need to rename a large group of files. When that is the case, the rename command can make the task a lot easier. It just requires a little finesse with regular expressions.

Unlike the mv command, rename isn't going to allow you to simply specify the old and new names. Instead, it uses a regular expression like those you'd use with Perl. In the example below, the "s" specifies that we're substituting the second string (old) for the first, thus changing this.new to this.old.
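The example itself is truncated in this excerpt. A likely shape for it, plus a portable fallback for systems without the Perl-based rename (my addition, not from the article):

```shell
# Rename every *.new file in the current directory to *.old.
# With the Perl-based rename command (named rename or prename,
# depending on the distro), that would be:
#   rename 's/\.new$/.old/' *.new
# A portable equivalent using only mv and parameter expansion:
for f in *.new; do
    [ -e "$f" ] || continue          # no matches: the glob stays literal
    mv -- "$f" "${f%.new}.old"
done
```

The loop version is slower for thousands of files but works with any POSIX shell.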
While it may not be obvious to the casual user, Linux file systems have evolved significantly over the last decade or so to make them more resistant to corruption and performance problems.

Most Linux systems today use a file system type called ext4. The "ext" part stands for "extended" and the 4 indicates that this is the 4th generation of this file system type. Features added over time include the ability to provide increasingly larger file systems (currently as large as 1,000,000 TiB) and much larger files (up to 16 TiB), more resistance to system crashes, and less fragmentation (scattering single files as chunks in multiple locations), which improves performance.
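To see which file system type a given mount actually uses on your own system (ext4 on many distros, though containers and other setups differ), df's -T option reports the type alongside the usual figures:

```shell
# Print the device and file system type backing the root file system;
# the second line of df -T output holds the values.
df -T / | awk 'NR==2 {print $1, $2}'
```

tune2fs -l /dev/<device> (run as root) goes further for ext-family file systems, listing feature flags and creation time.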
The Linux command line provides some excellent tools for determining how frequently users log in and how much time they spend on a system. Pulling information from the /var/log/wtmp file that maintains details on user logins can be time-consuming, but with a couple of easy commands, you can extract a lot of useful information on user logins.

One of the commands that helps with this is the last command. It provides a list of user logins that can go quite far back. The output looks like this:

$ last | head -5 | tr -s " "
shs pts/0 192.168.0.14 Wed Aug 14 09:44 still logged in
shs pts/0 192.168.0.14 Wed Aug 14 09:41 - 09:41 (00:00)
shs pts/0 192.168.0.14 Wed Aug 14 09:40 - 09:41 (00:00)
nemo pts/1 192.168.0.18 Wed Aug 14 09:38 still logged in
shs pts/0 192.168.0.14 Tue Aug 13 06:15 - 18:18 (00:24)

Note that the tr -s " " portion of the command above squeezes strings of blanks into single blanks; in this case, it keeps the output shown from being so wide that it would otherwise wrap.
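Since the first field of each last line is the username, a short pipeline turns that output into a per-user login count (a sketch; the filtering of reboot and wtmp bookkeeping lines is my assumption about what you'd want to exclude):

```shell
# Count logins per user from last-style output read on stdin,
# most frequent first; skips reboot records and wtmp summary lines.
count_logins() {
    awk 'NF > 1 && $1 != "reboot" && $1 != "wtmp" {print $1}' |
        sort | uniq -c | sort -rn
}
# Typical usage:  last | count_logins
```

The ac command (from the acct/psacct package) answers the related "how much time" question directly, totaling connect hours per user.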
While PDFs are generally regarded as fairly stable files, there's a lot you can do with them on both Linux and other systems. This includes merging, splitting, rotating, breaking into single pages, encrypting and decrypting, applying watermarks, compressing and uncompressing, and even repairing. The pdftk command does all this and more.

The name "pdftk" stands for "PDF tool kit," and the command is surprisingly easy to use and does a good job of manipulating PDFs. For example, to pull separate files into a single PDF file, you would use a command like this:

$ pdftk pg1.pdf pg2.pdf pg3.pdf pg4.pdf pg5.pdf cat output OneDoc.pdf

That OneDoc.pdf file will contain all five of the documents shown, and the command will run in a matter of seconds. Note that the cat option directs the files to be joined together, and the output option specifies the name of the new file.
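Going the other direction, pdftk's burst operation splits a PDF into one file per page. A sketch, guarded because pdftk may not be installed and OneDoc.pdf is just the example name from above:

```shell
# Split OneDoc.pdf into pg_0001.pdf, pg_0002.pdf, ... one per page.
if command -v pdftk >/dev/null 2>&1 && [ -f OneDoc.pdf ]; then
    pdftk OneDoc.pdf burst output pg_%04d.pdf
else
    echo "pdftk or OneDoc.pdf not available; skipping"
fi
```

burst also writes a doc_data.txt file summarizing the source document's metadata.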
Managing log files on Linux systems can be incredibly easy or painful. It all depends on what you mean by log management.

If all you mean is how you can go about ensuring that your log files don't eat up all the disk space on your Linux server, the issue is generally quite straightforward. Log files on Linux systems will automatically roll over, and the system will only maintain a fixed number of the rolled-over logs. Even so, glancing over what can easily be a group of 100 files can be overwhelming. In this post, we'll take a look at how log rotation works and at some of the most relevant log files.

Automatic log rotation

Log files rotate frequently. The current log acquires a slightly different file name, and a new log file is established. Take the syslog file as an example. This file is something of a catch-all for a lot of normal system messages. If you cd over to /var/log and take a look, you'll probably see a series of syslog files like this:
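The listing itself is truncated in this excerpt, but you can reproduce something similar on your own system (log names vary by distro; Debian and Ubuntu keep syslog, syslog.1, syslog.2.gz and so on, with older generations compressed):

```shell
# Show the most recently modified entries in /var/log, which makes
# the rotation generations easy to spot at the top of the listing.
ls -lt /var/log | head -10
```

For the compressed generations, zless and zgrep let you read and search without uncompressing first.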
Linux built-ins are commands that are built into the shell, much like shelves that are built into a wall. You won't find them as stand-alone files the way standard Linux commands are stored in /usr/bin, and you probably use quite a few of them without ever questioning how they're different from commands such as ls and pwd.

Built-ins are used just like other Linux commands. They are likely to run a bit faster than similar commands that are not part of your shell. Bash built-ins include commands such as alias, export and bg. As you might suspect, because built-ins are shell-specific, they won't be supplied with man pages. Ask man to help with bg and you'll see something like this:
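The man output is cut off in this excerpt, but you can see the distinction directly with the type command (itself a built-in):

```shell
# type reveals whether a name is a built-in, an alias, or an
# external program found on the PATH.
type cd     # cd is a shell builtin
type ls     # a path such as /usr/bin/ls (or an alias on many systems)
# For built-ins, bash's own help command stands in for the missing
# man page, e.g.:  help bg
```

help with no arguments lists every built-in the running bash knows about.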
User groups play an important role on Linux systems. They provide an easy way for select groups of users to share files with each other. They also allow sysadmins to manage user privileges more effectively, since they can assign privileges to groups rather than to individual users.

While a user group is generally created whenever a user account is added to a system, there's still a lot to know about how groups work and how to work with them.

One user, one group?

Most user accounts on Linux systems are set up with the user and group names the same. The user "jdoe" will be set up with a group named "jdoe" and will be the only member of that newly created group. The user's login name, user id, and group id will be added to the /etc/passwd and /etc/group files when the account is added, as shown in this example:
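The example itself is truncated here, but the same information can be pulled for any account with id and a grep of the two files (shown for root, since that account exists on virtually every system):

```shell
# Show a user's UID, primary GID and supplementary groups, then the
# matching entries in the passwd and group files.
id root
grep '^root:' /etc/passwd
grep '^root:' /etc/group
```

The fourth field of the /etc/passwd entry is the numeric group id; the matching /etc/group line maps it back to a name and lists any additional members.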
IBM's acquisition of Red Hat for $34 billion is now a done deal, and statements from the leadership of both companies sound extremely promising. But some in the Linux community have expressed concern.

Questions being asked by some Linux professionals and devotees include: Will Red Hat lose customer confidence now that it's part of IBM and not an independent company? Will IBM continue putting funds into open source after paying such a huge price for Red Hat? Will they curtail what Red Hat is able to invest? Both companies' leaders are saying all the right things now, but can they predict how their business partners and customers will react as they move forward? Will their good intentions be derailed?

Part of the worry simply comes from the size of this deal. Thirty-four billion dollars is a lot of money. This is probably the largest cloud computing acquisition to date. What kind of strain will that price tag put on how the new IBM functions going forward? Other worries come from the character of the acquisition – whether Red Hat will be able to continue operating independently and what will change if they cannot.
In the past few years, edge computing has been revolutionizing how some very familiar services are provided to individuals like you and me, as well as how services are managed within major industries. Try to get your arms around what edge computing is today, and you might just discover that your arms aren't nearly as long or as flexible as you'd imagined. And Linux is playing a major role in this ever-expanding edge.

One reason why edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges – depending on what compute features are needed. He suggests that we can think of the edge as something of a continuum of capabilities, with the problem being resolved determining where along that particular continuum any edge solution will rest.
Linux debugging has taken a giant step forward with the release of Live Recorder 5.0 from Undo. Just released on Wednesday, this product makes debugging on multi-process systems significantly easier. Based on flight recorder technology, it delves deeply into processes to provide insight into what's going on within each one. This includes memory, threads, program flow, service calls and more. To make this possible, Live Recorder 5.0's record, replay and debugging capabilities have been enhanced with the ability to:

Record the exact order in which processes altered shared memory variables. It is even possible to zero in on specific variables and skip backward to the last line of code in any process that altered the variable.
Expose potential defects by randomizing thread execution to help reveal race conditions, crashes and other multi-threading defects.
Record and replay the execution of individual Kubernetes and Docker containers to help resolve defects faster in microservices environments.

The Undo Live Recorder enables engineering teams to record and replay the execution of any software program -- no matter how complex -- and to diagnose and fix the root cause of any issue in test or production.
While not nearly as commonly seen on Linux systems, library injections (involving shared object files on Linux) are still a serious threat. In interviewing Jaime Blasco from AT&T's Alien Labs, I've become more aware of how easily some of these attacks are conducted. In this post, I'll cover one method of attack and some ways that it can be detected. I'll also provide some links with more details on both attack methods and detection tools. First, a little background.

Shared library vulnerability

Both DLL and .so files are shared library files that allow code (and sometimes data) to be shared by various processes. Commonly used code might be put into one of these files so that it can be reused rather than rewritten many times over for each process that requires it. This also facilitates management of commonly used code.
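One simple place to look for this style of injection is the LD_PRELOAD environment variable and the system-wide preload file. A minimal sketch (my illustration, not the article's method, and not a complete detector; reading other users' environ files requires appropriate privileges):

```shell
# Flag processes that were started with LD_PRELOAD set, plus a
# non-empty /etc/ld.so.preload; either can indicate a library
# being injected ahead of the normal shared libraries.
for e in /proc/[0-9]*/environ; do
    pid=${e#/proc/}; pid=${pid%/environ}
    if tr '\0' '\n' < "$e" 2>/dev/null | grep -q '^LD_PRELOAD='; then
        echo "PID $pid was started with LD_PRELOAD set"
    fi
done
if [ -s /etc/ld.so.preload ]; then
    echo "WARNING: /etc/ld.so.preload is non-empty"
fi
```

An LD_PRELOAD setting is not proof of an attack (it has legitimate uses), but it is worth a closer look at which library is being preloaded.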
If you haven't been paying close attention, you might not have noticed a small but significant change in how Linux systems work with respect to runtime data. A rearrangement of how and where it's accessible in the file system started taking hold about eight years ago. And while this change might not have made a big enough splash to wet your socks, it provides some additional consistency in the Linux file system and is worthy of some exploration.

To get started, cd your way over to /run. If you use df to check it out, you'll see something like this:

$ df -k .
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs             609984  2604    607380   1% /run

Identified as a "tmpfs" (temporary file system), /run holds files and directories that are not stored on disk but only in volatile memory. They represent data kept in memory (or disk-based swap) that takes on the appearance of a mounted file system to allow it to be more accessible and easier to manage.
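You can confirm the tmpfs mount type directly, without reading it off the device column, with df's -T option (contents of /run vary widely from system to system; pid files and per-service directories are typical):

```shell
# Report the file system type backing /run (the second column),
# then peek at a few of the runtime entries kept there.
df -T /run | tail -1
ls /run | head -5
```

findmnt /run, where available, shows the same information along with the mount options.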
There are several ways to send email from the Linux command line. Some are very simple; others are more complicated but offer some very useful features. The choice depends on what you want to do: get a quick message off to a co-worker, or send a more complicated message with an attachment to a large group of people. Here's a look at some of the options:

mail

The easiest way to send a simple message from the Linux command line is to use the mail command. Maybe you need to remind your boss that you're leaving a little early that day. You could use a command like this one:

$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss

Another option is to grab your message text from a file that contains the content you want to send:
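The file-based example is truncated at this point; it likely looked something like the following (message.txt is a stand-in name of my choosing, and the mail invocation is shown as a comment since a mail command may not be installed):

```shell
# Put the message body in a file, then feed it to mail on stdin.
# With mailutils or bsd-mailx installed, you would send it with:
#   mail -s "early departure" myboss < message.txt
printf '%s\n' "Reminder: Leaving at 4 PM today" > message.txt
cat message.txt
```

Redirecting a file into mail this way keeps longer messages out of the command line and lets you reuse or edit the text before sending.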