Author Archives: Sandra Henry-Stocker

Keeping track of Linux users: When do they log in and for how long?

The Linux command line provides some excellent tools for determining how frequently users log in and how much time they spend on a system. Pulling information from the /var/log/wtmp file, which maintains details on user logins, can be time-consuming, but with a couple of easy commands, you can extract a lot of useful information on user logins.

One of the commands that helps with this is the last command. It provides a list of user logins that can go quite far back. The output looks like this:

$ last | head -5 | tr -s " "
shs pts/0 192.168.0.14 Wed Aug 14 09:44 still logged in
shs pts/0 192.168.0.14 Wed Aug 14 09:41 - 09:41 (00:00)
shs pts/0 192.168.0.14 Wed Aug 14 09:40 - 09:41 (00:00)
nemo pts/1 192.168.0.18 Wed Aug 14 09:38 still logged in
shs pts/0 192.168.0.14 Tue Aug 13 06:15 - 18:18 (00:24)

Note that the tr -s " " portion of the command above reduces strings of blanks to single blanks; in this case, it keeps the output shown from being so wide that it would wrap around on the screen.
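Going a step further, the same pipeline can feed awk to total up each user's session time from the "(HH:MM)" duration fields. A small sample of last's output is embedded here so the sketch runs anywhere; on a live system you would replace the echo with `last | tr -s " "`.

```shell
# Sample standing in for real `last` output (usernames and times are made up)
sample='shs pts/0 192.168.0.14 Wed Aug 14 09:40 - 09:41 (00:00)
nemo pts/1 192.168.0.18 Wed Aug 14 08:30 - 09:15 (00:45)
shs pts/0 192.168.0.14 Tue Aug 13 06:15 - 07:18 (01:03)'

totals=$(echo "$sample" | tr -s " " | awk '
  $NF ~ /^\([0-9]+:[0-9]+\)$/ {                      # only lines ending (HH:MM)
    split(substr($NF, 2, length($NF) - 2), t, ":")   # strip parens, split HH:MM
    mins[$1] += t[1] * 60 + t[2]                     # accumulate minutes per user
  }
  END { for (u in mins) print u, mins[u], "minutes" }')
echo "$totals"
```

Sessions still in progress ("still logged in") have no duration field and are simply skipped by the pattern.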

How to manipulate PDFs on Linux

While PDFs are generally regarded as fairly stable files, there’s a lot you can do with them on both Linux and other systems. This includes merging, splitting, rotating, breaking into single pages, encrypting and decrypting, applying watermarks, compressing and uncompressing, and even repairing. The pdftk command does all this and more.

The name “pdftk” stands for “PDF tool kit,” and the command is surprisingly easy to use and does a good job of manipulating PDFs. For example, to pull separate files into a single PDF file, you would use a command like this:

$ pdftk pg1.pdf pg2.pdf pg3.pdf pg4.pdf pg5.pdf cat output OneDoc.pdf

That OneDoc.pdf file will contain all five of the documents shown, and the command will run in a matter of seconds. Note that the cat option directs the files to be joined together, and the output option specifies the name of the new file.
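A few of the other operations mentioned above look like this in practice. The file names are hypothetical, and the commands are guarded so the sketch degrades gracefully on systems where pdftk (or the input file) isn't present:

```shell
if command -v pdftk >/dev/null 2>&1 && [ -f OneDoc.pdf ]; then
  pdftk OneDoc.pdf burst output pg_%02d.pdf    # split into one file per page
  pdftk OneDoc.pdf cat 2-4 output middle.pdf   # extract just pages 2 through 4
  pdftk OneDoc.pdf cat 1-endsouth output upside.pdf  # rotate every page 180°
else
  echo "pdftk or OneDoc.pdf not available; commands shown for reference"
fi
```

The rotation keywords (north, south, east, west) append to a page range, so "1-endsouth" means "pages 1 through the end, rotated 180 degrees."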

How to manage logs in Linux

Managing log files on Linux systems can be incredibly easy or painful. It all depends on what you mean by log management.

If all you mean is how you can go about ensuring that your log files don’t eat up all the disk space on your Linux server, the issue is generally quite straightforward. Log files on Linux systems will automatically roll over, and the system will only maintain a fixed number of the rolled-over logs. Even so, glancing over what can easily be a group of 100 files can be overwhelming. In this post, we'll take a look at how log rotation works and some of the most relevant log files.

[ Two-Minute Linux Tips: Learn how to master a host of Linux commands in these 2-minute video tutorials ]

Automatic log rotation

Log files rotate frequently. What was the current log acquires a slightly different file name, and a new log file is established. Take the syslog file as an example. This file is something of a catch-all for a lot of normal system messages. If you cd over to /var/log and take a look, you’ll probably see a series of syslog files like this:
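The rotation behavior described above is typically driven by logrotate. A minimal stanza of the kind that produces a rolled-over syslog series might look like the following (the paths, counts, and postrotate script are illustrative, not from the article):

```shell
# Write a sample logrotate stanza to a temp file so it can be inspected
cat > /tmp/syslog.rotate <<'EOF'
/var/log/syslog {
    rotate 7          # keep seven rolled-over copies (syslog.1 ... syslog.7.gz)
    daily             # rotate once a day
    compress          # gzip older copies
    delaycompress     # leave the newest rotated copy uncompressed
    missingok         # don't complain if the log is absent
    notifempty        # skip rotation when the log is empty
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}
EOF
echo "wrote sample stanza to /tmp/syslog.rotate"
```

The combination of compress and delaycompress is why you usually see syslog.1 uncompressed alongside syslog.2.gz, syslog.3.gz, and so on.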

Getting help for Linux shell built-ins

Linux built-ins are commands that are built into the shell, much like shelves that are built into a wall. You won’t find them as stand-alone files the way standard Linux commands are stored in /usr/bin, and you probably use quite a few of them without ever questioning how they’re different from commands such as ls and pwd.

Built-ins are used just like other Linux commands. They are likely to run a bit faster than similar commands that are not part of your shell. Bash built-ins include commands such as alias, export and bg.

As you might suspect, because built-ins are shell-specific, they won't be supplied with man pages. Ask man to help with bg and you'll see something like this:
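A quick way to tell built-ins from on-disk commands, and to find the documentation that man can't supply, is with the type, help, and compgen built-ins (bash is assumed here):

```shell
bash -c 'type cd'          # reports: cd is a shell builtin
bash -c 'type ls' || true  # ls, by contrast, resolves to a file such as /usr/bin/ls
bash -c 'help bg' | head -3   # help is to built-ins what man is to commands
bash -c 'compgen -b' | head -5  # list the first few built-ins bash knows about
```

Note that help and compgen are themselves built-ins, which is why they're invoked through bash rather than looked up in /usr/bin.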

Mastering user groups on Linux

User groups play an important role on Linux systems. They provide an easy way for select groups of users to share files with each other. They also allow sysadmins to more effectively manage user privileges, since they can assign privileges to groups rather than individual users.

While a user group is generally created whenever a user account is added to a system, there’s still a lot to know about how groups work and how to work with them.

One user, one group?

Most user accounts on Linux systems are set up with the user and group names the same. The user "jdoe" will be set up with a group named "jdoe" and will be the only member of that newly created group. The user’s login name, user id, and group id will be added to the /etc/passwd and /etc/group files when the account is added, as shown in this example:
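A few commands for inspecting group membership, sketched against whatever account runs them (the "developers"/"jdoe" names in the comment are hypothetical):

```shell
id -Gn                      # every group the current user belongs to
getent group "$(id -gn)"    # the group-database entry for the primary group
# Adding a user to a supplementary group requires root; shown for reference:
# sudo usermod -aG developers jdoe
```

getent is preferable to grepping /etc/group directly because it also consults LDAP or other name services the system may be using.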

Will rolling into IBM be the end of Red Hat?

IBM's acquisition of Red Hat for $34 billion is now a done deal, and statements from the leadership of both companies sound extremely promising. But some in the Linux community have expressed concern.

Questions being asked by some Linux professionals and devotees include:

Will Red Hat lose customer confidence now that it’s part of IBM and not an independent company?
Will IBM continue putting funds into open source after paying such a huge price for Red Hat? Will it curtail what Red Hat is able to invest?
Both companies’ leaders are saying all the right things now, but can they predict how their business partners and customers will react as they move forward? Will their good intentions be derailed?

Part of the worry simply comes from the size of this deal. Thirty-four billion dollars is a lot of money. This is probably the largest cloud computing acquisition to date. What kind of strain will that price tag put on how the new IBM functions going forward? Other worries come from the character of the acquisition – whether Red Hat will be able to continue operating independently and what will change if it cannot. In addition, a few Linux devotees hark…

Linux a key player in the edge computing revolution

In the past few years, edge computing has been revolutionizing how some very familiar services are provided to individuals like you and me, as well as how services are managed within major industries. Try to get your arms around what edge computing is today, and you might just discover that your arms aren’t nearly as long or as flexible as you’d imagined. And Linux is playing a major role in this ever-expanding edge.

One reason edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges, depending on what compute features are needed. He suggests that we can think of the edge as a continuum of capabilities, with the problem being resolved determining where along that continuum any particular edge solution will rest.

Undo releases Live Recorder 5.0 for Linux debugging

Linux debugging has taken a giant step forward with the release of Live Recorder 5.0 from Undo. Released on Wednesday, this product makes debugging on multi-process systems significantly easier. Based on flight recorder technology, it delves deeply into processes to provide insight into what’s going on within each process. This includes memory, threads, program flow, service calls and more. To make this possible, Live Recorder 5.0's record, replay and debugging capabilities have been enhanced with the ability to:

Record the exact order in which processes altered shared memory variables. It is even possible to zero in on specific variables and skip backward to the last line of code in any process that altered the variable.
Expose potential defects by randomizing thread execution to help reveal race conditions, crashes and other multi-threading defects.
Record and replay the execution of individual Kubernetes and Docker containers to help resolve defects faster in microservices environments.

The Undo Live Recorder enables engineering teams to record and replay the execution of any software program -- no matter how complex -- and to diagnose and fix the root cause of any issue in test or production.

Tracking down library injections on Linux

While not commonly seen on Linux systems, library injections (involving shared object files on Linux) are still a serious threat. After interviewing Jaime Blasco from AT&T's Alien Labs, I've become more aware of how easily some of these attacks are conducted.

In this post, I'll cover one method of attack and some ways that it can be detected. I'll also provide some links to more details on both attack methods and detection tools. First, a little background.

Shared library vulnerability

Both DLL and .so files are shared library files that allow code (and sometimes data) to be shared by various processes. Commonly used code might be put into one of these files so that it can be reused rather than rewritten many times over for each process that requires it. This also facilitates management of commonly used code.
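Two quick checks for one common injection vector on Linux, the dynamic linker's preload mechanism, which can force a rogue .so into every new process. This is a detection sketch, not a complete audit:

```shell
# 1. /etc/ld.so.preload should normally be absent or empty.
if [ -s /etc/ld.so.preload ]; then
  echo "WARNING: /etc/ld.so.preload is non-empty:"
  cat /etc/ld.so.preload
else
  echo "/etc/ld.so.preload: absent or empty (expected)"
fi

# 2. Flag any running process that was started with LD_PRELOAD set.
for e in /proc/[0-9]*/environ; do
  if tr '\0' '\n' < "$e" 2>/dev/null | grep -q '^LD_PRELOAD='; then
    echo "LD_PRELOAD set in ${e%/environ}"
  fi
done
```

Neither check proves compromise on its own; LD_PRELOAD has legitimate uses, so any hit deserves a closer look at which library is being loaded.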

Exploring /run on Linux

If you haven’t been paying close attention, you might not have noticed a small but significant change in how Linux systems work with respect to runtime data. A re-arrangement of how and where it’s accessible in the file system started taking hold about eight years ago. And while this change might not have made a big enough splash to wet your socks, it provides some additional consistency in the Linux file system and is worthy of some exploration.

To get started, cd your way over to /run. If you use df to check it out, you’ll see something like this:

$ df -k .
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs             609984  2604    607380   1% /run

Since /run is identified as a “tmpfs” (temporary file system), we know that its files and directories are not stored on disk but only in volatile memory. They represent data kept in memory (or disk-based swap) that takes on the appearance of a mounted file system to make it more accessible and easier to manage.
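You can confirm the memory-backed nature of /run by asking the kernel what it mounted there (a Linux system with /run is assumed; output varies by system):

```shell
df -k /run 2>/dev/null | tail -1           # the tmpfs line shown above
grep -w /run /proc/mounts | head -1 \
  || echo "/run is not a separate mount on this system"
ls /run | head -5                          # typical contents: pids, locks, sockets
```

Because the backing store is RAM, everything under /run disappears at shutdown, which is exactly the right behavior for runtime state like PID files and lock files.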

How to send email from the Linux command line

There are several ways to send email from the Linux command line. Some are very simple, while others are more complicated but offer some very useful features. The choice depends on what you want to do – whether you want to get a quick message off to a co-worker or send a more complicated message with an attachment to a large group of people. Here's a look at some of the options:

mail

The easiest way to send a simple message from the Linux command line is to use the mail command. Maybe you need to remind your boss that you're leaving a little early that day. You could use a command like this one:

$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss

Another option is to grab your message text from a file that contains the content you want to send:
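The file-based variant might look like this. It assumes a working local MTA, and "myboss" is a stand-in address; the send is guarded so the sketch still runs where mail isn't installed:

```shell
# Build the message body in a file first, then hand it to mail's stdin
cat > /tmp/reminder.txt <<'EOF'
Reminder: Leaving at 4 PM today
EOF

if command -v mail >/dev/null 2>&1; then
  mail -s "early departure" myboss < /tmp/reminder.txt \
    || echo "mail submission failed (no MTA configured?)"
else
  echo "mail command not available; message saved in /tmp/reminder.txt"
fi
```
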

How Linux can help with your spelling

Linux provides all sorts of tools for data analysis and automation, but it also helps with an issue that we all struggle with from time to time – spelling! Whether you're grappling with the spelling of a single word while you’re writing your weekly report or you want a set of computerized "eyes" to find your typos before you submit a business proposal, maybe it’s time to check out how it can help.

look

One tool is look. If you know how a word begins, you can ask the look command to provide a list of words that start with those letters. Unless an alternate word source is provided, look uses /usr/share/dict/words to identify the words for you. This file, with its hundreds of thousands of words, will suffice for most of the English words that we routinely use, but it might not have some of the more obscure words that some of us in the computing field tend to use — such as zettabyte.
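Where look isn't available (or the system lacks a words file), grep can stand in: anchoring the pattern to the start of each line gives the same prefix-matching behavior. A tiny sample list substitutes for /usr/share/dict/words here:

```shell
words='zeal
zebra
zettabyte
zigzag'

echo "$words" | grep '^zett'     # → zettabyte

# The real thing, when look and a word list are installed:
if command -v look >/dev/null 2>&1; then
  look zett || true
fi
```
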

NVMe on Linux

NVMe stands for “non-volatile memory express.” It is a host controller interface and storage protocol created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSDs), and it works over a computer's high-speed Peripheral Component Interconnect Express (PCIe) bus. What I see when I look at this string of letters, however, is “envy me.” And the reason for the envy is significant.

Using NVMe, data transfer happens much faster than it does with rotating drives. In fact, NVMe drives can move data seven times faster than SATA SSDs. That’s seven times faster than the SSDs that many of us are using today. This means that your system could boot blindingly fast when an NVMe drive serves as its boot drive. In fact, these days anyone buying a new system should probably not consider one that doesn’t come with NVMe built in — whether a server or a PC.
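Spotting NVMe drives on a system is easy because they show up as /dev/nvme* device names rather than /dev/sd*. The nvme command below comes from the nvme-cli package and may not be installed:

```shell
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices on this system"
lsblk 2>/dev/null | grep nvme || true      # NVMe entries in the block-device tree
if command -v nvme >/dev/null 2>&1; then
  nvme list || true                        # model, capacity, firmware per drive
fi
```
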

A deeper dive into Linux permissions

Sometimes you see more than just the ordinary r, w, x and - designations when looking at file permissions on Linux. Instead of rwx for the owner, group and other fields in the permissions string, you might see an s or t, as in this example:

drwxrwsrwt

One way to get a little more clarity on this is to look at the permissions with the stat command. The fourth line of stat’s output displays the file permissions both in octal and string format:

$ stat /var/mail
  File: /var/mail
  Size: 4096         Blocks: 8          IO Block: 4096   directory
Device: 801h/2049d   Inode: 1048833     Links: 2
Access: (3777/drwxrwsrwt)  Uid: (    0/    root)   Gid: (    8/    mail)
Access: 2019-05-21 19:23:15.769746004 -0400
Modify: 2019-05-21 19:03:48.226656344 -0400
Change: 2019-05-21 19:03:48.226656344 -0400
 Birth: -

This output reminds us that there are more than nine bits assigned to file permissions. In fact, there are 12. And those extra three bits provide a way to assign permissions beyond the usual read, write and execute — 3777 (binary 011111111111), for example, indicates that two extra settings are in use.
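The extra bits (setuid=4, setgid=2, sticky=1) occupy a fourth octal digit in front of the usual three rwx digits, so 3777 is setgid plus sticky. You can reproduce /var/mail's mode on a scratch directory and read it back with stat (GNU stat's -c format is assumed):

```shell
d=$(mktemp -d)              # scratch directory so nothing real is touched
chmod 3777 "$d"             # setgid + sticky, plus rwx for everyone
stat -c '%a %A' "$d"        # → 3777 drwxrwsrwt
rmdir "$d"
```

Note how the s and t land in the string form: the group execute slot shows s for setgid, and the other execute slot shows t for the sticky bit.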

4 vulnerabilities and exposures affect Intel-based systems; Red Hat responds

Four vulnerabilities related to Intel microprocessors were publicly disclosed. These vulnerabilities allow unprivileged attackers to bypass restrictions to gain read access to privileged memory. They include these common vulnerabilities and exposures (CVEs):

CVE-2018-12126 - a flaw that could lead to information disclosure from the processor store buffer
CVE-2018-12127 - an exploit of the microprocessor load operations that can provide data to an attacker about CPU registers and operations in the CPU pipeline
CVE-2018-12130 - the most serious of these issues; it involves the implementation of the microprocessor fill buffers and can expose data within that buffer
CVE-2019-11091 - a flaw in the implementation of the "fill buffer," a mechanism used by modern CPUs when a cache miss is made on the L1 CPU cache

Red Hat customers should update their systems

Security updates will degrade system performance, but Red Hat strongly suggests that customers update their systems whether or not they believe themselves to be at risk.
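Reasonably recent kernels report their mitigation status for this family of flaws (known collectively as MDS) under sysfs, one file per vulnerability class, each holding a one-line verdict:

```shell
v=/sys/devices/system/cpu/vulnerabilities
if [ -d "$v" ]; then
  # Each line pairs the file name with the kernel's verdict,
  # e.g. "Mitigation: ..." or "Vulnerable"
  grep . "$v"/* 2>/dev/null || true
else
  echo "this kernel predates sysfs vulnerability reporting"
fi
```

A "Mitigation:" line means patched microcode and kernel updates are active; "Vulnerable" means the updates described above are still needed.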

When to be concerned about memory levels on Linux

Running out of memory on a Linux system is generally not a sign that there's a serious problem. Why? Because a healthy Linux system will cache disk activity in memory, basically gobbling up memory that isn't being used, which is a very good thing.

In other words, it doesn't allow memory to go to waste. It uses the spare memory to increase disk access speed, and it does this without taking memory away from running applications. This memory caching, as you might well imagine, is hundreds of times faster than working directly with hard-disk drives (HDDs) and significantly faster than solid-state drives. Full or nearly full memory normally means that a system is running as efficiently as it can — not that it's running into problems.
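This is why the figure to watch is "available" rather than "free": it estimates how much memory applications can still claim, counting the cache the kernel will hand back on demand. A quick way to see it (the awk column position assumes a modern procps free):

```shell
if command -v free >/dev/null 2>&1; then
  free -h                                   # human-readable overview
  free | awk '/^Mem:/ { printf "available: %d kB of %d kB total\n", $7, $2 }'
else
  # the same figure straight from the kernel
  grep -E 'MemTotal|MemAvailable' /proc/meminfo
fi
```

If "available" stays low while swap use climbs, that's when memory pressure is real rather than just cache doing its job.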

Red Hat Summit 2019: RHEL 8 arrives

Red Hat Summit 2019 is off to an exciting start. The conference, running from today until Thursday in Boston, is already tickling attendees’ fancies by announcing some very exciting developments.

The first is Red Hat Enterprise Linux (RHEL) 8 — available now for everything from bare-metal servers and Linux containers to public and private clouds.

RHEL 8 introduces Application Streams, which allow languages, frameworks, and developer tools to be updated frequently without impacting the core resources that have made Red Hat Enterprise Linux an enterprise benchmark. This feature brings quick developer innovation and production stability to the OS.
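On a RHEL 8 system, Application Streams are managed through the module subcommand of yum/dnf. The sketch below is guarded since most systems won't have it, and the nodejs stream names are illustrative:

```shell
if command -v yum >/dev/null 2>&1 && yum module list >/dev/null 2>&1; then
  yum module list nodejs           # show the streams available for a package
  # yum module install nodejs:10   # pin a specific stream (requires root)
else
  echo "not a RHEL 8 / AppStream system; commands shown for reference"
fi
```
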
