Sometimes one-liners are so cool that you just want to remember them. Good one-liners can also teach you the intricacies and features of the Bash shell. Although there are better sites on the Internet for finding one-liners, understanding one-liners, or playing on the command line, we'd still like to illustrate a few here.

Mount remote filesystem

sshfs is a great tool for mounting remote filesystems so that you can use your local tools on them. This example supplies a complex SSH command, including port-forwarding, to the sshfs tool via its ssh_command option.

sshfs -o idmap=user -o ssh_command='ssh -t -i /home/greg/.ssh/eQualityTech-Test.pem -o IdentitiesOnly=true -o ForwardAgent=true -L 127.0.0.1:43306:10.0.50.53:3306 centos@ec2-52-203-160-83.compute-1.amazonaws.com ssh -A' centos@10.0.50.161:/ /mnt/es1

Compare two wikis for extensions and skins

This one-liner invokes the API of two wikis, asking for siteinfo details (general, extensions, and skins) in JSON format. Since that data is returned without any newlines, we use `jq` to pretty-print the JSON output. Then it's an easy `meld` or `diff` to compare them. The `--silent` option to `curl` suppresses the connection and retrieval metadata, while `-L` is customary to follow redirects.

A='https://freephile.org/' B='https://www.mediawiki.org/' API='w/api.php?action=query&meta=siteinfo&siprop=general%7Cextensions%7Cskins&format=json' meld <(curl --silent -L "${A}${API}" | jq '.') <(curl --silent -L "${B}${API}" | jq '.')
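If `jq` isn't installed, Python's standard-library `json.tool` module is a common stand-in for pretty-printing. A minimal sketch (the compact JSON here is just placeholder data standing in for an API response):

```shell
# pretty-print compact JSON without jq, using Python's stdlib
echo '{"query":{"general":{"sitename":"Example"}}}' | python3 -m json.tool
```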

Perl edit

Sometimes you want to make a bunch of changes (substitutions) of the same text across multiple files, like changing a product name across multiple pages of documentation. With a one-line Perl command, you can do just that. Furthermore, the example below uses an ls command to select which files to operate on, giving you even more powerful control over your one-line edit.

perl -p -i -e "s/lemons/lemonade/" $(/bin/ls my/life*)

Free Memory

Use echo to output the result of a sub-shell plus a few extra characters (' - + p'), which is then piped to dc, the (reverse-Polish) desk calculator. Inside the sub-shell, we concatenate the /proc/meminfo file, printing it on STDOUT. Using extended-regex grep, we select the lines that begin with "MemFree", "Cached" or "Writeback" followed by the colon character. Piping to awk, we print the value in field 2 of each line. dc then pops the top two numbers off the stack and subtracts Writeback from Cached; the `+` adds that difference to the remaining number (MemFree), and `p` prints the result (in kB).[1]

echo $(cat /proc/meminfo | egrep '^(MemFree|Cached|Writeback):' | awk '{print $2}') - + p | dc

Result:

3033240
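The same arithmetic can be done in a single awk pass, which drops the extra grep and dc processes. A sketch (field 2 of each /proc/meminfo line is the value in kB):

```shell
# MemFree + Cached - Writeback, computed in one awk pass (kB)
awk '/^MemFree:/   {free = $2}
     /^Cached:/    {cached = $2}
     /^Writeback:/ {wb = $2}
     END           {print free + cached - wb}' /proc/meminfo
```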

Size of Graphical Desktop (X Window System)

So you think your graphical desktop is slowing things down compared to a pure console-based system. Short of logging in in single-user mode, how much memory does the graphical desktop consume? Since everything is a file, we can look in the directory for processes (/proc), and specifically the directory created for the process ID of "X" (X.org). Grepping for the line starting with 'VmSize', we can see the virtual memory size of our graphical desktop.

grep ^VmSize /proc/$(pidof X)/status

Result:

VmSize:   158212 kB
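The same trick works for any process whose PID you know. As a self-contained sketch that doesn't require X to be running, here we inspect the current shell itself via the special `$$` parameter instead of `pidof X`:

```shell
# virtual memory size of the current shell process
grep ^VmSize /proc/$$/status
```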

Delete old stuff

You stumble upon a directory full of backups, which is great. But you also realize that nobody set up logrotate or any other mechanism to prune old content. Maybe that's because these backups are produced manually, say during upgrades, and so they are also deleted manually. What's a quick one-liner to remove old files? Use the -mtime (modification time) test of find, combined with the -exec action, to execute rm (remove) on those files.

# Make sure we've got backups; look for recent files
sudo ls -al /backups
# list everything in the backups folder that's older than 30 days
sudo find /backups -mtime +30 -ls
# OK, delete those files
sudo find /backups -mtime +30 -exec rm {} \;
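With GNU find, the `-delete` action (or `-exec rm {} +`, which batches many filenames into each rm invocation) does the same job with less process overhead. A sketch against a throwaway scratch directory rather than a real /backups:

```shell
# create a scratch dir with one "old" and one "new" file
dir=$(mktemp -d)
touch "$dir/new.tar.gz"
touch -d '40 days ago' "$dir/old.tar.gz"

# preview, then delete anything older than 30 days
find "$dir" -type f -mtime +30 -ls
find "$dir" -type f -mtime +30 -delete

ls "$dir"   # only new.tar.gz remains
```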

Reports with Find

Want to see all the .htaccess files in your webroot and see what they do? You can use -exec bash -c to perform multiple commands with one -exec (you can also use multiple -exec options in find). The example below echoes the name of the found file, then cats it with numbered lines. Note that the underscore is a throwaway value (it could be any text, such as 'foobar') which consumes the first positional argument ($0) of bash -c, making it "more readable" to reference our found filename as $1 (since $0 is commonly understood to refer to the script itself).

# All give similar output
find _mw -name .htaccess -exec bash -c 'echo -e "\n$1\n"; cat -n "$1"' _ '{}' \;
find _mw -name .htaccess -exec bash -c 'echo -e "\n$0\n"; cat -n "$0"' '{}' \;
find _mw -name .htaccess -exec bash -c 'echo -e "\n$0$1\n"; cat -n "$1"' 'Reporting on '  '{}' \;
find _mw -name .htaccess -exec echo -e "\nReporting on " '{}' "\n" \; -exec cat -n '{}' \;

[2]

And this one reports on the variables set in each .yml or .php file in your /opt/conf-meza/public "config" directory. Note the escaped parentheses grouping the two -name tests; without them, -exec binds only to the second -name and the .yml files are silently skipped.

find . \( -name '*yml' -o -name '*php' \) -exec bash -c 'echo -e "\n$0\n"; grep --perl-regexp --only-matching "^\s*(\\\$[^\[ ]+)" '{}' | sed -e "s/^[[:space:]]*//" | sort -u ' '{}' \;

Split a big file

Say you have a file with 50,000 lines in it, which becomes unwieldy to deal with in a spreadsheet or otherwise. You can easily split the file into segments with the split command. By default it uses alphabetic suffixes (little_file.aa, little_file.ab, etc.). If you add the option --numeric-suffixes, then you'll end up with little_file.00, little_file.01, etc. If you would like to re-add the original file extension, you must use the option called --additional-suffix.

The following command takes BIG_FILE.txt and for every 10,000 lines of that file, it generates new files called 'little_file.00.txt', 'little_file.01.txt', 'little_file.02.txt', and so on.

split --lines=10000 --numeric-suffixes --additional-suffix='.txt' BIG_FILE.txt little_file.
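A quick sanity check (a sketch with small numbers so it's easy to follow; the filenames are illustrative): split a generated file, then reassemble the pieces and confirm nothing was lost.

```shell
# make a 25-line sample file and split it into 10-line pieces
seq 1 25 > BIG_FILE.txt
split --lines=10 --numeric-suffixes --additional-suffix='.txt' BIG_FILE.txt little_file.

ls little_file.*.txt                  # little_file.00.txt .01 .02
cat little_file.*.txt > rejoined.txt  # the shell glob sorts the pieces
cmp BIG_FILE.txt rejoined.txt && echo "round trip OK"
```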

References