Mktemp with suffix/extension

If you want to create a unique temporary file in a shell script, you would use the mktemp command. You can even specify a template where XXXXXX would be replaced by a unique combination. For instance:

mktemp ./my-temporary-file.XXXXXX
./my-temporary-file.j2iuMR

Now what happens if you want to create a temporary file ending with some particular suffix? You may want to do that because the program to which you feed your temp file expects a given file extension to parse it properly. For example, most web browsers expect files ending in .htm or .html to parse them as HTML documents. However, if you try to provide mktemp with a template ending with the appropriate suffix, that won’t work (at least not on FreeBSD and OpenBSD):

mktemp /tmp/tmpXXXXXX.html
/tmp/tmpXXXXXX.html
# AAAARG! THIS IS NOT VERY UNIQUE! x_x

The version of mktemp in GNU coreutils comes with a --suffix option that allows you to do just that. But this is specific to GNU, so you should not use it if you care about your scripts’ portability and about other people. And please, do care about other people. Truly, scripts expecting /bin/sh to be bash or some other Linuxism are a real chore to work with, even if you work on Linux. So please restrain yourself and do the right thing.
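
For reference, on a GNU system it would look something like this (not portable, and shown only to illustrate the option; the exact syntax may vary between coreutils versions):

mktemp --suffix=.html /tmp/tmpXXXXXX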

A first solution that comes to mind is to create the temporary file, then move it to the same name with the appropriate suffix appended, like this:

# Don't do that, it's wrong!
tmp=$(mktemp)
tmp_html="${tmp}.html"
mv "$tmp" "$tmp_html"

But this is wrong! So don’t do that. The problem is that when you move your file, you don’t know whether another file named "$tmp_html" is already present. It may be very unlikely, but it is not impossible. You could check whether the file exists before executing the move, but you can never completely eliminate the race condition that mktemp was supposed to fix in the first place.

So a more correct answer is to create a temporary directory, and create your file in it:

tmp_d=$(mktemp -d)
tmp_f="$tmp_d/myfile.html"

... do your things ...

rm -r "$tmp_d"

With this, you know that your directory is unique, and as long as you are the only one using it, any file created inside it is unique too.
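
For completeness, here is a minimal sketch of how this can be wrapped with traps so the directory is removed even if the script is interrupted (the trap handling is an addition of mine, not part of the original tip):

tmp_d=$(mktemp -d) || exit 1
trap 'rm -rf "$tmp_d"' EXIT
trap 'exit 1' INT TERM    # make sure the EXIT trap also runs on interruption
tmp_f="$tmp_d/myfile.html"

... do your things ...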

Don’t forget the pipe subshell

This is a common error when piping into while loops. Consider this shell snippet:

#!/bin/sh

cat file.txt | while read line
do
  echo "inside loop"
  exit 1
done

echo "outside loop"
exit 0

You’d expect the script to exit on the first line of file.txt. However, execute this script and you get:

inside loop
outside loop

It is as if the exit 1 inside the loop is ignored. Another example:

#!/bin/sh

a=0
cat file.txt | while read line
do
  echo "inside loop"
  a=1
done

echo "outside loop"
echo "a=$a"

Here you’d expect the value of a to be 1 at the end of the script. Instead, if you execute it you get:

inside loop
outside loop
a=0

It’s as if the variable a isn’t even updated. In fact it is, though only inside the loop. So what is happening here?

The pipe (|) you use to feed the loop creates a subshell, which is really just another process. So the exit 1 or the a=1 only applies inside that piped process.

How can you fix that?
In the simple case presented above, you can simply use file redirection:

while read line
do
  ...
done < file.txt

But what if you really want to feed the loop with the output of another process, as you would do with find for instance?

If you use bash you can use process substitution as described here. But you shouldn’t use bash for scripting anyway. For shell scripting you might be tempted to use a temporary file to store the process output:

# Use a temporary file.
tmp=$(mktemp)
find . > "$tmp"
while read line
do
  ...
done < "$tmp"
rm "$tmp"

However this consumes disk space, and the loop only starts after the find process has exited. Another option is to use a named fifo:

fifo=$(mktemp -u)
mkfifo "$fifo"
find . > "$fifo" &

while read file
do
  ...
done < "$fifo"
rm "$fifo"

This time you still create a file, yet no disk space is used (apart from the fifo inode itself). Also, the find command runs as a background child process, so the loop reads its output as it comes.

Although the version above already works as it should, you may want something closer to an anonymous fifo. You still have to create the fifo file, but you can delete it immediately after opening it. You can achieve this with a little help from our beloved file descriptor 3.

fifo=$(mktemp -u)

# Create fifo.
mkfifo "$fifo"

# Start find in the background; it blocks until a reader opens the fifo.
find . > "$fifo" &

# Open fd 3 for reading and unlink the fifo file.
exec 3< "$fifo"
rm "$fifo"

# Feed fd 3 to the while loop; it ends when find closes its end of the fifo.
while read line
do
  ...
done <&3

# Close fd 3.
exec 3<&-

Dedibox serial shell access

If you cannot access the shell on a FreeBSD Dedibox from the online.net serial console, here is a quick tip: create an alternate root account with csh or tcsh and log in on ttyu1 with that account instead. That’s almost precisely what the toor account is made for, except that root generally keeps its default shell.
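
On FreeBSD this boils down to something like the following (a sketch; toor exists in the default install but stays locked until you give it a password):

chsh -s /bin/tcsh toor
passwd toor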

No idea why sh, bash and zsh are not working through the serial connection; more doesn’t work either, but vi does. Probably a termcap thingy? If anyone has a clue…

FP comparison in Shell

People tend to not like Shell. But I do!
Here is a simple example: try this floating point comparison:

$ [ 0.1 -gt 0.01 ]
[: 0.1: bad number

The shell itself cannot handle floating point numbers.
But there are multiple workarounds. Here is the one I prefer, using rpnc (an RPN calculator) to compute a - b and check whether the result is negative:

if rpnc "$a" "$b" - | grep "^-" > /dev/null
then
  echo "a < b"
fi

You may not have the rpnc command though, so here is another one using bc:

if [ "$(echo "$a < $b" | bc)" -eq 1 ]
then
  echo "a < b"
fi
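
If bc is not available either, awk can do the same job; here is a minimal sketch (it assumes $a and $b hold plain numeric values):

if awk "BEGIN { exit !($a < $b) }"
then
  echo "a < b"
fi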

Get rid of SIGINT

Yesterday I had some problems trying to get rid of the SIGINT signal (the one sent when you press C-c on your terminal). Imagine you have the following script:

# a.sh

b &
sleep 60

Let’s suppose that b is a sleep, another simple command, or a shell script. In that case a SIGINT on the terminal will kill a.sh, but b will survive and get reattached to init. This is strange because:

  1. b effectively receives the signal (seen with strace)
  2. the default action should terminate the process

The reason behind this is that the shell will (roughly) do the following sequence of operations before starting the process in the background:

if (!fork()) {
  /* child */
  signal(SIGINT, SIG_IGN);
  signal(SIGQUIT, SIG_IGN);

  execve(...cmd...);
}

The execve manpage has this to say about signals:

POSIX.1-2001 specifies that the dispositions of any signals that are ignored or set to the default are left unchanged.

i.e. the process will inherit the ignored (SIG_IGN) signals and have all the others reset to their default action. So with our shell, SIGINT is ignored for background processes. However, if the process installs its own handler and catches the signal, it overrides the inherited SIG_IGN disposition. This is the case for example with tcpdump, which catches this signal to terminate properly. This is also the case for the following code:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void sigint(int n)
{
  printf("child int; exit\n");
  exit(0);
}

int main()
{
  signal(SIGINT, sigint);
  pause(); /* wait for a signal */
  return 0;
}
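
To see it happen, compile this and run it from a wrapper script similar to a.sh (the file names here are arbitrary, just for illustration):

cc -o catchint catchint.c

# wrapper.sh
./catchint &
sleep 60

Press C-c while wrapper.sh runs: the wrapper dies, and catchint prints “child int; exit” because its own handler overrides the inherited SIG_IGN.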

In this case the process will catch SIGINT and terminate. So, since we cannot prevent the process from catching the signal, the question now is: how can we prevent b from receiving the signal at all?

When this signal is generated, it is sent to the foreground process group of the session associated with the tty. If you have a shell open on your tty, that group corresponds to the currently running command, if any. In our case the current command is a non-interactive shell script, and in a non-interactive shell all commands, even those sent to the background with “&”, stay in the same process group. So the background commands of the shell script belong to the foreground process group. This is why they still receive SIGINT even when the parent process dies and they are reattached to init.
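
You can observe this grouping yourself; a quick sketch (the ps format keywords are the POSIX ones, so this should work on BSD and Linux alike): start the script and look at the PGID column, where a.sh, b and the sleep all share the same process group.

./a.sh &
ps -o pid,ppid,pgid,comm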

The solution is to start the processes that we want to shield from SIGINT in a new process group, distinct from the foreground process group and therefore a background process group. However, there is no command to start something in a new process group. Instead we have a command, setsid, which starts something in a new session and therefore in a new process group too. Better still, your process is detached from your tty, which makes it (nearly) a real daemon. So the solution is:

# a.sh

setsid b &
sleep 60

Another solution is to force the shell to turn on job control in non-interactive mode with set -m. This way each background process is created in a new process group. This can be inconvenient though, because you cannot put background processes in the same process group anymore. Basically we want processes in the same process group to make sure that a SIGINT will terminate the entire script, background commands included. Otherwise you have to save the PID, trap the SIGINT signal and take care of terminating everything yourself, as sketched below.
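
Here is a minimal sketch of that manual handling (assuming b stands for whatever long-running command you want to protect):

#!/bin/sh

# With job control on, b runs in its own process group and no longer
# receives the terminal's SIGINT, so we trap INT and terminate it ourselves.
set -m
b &
pid=$!
trap 'kill "$pid" 2>/dev/null; exit 130' INT
wait "$pid"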

Moreover, using setsid is part of the REAL answer to this question:

“How do we fully detach a process from its tty?”

For which we almost always see:

“Use nohup…”

Which is wrong! According to its manpage, nohup just “run[s] a command immune to hangups, with output to a non-tty”. That is, it will merely adjust input/output, ignore SIGHUP and exec your command. However, if the command does something like signal(SIGHUP, SIG_DFL); then nohup has no effect anymore and the process will be terminated when the terminal hangs up.
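
So if the goal really is to fully detach a command from the tty, a sketch of the setsid-based answer looks like this (cmd being a placeholder for your command):

setsid cmd < /dev/null > /dev/null 2>&1 &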