Redirect traffic to loopback

Today I wanted to transparently redirect the DNS requests coming out of a tunnel to a local caching DNS resolver. The caching DNS was listening only on the loopback, as port 53 was already bound on the other interfaces. That would be fairly simple on Linux:

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -t nat -A PREROUTING -i tun0 -p udp --dport 53 -j DNAT --to-destination 127.0.0.1
iptables -A FORWARD -i tun0 -o lo -p udp --dport 53 -j ACCEPT

But… the kernel will refuse to route packets with the loopback as source or destination, because they qualify as martian packets. The solution was to enable the route_localnet flag. As stated in the kernel documentation:

route_localnet – BOOLEAN: Do not consider loopback addresses as martian source or destination while routing. This enables the use of 127/8 for local routing purposes (default FALSE).

This is per interface. So I just had to enable this on the tunnel interface:

echo 1 > /proc/sys/net/ipv4/conf/tun0/route_localnet
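
The same settings can also be applied through sysctl. Note that they do not survive a reboot and that the per-interface flag only exists once tun0 is up, so the natural place for them is the script that brings the tunnel up. A minimal sketch:

# enable forwarding and allow routing to 127.0.0.1 on the tunnel interface
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.tun0.route_localnet=1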

Rename interfaces on Linux

I just reinstalled Debian stable on a laptop but messed up the interface names: an external USB WiFi card appeared as wlan0 while the main card appeared as wlan1. In case you wondered, you can rename or reset interface names in /etc/udev/rules.d/70-persistent-net.rules. That’s on systemd though.
I wonder how we can change that on sysvinit? Nobody cares, probably, but I do.
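
On the udev/systemd side, the entries in 70-persistent-net.rules look roughly like this (simplified, MAC addresses made up); swapping the NAME= values and re-triggering udev or rebooting is enough:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", KERNEL=="wlan*", NAME="wlan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="66:77:88:99:aa:bb", KERNEL=="wlan*", NAME="wlan1"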

According to what I read there, interface naming on sysvinit is not consistent: interfaces are simply named in the order in which they appear during the boot process. However, it is possible to use ifrename from the wireless-tools package. Why a tool that should work for all types of interfaces is part of the wireless-tools package is beyond my comprehension. But hey, whatever, it’s Linux and it just works.
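
In my case, swapping the two names would look roughly like this; the -i/-n flags are from memory, so check ifrename(8), and the interfaces must be down while being renamed:

ip link set wlan0 down
ip link set wlan1 down
ifrename -i wlan0 -n wlantmp   # temporary name, both target names are taken
ifrename -i wlan1 -n wlan0
ifrename -i wlantmp -n wlan1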

If you are curious and want to know how ifrename actually renames an interface: according to the code, it issues a SIOCSIFNAME ioctl on a socket file descriptor, passing a struct ifreq in which you provide the new name for the interface. See netdevice(7) for more info.

Constant SD-Card corruption on the RPi

Our home servers broke. Here we are again.

I have spent weeks of my time, countless evenings up to 4AM, and entire weekends over the last few months trying to design and configure our reborn home servers and gateways.

And it was neat.

  • DNSSEC all the way down
  • RPC across the nodes
  • Easy configuration
  • Caching and stuff
  • Automatic tests

It took me a lot of time to assemble all of this into something that I liked, and to document everything so that we could easily install a new node from scratch.

I installed two nodes and it worked well for several weeks. Until, a week ago or so, I started to see corruption on the first node. And by corruption I mean random garbage in a lot of binaries and libraries, "Exec format error" at every corner. At that point it was completely broken and useless, so the only option was to reinstall it.

So I used a new SD-Card, changed the power supply and reinstalled everything last weekend. I just finished today and also fixed bugs in some of our scripts. Then I had to search for a package on the second node, which at this point was still in pretty good shape.

$ apt-cache
zsh: exec format error: apt-cache
$ su
zsh: exec format error: su

Dang! So there goes another weekend I will have to spend reinstalling the thing. And who knows how long until the first node gets corrupted again.

I checked the TP1-TP2 voltage: 4.65V, probably because of the second USB Ethernet adapter. I also tried to limit the amount of writes on the SD-Card: no heavy writers, no swapping, no overclocking.

So I must be doing something wrong, right? Right?! Or can the RaspberryPi really be that unreliable? I wonder how many power supplies and SD-Cards I will have to buy and try until, by sheer luck, I do not have to reinstall everything within the following three months or so.

I ran into this problem years ago, and now it seems that I will run into the same problem over and over again. Any recommendation is welcome, of course. Though to be honest, for now, I just want to fly the damn thing across the room.

Where did my PGP keys go?

Today I noticed that one of my PGP private keys had just disappeared from GPG: the key did not appear when I ran gpg --list-secret-keys. After a bit of investigation I discovered that the problem did not affect Linux hosts, only FreeBSD hosts. Weird…

The source of the problem was the migration from GnuPG v2.0 to v2.1. According to this page, GPG no longer handles the private keys itself and delegates all private key operations to gpg-agent. Therefore GPG v2.1 migrates the legacy secret keyring, secring.gpg, to the gpg-agent key store, private-keys-v1.d, and then forgets about it.
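
A quick way to see which store a given host actually uses is to look at both locations mentioned above:

ls -l ~/.gnupg/secring.gpg
ls -l ~/.gnupg/private-keys-v1.d/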

Though, you see, my GPG keyrings were synchronized across all hosts. But the GnuPG package on Debian is still v2.0, while the one on FreeBSD is v2.1. Get the picture?

I synced my keyring on the FreeBSD hosts, where GPG migrated my private keys to the gpg-agent key store. Then I generated a new key pair on a Debian host, which was added to the legacy keyring. I resynced, but the newer versions of GPG didn’t care: they had already migrated to the new key store.

Fortunately it was easy to fix: all you have to do is re-import the legacy keyring with one of the newer versions of GPG. The private keys are then also present in the new key store, so you can sync it to all the other hosts.

gpg --import $HOME/.gnupg/secring.gpg
gpg --list-secret-keys

IP bans on Linux

Ban Hammer

Who needs a quick ban?

Today we had a bruteforce attack on our nginx server. Well, I cannot say he was anywhere near successful: the guy did POST /wp-login.php several times per second and all he got as an answer was a 404. Fat chance…

But still, he made our access logs grow far larger than they usually do. So I tried to ban him. Unfortunately nginx does not use TCP wrappers by default (you can use ngx_tcpwrappers, although nginx will have to be rebuilt from source).

So I made a little script called ban-hammer to temporarily ban IPs using iptables. There is also a cron.daily script to unban IPs each day. The script requires rpnc, but it is easy to adapt without it.

These scripts add and remove the IPs in a dedicated iptables chain (which you can configure in the script). So you also have to configure your firewall to jump to the two chains and to reload the banned IPs on boot:

echo "Bans"

load_bans() {
  ban_table=$1   # file listing banned IPs, one "<ip>=..." entry per line
  ban_chain=$2   # name of the dedicated ban chain
  iptables=$3    # iptables or ip6tables

  # create the dedicated ban chain
  $iptables -N "$ban_chain"

  # add one DROP rule per banned IP (keep only the part before '=')
  while read -r ban
  do
    ip=$(echo "$ban" | cut -d'=' -f 1)
    $iptables -A "$ban_chain" -s "$ip" -j DROP
  done < "$ban_table"

  # jump to the ban chain from INPUT
  $iptables -A INPUT -j "$ban_chain"
}

load_bans /etc/firewall/ip4.ban IP4BAN iptables
load_bans /etc/firewall/ip6.ban IP6BAN ip6tables
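
For a quick manual ban, or to flush everything without waiting for the cron job, the same chain can be managed directly (the IP below is just an example):

iptables -A IP4BAN -s 203.0.113.7 -j DROP   # ban a single IPv4 address
iptables -F IP4BAN                          # lift all IPv4 bans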

Hello FreeBSD!

It’s been more than two weeks now since I switched from Linux to FreeBSD. There are multiple reasons behind this change and I will not dwell on all of them. If you read this blog (do you? :)), you probably know that I am a long-time advocate of Debian. One particular thing that I like about Debian is that it doesn’t tie your hands with a large set of packages. It is a universal operating system that you can tailor to better suit your needs.

However, as time passed, it became harder to modify anything. More and more I found myself patching programs that just want to do things their own fancy way. More and more some random daemon would get in my way because it supposedly covers all possible use cases. And recently I came under the impression that my system was just a bunch of layers upon layers of various daemons doing their stuff somehow, somewhere, all of them trying to reinvent the wheel, with a twist.

Finally, there is one important thing you should remember: Linux is not UNIX. Actually, in the past few years it has started to diverge from this philosophy quite significantly. This article presents some differences between the UNIX and the Linux/FLOS models much better than I could. And this is where we come to the root of my decision. While I can understand some of the benefits of the latter approach, it dawned on me that, as a user, I do not fit in FLOS, and that if I kept using Linux as a desktop my life would be an endless hell of frustration and ranting. Note that this transition had been foreseeable for a long time. I have always spent a lot of time with the BSDs. However those were casual and experimental setups, and I didn’t do much more than port stuff to them.

I might just as well use such a system on a daily basis. So I decided to take the leap and use FreeBSD on my laptop (ThinkPad X201). I first installed FreeBSD 10 (RELEASE), but it didn’t work as expected. In particular the Intel KMS driver did not work properly, xrandr did not work either, and performance was far lower than on Linux. Needless to say, I was a bit downhearted. I expected so much from this first installation.

After an evening weighing the pros and cons, sadly contemplating the idea of returning to Linux, I decided to give it another try with FreeBSD 11 (CURRENT). Fortunately, almost everything worked perfectly this time. The Intel KMS driver works, although I don’t have access to the ttys (ttys and suspend now work on HEAD). Xrandr works perfectly, which is imperative for giving presentations. The wireless card, sound card, fingerprint reader and UltraBase also work with no apparent problem.

However I still have some problems with the function keys not being detected on the external ThinkPad keyboard. Also xscreensaver does not always detect the fingerprint reader. Finally, the secondary mouse and keyboard are not always properly detected by X. I guess this is probably a problem with HAL, but I have not looked into it yet (ums_load="YES" in /boot/loader.conf).

I did several quick benchmarks to compare the performance with the Debian installation; I will post the results in a few days. I will also take advantage of the change to update some of my projects and to clean up my configurations a little bit. I already did so for Emacs and Awesome WM, though for now I have something else to do.

Source based routing

My two home servers are down for the moment. This also means that our two IPv6 SixXS tunnels are down, which costs us 100 ISK per week. Argh! I need to get these up and running as soon as possible. Fortunately we have another Linux VPS that can save us. So we just have to enable the two tunnels there and make sure that we can ping to/from both interfaces.

Setting up the two tunnels is easy. Use one configuration file per tunnel: set "tunnel_id" to the tunnel associated with that configuration file, use a distinct "pidfile" and "ipv6_interface" for each tunnel, and set "defaultroute" to false because we already have a default IPv6 route. A sketch of such a configuration file follows the start/stop commands below. Now you can start/stop each tunnel with:

aiccu start /etc/aiccu/tunnel0.conf
aiccu start /etc/aiccu/tunnel1.conf

aiccu stop  /etc/aiccu/tunnel0.conf
aiccu stop  /etc/aiccu/tunnel1.conf
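
For reference, here is a rough sketch of what /etc/aiccu/tunnel0.conf can look like; the credentials and tunnel id are placeholders, and the parameter list is from memory, so check the packaged example configuration:

username       XXXX-SIXXS
password       secret
protocol       tic
server         tic.sixxs.net
tunnel_id      T00000
ipv6_interface sixxs0
pidfile        /var/run/aiccu-tunnel0.pid
defaultroute   false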

Don’t forget to hack /etc/init.d/aiccu to start/stop both tunnels on each reboot. OK! So now ifconfig lists the two interfaces, sixxs0 and sixxs1, up and running. This is great, but wait… nobody outside can ping these interfaces. The tunnels must answer pings to be considered active by SixXS, so we’d better get this working.

For now we have these three interfaces and IPs (not the actual names/IPs):

  1. net0 (2001::1) default
  2. sixxs0 (2a02::1)
  3. sixxs1 (2a02::2)

By default, all our IPv6 traffic goes through net0. However, and unsurprisingly, our ISP filters the traffic at the output of net0, so we cannot use this interface to answer the echo requests. What we actually want is for traffic originating from 2a02::1 to go through sixxs0 and traffic from 2a02::2 to go through sixxs1. That is, one default route chosen based on the source address.

Linux has long had support for multiple routing tables (CONFIG_IP_MULTIPLE_TABLES). Basically, what we will do here is the following:

  • Create one routing table per tunnel interface (sixxs0, sixxs1).
  • Each table will have a default route through its interface.
  • Lookup into one of the two tables according to the source IP.

You can find some relevant documentation in Linux Advanced Routing & Traffic Control, Chapter 4.

We first list the current rules:

# ip rule list
0:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default

We can see that we have three routing tables: one for the local addresses, the main routing table (what you get with a plain ip -6 route), and the fallback default table.
Let’s first check the local routing table (we are just curious):

# ip -6 route list table local
local ::1 via :: dev lo  proto none  metric 0  mtu 16436 advmss 16376 hoplimit 0
local 2a02::1 via :: dev lo  proto none  metric 0  mtu 16436 advmss 16376 hoplimit 0
local 2001::1 via :: dev lo  proto none  metric 0  mtu 16436 rtt 6ms rttvar 7ms cwnd 10 advmss 16376 hoplimit 0
local 2a02::2 via :: dev lo  proto none  metric 0  mtu 16436 advmss 16376 hoplimit 0
local fe80::1 via :: dev lo  proto none  metric 0  mtu 16436 advmss 16376 hoplimit 0
local fe80::2 via :: dev lo  proto none  metric 0  mtu 16436 advmss 16376 hoplimit 0
ff00::/8 dev net0  metric 256  mtu 1500 advmss 1440 hoplimit 0
ff00::/8 dev sixxs1  metric 256  mtu 1280 advmss 1220 hoplimit 0
ff00::/8 dev sixxs0  metric 256  mtu 1280 advmss 1220 hoplimit 0

So now you know what’s going on when you ping one of your local interfaces. But back to our point. We name our two new routing tables in /etc/iproute2/rt_tables:

# SixXS tables
200 sixxs0
201 sixxs1

Now we add the default route in each of these two tables:

ip -6 route add default dev sixxs0 table sixxs0
ip -6 route add default dev sixxs1 table sixxs1
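
A quick sanity check that each table got its default route:

ip -6 route list table sixxs0
ip -6 route list table sixxs1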

And finally we use two rules to map the source address to the correct routing table:

# ip -6 rule add from 2a02::1 table sixxs0
# ip -6 rule add from 2a02::2 table sixxs1
# ip -6 rule list
0:  from all lookup local
16383:  from 2a02::1 lookup sixxs0
16383:  from 2a02::2 lookup sixxs1
32766:  from all lookup main
32767:  from all lookup default

It should be OK but let’s check that. We can ping from the sixxs interfaces:

ping6 -c1 -I 2a02::1 www.kame.net
ping6 -c1 -I 2a02::2 www.kame.net

We also check that we can ping our interfaces from another host:

ping6 -c1 2a02::1
ping6 -c1 2a02::2

Everything works, that’s great! Finally we just hack /etc/init.d/aiccu to configure the routing tables and rules on each reboot; a rough sketch of the addition follows the quote below. Note that you need to sleep a bit after issuing aiccu start, because the daemon needs some time to bring the tunnels up. Also note that you must be careful when you test your script (quoting the SixXS FAQ):

“If a client connects more than 4 times in 60 seconds (1 minute) the client will not be allowed to connect again for the next 5 minutes. In case this threshold is exceeded more than once in 24 hours a client will be automatically blocked for a week.”

As you can guess, I have been blocked. Oops!
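
For completeness, here is the kind of snippet appended to the start action of /etc/init.d/aiccu; the sleep duration is a guess, adjust it to your setup:

# give aiccu a few seconds to bring the sixxs interfaces up
sleep 5

# default route per tunnel table, then map source addresses to the tables
ip -6 route add default dev sixxs0 table sixxs0
ip -6 route add default dev sixxs1 table sixxs1
ip -6 rule add from 2a02::1 table sixxs0
ip -6 rule add from 2a02::2 table sixxs1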

SANE USB permissions

Today I had a permission problem with SANE on Linux. SANE stands for "Scanner Access Now Easy"; it provides standardized access to scanner hardware (http://www.sane-project.org) and is the most commonly used scanning framework on UNIX/Linux.

In my case the USB scanner was not recognized when issuing scanimage -L from my user account, although it worked correctly as root and my user is in the scanner group. What’s more, sane-find-scanner reported permission errors when run as my user. The owner and group of the device (in my case /dev/bus/usb/002/004) were root:root. At this point we already know that something weird is happening: I expected something like root:scanner instead.

Looking into /lib/udev/rules.d/60-libsane.rules, I found the line in charge of changing the permissions for each scanner device matched by SANE:

ENV{libsane_matched}=="yes", RUN+="/bin/setfacl -m g:scanner:rw $env{DEVNAME}"

This is nice, but I do not use ACLs and they are disabled in my kernel, so this command is useless. So I replaced this line with:

ENV{libsane_matched}=="yes", RUN+="/bin/setfacl -m g:scanner:rw $env{DEVNAME}", MODE="0664", GROUP="scanner"
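
After editing the rule, I reloaded udev and re-plugged the scanner, then checked the device node again (the bus/device numbers are of course specific to my machine):

udevadm control --reload-rules
ls -l /dev/bus/usb/002/004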

Now the owner and group are correctly set to root:scanner and I can use my scanner as a regular user.

Note that on my system libsane, sane-utils and xsane are the only packages depending on the acl package. According to what I’ve seen in the changelog, they do so in order to cope with MFPs (multi-function printers), which I presume should be accessible as a scanner and a printer device at the same time. What I would have done instead is create a special group for MFP devices and use that. IMO still less of a mess than enabling ACLs on the whole system for a single package.

GTalk browser plugin on Debian (testing)

So you installed the GTalk browser plugin on Debian testing and it doesn’t work. However GTalk is listed correctly when you list the plugins in your browser. So what now?

Well, you can try to remove libudev0. It seems that the plugin has some problems when both libudev1 and libudev0 are present on the system.
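
Something along these lines should do, assuming apt:

apt-get remove libudev0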