Error 500 in nextcloud polls

Today we found the polls application on one of our nextcloud instances completely broken. Attempting to create a new poll or to add date options to an existing one resulted in HTTP 500 errors. The nextcloud logs showed errors like:

duplicate key value violates unique constraint "oc_polls_polls_pkey"
DETAIL: Key (id)=(9) already exists.

or when adding options to an existing poll:

OCP\AppFramework\Db\DoesNotExistException: Did expect one result but found none

A clue: this happened after a migration of the nextcloud DB from MySQL to PostgreSQL. My guess is that the PostgreSQL sequences were not updated correctly during the migration. Once an auto-generated ID conflicted with an existing ID in a table, it was no longer possible to insert new rows for polls or their options, which is the unique constraint violation you see in the error.

The behavior matched that: we could still add some polls and options, but as soon as a sequence caught up to an existing ID, any INSERT failed with a primary key conflict. In our case only the polls tables seemed affected, but after such a migration this could happen to any table.
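Before fixing anything, you can confirm the diagnosis by comparing a sequence's current value with the table's highest ID. A sketch for the polls table, assuming PostgreSQL's default sequence naming:

```sql
-- If last_value is at or below MAX(id), future INSERTs will collide
SELECT last_value FROM oc_polls_polls_id_seq;
SELECT MAX(id) FROM oc_polls_polls;
```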

The fix was to reset the sequences to the current maximum ID for each table of the polls application:

SELECT setval('oc_polls_polls_id_seq',    (SELECT MAX(id) FROM oc_polls_polls));
SELECT setval('oc_polls_options_id_seq',  (SELECT MAX(id) FROM oc_polls_options));
SELECT setval('oc_polls_votes_id_seq',    (SELECT MAX(id) FROM oc_polls_votes));
SELECT setval('oc_polls_share_id_seq',    (SELECT MAX(id) FROM oc_polls_share));
SELECT setval('oc_polls_comments_id_seq', (SELECT MAX(id) FROM oc_polls_comments));
SELECT setval('oc_polls_log_id_seq',      (SELECT MAX(id) FROM oc_polls_log));

Open with and external programs on Claws-Mail

I was integrating rrr with claws-mail, using it to open external files, so I could centralize file-opening configuration there. I created a mail profile in ~/.config/rrr.conf with a catch-all rule * echo "'%s'" >> /tmp/mail-rrr.log to debug which files and extensions were being passed. The plan was to set rrr -p mail -F '%s' as the default external application, but the command kept failing.

I wrote a small test.sh script to display each argument individually as passed by claws-mail. That’s when I discovered the issue: claws-mail splits the command on every single space, so consecutive spaces produce an empty argument. My configured command was actually rrr -p  mail -F '%s' (note the double space between -p and mail), which resulted in the argument list ["rrr", "-p", "", "mail", "-F", "the-actual-file"]. The empty string broke everything.
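The script was nothing fancy; a minimal sketch of it, written here as a shell function for illustration (the names are mine, not the original test.sh):

```shell
# dump_args prints each argument on its own line, bracketed,
# so empty arguments become visible
dump_args() {
  i=1
  for arg in "$@"; do
    printf '%d: [%s]\n' "$i" "$arg"
    i=$((i + 1))
  done
}

# Simulate what claws-mail passes when the command template
# contains a double space between -p and mail:
dump_args rrr -p "" mail -F "the-actual-file"
# line "3: []" reveals the empty argument created by the double space
```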

So the fix was simple: just make sure there are no extra spaces anywhere in the command.

Disable rust-analyzer warnings

Currently, I mainly use Neovim to code in Rust (and pretty much any other language). The problem is that when you start a new Rust project and begin creating structs and functions all over the place, you get flooded with warnings about “unused this” and “unused that”.

Many of you probably already know this, but a quick trick to silence those warnings is to simply run:
export RUSTFLAGS="-Awarnings"

This disables all warnings, which is heavy-handed, but it can be very convenient during early development.

Unbound stub-zone for reverse private IPv6

Today I tried to configure a stub-zone on an unbound resolver, for the reverse resolution of some private IPv6 addresses. In unbound.conf, it looks something like this:

stub-zone:
  name: X.X.X.X.X.X.d.f.ip6.arpa.
  stub-addr: {authoritative-server-ip}

But trying a reverse resolution of any of those private IPv6 addresses failed:

$ drill -x fdXX:XXXX::XXXX
;; AUTHORITY SECTION:
d.f.ip6.arpa.	10800	IN	SOA	localhost. nobody.invalid. 1 3600 1200 604800 10800

I found the problem in a snippet from unbound.conf.sample:

# By default, for a number of zones a small default 'nothing here'
# reply is built-in.  Query traffic is thus blocked.  If you
# wish to serve such zone you can unblock them by uncommenting one
# of the nodefault statements below.
# You may also have to use domain-insecure: zone to make DNSSEC work,
# unless you have your own trust anchors for this zone.
# local-zone: "localhost." nodefault
# local-zone: "127.in-addr.arpa." nodefault
# local-zone: "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa." nodefault
# local-zone: "home.arpa." nodefault
# local-zone: "resolver.arpa." nodefault
# local-zone: "service.arpa." nodefault
# local-zone: "onion." nodefault
# local-zone: "test." nodefault
# local-zone: "invalid." nodefault
# local-zone: "10.in-addr.arpa." nodefault
# local-zone: "16.172.in-addr.arpa." nodefault
# local-zone: "17.172.in-addr.arpa." nodefault
# local-zone: "18.172.in-addr.arpa." nodefault
# local-zone: "19.172.in-addr.arpa." nodefault
# local-zone: "20.172.in-addr.arpa." nodefault
# local-zone: "21.172.in-addr.arpa." nodefault
# local-zone: "22.172.in-addr.arpa." nodefault
# local-zone: "23.172.in-addr.arpa." nodefault
# local-zone: "24.172.in-addr.arpa." nodefault
# local-zone: "25.172.in-addr.arpa." nodefault
# local-zone: "26.172.in-addr.arpa." nodefault
# local-zone: "27.172.in-addr.arpa." nodefault
# local-zone: "28.172.in-addr.arpa." nodefault
# local-zone: "29.172.in-addr.arpa." nodefault
# local-zone: "30.172.in-addr.arpa." nodefault
# local-zone: "31.172.in-addr.arpa." nodefault
# local-zone: "168.192.in-addr.arpa." nodefault
# local-zone: "0.in-addr.arpa." nodefault
# local-zone: "254.169.in-addr.arpa." nodefault
# local-zone: "2.0.192.in-addr.arpa." nodefault
# local-zone: "100.51.198.in-addr.arpa." nodefault
# local-zone: "113.0.203.in-addr.arpa." nodefault
# local-zone: "255.255.255.255.in-addr.arpa." nodefault
# local-zone: "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa." nodefault
# local-zone: "d.f.ip6.arpa." nodefault
# local-zone: "8.e.f.ip6.arpa." nodefault
# local-zone: "9.e.f.ip6.arpa." nodefault
# local-zone: "a.e.f.ip6.arpa." nodefault
# local-zone: "b.e.f.ip6.arpa." nodefault
# local-zone: "8.b.d.0.1.0.0.2.ip6.arpa." nodefault
# And for 64.100.in-addr.arpa. to 127.100.in-addr.arpa.

As you can see, d.f.ip6.arpa. is blocked by default, so I just had to add this line to unblock it:

local-zone: "d.f.ip6.arpa." nodefault
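Putting the pieces together, the relevant parts of unbound.conf look roughly like this (the domain-insecure line may also be needed for DNSSEC, as the sample config notes; it was not needed in my case):

```
server:
  local-zone: "d.f.ip6.arpa." nodefault
  # possibly required unless you have a trust anchor for this zone:
  # domain-insecure: "d.f.ip6.arpa."

stub-zone:
  name: X.X.X.X.X.X.d.f.ip6.arpa.
  stub-addr: {authoritative-server-ip}
```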

Custom Mail-From with Amazon SES

We manage our own mail server; however, one big mail provider company (among other things), let’s name it the big M, has the very annoying habit of randomly blacklisting the entire IP range of our hosting provider, presumably because some hosts within the DC are botnet-ed to the core. We could ask to have our IP whitelisted, but the entire block would get blacklisted again the following month, and the block-level listing would take precedence.

Now, since virtually everybody on the Internet uses either Outlook or Gmail as their email provider, that meant that at times a large chunk of our outbound emails would just get silently dropped, and we wouldn’t know about it until users complained.

Having given up on the big M since the early 2000s, I configured an Amazon SES relay for those problematic destinations. Our emails are relayed through it when they need to reach an Outlook destination, and thankfully, they haven’t blocked AWS yet.

In addition, we also configured SPF and DKIM, so our domain doesn’t get wickedly used for spam. That works well when using our own SMTP server, but when passing through the SES relay, the Mail-From/Return-Path/Bounce-Address gets rewritten to come from {region}.amazonses.com. Hence, even the relaxed SPF alignment check fails: it requires that the Return-Path domain (rewritten to {region}.amazonses.com) be a subdomain or exact match of the From header domain (our own domain).

To remedy that, you need your domain registered as a verified identity in SES. Then, in Amazon SES > Configuration: Identities > yourdomain, edit Custom MAIL FROM domain and set it to a subdomain of yours. In our case we used ses-relay.ourdomain, but it can be anything really; it just has to be a subdomain.

You will also have to configure an MX record on that subdomain to route bounces back to Amazon SES. The console gives you the proper record to use; it looks like MX 10 feedback-smtp.{region}.amazonses.com. You also need an SPF record on that subdomain to ensure that mail sent from the relay passes the SPF check. Amazon recommends "v=spf1 include:amazonses.com ~all" for that, but we were a bit stricter and went with "v=spf1 include:{region}.amazonses.com -all".
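As a sketch, the resulting records on the custom MAIL FROM subdomain look roughly like this in zone-file syntax (ses-relay.ourdomain and {region} as above; adjust to your own domain and region):

```
ses-relay.ourdomain. IN MX  10 feedback-smtp.{region}.amazonses.com.
ses-relay.ourdomain. IN TXT "v=spf1 include:{region}.amazonses.com -all"
```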

You can choose the behavior should the MX on the subdomain be improperly configured or unreachable: either SES rejects the mail, or it falls back to rewriting the Mail-From to the default {region}.amazonses.com. I chose the former, because if it fails, I want it to fail hard. With the latter, you would only notice the fallback in your DMARC reports, and only if you dutifully analyze all of them, all the time. That could work if the DMARC reports were somehow automatically analyzed, but we don’t do that yet.

Another thing that got me scratching my head for about an hour: in our case the custom Mail-From was not honored. Even though it was configured correctly in the console and marked as Successful, the mail source still showed a Return-Path under {region}.amazonses.com. The reason was that we still had other SES identities around that were used for testing, especially Email address identities. When the sender address matched one of those, SES used that identity’s configuration instead of the domain identity’s, thus ignoring the custom Mail-From. I actually found that out from this stackoverflow post.

Avoid rebuilding rust for FreeBSD ports

Today I tried to rebuild a rust-based port on FreeBSD. It wanted to build lang/rust from scratch even though rust was already installed. The problem was that the latest binary package for rust was 1.86.0 while the latest version in the ports tree was 1.87.0. Digging into /usr/ports/Mk/Uses/cargo.mk, there is:

CARGO_BUILDDEP?=	yes
.  if ${CARGO_BUILDDEP:tl} == "yes"
BUILD_DEPENDS+=	${RUST_DEFAULT}>=1.87.0:lang/${RUST_DEFAULT}
.  elif ${CARGO_BUILDDEP:tl} == "any-version"
BUILD_DEPENDS+=	${RUST_DEFAULT}>=0:lang/${RUST_DEFAULT}
.  endif

That’s the bit actually enforcing the build dependency. But as you can see, it’s easy to bypass with export CARGO_BUILDDEP=no. Just make sure you have rust installed, either via rustup or from the packages.
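In practice, the build then looks like this (the port origin below is a made-up example; use your actual port's category and name):

```
# Assumes a recent-enough rust toolchain is already installed
# (from packages or rustup), since the dependency check is skipped
export CARGO_BUILDDEP=no
make -C /usr/ports/sysutils/some-rust-port install clean
```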

Machines

Quote

In this day and age, this quote needs to be plastered in every school, and put as a foreword to any article about AI.

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
— Frank Herbert

One Million Robbery Queries

Today I had the very unpleasant surprise of finding out that over the last month we had received nearly one million requests from ChatGPT scraping bots. That was especially the case on our photo gallery website, where they made a request every single second to check whether there were any new pictures to steal, so that our cats and dogs could be featured in an AI-generated image.

The first step was to ban them, but that might not be sufficient, as the requests come from random IP blocks within an Azure DC. Ironically, I asked ChatGPT to generate a robots.txt file that bans the ChatGPT scraping bots. Here it is:

User-agent: ChatGPT-User
Disallow: /
User-agent: ChatGPT
Disallow: /
User-agent: GPTBot
Disallow: /