crush depth


Deployed DKIM on the mail server today. All mail that originates from the server will be signed, and I've published DMARC and ADSP policies that tell other servers to be suspicious of any mail that isn't signed. Seems to be working well.
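
For illustration, the published records look roughly like this when queried. This is a sketch assuming a hypothetical example.com zone and a DKIM selector named "mail", with the actual key material and policy values elided or invented:

    # DKIM public key for the "mail" selector
    $ dig +short TXT mail._domainkey.example.com
    "v=DKIM1; k=rsa; p=MIGfMA0..."

    # DMARC policy: quarantine mail that fails DKIM, strict alignment
    $ dig +short TXT _dmarc.example.com
    "v=DMARC1; p=quarantine; adkim=s; aspf=s"

    # ADSP: declares that all mail from the domain is signed
    $ dig +short TXT _adsp._domainkey.example.com
    "dkim=all"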


I've enabled TLS on all domains.

I'm using certificates from a provider that isn't trusted by all browsers, but I don't care all that much. If trust becomes a serious issue at some point, I'll buy real TLS certs. I PGP sign all text on the server anyway, so anyone who really cares about checking whether some nefarious third party has changed the data in transit can do so, albeit manually.
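
Checking a page by hand, assuming it's saved as page.txt with a detached signature in page.txt.asc (hypothetical names), is a single command:

    # verify the detached PGP signature against the downloaded text
    $ gpg --verify page.txt.asc page.txt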

No, I'm not using Let's Encrypt. I've looked into it several times and I just can't get around the fact that it requires a huge number of moving parts, and that the average ACME client requires a ridiculous level of privileges to work properly. If you want any possibility of security, the following is what it takes to get it.

At a minimum:

  • The client has to be able to make connections to a remote server in order to download an extremely security-critical bit of data (the certificate). If this step fails for any reason (the remote side being unavailable, breakage in the client), the existing certificate expires and the https service is dead.

  • The client has to be intelligent enough to know when to try to get a new certificate. When is the right time? Who knows. Trying to request a new certificate a minute before the current one expires is suicidally reckless. Trying to do it the day before might be acceptable, but what if it goes wrong? Is a day long enough to fix a problem in something as gruesomely complex as the average ACME client? The availability of your https service essentially becomes tied to the availability of the ACME server. This wouldn't be so bad if the update were a yearly thing that could be planned for, but LE certificates are valid for only 90 days.

  • The client has to be able to write to the directory being served by the http server in order to respond to challenges (see the sketch after this list). If the client is compromised, it has the ability to trash the served web site(s). I run my services in a highly compartmentalized manner, and having to allow this would be horrible. There are other challenge types, such as publishing a DNS record containing a response to a challenge, but those have the same problem: what should be an unprivileged program has to cross a security boundary, threatening the integrity of another service's data.

  • The client has to be able to write to the http server's configuration data (in order to publish the new certificate). Again, if the client is compromised, it can trash the http server's configuration. If the update fails here, the existing certificate expires and the https service is dead.

  • Assuming that the client actually does manage to respond to a challenge and get a certificate issued, and does actually manage to write that certificate to the filesystem, the problem then becomes getting the https server to use it. Most servers read certificates once on startup and don't reread them. Therefore, the client needs privileges to restart the server. This is totally unacceptable; no service should ever be able to forcefully restart another.
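
To make those moving parts concrete, here's a rough sketch of what a minimal HTTP-01 renewal involves. The domain, paths, and variables are hypothetical, and a real ACME client performs all of these steps (and more) automatically:

    # 1. Answer the challenge: write the token response under the web root,
    #    which requires write access to the served directory.
    $ echo "${TOKEN}.${ACCOUNT_KEY_THUMBPRINT}" \
        > /var/www/example.com/.well-known/acme-challenge/${TOKEN}

    # 2. Download and install the newly issued certificate, which requires
    #    write access to the http server's configuration data.
    $ mv new-cert.pem /etc/ssl/certs/example.com.pem

    # 3. Most servers read certificates once at startup, so the client must
    #    also be privileged enough to restart the service.
    $ service httpd restart

Every one of those steps crosses a security boundary that, in a compartmentalized setup, would otherwise stay closed.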

There are so many possible points of failure, and every single one results in a broken https service at best or a system compromise at worst. I'm vaguely surprised at how little criticism of the complexity of Let's Encrypt I've seen online, given the usual reaction of information security experts to any new software system. To paraphrase Neal Stephenson, the usual reaction is to crouch in a fetal position under a blanket, screaming hoarsely that the new system is way too complex and is a security disaster. I have to wonder how many unpublished attacks against the ACME protocol are out there.

Contrast this with a typical TLS certificate provider: I download a text file once a year and put it on the server. I then restart the https service. Done. No extra software running, no other points of failure.
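
Under the same hypothetical names as above, the entire yearly ritual amounts to:

    # once a year: install the renewed certificate and restart
    $ scp provider:example.com.pem /etc/ssl/certs/example.com.pem
    $ service httpd restart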

Maven JavaDoc Plugin Fixed

The Maven JavaDoc plugin 3.0.0 is finally ready for release. This means that I can migrate 60+ projects to Java 9 and finally get the new versions pushed to Central.

Big thanks to Robert Scholte, who worked hard to ensure that everything worked properly and even got my rather unusual usage of the plugin (aggregating documentation into a single module) working as well.
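
For reference, stock aggregation with the plugin is invoked as follows; this is a minimal sketch of the standard javadoc:aggregate goal, which may not match my exact setup:

    # build a single set of API docs covering all modules of a multi-module
    # build; output lands under the top-level project's target/site/apidocs
    $ mvn javadoc:aggregate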

Obstructing JavaDoc

I've been anxiously awaiting the 3.0.0 release of the maven-javadoc-plugin for weeks, and in an ironic twist of fate, I'm now responsible for delaying the release even further.

I found two rather nasty bugs in the version that was to become 3.0.0. I submitted a fix for the first and had it merged, but the second problem looks like it will take rather more work to fix, and my message to the javadoc-dev list asking for implementation advice is currently sitting in a moderation queue.

Expected But Got