crush depth

How To Verify blog.io7m.com

This blog was designed to be verifiable:

$ gpg --recv-keys 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8
$ wget -r http://blog.io7m.com
$ find blog.io7m.com -name '*.asc' -exec gpg --verify {} \;

Note that the key fingerprint 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8 above assumes you're reading this in 2017. By the time 2018 comes around, this blog will be signed with a new key (and with a new key for each passing year after that).

Possible points of failure:

  1. A malicious actor gets the remote keyserver to serve a different key than the one with fingerprint 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8. Does gnupg verify that a received key has the fingerprint that the user specified on the command line? What if the user specified my name and key comment instead of a fingerprint? At this point the actor might be able to convince you that the signatures on files on blog.io7m.com are invalid, or that its own key is actually mine.

  2. A malicious actor modifies the files and signatures when wget downloads them. The actor can't generate valid signatures for the key 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8 (unless it can break RSA), but it can try to convince you that its own key is actually my key and therefore have you trust that the data you're seeing is unmodified and was signed by me. If the actor managed to pull off point 1 above, then you're completely compromised.

  3. A malicious actor removes some of the signatures. If you didn't know exactly how many pages there should be, you'd not notice if gpg didn't verify one of them.

Point 1 has no technical solution; you need to verify the keys you receive and check that the signatures on those keys were made by other keys that you trust. If you're lucky, there is a chain of signatures leading to a key that you do trust with certainty. I have yearly expiring keys, and I sign each new year's keys with the old keys. Unless my keys have been compromised year after year, there's a reasonable chance that the keys you see are mine!
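
Before trusting any of the file signatures, it's worth at least confirming that the key you actually received carries the fingerprint you asked for, and looking at the signatures on it. A minimal check (the exact output format varies between gnupg versions):

$ gpg --fingerprint 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8
$ gpg --check-sigs 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8

If the printed fingerprint doesn't match the one you intended to fetch, or none of the signatures on the key lead back to keys you already trust, stop there.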

Point 2 is partially mitigated by https, assuming that the actor doesn't have backdoor access to the CA issuing the certificate. If it does, the actor can have the CA issue a new certificate, redirect you to a server that the actor controls, decrypt the traffic, modify it, and then re-encrypt it. You'd never know anything was wrong.

Point 3 is manually mitigated by reading the posts by year page and checking that you have at least as many signatures as pages. I may start publishing a CHECKSUMS file that just contains signed SHA512 hashes of every file on the site.
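
As a sketch of what that might look like (the CHECKSUMS file name is just illustrative, and I'd generate the file as part of publishing the site), the file could be produced with something like

$ find blog.io7m.com -type f ! -name '*.asc' -exec sha512sum {} \; > CHECKSUMS
$ gpg --armor --detach-sign CHECKSUMS

and anyone with a copy of the site could then verify both the signature and the completeness of their copy with

$ gpg --verify CHECKSUMS.asc CHECKSUMS
$ sha512sum -c CHECKSUMS

A page that had been silently removed would then show up as a missing file, rather than simply as one fewer signature.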

I'm considering using keybase in addition to publishing PGP keys on the public keyservers. I don't do social media, so I'm not sure how much it'd apply to me. I do use GitHub heavily though.

New PGP Key
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

2017-12-17T14:19:49+0000

Pushed a new key for signing commits in Android work. I'll still be
releasing new personal and release signing keys in January (and I'll
be switching to ed25519 keys from RSA).

E134 3512 A805 115A C1A0 1D15 C578 C3C0 C9B1 2BFF
Mark Raynsford (2018 android commit signing)

-----BEGIN PGP SIGNATURE-----

iQJCBAEBCgAsFiEEgWja4isV0+3HIsI9DxW30G+oDLgFAlo2fa4OHG1hcmtAaW83
bS5jb20ACgkQDxW30G+oDLjKiA/9HRrc0F7d/4C4ybdKFpp8N3S/RT/NJLfrYOTV
XQvt9Nw+eJydsygY8IbaZiWo9hkxddI1DLuWtahcrXFWGQ/VpAUmZcIondaJzLna
42Ui5jFpkweOVH2VYmuuDTV5rpkfH7IkTml5m2OsnsVU4hO9V1DGoNL+/5p2xv0E
sLkopFX/9qaRtw0qJ7u8Bl5217kodlI2inEMfomI8QcMp+JarkTogEdkkzFBz7Qo
/XGevkdeIBMCQ0NpsvoQmclGbgOtu6js1LvoQjDaXoVtT1yyIM831FDgFxbKCMw8
gzxx0f/TZjoRizIE0fNIwPmLYG5HXLCt6wN7iT6MYhp7ijBABZB8tH7cffCF1sM5
QE+eg8bzOL4FT57XLPpp9eDXZmLqmy6EdrWedaWughPPjUofqaIw0Bar1iIp4pY0
lIkAUTsqtEI32sAdInJ0PPar8nYVM8COSoyZ08kdxImO3DHRGerI7DSi88JpTiRy
vX96LZ0UvwyUhFxUSJQuxXYd82bQgoBhdjsLWiurZZYdZC5EGCwA7m5zapLWF76m
SJloK1ogK628TygNrhnNNsUirrEsDJM2CaNnp8/viN1eFlWMb13dgyLOQijyFyYV
8Q56XeILyckUY3ERw+v5BPN5g3qdF4nL5O23L3zrSQKcn6yUGgcFa6gia9cc1/Hm
EiTv9cc=
=D3uo
-----END PGP SIGNATURE-----
Back To Java

I initially wrote jstructural as a set of XSLT stylesheets. That quickly became unmaintainable as the complexity of the stylesheets increased. I rewrote them in Java. I eventually got tired of writing documentation in XML (I didn't know that better XML editors existed), so I ended up wanting to add the ability to write documents as S-expressions.

The right way to do this was to redesign jstructural so that it defined a core AST type and added multiple parsers that all produced values of this core type. Converting between documents of different types would then be a matter of parsing using one syntax, and then serializing the AST using another.

The most natural way to express an AST is as a large algebraic data type. Java doesn't have direct support for these yet (although it's slowly getting them via Project Amber). I therefore looked at other JVM languages that did support them, and the choice was either Scala or Kotlin. I chose Kotlin, after multiple nasty experiences with Scala. kstructural was born.

Fast forward to the present day, and I'm utterly sick of working on Kotlin code. The tools are slow. Compilation is slow. The people in charge of the ecosystem think devolving to special snowflake imperative build tools like Gradle is a good idea.

I'm going to do a new Java implementation of the structural language, and I'm probably going to redesign the language to make it even easier to implement. Nowadays, XML editing is fast and painless thanks to various tools (with real-time validation errors). XOM is essentially unmaintained, so I'll replace the XML parsing code with an efficient SAX parser (and actually get lexical information in the parsed AST, unlike with XOM).

DKIM

Deployed DKIM on the mail server today. All mail that originates from io7m.com will be signed, and I've published DMARC and ADSP policies that tell other servers to be suspicious of any mail that isn't signed. Seems to be working well.
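
The policies themselves are just DNS TXT records, so they're easy to inspect. Something along these lines will show them (the "selector" label here is a placeholder; the real DKIM selector is whatever the signing filter is configured to use):

$ dig +short TXT _dmarc.io7m.com
$ dig +short TXT _adsp._domainkey.io7m.com
$ dig +short TXT selector._domainkey.io7m.com

The first two show the DMARC and ADSP policies respectively; the last shows the public half of the signing key.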

TLS

I've enabled TLS on all io7m.com domains.

I'm using certificates from CAcert.org. These aren't trusted by all browsers, but I don't care all that much. If trust becomes a serious issue at some point, I'll buy real TLS certs. I PGP sign all text on the server anyway, so anyone who really cares about checking whether some nefarious third party has changed the data in transit can do so, albeit manually.

No, I'm not using Let's Encrypt. I've looked into it several times and I just can't get around the fact that it requires a huge number of moving parts, and that the average ACME client requires a ridiculous level of privilege to work properly. If you want any possibility of security, this is what it takes to get it (see the sketch after the list below).

At a minimum:

  • The client has to be able to make connections to a remote server in order to download an extremely security-critical bit of data (the certificate). If this step fails for any reason (the remote side being unavailable, breakage in the client), the existing certificate expires and the https service is dead.

  • The client has to be intelligent enough to know when to try to get a new certificate. When is the right time? Who knows. Trying to request a new certificate a minute before the current one expires is suicidally reckless. Trying to do it the day before might be acceptable, but what if it goes wrong? Is a day long enough to try to fix a problem in something as gruesomely complex as the average ACME client? The availability of your https service essentially becomes tied to the availability of the ACME server. This wouldn't be so bad if the update was a yearly thing and could be planned for, but LE certificates are valid for 90 days.

  • The client has to be able to write to the directory being served by the http server in order to be able to respond to challenges. If the client is compromised, it has the ability to trash the served web site(s). I run my services in a highly compartmentalized manner, and having to allow this would be horrible. There are other challenge types, such as publishing a DNS record containing a response to a challenge, but those have the same problem of what should be an unprivileged program having to cross a security boundary and threatening the integrity of another service's data.

  • The client has to be able to write to the http server's configuration data (in order to publish the new certificate). Again, if the client is compromised, it can trash the http server's configuration. If the update fails here, the existing certificate expires and the https service is dead.

  • Assuming that the client actually does manage to respond to a challenge and get a certificate issued, and does actually manage to write that certificate to the filesystem, the problem then becomes getting the https server to use it. Most servers read certificates once on startup and don't reread them. Therefore, the client needs privileges to restart the server. This is totally unacceptable; no service should ever be able to forcefully restart another.
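
To make that concrete, a typical setup with the certbot client looks roughly like the following; the webroot path and the use of nginx here are only illustrative, and other clients differ in the details:

$ certbot certonly --webroot -w /var/www/blog.io7m.com -d blog.io7m.com
$ certbot renew --post-hook "systemctl reload nginx"

The first command has to write challenge responses into the directory being served; the second is the sort of thing that gets run from cron and needs enough privilege to reload the http server. Both cross exactly the kind of security boundaries described above.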

There are so many possible points of failure and every single one results in a broken https service at best, or a system compromise at worst. I'm vaguely surprised at how little criticism I've seen online of the complexity of Let's Encrypt, given the usual reaction of information security experts to any new software system. To paraphrase Neal Stephenson, the usual reaction is to crouch in a fetal position under a blanket, screaming hoarsely that the new system is way too complex and is a security disaster. I have to wonder how many unpublished attacks against the ACME protocol there are out there.

Contrast this with a typical TLS certificate provider: I download a text file once a year and put it on the server. I then restart the https service. Done. No extra software running, no other points of failure.
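
For comparison, the manual process amounts to something like this (the host name, paths, and the use of nginx are placeholders; substitute whatever the server actually runs):

$ scp io7m.com.crt www.io7m.com:/etc/ssl/certs/
$ ssh www.io7m.com systemctl restart nginx

Two commands, once a year, with nothing left running afterwards.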