crush depth

Checkstyle Rules

I'm going to start making all projects use a common set of Checkstyle rules rather than having each project carry its own rules around. I can't remember exactly why I avoided doing this in the beginning. I think it may've been that I wasn't confident that I could write one set of rules that would work everywhere. I've decided instead that I'll beat code with a shovel until it follows the rules, rather than beat the rules with a shovel until they follow the code.

Chasing Modules

I've been moving all of the projects I still maintain to Java 9 modules. In order to do this, however, I've needed to assist third party projects upon which I have dependencies to either modularize their projects or publish Automatic-Module-Name entries in their jar manifests. If you try to specify a dependency on a project that either hasn't been modularized or hasn't published an Automatic-Module-Name entry, you'll see this when building with Maven:

[WARNING] ********************************************************************************************************************
[WARNING] * Required filename-based automodules detected. Please don't publish this project to a public artifact repository! *
[WARNING] ********************************************************************************************************************

The reasons for this are documented on Stephen Colebourne's blog.
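For projects that aren't ready to modularize fully, publishing a stable name is a one-line manifest change. A sketch of the relevant MANIFEST.MF entry (the module name below is a placeholder, not one of the real projects listed):

```
Automatic-Module-Name: com.example.somelibrary
```

With that entry present, the jar can be placed on the module path under a name that won't change when the project later gains a real module-info.java.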

Here's a table of all of the third party projects upon which I depend, and an indication of the current state of modularization (I'll try to keep this updated as projects are updated):

Project State Updated
fastutil Considering 2018-02-07
jcpp Considering 2018-02-08
ed25519-java Considering 2018-02-09
LWJGL Fully Modularized 2018-02-07
Dyn4j Fully Modularized 2018-02-07
protobuf Ignored? 2018-02-07
antlr In Progress 2018-02-11
autovalue In Progress 2018-02-07
rome In Progress 2018-02-07
JGraphT Automatic Module Names (Full modularization in progress) 2018-02-11
Vavr Automatic Module Names 2018-02-07
commonmark-java Automatic Module Names 2018-02-07
javapoet Automatic Module Names 2018-02-07
xom Unclear 2018-02-07

Generally, if a project isn't planning to either modularize or publish automatic module names, then I'm looking for a replacement for that project.

Signing Issues

I'm having trouble deploying packages to Maven Central. The repository claims that it cannot validate the signatures I provide. I've filed a bug but haven't gotten a response. I'm wondering if it's down to me switching to Curve25519 PGP keys...

New PGP Keys

New PGP keys have been published.

Fingerprint                                       | UID
B84E 1774 7616 C617 4C68 D5E5 5C1A 7B71 2812 CC05 | Mark Raynsford (2018 personal)
F8C3 C5B8 C86A 95F7 42B9 36D2 97E0 2011 0410 DFAF | (2018 release-signing)

Verifying Again

See the previous blog post:

$ gpg --recv-keys 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8
$ wget -r
$ cd
$ gpg < checksum.asc | sha512sum --check


How To Verify

This blog was designed to be verifiable:

$ gpg --recv-keys 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8
$ wget -r
$ find -name '*.asc' -exec gpg --verify {} \;

Note that the 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8 key ID assumes you're reading this in 2017. By the time 2018 comes around, this blog will be signed with a new key (and a new one for each passing year).

Possible points of failure:

  1. A malicious actor gets the remote keyserver to serve a different key than the one with fingerprint 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8. Does gnupg verify that a received key has the fingerprint that the user specified on the command line? What if the user specified my name and key comment instead of a fingerprint? The actor at this point might be able to convince you that the signatures on the files are invalid. It might be able to convince you that its own key is mine.

  2. A malicious actor modifies the files and signatures when wget downloads them. The actor can't generate valid signatures for the key 8168DAE22B15D3EDC722C23D0F15B7D06FA80CB8 (unless it can break RSA), but it can try to convince you that its own key is actually my key and therefore have you trust that the data you're seeing is unmodified and was signed by me. If the actor managed to perform step 1 above, then you're completely compromised.

  3. A malicious actor removes some of the signatures. If you didn't know exactly how many pages there should be, you'd not notice if gpg didn't verify one of them.

Step 1 has no technical solution; you need to verify the keys you receive and check the signatures on those keys assuming they come from other keys that you trust. If you're lucky, there is a chain of signatures leading to a key that you do trust with certainty. I have yearly expiring keys, and I sign each new year's keys with the old keys. Unless my keys have been compromised yearly, there's a reasonable chance that the keys you see are mine!

Step 2 is partially mitigated by https, assuming that the actor doesn't have backdoor access to the CA issuing the certificate. The actor can have the CA issue a new certificate, redirect you to a server that the actor controls, decrypt the traffic, modify it, and then re-encrypt it. You'd never know anything was wrong.

Step 3 is manually mitigated by reading the posts by year page and checking that you have at least as many signatures as pages. I may start publishing a CHECKSUMS file that just contains signed SHA512 hashes of every file on the site.

I'm considering using keybase in addition to publishing PGP keys on the public keyservers. I don't do social media, so I'm not sure how much it'd apply to me. I do use GitHub heavily though.

New PGP Key


Pushed a new key for signing commits in Android work. I'll still be
releasing new personal and release signing keys in January (and I'll
be switching to ed25519 keys from RSA).

E134 3512 A805 115A C1A0 1D15 C578 C3C0 C9B1 2BFF
Mark Raynsford (2018 android commit signing)


Back To Java

I initially wrote jstructural as a set of XSLT stylesheets. That quickly became unmaintainable as the complexity of the stylesheets increased. I rewrote them in Java. I eventually got tired of writing documentation in XML (I didn't know that better XML editors existed) so I ended up wanting to add the ability to write documents as S-expressions.

The right way to do this was to redesign jstructural so that it defined a core AST type and added multiple parsers that all produced values of this core type. Converting between documents of different types would then be a matter of parsing using one syntax, and then serializing the AST using another.
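Pre-Amber Java can still encode this design, if verbosely: a closed class hierarchy for the core AST, with parsers and serializers defined against it. The following is an illustrative sketch only; these are not the actual jstructural types:

```java
import java.util.List;

// Illustrative core AST; the real jstructural types differ.
public abstract class AstNode
{
  private AstNode()
  {
    // Private constructor: only the nested classes below may subclass,
    // approximating a closed algebraic data type.
  }

  public static final class Text extends AstNode
  {
    public final String content;

    public Text(final String content)
    {
      this.content = content;
    }
  }

  public static final class Paragraph extends AstNode
  {
    public final List<AstNode> content;

    public Paragraph(final List<AstNode> content)
    {
      this.content = content;
    }
  }

  // A parser for one concrete syntax (XML, S-expressions, ...) produces
  // values of the core type; a serializer consumes them.
  public interface Parser
  {
    AstNode parse(String input);
  }

  public interface Serializer
  {
    String serialize(AstNode document);
  }
}
```

Converting between document formats then requires one parser and one serializer per syntax, rather than one converter per pair of syntaxes.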

The most natural way to express an AST is as a large algebraic data type. Java doesn't have direct support for these yet (although it's slowly getting them via Project Amber). I therefore looked at other JVM languages that did support them, and the choice was either Scala or Kotlin. I chose Kotlin, after multiple nasty experiences with Scala. kstructural was born.

Fast forward to the present day, and I'm utterly sick of working on Kotlin code. The tools are slow. Compilation is slow. The people in charge of the ecosystem think devolving to special snowflake imperative build tools like Gradle is a good idea.

I'm going to do a new Java implementation of the structural language, and I'm probably going to redesign the language to make it even easier to implement. Nowadays, XML editing is fast and painless thanks to various tools (with real-time validation errors). XOM is essentially unmaintained, so I'll replace the XML parsing code with an efficient SAX parser (and actually get lexical information in the parsed AST, unlike with XOM).
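As a sketch of the lexical-information point: the JDK's built-in SAX support hands the ContentHandler a Locator, so a parser can attach line numbers to every AST node as it builds them. The class and method names here are mine, not the planned implementation:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.Locator;
import org.xml.sax.helpers.DefaultHandler;

public final class LexicalPositions
{
  // Returns "element:line" for each start tag in the given document.
  public static List<String> positionsOf(final String xml) throws Exception
  {
    final List<String> positions = new ArrayList<>();
    final DefaultHandler handler = new DefaultHandler()
    {
      private Locator locator;

      @Override
      public void setDocumentLocator(final Locator in_locator)
      {
        // The parser calls this before parsing begins; the locator is
        // then live for the duration of the parse.
        this.locator = in_locator;
      }

      @Override
      public void startElement(
        final String uri,
        final String local_name,
        final String qualified_name,
        final Attributes attributes)
      {
        positions.add(qualified_name + ":" + this.locator.getLineNumber());
      }
    };

    SAXParserFactory.newInstance().newSAXParser().parse(
      new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
      handler);
    return positions;
  }
}
```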


Deployed DKIM on the mail server today. All mail that originates from the server will be signed, and I've published DMARC and ADSP policies that tell other servers to be suspicious of any mail that isn't signed. Seems to be working well.


I've enabled TLS on all domains.

I'm using certificates from a CA that isn't trusted by all browsers, but I don't care all that much. If trust becomes a serious issue at some point, I'll buy real TLS certs. I PGP sign all text on the server anyway, so anyone who really cares about checking whether some nefarious third party has changed the data in transit can do so, albeit manually.

No, I'm not using Let's Encrypt. I've looked into it several times and I just can't get around the fact that it requires a huge number of moving parts, and that the average ACME client requires a ridiculous level of privileges to work properly; if you want any possibility of security, this is what it takes to get it.

At a minimum:

  • The client has to be able to make connections to a remote server in order to download an extremely security-critical bit of data (the certificate). If this step fails for any reason (the remote side being unavailable, breakage in the client), the existing certificate expires and the https service is dead.

  • The client has to be intelligent enough to know when to try to get a new certificate. When is the right time? Who knows. Trying to request a new certificate a minute before the current one expires is suicidally reckless. Trying to do it the day before might be acceptable, but what if it goes wrong? Is a day long enough to try to fix a problem in something as gruesomely complex as the average ACME client? The availability of your https service essentially becomes tied to the availability of the ACME server. This wouldn't be so bad if the update was a yearly thing and could be planned for, but LE certificates are valid for 90 days.

  • The client has to be able to write to the directory being served by the http server in order to be able to respond to challenges. If the client is compromised, it has the ability to trash the served web site(s). I run my services in a highly compartmentalized manner, and having to allow this would be horrible. There are other challenge types, such as publishing a DNS record containing a response to a challenge, but those have the same problem of what should be an unprivileged program having to cross a security boundary and threatening the integrity of another service's data.

  • The client has to be able to write to the http server's configuration data (in order to publish the new certificate). Again, if the client is compromised, it can trash the http server's configuration. If the update fails here, the existing certificate expires and the https service is dead.

  • Assuming that the client actually does manage to respond to a challenge and get a certificate issued, and does actually manage to write that certificate to the filesystem, the problem then becomes getting the https server to use it. Most servers read certificates once on startup and don't reread them. Therefore, the client needs privileges to restart the server. This is totally unacceptable; no service should ever be able to forcefully restart another.

There are so many possible points of failure and every single one results in a broken https service at best, or a system compromise at worst. I'm vaguely surprised at how little in the way of criticism I've seen online of the complexity of Let's Encrypt given the usual reaction of information security experts to any new software system. To paraphrase Neal Stephenson, the usual reaction is to crouch in a fetal position under a blanket, screaming hoarsely that the new system is way too complex and is a security disaster. I have to wonder how many unpublished attacks against the ACME protocol there are out there.

Contrast this to a typical TLS certificate provider: I download a text file once a year and put it on the server. I then restart the https service. Done. No extra software running, no other points of failure.

Maven JavaDoc Plugin Fixed

The Maven JavaDoc plugin 3.0.0 is finally ready for release. This means that I can migrate 60+ projects to Java 9 and finally get the new versions pushed to Central.

Big thanks to Robert Scholte who worked hard to ensure that everything worked properly, and even got my rather unusual usage of the plugin (aggregating documentation into a single module) working as well.

Obstructing JavaDoc

I've been anxiously awaiting the 3.0.0 release of the maven-javadoc-plugin for weeks, and in an ironic twist of fate, I'm now responsible for delaying the release even further.

I found two rather nasty bugs in the version that was to become 3.0.0. I submitted a fix for the first and had it merged. The second problem seems like it's going to take rather more work to fix, though, and my message asking for implementation advice on the javadoc-dev list is currently sitting in a moderation queue.

Expected But Got


What To Save And Throw Away

What to save and throw away?

“The last hour is on us both … mr.s … tuck this little kitty into the impenetrable brainpan”

“Contents under pressure. Do not expose to excessive heat, vacuum, blunt trauma, immersion in liquids, disintegration, reintegration, hypersleep, humiliation, sorrow or harsh language”

“When the time comes, whose life will flash before yours?”

“A billion paths are here inside me … yes, yes, yes, Bernhard, 110 … potential, jewels, jewels, yes, jewels”

Decided to kill off some old packages. jnull and jfunctional in particular I've used in just about every project I've ever worked on. There's really very little reason for them to exist anymore, though. Java 8 added Objects.requireNonNull, which standardized terse null checking. No one cares about the @NonNull annotations (including myself). The entire contents of jfunctional were made redundant by Java 8. Both of these packages served their purposes well back in the days of Java 6 when they were first designed, but now they're just a burden.
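For reference, the standardized replacement looks like this (the class and field names are a made-up example, not anything from the old packages):

```java
import java.util.Objects;

public final class Customer
{
  private final String name;

  public Customer(final String name)
  {
    // Throws NullPointerException with the given message if name is null;
    // this replaces per-project null-checking utilities like jnull's.
    this.name = Objects.requireNonNull(name, "name");
  }

  public String name()
  {
    return this.name;
  }
}
```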

It's good to throw away code.

Maven Java 9 Bugs

I'm still waiting on a set of issues to be resolved in order to push modules to all of my projects.

  • MJAVADOC-489 causes JavaDoc generation to fail when one module requires another.

  • MDEP-559 causes the dependency:analyze goal to fail. I use this goal as part of all my builds in order to keep dependencies clean and correct. Getting this fixed depends on MSHARED-660.

I've also removed japicmp from all of my builds. I don't want to disparage the project at all; it's good at what it does, but using it would require using custom MAVEN_OPTS on JDK 9, and that's just not good enough. I'm in the process of writing a replacement for japicmp and will announce it within the next few weeks.

TCP MSS Clamping

Instead of using a non-default MTU on my network, I've instead implemented TCP MSS clamping.

Specifically, I reset all of the interfaces on my networks back to using an MTU of 1500 (including those on the router), and added the following pf rule:

scrub on $nic_ppp max-mss 1440

That rule clamps the maximum TCP segment length on the PPP interface to 1440. Why 1440? It's essentially down to the per-packet overhead of each protocol that's involved. Typically, that'll be 40 or so bytes for an IPv6 packet header, 8 bytes for PPPoE, and some loose change.

So far, nothing has broken with the new settings. No TLS handshake failures, no sudden broken pipes on SSH sessions, no issues sending mail.

IPv6 And MTU Woes

I've just recently deployed IPv6. Everything went well except for one painful issue that is still not really resolved to my satisfaction. To recount the story requires covering quite a bit of ground and digging through a pile of acronyms. Hold on tight!

My ISP provides a native /48 network per customer. That means that I get a mere 1208925819614629174706176 public IP addresses spread over 65536 /64 networks to use as I please.
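The arithmetic, for the skeptical: a /48 leaves 128 - 48 = 80 host bits, and slicing it into /64 networks leaves 64 - 48 = 16 subnet bits.

```java
import java.math.BigInteger;

public final class PrefixArithmetic
{
  public static void main(final String[] args)
  {
    // A /48 leaves 128 - 48 = 80 bits of address space.
    final BigInteger addresses = BigInteger.valueOf(2).pow(128 - 48);
    System.out.println(addresses); // 1208925819614629174706176

    // Splitting a /48 into /64 networks leaves 64 - 48 = 16 bits.
    final int networks = 1 << (64 - 48);
    System.out.println(networks); // 65536
  }
}
```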

I want to use my existing FreeBSD router to do the routing for the individual networks. I want to do this for several reasons:

  1. The ISP-provided box is a standard consumer router and is fairly limited in what it can do. It's not actively harmful; it's a respectable brand and fairly powerful hardware, but it's still only a consumer-grade box with a web interface.

  2. I'd rather have the intricate configuration details of my network be stored in text configuration files on commodity hardware and on an operating system that I mostly trust. The ISP-provided box runs Linux on proprietary hardware and only provides shell access via an undocumented (authenticated) backdoor (side-door?).

  3. I trust myself to write safe pf rules.

  4. Exposing the FreeBSD machine directly to the WAN eliminates one routing hop.

However, in order to allow my FreeBSD machine to do the routing of the individual networks (as opposed to letting the entirely optional ISP-provided box do it), I had to get it to handle the PPP connection. The machine doesn't have a modem, so instead I have to run the ISP-provided modem/router in bridging mode and get the FreeBSD machine to send PPP commands using the PPPoE protocol. Encouragingly, my ISP suggested that yes, I should be using FreeBSD for this. It's a testament to the quality of IDNet: They are a serious technical ISP, they don't treat their customers like idiots, and they respect the freedom of choice of their customers to use whatever hardware and software they want.

For those that don't know, limitations in PPPoE mean that the MTU of the link is limited to at most 1492. For reference, most networks on the internet are using an MTU of 1500. In IPv4, if you send a packet that's larger than your router's MTU, the packet will be fragmented into separate pieces and then reassembled at the destination. This has, historically, turned out to be a rather nasty way to deal with oversized packets and therefore, in IPv6, packets that are larger than the MTU will be rejected by routers and will result in Packet Too Large ICMPv6 messages being returned to the sender.

In effect, this means that IPv6 networks are somewhat less tolerant of misconfigured MTU values than IPv4 networks. Various companies have written extensively about fragmentation issues.

So why am I mentioning this? Well, shortly after I'd enabled IPv6 for the network and all services, I suddenly ran into a problem where I couldn't send mail. The symptom was that my mail client would connect to the SMTP server, authenticate successfully, send an initial DATA command, and then sit there doing nothing. Eventually, the server would kick the client due to lack of activity. After asking on the mailing list for my mail client, Andrej Kacian pointed me at a thread that documented someone dealing with MTU issues. After some examination with Wireshark, I realized that my workstation was sending packets that were larger than the PPPoE link's MTU of 1492. My FreeBSD machine was diligently responding with Packet Too Large errors, but for whatever reason, my Linux workstation was essentially ignoring them. Some conversations on the #ipv6 Freenode IRC channel have suggested that Linux handles this very badly. Worse, it seems that the MTU related issues are sporadic: Sometimes it works without issue, other times not.

The "solution" seems to be this: Set the MTUs of all interfaces on all machines in my network to 1492. If I, for example, set the MTU of my workstation's network interface to 1500 and set the FreeBSD router's interfaces to 1492, I can no longer SSH reliably into remote sites, and the majority of TLS handshakes fail. No Packet Too Large errors are generated, which seems counter to my understanding of how this stuff is supposed to work. I very much dislike having to use a non-default MTU on my network: It seems like I will inevitably forget to set it on one or more machines and will run into bizarre and intermittent network issues on that machine.

Some further conversation on the #ipv6 IRC channel suggests that I should not have to do this at all. However, I've so far spent roughly ten hours trying to debug the problem and am exhausted. Using a non-standard MTU in my LAN(s) works around the issue for now, and I'll re-examine the problem after my capacity for suffering has been replenished.

Maven Plugins Are Not Ripe Yet

I wanted to start moving all my projects to Java 9, but quickly discovered that a lot of Maven plugins I depend on aren't ready for Java 9 yet.


Chemriver/A on Vimeo


Chemriver/B on Vimeo


Sources available at GitHub.

Reproducible Builds

Considering moving to producing 100% reproducible builds for all of my packages.

It seems fairly easy. The following changes are required for the primogenitor:

  • Stop using The commit ID is enough!

  • Use the reproducible-build-maven-plugin to strip manifest headers such as Built-By, Build-JDK, etc, and repack jar files such that the timestamps of entries are set to known constant values and the entries are placed into the jar in a deterministic order.

  • Strip Bnd-LastModified and Tool headers from bundle manifests using the <_removeheaders> instruction in the maven-bundle-plugin configuration.

  • Stop using version ranges. This may be too painful.
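The manifest-stripping and repacking step above can be sketched as Maven configuration. The plugin coordinates here are from memory and should be checked against the plugin's documentation before copying:

```xml
<plugin>
  <groupId>io.github.zlika</groupId>
  <artifactId>reproducible-build-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- Strips headers such as Built-By and Build-JDK, zeroes
             entry timestamps, and repacks jar entries in a
             deterministic order. -->
        <goal>strip-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```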

Some early experiments show that this yields byte-for-byte identical jar files on each compile. This is pretty impressive.

The one open issue: Oracle (or OpenJDK's) javac appears to produce completely deterministic output; there aren't any embedded timestamps or other nonsense. However, someone building the packages from source isn't guaranteed to be using an Oracle JDK. I could use the Enforcer plugin to check that the user is using a known-deterministic JDK, but it would be pretty obnoxious to break builds if they aren't. Perhaps a warning message ("JDK is not known to produce deterministic output: Build may not be reproducible!") is enough.

Simulating Packet Loss And Damage

I'm currently working on some code that implements a simple reliable delivery protocol on top of UDP. UDP is used because latency must be minimized as much as possible.

In order to test that the protocol works properly in bad network conditions, I need a way to simulate bad network conditions. For example, I'd like to see how the protocol implementation copes when 50% of packets are lost, or when packets arrive out of order.

The Linux kernel contains various subsystems related to networking, and I found that a combination of network namespaces and network emulation was sufficient to achieve this.

The netem page states that you can use the tc command to set queueing disciplines on a network interface. For example, if your computer's primary network interface is called eth0, the following command would add a queueing discipline that causes the eth0 interface to drop 50% of all traffic sent:

# tc qdisc add dev eth0 root netem loss 50%

This is fine, but it does create a bit of a problem; I want to use my network interface for other things during development, and imposing an unconditional 50% packet loss on my main development machine would be painful. Additionally, if I'm running a client and server on the same machine, the kernel will route network traffic over the loopback interface rather than sending packets to the network interface. Forcing the loopback interface to have severe packet loss and/or corruption would break a lot of software: many programs communicate with themselves by sending messages over loopback, and disrupting those messages would almost certainly lead to breakage.

Instead, it'd be nice if I could create some sort of virtual network interface, assign IP addresses to it, set various netem options on that interface, and then have my client and server programs use that interface. This would leave my primary network interface (and loopback interface) free of interference.

This turns out to be surprisingly easy to achieve using the Linux kernel's network namespaces feature.

First, it's necessary to create a new namespace. You can think of a namespace as being a named container for network interfaces. Any network interface placed into a namespace n can only see interfaces that are also in n. Interfaces outside of n cannot see the interfaces inside n. Additionally, each namespace is given its own private loopback interface. For the sake of example, I'll call the new namespace virtual_net0. The namespace can be created with the following command:

# ip netns add virtual_net0

The current network namespaces can be listed:

# ip netns show

Then, in order to configure interfaces inside the created namespace, it's necessary to use the ip netns exec command. The exec command takes a namespace n and a command c (with optional arguments) as arguments, and executes c inside the namespace n. To see how this works, let's examine the output of the ip link show command when executed outside of a namespace:

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether f0:de:f1:7d:2a:02 brd ff:ff:ff:ff:ff:ff

You can see that it shows the lo loopback interface, and my desktop machine's primary network interface enp3s0. If the same command is executed inside the virtual_net0 namespace:

# ip netns exec virtual_net0 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

The only interface inside virtual_net0 is lo, and that lo is not the same lo from the previous list - remember that namespaces get their own private lo interface. One obvious indicator is that the main system's lo is in the UP state (in other words, active and ready to send/receive traffic), while this namespace-private lo is DOWN. In order to do useful work, it has to be brought up:

# ip netns exec virtual_net0 ip link set dev lo up
# ip netns exec virtual_net0 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

We can then create virtual "dummy" interfaces inside the namespace. These look and behave (mostly) like real network interfaces. The following commands create a dummy interface virtual0 inside the virtual_net0 namespace, and assign it an IPv6 address fd38:73b9:8748:8f82::1/64:

# ip netns exec virtual_net0 ip link add name virtual0 type dummy
# ip netns exec virtual_net0 ip addr add fd38:73b9:8748:8f82::1/64 dev virtual0
# ip netns exec virtual_net0 ip link set dev virtual0 up

# ip netns exec virtual_net0 ip addr show virtual0
2: virtual0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether aa:5f:05:93:5c:1b brd ff:ff:ff:ff:ff:ff
    inet6 fd38:73b9:8748:8f82::1/64 scope global
       valid_lft forever preferred_lft forever

In my case, I also created a second virtual1 interface and assigned it a different IPv6 address. It's then possible to, for example, run a client and server program inside that network namespace:

# ip netns exec virtual_net0 ./server
server: bound to [fd38:73b9:8748:8f82::1]:9999

# ip netns exec virtual_net0 ./client
client: connected to [fd38:73b9:8748:8f82::1]:9999

The server and client programs will do all of their networking in the virtual_net0 namespace and, because the Linux kernel knows that the addresses of the network interfaces are both on the same machine, the actual traffic sent between them will travel over the virtual_net0 namespace's lo interface.

A program like Wireshark can be executed in the virtual_net0 namespace and used to observe the traffic between the client and server by capturing packets on the lo interface.

Now, we want to simulate packet loss, corruption, and reordering. Unsurprisingly, the tc command can be executed in the virtual_net0 namespace, meaning that its netem effects are confined to interfaces within that namespace. For example, to lose half of the packets that are sent between the client and server:

# ip netns exec virtual_net0 tc qdisc add dev lo root netem loss 50%

Finally, all of the above can be cleaned up by simply deleting the namespace:

# ip netns del virtual_net0

This destroys all of the interfaces within the namespace.

Bhante Henepola Gunaratana

“Discipline” is a difficult word for most of us. It conjures up images of somebody standing over you with a stick, telling you that you’re wrong. But self-discipline is different. It’s the skill of seeing through the hollow shouting of your own impulses and piercing their secret. They have no power over you. It’s all a show, a deception. Your urges scream and bluster at you; they cajole; they coax; they threaten; but they really carry no stick at all. You give in out of habit. You give in because you never really bother to look beyond the threat. It is all empty back there. There is only one way to learn this lesson, though. The words on this page won’t do it. But look within and watch the stuff coming up—restlessness, anxiety, impatience, pain—just watch it come up and don’t get involved. Much to your surprise, it will simply go away. It rises, it passes away. As simple as that. There is another word for self-discipline. It is patience.

-- Bhante Henepola Gunaratana


If you're reading this, then the migration to Vultr was successful. Additionally, this site should now be accessible to IPv6 users.

Host | IPv4 | IPv6
---- | ---- | ----
     |      | 2001:19f0:0005:061d:f000:0000:0000:0000/64
     |      | 2001:19f0:0005:0752:f000:0000:0000:0000/64

A Change Of Scenery

I'm looking at changing my VPS provider from DigitalOcean to Vultr. These were the top two contenders when I initially chose a provider. I can't fault DigitalOcean's service, but Vultr have better pricing and give more control over DNS details such as PTR records.

I have the configurations ready to go, so I suspect I'll make the move over the next few days. I'll be taking this opportunity to enable IPv6 for the http and smtp services. Expect outages!


I've moved over to the new VPS. Enabling LZ4 compression on the ZFS filesystem has immediately halved my disk usage.

Mind Your Constants

A little-known feature of javac is that it will inline constant references when compiling code. This means that it's possible to accidentally break binary compatibility with existing clients of a piece of code when changing the value of a constant. Worse, tools that analyze bytecode have no way of detecting a binary-incompatible change of this type.

For example, the following class defines a public constant called NAME:

public final class Constants
{
  public static final String NAME = "";

  private Constants()
  {

  }
}


Another class refers to NAME directly:

public final class Main0
{
  public static void main(
    final String args[])
  {
    System.out.println(Constants.NAME);
  }
}

Now, let's assume that NAME actually becomes part of an API in some form; callers may pass NAME to API methods. Because we've taken the time to declare a global constant, it should be perfectly safe to change the value of NAME at a later date without having to recompile all clients of the API, yes? Well, no, unfortunately not. Take a look at the bytecode of Main0:

public final class Main0
  minor version: 0
  major version: 52
Constant pool:
   #1 = Methodref          #7.#16         // java/lang/Object."<init>":()V
   #2 = Fieldref           #17.#18        // java/lang/System.out:Ljava/io/PrintStream;
   #3 = Class              #19            // Constants
   #4 = String             #20            //
   #5 = Methodref          #21.#22        // java/io/PrintStream.println:(Ljava/lang/String;)V
   #6 = Class              #23            // Main0
   #7 = Class              #24            // java/lang/Object
  #19 = Utf8               Constants
  #20 = Utf8     
  #21 = Class              #28            // java/io/PrintStream
  #22 = NameAndType        #29:#30        // println:(Ljava/lang/String;)V
  public Main0();
    descriptor: ()V
    flags: ACC_PUBLIC
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
        line 1: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
      stack=2, locals=1, args_size=1
         0: getstatic     #2                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: ldc           #4                  // String
         5: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         8: return
        line 6: 0
        line 7: 8

You can see that the value of the NAME constant has been inlined and inserted into the Main0 class's constant pool directly. This means that if you change the value of NAME in the Constants class at a later date, the Main0 class will need to be recompiled in order to see the change.

What can be done instead? Wrap the constant in a static method:

public final class ConstantsWrapped
{
  private static final String NAME = "";

  public static final String name()
  {
    return NAME;
  }

  private ConstantsWrapped()
  {

  }
}

Call the method instead of referring to the constant directly:

public final class Main1
{
  public static void main(
    final String args[])
  {
    System.out.println(ConstantsWrapped.name());
  }
}
Now the resulting bytecode is:

public final class Main1
  minor version: 0
  major version: 52
Constant pool:
   #1 = Methodref          #6.#15         // java/lang/Object."<init>":()V
   #2 = Fieldref           #16.#17        // java/lang/System.out:Ljava/io/PrintStream;
   #3 = Methodref          #18.#19        // ConstantsWrapped.name:()Ljava/lang/String;
   #4 = Methodref          #20.#21        // java/io/PrintStream.println:(Ljava/lang/String;)V
   #5 = Class              #22            // Main1
   #6 = Class              #23            // java/lang/Object
   #7 = Utf8               <init>
   #8 = Utf8               ()V
   #9 = Utf8               Code
  #10 = Utf8               LineNumberTable
  #11 = Utf8               main
  #12 = Utf8               ([Ljava/lang/String;)V
  #13 = Utf8               SourceFile
  #14 = Utf8     
  #15 = NameAndType        #7:#8          // "<init>":()V
  #16 = Class              #24            // java/lang/System
  #17 = NameAndType        #25:#26        // out:Ljava/io/PrintStream;
  #18 = Class              #27            // ConstantsWrapped
  #19 = NameAndType        #28:#29        // name:()Ljava/lang/String;
  #20 = Class              #30            // java/io/PrintStream
  #21 = NameAndType        #31:#32        // println:(Ljava/lang/String;)V
  #22 = Utf8               Main1
  #23 = Utf8               java/lang/Object
  #24 = Utf8               java/lang/System
  #25 = Utf8               out
  #26 = Utf8               Ljava/io/PrintStream;
  #27 = Utf8               ConstantsWrapped
  #28 = Utf8               name
  #29 = Utf8               ()Ljava/lang/String;
  #30 = Utf8               java/io/PrintStream
  #31 = Utf8               println
  #32 = Utf8               (Ljava/lang/String;)V
  public Main1();
    descriptor: ()V
    flags: ACC_PUBLIC
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
        line 1: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
      stack=2, locals=1, args_size=1
         0: getstatic     #2                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: invokestatic  #3                  // Method ConstantsWrapped.name:()Ljava/lang/String;
         6: invokevirtual #4                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         9: return
        line 6: 0
        line 7: 9

This effectively solves the issue: the ldc opcode is replaced by an invokestatic opcode, at no point does the string appear directly in the Main1 class's constant pool, and the value of the constant can be changed at a later date without breaking binary compatibility. Additionally, the JIT compiler will inline the invokestatic call at run time, so there's no performance penalty compared to using the constant directly.
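The reason javac behaves this way is that only static final fields initialized with a compile-time constant expression are eligible for inlining. As a hedged sketch of this distinction (the class and field names here are my own, not from the example above), a field whose initializer contains a method call is not a constant expression, so clients read it with getstatic at run time; this is another way to avoid the problem:

```java
public final class ConstantKinds
{
  // A compile-time constant expression: javac is permitted to inline
  // this value directly into client classes.
  public static final int INLINED = 23;

  // Not a compile-time constant: the initializer contains a method call,
  // so clients emit a getstatic instruction and read the field at run time.
  public static final int NOT_INLINED = Integer.valueOf(23);

  private ConstantKinds()
  {

  }

  public static void main(
    final String args[])
  {
    System.out.println(INLINED + NOT_INLINED);
  }
}
```

The wrapping-method approach is still preferable for public API, because it also hides the field itself from clients.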

FreeBSD ZFS Root

When I set up the initial FreeBSD install to host, I didn't realize how trivial it was to use ZFS as the root partition. Having used this option several times since, I now wish I had done this for the io7m VPS. I might spin up a new VPS over the next few days with a ZFS root partition, copy the configuration data over to the new VPS, and then reconfigure DNS to point to the new system. If there's a mysterious outage, this will be the reason why.

Half Float Pain

Whilst working on smf, I ran into an issue when resampling 32-bit floating point mesh data to 16-bit floating point format. The issue turned out to be poor handling of subnormal values by my ieee754b16 package. I went looking for better implementations to borrow and found a nice paper by Jeroen van der Zijp called Fast Half Float Conversions. It uses precomputed lookup tables to perform conversions and appears to be drastically more accurate than my manual process (the mathematics of which I've almost entirely forgotten).
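For reference, the non-table approach can be sketched directly with bit operations. This is my own minimal illustration, not the code from ieee754b16 or from the paper: it handles normal values only and truncates the mantissa rather than rounding, and it ignores exactly the subnormal, infinity, and NaN cases that make a correct implementation difficult:

```java
public final class Binary16Sketch
{
  private Binary16Sketch()
  {

  }

  // Pack a binary32 value into binary16: 1 sign bit, 5 exponent bits
  // (bias 15), 10 mantissa bits. Normal values only; the mantissa is
  // truncated, not rounded.
  public static char pack(final float x)
  {
    final int bits = Float.floatToIntBits(x);
    final int sign = (bits >>> 16) & 0x8000;
    final int exp  = ((bits >>> 23) & 0xff) - 127 + 15;
    final int mant = (bits >>> 13) & 0x3ff;
    return (char) (sign | (exp << 10) | mant);
  }

  // Unpack a binary16 value back to binary32 by rebiasing the exponent
  // (bias 15 -> bias 127) and widening the mantissa.
  public static float unpack(final char h)
  {
    final int sign = (h & 0x8000) << 16;
    final int exp  = (((h >>> 10) & 0x1f) - 15 + 127) << 23;
    final int mant = (h & 0x3ff) << 13;
    return Float.intBitsToFloat(sign | exp | mant);
  }
}
```

The appeal of the lookup-table method is precisely that it replaces the conditional handling this sketch omits with a handful of table indexing operations.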

I decided to put together a simple C99 implementation in order to see how the code worked but am having some strange issues with some very specific values. My test suite basically tries to prove that packing a double value and then unpacking it should be an approximate identity operation. Essentially, ∀x. unpack(pack(x)) ≈ x. Unfortunately, some very specific values are failing. For some reason, my implementation yields these results:

unpack(pack(2048.0)) → 2048.0
unpack(pack(2047.0)) → -0.0
unpack(pack(2046.0)) → 2046.0
unpack(pack(16375.0)) → 16368.0
unpack(pack(16376.0)) → 0.0

All of the other values in the range [-32000, 32000] appear to be correct. The unusual 16375.0 → 16368.0 result is expected; the conversion is necessarily lossy, and 16368.0 is simply the nearest representable value when converting down to 16 bits. However, the 0.0 values are utterly wrong. This suggests that there's an issue in the implementation, almost certainly caused by a mistake in generating the conversion tables. It seems that packing is correct, but unpacking isn't. I've gone over the code several times, even going so far as to implement it twice in two different languages, and I've gotten the same results every time. I've spoken to Jeroen, and he showed me results from his own implementation and test suite demonstrating that the above isn't a problem with the algorithm itself. So, assuming that I haven't managed to screw up the same implementation across some five clean-room attempts, there may be a transcription mistake in the paper. I'm waiting to hear more from Jeroen.

Maven Assembly Plugin Redux

A few months back, I filed a bug for the Maven Assembly Plugin. Karl Heinz Marbaise finally got back to me and solved the issue right away. Thanks again!

JEP 305 - Pattern Matching

This is a good first step towards getting algebraic data types into Java.
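The first deliverable of the JEP is type-test patterns for instanceof, which remove the cast-after-test boilerplate (this form was later standardized as pattern matching for instanceof in Java 16). A small sketch of the shape, with names of my own invention:

```java
public final class PatternDemo
{
  private PatternDemo()
  {

  }

  // A type-test pattern: if the test succeeds, the matched value is
  // bound to a fresh variable of the tested type, with no explicit cast.
  public static String describe(final Object o)
  {
    if (o instanceof Integer i) {
      return "int " + i;
    }
    if (o instanceof String s) {
      return "string of length " + s.length();
    }
    return "unknown";
  }

  public static void main(
    final String args[])
  {
    System.out.println(describe(23));
    System.out.println(describe("hello"));
  }
}
```

Combined with sealed type hierarchies and exhaustive switches, this is the machinery that makes algebraic data types practical.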