crush depth

Chemriver

Chemriver/A on Vimeo

Chemriver/B on Vimeo

Sources available at GitHub.

Reproducible Builds

Considering moving to producing 100% reproducible builds for all of my packages.

It seems fairly easy. The following changes are required for the primogenitor:

  • Stop using maven.build.timestamp. The commit ID is enough!

  • Use the reproducible-build-maven-plugin to strip manifest headers such as Built-By, Build-JDK, etc., and repack jar files such that the timestamps of entries are set to known constant values and the entries are placed into the jar in a deterministic order.

  • Strip Bnd-LastModified and Tool headers from bundle manifests using the <_removeheaders> instruction in the maven-bundle-plugin configuration.

  • Stop using version ranges. This may be too painful.

Some early experiments show that this yields byte-for-byte identical jar files on each compile. This is pretty impressive.
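
For reference, a sketch of what the relevant plugin configuration might look like (the plugin version and the exact header list here are illustrative, not a final configuration):

<!-- Repack jars deterministically and strip non-reproducible manifest headers. -->
<plugin>
  <groupId>io.github.zlika</groupId>
  <artifactId>reproducible-build-maven-plugin</artifactId>
  <version>0.16</version>
  <executions>
    <execution>
      <goals>
        <goal>strip-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

<!-- Remove timestamp and tool headers from generated bundle manifests. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <_removeheaders>Bnd-LastModified,Tool,Built-By,Build-Jdk</_removeheaders>
    </instructions>
  </configuration>
</plugin>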

The one open issue: Oracle's (or OpenJDK's) javac appears to produce completely deterministic output; there aren't any embedded timestamps or other nonsense. However, someone building the packages from source isn't guaranteed to be using an Oracle JDK. I could use the Enforcer plugin to check that the user is using a known-deterministic JDK, but it would be pretty obnoxious to break builds if they aren't. Perhaps a warning message ("JDK is not known to produce deterministic output: Build may not be reproducible!") is enough.

Simulating Packet Loss And Damage

I'm currently working on some code that implements a simple reliable delivery protocol on top of UDP. UDP is used because latency must be kept as low as possible.

In order to test that the protocol works properly in bad network conditions, I need a way to simulate bad network conditions. For example, I'd like to see how the protocol implementation copes when 50% of packets are lost, or when packets arrive out of order.

The Linux kernel contains various subsystems related to networking, and I found that a combination of network namespaces and network emulation was sufficient to achieve this.

The netem page states that you can use the tc command to set queueing disciplines on a network interface. For example, if your computer's primary network interface is called eth0, the following command adds a queueing discipline that causes the eth0 interface to drop 50% of all outgoing traffic:

# tc qdisc add dev eth0 root netem loss 50%

This is fine, but it creates a bit of a problem: I want to use my network interface for other things during development, and imposing an unconditional 50% packet loss on my main development machine would be painful. Additionally, if I'm running a client and server on the same machine, the kernel will route the traffic between them over the loopback interface rather than sending packets to the network interface. Forcing the loopback interface to suffer severe packet loss and/or corruption would almost certainly break a lot of the other software I use during development, as plenty of software communicates with itself by sending messages over the loopback interface.

Instead, it'd be nice if I could create some sort of virtual network interface, assign IP addresses to it, set various netem options on that interface, and then have my client and server programs use that interface. This would leave my primary network interface (and loopback interface) free of interference.

This turns out to be surprisingly easy to achieve using the Linux kernel's network namespaces feature.

First, it's necessary to create a new namespace. You can think of a namespace as being a named container for network interfaces. Any network interface placed into a namespace n can only see interfaces that are also in n. Interfaces outside of n cannot see the interfaces inside n. Additionally, each namespace is given its own private loopback interface. For the sake of example, I'll call the new namespace virtual_net0. The namespace can be created with the following command:

# ip netns add virtual_net0

The current network namespaces can be listed:

# ip netns show
virtual_net0

Then, in order to configure interfaces inside the created namespace, it's necessary to use the ip netns exec command. The exec command takes a namespace n and a command c (with optional arguments) as arguments, and executes c inside the namespace n. To see how this works, let's examine the output of the ip link show command when executed outside of a namespace:

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether f0:de:f1:7d:2a:02 brd ff:ff:ff:ff:ff:ff

You can see that it shows the lo loopback interface, and my desktop machine's primary network interface enp3s0. If the same command is executed inside the virtual_net0 namespace:

# ip netns exec virtual_net0 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

The only interface inside the virtual_net0 namespace is lo, and that lo is not the same lo from the previous list; remember that namespaces get their own private lo interface. One obvious indicator that this is not the same interface is that the main system's lo is in the UP state (in other words, active and ready to send/receive traffic), whereas this namespace-private lo is DOWN. In order to do useful work, it has to be brought up:

# ip netns exec virtual_net0 ip link set dev lo up
# ip netns exec virtual_net0 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

We can then create virtual "dummy" interfaces inside the namespace. These look and behave (mostly) like real network interfaces. The following commands create a dummy interface virtual0 inside the virtual_net0 namespace, and assign it an IPv6 address fd38:73b9:8748:8f82::1/64:

# ip netns exec virtual_net0 ip link add name virtual0 type dummy
# ip netns exec virtual_net0 ip addr add fd38:73b9:8748:8f82::1/64 dev virtual0
# ip netns exec virtual_net0 ip link set dev virtual0 up

# ip netns exec virtual_net0 ip addr show virtual0
2: virtual0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether aa:5f:05:93:5c:1b brd ff:ff:ff:ff:ff:ff
    inet6 fd38:73b9:8748:8f82::1/64 scope global
       valid_lft forever preferred_lft forever

In my case, I also created a second virtual1 interface and assigned it a different IPv6 address. It's then possible to, for example, run a client and server program inside that network namespace:

# ip netns exec virtual_net0 ./server
server: bound to [fd38:73b9:8748:8f82::1]:9999

# ip netns exec virtual_net0 ./client
client: connected to [fd38:73b9:8748:8f82::1]:9999

The server and client programs will do all of their networking in the virtual_net0 namespace and, because the Linux kernel knows that the addresses of the network interfaces are both on the same machine, the actual traffic sent between them will travel over the virtual_net0 namespace's lo interface.

A program like Wireshark can be executed in the virtual_net0 namespace and used to observe the traffic between the client and server by capturing packets on the lo interface.

Now, we want to simulate packet loss, corruption, and reordering. Well, unsurprisingly, the tc command can be executed in the virtual_net0 namespace too, meaning that its effects are confined to the interfaces within that namespace. For example, to lose half of the packets that are sent between the client and server:

# ip netns exec virtual_net0 tc qdisc add dev lo root netem loss 50%
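
Corruption and reordering can be simulated in the same way. As a sketch (the exact semantics of each option are described on the netem page; note that reordering only occurs when an artificial delay is also specified):

# ip netns exec virtual_net0 tc qdisc change dev lo root netem corrupt 5%
# ip netns exec virtual_net0 tc qdisc change dev lo root netem delay 10ms reorder 25% 50%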

Finally, all of the above can be cleaned up by simply deleting the namespace:

# ip netns del virtual_net0

This destroys all of the interfaces within the namespace.

Bhante Henepola Gunaratana

“Discipline” is a difficult word for most of us. It conjures up images of somebody standing over you with a stick, telling you that you’re wrong. But self-discipline is different. It’s the skill of seeing through the hollow shouting of your own impulses and piercing their secret. They have no power over you. It’s all a show, a deception. Your urges scream and bluster at you; they cajole; they coax; they threaten; but they really carry no stick at all. You give in out of habit. You give in because you never really bother to look beyond the threat. It is all empty back there. There is only one way to learn this lesson, though. The words on this page won’t do it. But look within and watch the stuff coming up—restlessness, anxiety, impatience, pain—just watch it come up and don’t get involved. Much to your surprise, it will simply go away. It rises, it passes away. As simple as that. There is another word for self-discipline. It is patience.

-- Bhante Henepola Gunaratana

Vultr

If you're reading this, then the migration to Vultr was successful. Additionally, this site should now be accessible to IPv6 users.

Host          | IPv4         | IPv6
------------------------------------------------------------------------
io7m.com      | 45.77.78.222 | 2001:19f0:0005:061d:f000:0000:0000:0000/64
mail.io7m.com | 45.77.76.92  | 2001:19f0:0005:0752:f000:0000:0000:0000/64

A Change Of Scenery

I'm looking at changing my VPS provider from DigitalOcean to Vultr. These were the top two contenders when I initially chose a provider. I can't fault DigitalOcean's service, but Vultr have better pricing and give more control over DNS details such as PTR records.

I have the configurations ready to go, so I suspect I'll make the move over the next few days. I'll be taking this opportunity to enable IPv6 for the http and smtp services. Expect outages!

New VPS

I've moved over to the new VPS. Enabling LZ4 compression on the ZFS filesystem has immediately halved my disk usage.

Mind Your Constants

A little-known feature of javac is that it will inline constant references when compiling code. This means that it's possible to accidentally break binary compatibility with the existing clients of a piece of code simply by changing the value of a constant. Worse, tools that analyze bytecode have no way of detecting a binary-incompatible change of this type.

For example, the following class defines a public constant called NAME:

public final class Constants
{
  public static final String NAME = "com.io7m.name";

  private Constants()
  {

  }
}

Another class refers to NAME directly:

public final class Main0
{
  public static void main(
    final String args[])
  {
    System.out.println(Constants.NAME);
  }
}

Now, let's assume that NAME actually becomes part of an API in some form; callers may pass NAME to API methods. Because we've taken the time to declare a global constant, it should be perfectly safe to change the value of NAME at a later date without having to recompile all clients of the API, yes? Well, no, unfortunately not. Take a look at the bytecode of Main0 (as shown by javap -v Main0):

public final class Main0
  minor version: 0
  major version: 52
  flags: ACC_PUBLIC, ACC_FINAL, ACC_SUPER
Constant pool:
   #1 = Methodref          #7.#16         // java/lang/Object."<init>":()V
   #2 = Fieldref           #17.#18        // java/lang/System.out:Ljava/io/PrintStream;
   #3 = Class              #19            // Constants
   #4 = String             #20            // com.io7m.name
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   #5 = Methodref          #21.#22        // java/io/PrintStream.println:(Ljava/lang/String;)V
   #6 = Class              #23            // Main0
   #7 = Class              #24            // java/lang/Object
  ...
  #19 = Utf8               Constants
  #20 = Utf8               com.io7m.name
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  #21 = Class              #28            // java/io/PrintStream
  #22 = NameAndType        #29:#30        // println:(Ljava/lang/String;)V
  ...
{
  public Main0();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 1: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=1, args_size=1
         0: getstatic     #2                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: ldc           #4                  // String com.io7m.name
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         5: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         8: return
      LineNumberTable:
        line 6: 0
        line 7: 8
}

You can see that the value of the NAME constant has been inlined and inserted into the Main0 class's constant pool directly. This means that if you change the value of NAME in the Constants class at a later date, the Main0 class will need to be recompiled in order to see the change.

What can be done instead? Wrap the constant in a static method:

public final class ConstantsWrapped
{
  private static final String NAME = "com.io7m.name";

  public static final String name()
  {
    return NAME;
  }

  private ConstantsWrapped()
  {

  }
}

Call the method instead of referring to the constant directly:

public final class Main1
{
  public static void main(
    final String args[])
  {
    System.out.println(ConstantsWrapped.name());
  }
}

Now the resulting bytecode is:

public final class Main1
  minor version: 0
  major version: 52
  flags: ACC_PUBLIC, ACC_FINAL, ACC_SUPER
Constant pool:
   #1 = Methodref          #6.#15         // java/lang/Object."<init>":()V
   #2 = Fieldref           #16.#17        // java/lang/System.out:Ljava/io/PrintStream;
   #3 = Methodref          #18.#19        // ConstantsWrapped.name:()Ljava/lang/String;
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   #4 = Methodref          #20.#21        // java/io/PrintStream.println:(Ljava/lang/String;)V
   #5 = Class              #22            // Main1
   #6 = Class              #23            // java/lang/Object
   #7 = Utf8               <init>
   #8 = Utf8               ()V
   #9 = Utf8               Code
  #10 = Utf8               LineNumberTable
  #11 = Utf8               main
  #12 = Utf8               ([Ljava/lang/String;)V
  #13 = Utf8               SourceFile
  #14 = Utf8               Main1.java
  #15 = NameAndType        #7:#8          // "<init>":()V
  #16 = Class              #24            // java/lang/System
  #17 = NameAndType        #25:#26        // out:Ljava/io/PrintStream;
  #18 = Class              #27            // ConstantsWrapped
  #19 = NameAndType        #28:#29        // name:()Ljava/lang/String;
  #20 = Class              #30            // java/io/PrintStream
  #21 = NameAndType        #31:#32        // println:(Ljava/lang/String;)V
  #22 = Utf8               Main1
  #23 = Utf8               java/lang/Object
  #24 = Utf8               java/lang/System
  #25 = Utf8               out
  #26 = Utf8               Ljava/io/PrintStream;
  #27 = Utf8               ConstantsWrapped
  #28 = Utf8               name
  #29 = Utf8               ()Ljava/lang/String;
  #30 = Utf8               java/io/PrintStream
  #31 = Utf8               println
  #32 = Utf8               (Ljava/lang/String;)V
{
  public Main1();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 1: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=1, args_size=1
         0: getstatic     #2                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: invokestatic  #3                  // Method ConstantsWrapped.name:()Ljava/lang/String;
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         6: invokevirtual #4                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         9: return
      LineNumberTable:
        line 6: 0
        line 7: 9
}

This effectively solves the issue. The ldc opcode is changed to an invokestatic opcode, at no point does the string com.io7m.name appear directly in the Main1 class, and the value of the constant can be changed at a later date without breaking binary compatibility. Additionally, the JIT compiler will inline the invokestatic call at run-time, meaning that there's no performance degradation over using the constant directly.

FreeBSD ZFS Root

When I set up the initial FreeBSD install to host io7m.com, I didn't realize how trivial it was to use ZFS as the root partition. Having used this option several times since, I now wish I had done this for the io7m VPS. I might spin up a new VPS over the next few days with a ZFS root partition, copy the configuration data over to the new VPS, and then reconfigure DNS to point to the new system. If there's a mysterious outage, this will be the reason why.

Half Float Pain

Whilst working on smf, I ran into an issue when resampling 32-bit floating point mesh data to 16-bit floating point format. The issue turned out to be poor handling of subnormal values by my ieee754b16 package. I went looking for better implementations to borrow and found a nice paper by Jeroen van der Zijp called Fast Half Float Conversions. It uses precomputed lookup tables to perform conversions and appears to be drastically more accurate than my manual process (the mathematics of which I've almost entirely forgotten).

I decided to put together a simple C99 implementation in order to see how the code worked, but I'm having some strange issues with a handful of very specific values. My test suite basically tries to prove that packing a double value and then unpacking it should be an approximate identity operation. Essentially, ∀x. unpack(pack(x)) ≈ x. Unfortunately, my implementation yields these results:

unpack(pack(2048.0)) → 2048.0
unpack(pack(2047.0)) → -0.0
unpack(pack(2046.0)) → 2046.0
unpack(pack(16375.0)) → 16368.0
unpack(pack(16376.0)) → 0.0

All of the other values in the range [-32000, 32000] appear to be correct. The unusual 16375.0 → 16368.0 result is expected; the conversion is necessarily a lossy procedure, and 16368.0 is simply the nearest representable value when converting down to 16 bits. However, the 0.0 values are utterly wrong. This suggests that there's an issue in my implementation, almost certainly caused by a mistake in generating the conversion tables. It seems that packing is correct, but unpacking isn't. I've gone over the code several times, even going so far as to implement it twice in two different languages, and have gotten the same results every time. I've spoken to Jeroen and he showed me some results from his own implementation and test suite that show that the above isn't a problem with the algorithm. So, assuming that I haven't managed to screw up the same implementation after some five clean-room attempts, there may be a transcription mistake in the paper. I'm waiting to hear more from Jeroen.
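
For illustration, the failing property check amounts to something like the following C99 sketch (pack and unpack here are hypothetical stand-ins for my actual conversion functions, and the tolerance is deliberately crude):

#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Hypothetical signatures standing in for the table-based conversions. */
uint16_t pack(double x);
double   unpack(uint16_t h);

static void check_identity(void)
{
  for (double x = -32000.0; x <= 32000.0; x += 1.0) {
    const double y = unpack(pack(x));
    /* In [16384, 32768) a half float has an ulp of 16, so rounding
       to nearest can introduce an absolute error of at most 8. */
    assert(fabs(x - y) <= 8.0);
  }
}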

Maven Assembly Plugin Redux

A few months back, I filed a bug for the Maven Assembly Plugin. Karl Heinz Marbaise finally got back to me and solved the issue right away. Thanks again!

JEP 305 - Pattern Matching

http://openjdk.java.net/jeps/305

This is a good first step towards getting algebraic data types into Java.

Sender Policy Framework

Set up SPF for the io7m.com mail server today. Took about 30 seconds. Hopefully I'll still be able to send and receive mail when the DNS changes propagate.

OSGi Requirements And Capabilities

OSGi is an extremely powerful module system for the Java virtual machine. Code and resources are packaged into bundles that can be installed into a running OSGi container for use.

Using bundles to deliver compiled Java code is essentially a solved problem. Bundles specify what Java packages they export and import. If a package is exported, then that package can be imported by other bundles. The OSGi resolver is responsible for wiring bundles together. For example, if a bundle B0 specifies that it imports a package named P0, and another bundle B1 specifies that it exports a package named P0, then the OSGi runtime will create a wire W0 from B0 to B1. Whenever code in B0 references a class in P0, then that class will be loaded from B1 by traversing W0. This creates a directed acyclic graph where the vertices of the graph are packages and the edges of the graph are wires.
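
In manifest terms, that example looks something like this sketch (version constraints omitted):

Bundle-SymbolicName: B0
Import-Package: P0

Bundle-SymbolicName: B1
Export-Package: P0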

We can look at this in more general terms: the above can be described in terms of requirements that are satisfied by capabilities. For example, a package import is a specific requirement: a package p attempting to import a particular package q can be thought of as a requirement for q by p. Conversely, a package export can be thought of as a capability: an export of a package p is a capability that can satisfy a requirement for p.

In these terms, then, the OSGi runtime is essentially a constraint solving system that takes a set of requirements and capabilities as input, and tries to find a solution that satisfies all of the requirements using the provided capabilities.

So what's the point of explaining all of this? Well, beyond importing and exporting Java code, the OSGi system actually allows developers to declare their own types of capabilities and requirements that will be solved by the OSGi runtime when the bundles specifying them are installed. This allows for bundles that contain things other than Java classes to get the same strong versioning and dependency handling that Java code enjoys.

In my case, I'm using this to get versioning and dependency handling for game engine assets. A bundle in the system I've put together can place declarations in the manifest such as:

Provide-Capability: com.io7m.callisto.resources.package; name = a.b.c; version = 1.0

That is, the bundle states that it provides a package called a.b.c, version 1.0, containing resources, in the capability namespace com.io7m.callisto.resources.package. Another bundle declares:

Require-Capability: com.io7m.callisto.resources.package; filter=(& (name = a.b.c) (version >= 1.0) (version < 2.0))

That is, the bundle states that it requires a package that has the name a.b.c AND has a version greater than or equal to 1.0 AND has a version less than 2.0 (so for example, 1.1 would satisfy the requirement, but 2.1 would not). The requirement is only satisfied when all three constraints are met.

Those bundles will be wired together when they're installed, and a bundle can only access the resources of another bundle via the created wire when it explicitly imports that bundle via a Require-Capability declaration. I get all of the benefits of strong versioning, dependency handling, proper cross-bundle resource visibility, good error messages when dependencies aren't met, etc., essentially for free. If a user wants to install a bundle containing resources, the declared capabilities and requirements of the package mean that it's trivial to automatically fetch dependencies of the bundles without the user having to do anything.

I don't know of any modern game engines that use a system like this. Apart from anything else, it's phenomenally impractical to build a system like this outside of a virtual machine due to the constraints imposed by working with native code directly. Game developers are religiously opposed to anything that isn't C++ and so refuse to consider the alternatives. In most games, all of the resources are stuffed into one giant flat namespace without any sort of versioning, privacy control, dependency handling, or anything else. The result is immediate dependency hell as soon as third parties try to produce modifications for the games. Because there's no dependency information, installing modifications for games must be done manually, and there's absolutely no way to ensure that arbitrary modifications are compatible with each other. Worse, when modifications are incompatible, the result will be obscure problems and crashes at runtime instead of actionable "Modification X is incompatible with modification Y" error messages.

In life never do as others do...

Stack Overflow

I actually asked a while back on Stack Overflow if anyone had any idea how to solve the problem I've been attempting to solve with the room model.

Given that I've now actually solved it, I went back to append the answer to my question. I immediately ran into an endless series of idiotically draconian checks that essentially wouldn't let me post the answer to my own question. I eventually gave up trying to get the post to pass the full body cavity search and blood sample analysis, so ended up wrapping the entire thing in preformatted text tags and preceding it with a plea that a passing editor fix it. Consider me discouraged from ever bothering to post on Stack Overflow again.

For future reference, here's the solution I posted:

I came up with a solution to this. To solve the problem efficiently, some sort of spatial data structure is needed to query which polygons are overlapped by a given rectangular area; I used a Quadtree. The polygon data structure used must also be able to distinguish between internal and external edges. An edge is internal if it is common to two polygons.

The steps are as follows (assuming a coordinate system with the origin in the top-left corner):

  1. Insert all polygons into whatever spatial data structure you're using.

  2. Iterate over all polygons and build a list of all of the Y values upon which vertices occur. This has the effect of conceptually dividing up the scene into horizontal strips:

    Strips

  3. Iterate over the pairs of Y values from top to bottom. For each pair (y0, y1) of Y values, declare a rectangular area a with the top left corner (0, y0) and bottom right corner (width, y1). Determine the set of polygons S that are overlapped by a by querying the spatial data structure. For each polygon p in S, determine the set of edges E of p that are overlapped by a. For best results, ignore any edge in E with a normal that points directly up or down. For each edge e in E, it's then necessary to determine the pair of points at which e intersects the top and bottom edges of a. This is achieved with a simple line intersection test, treating the top and bottom edges of a as simple horizontal line segments. Join the intersection points to create a set of new line segments, shown in red:

    Segments

  4. Create vertical line segments L0 = (0, y0) → (0, y1) and L1 = (width, y0) → (width, y1). Working from left to right, gather any line segments created in the preceding step into pairs, ignoring any line segments that were created from internal edges. If there were no intersecting external edges, then the only two edges will be L0 and L1. In this example strip, only four edges remain:

    Pairs

  5. Join the vertices in the remaining pairs of edges to create polygons:

    Polygons

Repeating the above process for each horizontal strip achieves the desired result. Assuming a set of convex, non-overlapping polygons as input, the created polygons are guaranteed to be either triangles or quadrilaterals. If a horizontal strip contains no edges, the algorithm will create a single rectangle. If no polygons exist in the scene, the algorithm will create a single rectangle covering the whole scene.
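
The line intersection test in step 3 is simple precisely because the strip boundaries are horizontal. A minimal Java sketch (the names here are mine, not from the original implementation):

/**
 * Compute the X coordinate at which the segment (x0, y0) → (x1, y1)
 * crosses the horizontal line y = c. The caller must ensure that the
 * segment is not horizontal and actually spans the line.
 */
static double intersectHorizontal(
  final double x0, final double y0,
  final double x1, final double y1,
  final double c)
{
  // The parameter t ∈ [0, 1] at which the segment reaches height c.
  final double t = (c - y0) / (y1 - y0);
  return x0 + (t * (x1 - x0));
}

Calling this once with c = y0 and once with c = y1 for each overlapped external edge yields the pairs of points that are joined to form the new segments.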

Cell Connectivity

Good progress made on the room model.

Cell Connectivity

The algorithm now correctly breaks up the space into horizontal spans, and produces a graph of cells that each have links to the cells above and below. I believe this should be enough information to semi-realistically propagate water through the space simply by walking up and down the graph of nodes.

Cells

Been working intensely on the room model.

Cells

The intention here is to analyze a set of polygons and divide the space outside the polygons into horizontal spans. The horizontal spans represent areas within which water would pool if it was poured into the space. Actually producing these polygons with usable up/down connectivity information has turned out to be a surprisingly fiddly computational geometry problem! I've still not completely solved it.

Polygons

Experimenting with a room model for the engine.

Polygons

Zeptoblog XHTML Strict

Made some corrections to zeptoblog to ensure that the output is valid XHTML 1.0 Strict. I bring this up because it directly affects this blog. I'm now validating the output of this blog against the XHTML 1.0 Strict XSD schema, so any problems of this type should be caught immediately in future.

Validate Now!

Evolving Generated Types

I released version 1.0.0 of jregions a while back and then found that I wanted to make some changes to the API. I didn't want to make a compatibility-breaking change this close to 1.0.0, so I decided to make the changes but keep some deprecated compatibility methods in place.

However, some of the types involved are generated by the immutables package. The way the package works is that you define an abstract interface type and immutables generates an immutable implementation of this interface. One of the parts of the generated implementation is a builder type that allows you to construct instances of the implementation type in multiple steps. For example, a declaration like this:

@Value.Immutable
interface SizeType
{
  int width();

  int height();
}

... would result in the generation of an immutable Size class containing a mutable Size.Builder type capable of constructing values of Size:

Size s = Size.builder().setWidth(640).setHeight(480).build();

In my case, I wanted to rename the width and height methods to something more generic. Specifically, width should be sizeX and height should be sizeY. Clearly, if I just renamed the methods in the SizeType, then the generated type and the generated builder type would both be source and binary incompatible. I could do this:

@Value.Immutable
interface SizeType
{
  int sizeX();

  int sizeY();

  @Deprecated
  default int width()
  {
    return sizeX();
  }

  @Deprecated
  default int height()
  {
    return sizeY();
  }
}

That would at least preserve source and binary compatibility for the API of the generated type, but the generated builder type would no longer have setWidth or setHeight methods, so source and binary compatibility would be broken there. I asked on the immutables.org issue tracker, and right away Eugene Lukash stepped in with a nice solution. Thanks again Eugene!

How To Fix IntelliJ IDEA Build Issues

Switch to the project directory, and:

$ find . -name '*.iml' -exec rm -v {} \;
$ rm -rfv .idea

Reopen the project and hope intensely.

Fix

Pulsing Headache

PulseAudio has some problems.

I have a laptop and various machines for testing software across platforms, and they all send audio over the network to my main development machine. This allows me to use a single pair of headphones and to control audio levels in a single place. I'm using PulseAudio's networking support to achieve this but, unfortunately, it seems rather poor at it.

The first major problem with it is that when the TCP connection between the client and the server is broken for any reason, the only way to get that connection back appears to be to restart the client. This is pretty terrible; network connections are not reliable and any well-written networked software should be designed to be resilient in the case of bad network conditions. Simply retrying the connection with exponential backoff would help, possibly with an explicit means to reconnect via the pactl command line tool.

As an aside, the use of TCP is probably not a great choice either. Software that streams audio has soft real-time requirements, and TCP is pretty widely acknowledged as being unsuitable for satisfying those requirements. An application such as an audio server is receiving packets of audio data and writing them to the audio hardware as quickly as it can. The audio data is time critical: if a packet of audio is lost or turns up late, then that is going to result in an audible gap or glitch in the produced sound no matter what happens. Therefore, an algorithm like TCP that will automatically buffer data when packets are reordered, and will automatically re-send data when packets are lost, is fundamentally unsuitable for use in this scenario. Best to use an unreliable transport like UDP, consider lost or late packets as lost, and just live with the momentary audio glitch. The next piece of audio will be arriving shortly anyway!

Ironically, the use of an unreliable transport would seem to make the software more reliable by eliminating the problem of having to supervise and maintain a connection to the server, as sending data over UDP is effectively fire and forget.

The second major problem, and I'm suspicious (without good evidence) that this may be related to the choice of TCP as a protocol, is that the client and server can become somehow desynchronized, requiring both the client and server to be restarted. Essentially, what happens is that when either the client or server is placed under heavy load, audio (understandably) begins to glitch. The problem is that even when the load returns to normal, audio remains broken. I've not been able to capture a recording of what happens, but it sounds a little like granular synthesis. As mentioned, the only way to fix this appears to be to restart both the client and server. A broken client can break the server, and a broken server can break the client!

Amber Expert Group

A few weeks back, I was contacted by none other than Brian Goetz inviting me to become part of the Project Amber expert group. I was quite honoured, and I accepted! It'll be my job to get into arguments on the mailing list about algebraic data types. Honestly, right now I'd be perfectly happy with simple Kotlin-style case classes, but I understand that full pattern matching is being considered for implementation. A while back, I wrote:

Given the typical Java conservatism, Java will probably gain closed types and pattern matching abilities some time after 2025.

I never considered that I might be slightly responsible for meeting or beating that estimate!

Three Day Insight

Been working on some difficult software architecture problems lately. I'm a proponent of a method of thinking that, according to my rather faulty memory, was attributed to Einstein, possibly by Robert Anton Wilson. I can't actually find any evidence that Einstein used this method now, but I find it useful nevertheless! Essentially, the theory goes that the subconscious mind is highly effective at problem solving but does not work quickly. Whereas the conscious mind is useful for making snap decisions (on the timescale of seconds or minutes) that can mean the difference between escaping a predator or being eaten, the subconscious mind works on timescales approaching days, weeks, months, and beyond. An effective way to use the subconscious mind is therefore to pose a question or series of questions to it, then banish those questions from the conscious mind (either by occult means or by sheer distraction). Upon returning to the problem in roughly three days' time, the subconscious mind will usually have arrived at some sort of solution.

To achieve this, I spent the last few days hammering away at the mindless task of modularizing an existing codebase. In this case, jsycamore. Only this year have I really gotten a handle on applying a service-oriented approach to programming, and each new project is an opportunity to find new places where the model can be applied. For example, the user interface in the jsycamore package is themeable. Previously, the core package provided a set of four default themes, each of which emulated the look and feel of an existing operating system. The modularization of the code moved those themes into their own modules, with the themes published as services. This allows programmers to publish their own themes as services and have them automatically made available to any program using jsycamore without anyone having to write extra code to use them. If nothing else good comes from Jigsaw, I hope at least that it shoves programmers in the direction of publishing services as opposed to relying on ClassLoader and reflection hacks to provide late-binding of functionality in this way. Much of the JDK has been converted to services, apparently, and they're now a core part of the new module system instead of lurking in the background the way they have since Java 6.

Lies!

Just because you can convince 9 out of 10 people that your stupid idea is a good idea doesn't make it a good idea; it just makes you a skilled liar.

Morale shall continue until beatings improve.

Etc.

GitHub Contributions

Managed to hit 3000 GitHub contributions today:

GitHub

Renames

Had a change of heart. Doing all of the package renames now rather than waiting for Java 9. I wrote:

There is the possibility that changing the entire name of a project could be considered a non-compatibility-breaking change according to semantic versioning...

I'm choosing to believe this is true and am renaming projects and modules without incrementing the major version number. I'm using japicmp to verify that I'm not introducing binary or source incompatible changes.

Bulldozing

Mutable Numbers

Sometimes, what you really need is a mutable, boxed integer.

While updating jcanephora, I discovered that I needed to update jpra to use the new jtensors types. Whilst doing this, I found that the new simplified implementation of the ByteBuffer-based storage tensors was too simple: the jpra package made use of the cursor-like API that the old jtensors-bytebuffered package provided. I'd not provided anything analogous to this in the new API, so I had to do some rewriting. In the process, I discovered that the code that jpra generated was using an AtomicLong value to store the current byte offset. The reason it used an AtomicLong was simply that there was no mutable, boxed long type in the Java standard library. To remedy this, I've created a trivial mutable numbers package upon which the com.io7m.jtensors.storage.bytebuffered and com.io7m.jpra.runtime.java modules now depend. I should have done this years ago but didn't, for whatever reason.

https://github.com/io7m/jmutnum

It may be the least interesting software package I've ever written.
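
For illustration, the essence of such a type is nothing more exotic than the following sketch (not the actual jmutnum API):

public final class MutableLong
{
  private long value;

  public MutableLong(final long in_value)
  {
    this.value = in_value;
  }

  public long value()
  {
    return this.value;
  }

  public void setValue(final long in_value)
  {
    this.value = in_value;
  }
}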

jcanephora on jtensors 8

Going to start working on moving jcanephora to jtensors 8.0.0-SNAPSHOT in order to flush out any problems with jtensors before I try to do a stable 8.0.0 release.

jtensors implementation

The jtensors implementation is basically done. I need to release the 1.0.0 version of the primogenitor, though, and I can't do this until the 0.10.0 version of japicmp is released.

I like this sort of pure code because it allows for property-based testing à la QuickCheck. The general idea is to specify mathematical properties of the code abstractly and then check to see if those properties hold concretely for a large set of randomly selected inputs. In the absence of tools to formally prove properties about code, this kind of property-based testing is useful for checking the likelihood that the code is correct. For example, the test suite now has methods such as:

/**
 * ∀ v0 v1. add(v0, v1) == add(v1, v0)
 */

@Test
@PercentagePassing
public void testAddCommutative()
{
  final Generator<Vector4D> gen = createGenerator();

  final Vector4D v0 = gen.next();
  final Vector4D v1 = gen.next();

  final Vector4D vr0 = Vectors4D.add(v0, v1);
  final Vector4D vr1 = Vectors4D.add(v1, v0);

  checkAlmostEquals(vr0.x(), vr1.x());
  checkAlmostEquals(vr0.y(), vr1.y());
  checkAlmostEquals(vr0.z(), vr1.z());
  checkAlmostEquals(vr0.w(), vr1.w());
}

Of course, in Haskell this would be somewhat less verbose:

quickCheck (\(v0, v1) -> almostEquals (add v0 v1) (add v1 v0))

The @PercentagePassing annotation marks the test as being executed 2000 times (by default) with at least 95% (by default) of the executions being required to pass in order for the test to pass as a whole. The reason that the percentage isn't 100% is due to numerical imprecision: The nature of floating point numbers means that it's really only practical to try to determine if two numbers are equal to each other within an acceptable margin of error. Small (acceptable) errors can creep in during intermediate calculations such that if the two results were to be compared for exact equality, the tests would almost always fail. Sometimes, the errors are large enough that although the results are "correct", they fall outside of the acceptable range of error for the almost equals check to succeed.

There's a classic (and pretty mathematically intense) paper on this called "What Every Computer Scientist Should Know About Floating-Point Arithmetic". This was given an extensive treatment by Bruce Dawson, and his explanations formed the basis for my jequality package. I actually tried to use JUnit's built-in floating point comparison assertions for the test suite at first, but they turned out to be way too unreliable.
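
For reference, the sort of comparison involved looks roughly like the following sketch (a simplistic combined absolute/relative error check; the checkAlmostEquals calls in the test above would assert something along these lines, though the real jequality code is more careful):

static boolean almostEquals(
  final double x,
  final double y,
  final double epsilon)
{
  // Exact equality covers identical values and infinities.
  if (x == y) {
    return true;
  }

  // Otherwise, require the difference to be small relative to the
  // larger magnitude, falling back to an absolute bound near zero.
  final double diff = Math.abs(x - y);
  final double largest = Math.max(Math.abs(x), Math.abs(y));
  return diff <= Math.max(epsilon, largest * epsilon);
}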

Update: Without even an hour having passed since this post was published, japicmp 0.10.0 has been released!