crush depth

Half Float Pain

Whilst working on smf, I ran into an issue when resampling 32-bit floating point mesh data to 16-bit floating point format. The issue turned out to be poor handling of subnormal values by my ieee754b16 package. I went looking for better implementations to borrow and found a nice paper by Jeroen van der Zijp called Fast Half Float Conversions. It uses precomputed lookup tables to perform conversions and appears to be drastically more accurate than my manual process (the mathematics of which I've almost entirely forgotten).
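
For reference, the core of the method looks roughly like this (a minimal C99 sketch using my own naming; the table-generation code, which is exactly the part that comes under suspicion below, is elided):

  #include <stdint.h>
  #include <string.h>

  /* Conversion tables, filled in ahead of time by generation code
     transcribed from the paper (elided). */
  static uint32_t mantissa_table[2048];
  static uint32_t exponent_table[64];
  static uint16_t offset_table[64];
  static uint16_t base_table[512];
  static uint8_t  shift_table[512];

  /* Pack a 32-bit float into a 16-bit half float: the float's sign and
     exponent bits select a precomputed base value and a mantissa shift. */
  static uint16_t pack(float x)
  {
    uint32_t f;
    memcpy(&f, &x, sizeof f);
    uint32_t se = (f >> 23) & 0x1ffu;
    return (uint16_t) (base_table[se] + ((f & 0x007fffffu) >> shift_table[se]));
  }

  /* Unpack a 16-bit half float into a 32-bit float: the half's sign and
     exponent bits select an exponent and an offset into the mantissa table. */
  static float unpack(uint16_t h)
  {
    uint32_t f = mantissa_table[offset_table[h >> 10] + (h & 0x3ffu)]
               + exponent_table[h >> 10];
    float x;
    memcpy(&x, &f, sizeof x);
    return x;
  }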

I decided to put together a simple C99 implementation in order to see how the code worked, but I'm having some strange issues with a handful of very specific values. My test suite basically tries to prove that packing a double value and then unpacking it is an approximate identity operation. Essentially, ∀x. unpack(pack(x)) ≈ x. Unfortunately, for some reason, my implementation yields these results:

unpack(pack(2048.0)) → 2048.0
unpack(pack(2047.0)) → -0.0
unpack(pack(2046.0)) → 2046.0
unpack(pack(16375.0)) → 16368.0
unpack(pack(16376.0)) → 0.0
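
(For the curious, the check that produces that output is essentially the following; a minimal sketch using the pack/unpack functions above, with an exact comparison where the real suite compares approximately.)

  #include <stdio.h>

  int main(void)
  {
    /* Print every value in the range whose round trip isn't exact. This
       surfaces every lossy-but-expected case as well as the broken ones. */
    for (double x = -32000.0; x <= 32000.0; x += 1.0) {
      double y = (double) unpack(pack((float) x));
      if (y != x) {
        printf("unpack(pack(%.1f)) -> %.1f\n", x, y);
      }
    }
    return 0;
  }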

All of the other values in the range [-32000, 32000] appear to be correct. The unusual 16375.0 → 16368.0 result is expected; the conversion is necessarily a lossy procedure, and 16368.0 is simply a nearby representable value when converting down to 16 bits (the method truncates rather than rounds to nearest). However, the 0.0 values are utterly wrong. This suggests that there's an issue in my implementation, almost certainly caused by a mistake in generating the conversion tables. It seems that packing is correct, but unpacking isn't.

I've gone over the code several times, even going so far as to implement it twice in two different languages, and I've gotten the same results every time. I've spoken to Jeroen, and he showed me some results from his own implementation and test suite demonstrating that the above isn't a problem with the algorithm itself. So, assuming that I haven't managed to screw up in the same way across some five clean-room attempts, there may be a transcription mistake in the paper. I'm waiting to hear more from Jeroen.

Maven Assembly Plugin Redux

A few months back, I filed a bug for the Maven Assembly Plugin. Karl Heinz Marbaise finally got back to me and solved the issue right away. Thanks again!

JEP 305 - Pattern Matching

http://openjdk.java.net/jeps/305

This is a good first step towards getting algebraic data types into Java.

Sender Policy Framework

Set up SPF for the io7m.com mail server today. Took about 30 seconds. Hopefully I'll still be able to send and receive mail when the DNS changes propagate.
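
The whole change amounts to publishing a single DNS TXT record along these lines (an illustrative example rather than the actual record):

io7m.com.  IN  TXT  "v=spf1 mx -all"

A policy like that tells receiving servers that only the hosts named in the domain's MX records may send mail for it, and that everything else should be rejected outright.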

OSGi Requirements And Capabilities

OSGi is an extremely powerful module system for the Java virtual machine. Code and resources are packaged into bundles that can be installed into a running OSGi container for use.

Using bundles to deliver compiled Java code is essentially a solved problem. Bundles specify which Java packages they export and import. If a package is exported, then that package can be imported by other bundles. The OSGi resolver is responsible for wiring bundles together. For example, if a bundle B0 specifies that it imports a package named P0, and another bundle B1 specifies that it exports a package named P0, then the OSGi runtime will create a wire W0 from B0 to B1. Whenever code in B0 references a class in P0, that class will be loaded from B1 by traversing W0. This creates a directed graph where the vertices are bundles and the edges are wires.
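
In manifest terms, the wiring above arises from declarations like the following (a hypothetical example; P0 stands in for a real package name):

B1's manifest:

Export-Package: P0;version="1.0.0"

B0's manifest:

Import-Package: P0;version="[1.0,2.0)"

The version range on the import is optional but typical; the resolver will only wire B0 to an exporter whose version falls within that range.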

We can look at this in more general terms: requirements that can be satisfied by capabilities. A package import can be thought of as a specific kind of requirement: a bundle attempting to import a particular package q is expressing a requirement for q. Conversely, a package export can be thought of as a capability: an export of a package p is a capability to satisfy a requirement for p.

In these terms, then, the OSGi runtime is essentially a constraint solving system that takes a set of requirements and capabilities as input, and tries to find a solution that satisfies all of the requirements using the provided capabilities.

So what's the point of explaining all of this? Well, beyond importing and exporting Java code, the OSGi system actually allows developers to declare their own types of capabilities and requirements that will be resolved by the OSGi runtime when the bundles specifying them are installed. This allows bundles that contain things other than Java classes to get the same strong versioning and dependency handling that Java code enjoys.

In my case, I'm using this to get versioning and dependency handling for game engine assets. A bundle in the system I've put together can place declarations in the manifest such as:

Provide-Capability: com.io7m.callisto.resources.package; name = a.b.c; version = 1.0

That is, the bundle states that it provides a package called a.b.c, version 1.0, containing resources, in the capability namespace com.io7m.callisto.resources.package. Another bundle declares:

Require-Capability: com.io7m.callisto.resources.package; filter=(& (name = a.b.c) (version >= 1.0) (!(version >= 2.0)))

That is, the bundle states that it requires a package that has the name a.b.c AND has a version greater than or equal to 1.0 AND has a version less than 2.0 (so, for example, 1.1 would satisfy the requirement, but 2.1 would not). The "less than 2.0" constraint is written as a negation because OSGi filters have no strict less-than operator. The requirement is only satisfied when all three constraints are met.

Those bundles will be wired together when they're installed; a bundle can only access the resources of another bundle via the created wire, and the wire only exists when the consuming bundle explicitly imports the other via a Require-Capability declaration. I get all of the benefits of strong versioning, dependency handling, proper cross-bundle resource visibility, good error messages when dependencies aren't met, etc., essentially for free. If a user wants to install a bundle containing resources, the declared capabilities and requirements of that bundle make it trivial to fetch its dependencies automatically, without the user having to do anything.

I don't know of any modern game engines that use a system like this. Apart from anything else, it's phenomenally impractical to build a system like this outside of a virtual machine, due to the constraints imposed by working with native code directly. Game developers are religiously opposed to anything that isn't C++ and so refuse to consider the alternatives. In most games, all of the resources are stuffed into one giant flat namespace without any sort of versioning, privacy control, dependency handling, or anything else. The result is immediate dependency hell as soon as third parties try to produce modifications for the games. Because there's no dependency information, installing modifications must be done manually, and there's absolutely no way to ensure that arbitrary modifications are compatible with each other. Worse, when modifications are incompatible, the result is obscure problems and crashes at runtime instead of actionable "Modification X is incompatible with modification Y" error messages.

In life never do as others do...