crush depth

jregions 0.0.2

First public release of jregions.

Unfortunately, the Travis CI build is failing because 0.0.1 was never deployed to Central. This means that when japicmp tries to compare the new API against the previous version of the library, it can't find that version. This will self-correct when 0.0.3 is released.

The next step is to move any libraries that were using jareas or jboxes over to jregions. That's jcanephora, r2, and jsycamore, at the very least.

That bin directory. No, not that one, the other one.

For about a week, I've been having DNS resolution issues on one server. The machine runs a tinydns server for publishing internal domain names, and it seemed that after roughly 24 hours of operation, the server would simply stop responding to DNS requests. After exhausting all of the obvious solutions, I restarted the jail that housed the daemon and everything mysteriously started working.

I checked the logs and realized that there were no messages newer than about a week old. I checked the process list for s6-log instances and noticed that, no, there were none running in the jail. I checked /service/tinydns/log/run, which looked fine. I tried executing /service/tinydns/log/run directly and saw:

exec: /usr/local/sbin/s6-setuidgid: not found

OK. So...

# which s6-setuidgid
/usr/local/bin/s6-setuidgid

Apparently, at some point, the s6 binaries were moved from /usr/local/sbin to /usr/local/bin. This is not something I did! There was no indication of this in any recent ports change entry, nor anything in the s6 change log.
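
For context, a daemontools-style log/run script is normally just a couple of lines. A minimal sketch of what /service/tinydns/log/run presumably resembles; the logging user and log directory here are invented for illustration, but the failing exec line must reference the old path:

#!/bin/sh
exec /usr/local/sbin/s6-setuidgid loguser \
  s6-log t /var/log/tinydns

Once the binary moved, that exec fails, the logging process never starts, and nothing is left reading the pipe.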

The "outage" was being caused by the way that logging is handled. The tinydns binary logs to stderr instead of using something like syslog, with the error messages being piped into a logging process in the manner of traditional UNIX pipes. This is normally a good thing, because syslog implementations haven't traditionally been very reliable. The problem occurs when the process that's reading from the standard error output of a preceding process stops reading. Sooner or later, any attempt made by the preceding process to write data to the output will block indefinitely (presumably, it doesn't happen immediately due to internal buffering by the operating system kernel). In a simple single-threaded design like that used by tinydns, this essentially means the process stops working as the write operation never completes and no other work can be performed in the mean time.

I'm currently going through all of the service entries to see if anything else has quietly broken. Perhaps I need process supervision for my process supervision.
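
As a first pass, something like the following sketch can flag run scripts that reference binaries that no longer exist. The path pattern is an assumption about what my particular run scripts happen to contain:

#!/bin/sh
for run in /service/*/run /service/*/log/run; do
  for bin in $(grep -ohE '/usr/local/s?bin/[A-Za-z0-9_-]+' "$run"); do
    test -x "$bin" || echo "$run: missing $bin"
  done
done

It's not quite process supervision for my process supervision, but it would have caught this particular failure a week earlier.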

Fighting fires

Outage

Had a minor outage yesterday, apparently due to a bug in the older FreeBSD images that DigitalOcean provide. To their credit, they responded extremely quickly to my support request and provided a link to a GitHub repository containing a fix that could be applied to live systems.

Glossaries And Modularization

A close friend of mine suggested that this blog might be more readable with a glossary, and I agreed. Computer science is fraught with terms that have several similar-but-not-quite-the-same meanings depending upon the context in which they're used. I've added a glossary that I'll try to keep updated, and I'll link to it from the first use of each significant term as it arises. Note that the glossary tends towards definitions from the perspective of this blog: it lists my intended meanings of terms rather than every accepted definition of each one. I've gone back through old posts and tidied up the usages somewhat, and I'll be making an effort to use more consistent terminology from now on.

I also took this opportunity to aggressively modularize the zeptoblog codebase and introduce a general API for implementing post generators. The glossary page is an implementation of a post generator that builds a static page from a database of definitions.

Primogenitor

The Maven POM files in io7m projects contain a fair amount of duplication with respect to each other. The (possibly not entirely rational) reason for this is that when I started moving projects to Maven about five years ago, I had an instinctive distrust of project inheritance, having seen the disasters that inheritance usually implies in object-oriented programming languages. Five years of experience, however, have taught me that in a non-Turing-complete description language such as Maven's POM, the problems usually caused by inheritance tend not to occur. I can't give any formal reasoning for this; it's purely anecdotal.

I've introduced inheritance in steps: the root POM of each project does most of the work, and each of the POMs in the project's modules specifies only the bare minimum of extra information, such as dependencies and plugin executions. In practice, most modules just specify some dependencies and an OSGi manifest.
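
A useful way to check that a near-empty module POM really does pick everything up is to ask Maven for the fully-resolved result of inheritance:

$ mvn help:effective-pom

This prints the effective POM for each module after all parent POMs have been merged in.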

This is fine, but it does mean that the root POMs of all of the projects mostly contain the same few hundred lines of XML. If I make a change to the logic in one POM that I think would be useful in other projects, I have to make the same change in those projects too. Therefore, I'd like to complete the progression and move towards all projects inheriting from a common primogenitor POM. In addition, I'd like to add some extra functionality, such as inserting the current Git revision into produced JAR files, and statically analyzing the bytecode of compiled files to ensure that semantic versioning rules are being followed.

I had a fairly productive conversation with Curtis Rueden about some practical aspects of this (Curtis maintains a very large collection of projects that inherit from a common root POM), and I've made a first attempt at a new primogenitor POM. I'll try moving a few of the more recent leaf projects such as jregions to this new root and see how things work out.
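
As a rough sketch of how those checks might look from the command line, assuming the primogenitor declares the relevant plugins (buildnumber-maven-plugin is one way to capture the Git revision; the japicmp Maven plugin can compare bytecode against the previous release):

$ mvn buildnumber:create
$ mvn japicmp:cmp

In practice these would be bound to the build lifecycle in the primogenitor rather than invoked by hand.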