crush depth

Foolish Inconsistency

Been working on moving the jtensors codebase over to source generation, as I mentioned previously. I've discovered some annoying inconsistencies in the API that are making it harder to generate the sources from a single template. For example, the VectorM4I type has a scale method that takes a double-typed parameter as the scaling factor, but a scaleInPlace method that takes an int-typed parameter.

VectorM4I.scale

VectorM4I.scaleInPlace
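
To make the mismatch concrete, here's a minimal sketch of the two operations. The names come from the javadoc above, but the bodies and field names are my own approximation rather than the library's actual code:

// Sketch only: the functional scale takes a double factor, the in-place
// variant takes an int factor.
final class VectorM4I
{
  int x, y, z, w;

  static VectorM4I scale(final VectorM4I v, final double r, final VectorM4I out)
  {
    out.x = (int) (v.x * r);
    out.y = (int) (v.y * r);
    out.z = (int) (v.z * r);
    out.w = (int) (v.w * r);
    return out;
  }

  static VectorM4I scaleInPlace(final VectorM4I v, final int r)
  {
    return scale(v, (double) r, v);
  }
}

A template that tries to emit both methods for every vector type has to special-case the parameter type here, which rather defeats the point of generating everything from a single template.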

The fact that this hasn't been noticed until now is both evidence of the lack of utility of the latter method and a testament to the fallibility of humans when it comes to performing repetitive tasks.

In any case, fixing the above would be a backwards-incompatible change and I'm really trying to avoid those until Java 9 appears. Likely going to deprecate the old methods and add new ones with the correct types.
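
In terms of the sketch above, that probably looks something like the following; the double-typed overload is my assumption about what "the correct types" means here, not the actual jtensors patch:

/** Old int-typed variant, retained for compatibility but deprecated. */
@Deprecated
static VectorM4I scaleInPlace(final VectorM4I v, final int r)
{
  return scaleInPlace(v, (double) r);
}

/** New variant with a double-typed factor, matching scale(). */
static VectorM4I scaleInPlace(final VectorM4I v, final double r)
{
  return scale(v, r, v);
}

Existing callers passing an int still resolve to the deprecated overload, so nothing breaks until the old method is eventually removed.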

japicmp update

Big thanks to Martin Mois for implementing a recent feature request to relax the rules for semantic versioning enforcement in japicmp when the current project version is less than 1.0.0.

My basic problem was that I wanted to configure the plugin once in the primogenitor POM and then have the semantic versioning check automatically start working when the API is marked as stable (in other words, when it reaches 1.0.0). Without the feature above, I'd have had to redeclare the plugin's configuration in every project, disable it, and then remember to re-enable it every time a project reached 1.0.0.

jregions 0.0.2

First public release of jregions.

Unfortunately, the Travis CI build is failing because 0.0.1 was never deployed to Central. This means that when japicmp tries to analyze the API against the previous version of the library, it can't find anything to compare against. This will self-correct when 0.0.3 is released.

The next step is to move any libraries that were using jareas or jboxes over to jregions. That's jcanephora, r2, and jsycamore, at the very least.

That bin directory. No, not that one, the other one.

For about a week, I've been having DNS resolution issues on one server. The machine runs a tinydns server for publishing internal domain names, and it seemed that after roughly 24 hours of operation, the server would simply stop responding to DNS requests. After exhausting all of the obvious solutions, I restarted the jail that housed the daemon and everything mysteriously started working.

I checked the logs and suddenly realized that there were no messages newer than about a week old. I checked the process list and noticed that no, there were no s6-log instances running in the jail. I checked /service/tinydns/log/run, which looked fine. I tried executing /service/tinydns/log/run and saw:

exec: /usr/local/sbin/s6-setuidgid: not found

OK. So...

# which s6-setuidgid
/usr/local/bin/s6-setuidgid

Apparently, at some point, the s6 binaries were moved from /usr/local/sbin to /usr/local/bin. This is not something I did! There was no indication of this in any recent ports change entry, nor anything in the s6 change log.

The "outage" was being caused by the way that logging is handled. The tinydns binary logs to stderr instead of using something like syslog, with the error messages being piped into a logging process in the manner of traditional UNIX pipes. This is normally a good thing, because syslog implementations haven't traditionally been very reliable. The problem occurs when the process that's reading from the standard error output of a preceding process stops reading. Sooner or later, any attempt made by the preceding process to write data to the output will block indefinitely (presumably, it doesn't happen immediately due to internal buffering by the operating system kernel). In a simple single-threaded design like that used by tinydns, this essentially means the process stops working as the write operation never completes and no other work can be performed in the mean time.

I'm currently going through all of the service entries to see if anything else has quietly broken. Perhaps I need process supervision for my process supervision.

Fighting fires

Outage

Had a minor outage yesterday, apparently due to a bug in the older FreeBSD images that DigitalOcean provide. To their credit, they responded extremely quickly to my support request and provided a link to a GitHub repository with a fix that could be applied to live systems.