What to save and throw away?
"The last hour is on us both… mr.s… tuck this little kitty into the impenetrable brainpan."
"Contents under pressure. Do not expose to excessive heat, vacuum, blunt trauma, immersion in liquids, disintegration, reintegration, hypersleep, humiliation, sorrow or harsh language."
"When the time comes, whose life will flash before yours?"
"A billion paths are here inside me… yes, yes, yes, Bernhard, 110%… potential, jewels, jewels, yes, jewels."
Decided to kill off some old packages. jnull and jfunctional in particular are ones I've used in just about every project I've ever worked on. There's really very little reason for them to exist anymore, though. Java 8 added Objects.requireNonNull, which standardized terse null checking. No one cares about @NonNull annotations (including myself). The entire contents of jfunctional were made redundant by Java 8. Both of these packages served their purposes well back in the days of Java 6, when they were first designed, but now they're just a burden.
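For illustration, this is the kind of standard-library code that made both packages redundant. A minimal sketch; the Example class and its names are invented here, not taken from either package:

import java.util.Objects;
import java.util.function.Function;

public final class Example
{
  private final String name;

  public Example(final String inName)
  {
    // Java 8: terse null checking with no extra dependency; throws
    // NullPointerException naming "name" if inName is null.
    this.name = Objects.requireNonNull(inName, "name");
  }

  public static void main(final String[] args)
  {
    // Java 8: java.util.function covers the ground that custom
    // functional-interface packages used to cover.
    final Function<String, Integer> length = String::length;
    System.out.println(length.apply(new Example("hello").name));
  }
}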
It's good to throw away code.
I'm still waiting on a set of issues to be resolved in order to push modules to all of my projects. MJAVADOC-489 causes JavaDoc generation to fail when one module requires another. MDEP-559 causes the dependency:analyze goal to fail; I use this goal as part of all my builds in order to keep dependencies clean and correct. Getting this fixed depends on MSHARED-660.
I've also removed japicmp from all of my builds. I don't want to disparage the project at all; it's good at what it does, but using it would require setting custom MAVEN_OPTS on JDK 9, and that's just not good enough. I'm in the process of writing a replacement for japicmp and will announce it within the next few weeks.
Instead of using a non-default MTU on my network, I've implemented TCP MSS clamping. Specifically, I reset all of the interfaces on my networks back to using an MTU of 1500 (including those on the router), and added the following pf rule:
scrub on $nic_ppp max-mss 1440
That rule clamps the maximum TCP segment size on the PPP interface to 1440 bytes. Why 1440? It's essentially down to the per-packet overhead of each protocol that's involved: typically, that'll be 40 or so bytes for an IPv6 packet header, 8 bytes for PPPoE, and some loose change.
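Concretely, assuming I've accounted for the overheads correctly, the arithmetic is something like the following, where the 12 bytes of slack is my own rounding of the "loose change":

1500 - 40 (IPv6 header) - 8 (PPPoE) - 12 (slack) = 1440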
So far, nothing has broken with the new settings. No TLS handshake failures, no sudden broken pipes on SSH sessions, no issues sending mail.
I've just recently deployed IPv6. Everything went well except for one painful issue that is still not really resolved to my satisfaction. To recount the story requires covering quite a bit of ground and digging through a pile of acronyms. Hold on tight!
My ISP provides a native /48 network per customer. That means that I get a mere 1208925819614629174706176 (2^80) public IP addresses, spread over 65536 (2^16) /64 networks, to use as I please.
I want to use my existing FreeBSD router to do the routing for the individual networks. I want to do this for several reasons:
The ISP-provided box is a standard consumer router and is fairly limited in what it can do. It's not actively harmful; it's a respectable brand and fairly powerful hardware, but it's still only a consumer-grade box with a web interface.
I'd rather have the intricate configuration details of my network be stored in text configuration files on commodity hardware and on an operating system that I mostly trust. The ISP-provided box runs Linux on proprietary hardware and only provides shell access via an undocumented (authenticated) backdoor (side-door?).
I trust myself to write safe pf rules.
Exposing the FreeBSD machine directly to the WAN eliminates one routing hop.
However, in order to allow my FreeBSD machine to do the routing of the individual networks (as opposed to letting the entirely optional ISP-provided box do it), I had to get it to handle the PPP connection. The machine doesn't have a modem, so instead I have to run the ISP-provided modem/router in bridging mode and get the FreeBSD machine to speak PPP over Ethernet using the PPPoE protocol. Encouragingly, my ISP suggested that yes, I should be using FreeBSD for this. It's a testament to the quality of IDNet: they are a serious technical ISP, they don't treat their customers like idiots, and they respect the freedom of choice of their customers to use whatever hardware and software they want.
For those that don't know, limitations in PPPoE mean that the MTU of the link is limited to at most 1492 (the 8-byte PPPoE header has to fit within the standard 1500-byte Ethernet payload). For reference, most networks on the internet use an MTU of 1500. In IPv4, if you send a packet that's larger than your router's MTU, the packet will be fragmented into separate pieces and then reassembled at the destination. This has, historically, turned out to be a rather nasty way to deal with oversized packets, and therefore, in IPv6, packets that are larger than the MTU are rejected by routers, resulting in Packet Too Large ICMPv6 messages being returned to the sender.
In effect, this means that IPv6 networks are somewhat less tolerant of misconfigured MTU values than IPv4 networks. Various companies have written extensively about fragmentation issues.
So why am I mentioning this? Well, shortly after I'd enabled IPv6 for the network and all services, I suddenly ran into a problem where I couldn't send mail. The symptom was that my mail client would connect to the SMTP server, authenticate successfully, send an initial DATA command, and then sit there doing nothing. Eventually, the server would kick the client due to lack of activity. When I asked on my mail client's mailing list, Andrej Kacian pointed me at a thread that documented someone dealing with MTU issues. After some examination with Wireshark, I realized that my workstation was sending packets that were larger than the PPPoE link's MTU of 1492. My FreeBSD machine was diligently responding with Packet Too Large errors, but, for whatever reason, my Linux workstation was essentially ignoring them. Some conversations on the #ipv6 Freenode IRC channel have suggested that Linux handles this very badly. Worse, it seems that the MTU-related issues are sporadic: sometimes it works without issue, other times not.
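To make the failure mode concrete, here's a rough sketch of a probe; the host name is a placeholder, and the exact write size that triggers a stall depends on the path. Small writes fit in small packets and succeed even when Path MTU Discovery is broken; a write larger than the path MTU is where a sender that ignores Packet Too Large messages stalls.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public final class StallProbe
{
  public static void main(final String[] args) throws Exception
  {
    try (Socket socket = new Socket("smtp.example.com", 587))
    {
      socket.setSoTimeout(10_000);
      final OutputStream out = socket.getOutputStream();
      final BufferedReader in = new BufferedReader(
        new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));

      // The server greeting and the EHLO exchange involve small packets,
      // which pass through regardless of any MTU problem.
      System.out.println(in.readLine());
      out.write("EHLO probe.example.com\r\n".getBytes(StandardCharsets.US_ASCII));
      out.flush();
      System.out.println(in.readLine());

      // Something comparable in size to a mail body: larger than the
      // 1492-byte path MTU. If the sending machine ignores Packet Too
      // Large messages, the connection stalls here and the final read
      // times out instead of returning a response.
      final byte[] body = new byte[8192];
      Arrays.fill(body, (byte) 'x');
      out.write(body);
      out.flush();
      System.out.println(in.readLine());
    }
  }
}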
The "solution" seems to be this: Set the MTUs of all interfaces on
all machines in my network to 1492
. If I, for example, set the MTU
of my workstation's network interface to 1500
and set the FreeBSD
router's interfaces to 1492
, I can no longer SSH reliably into
remote sites, and the majority of TLS handshakes fail. No Packet Too Large
errors are generated, which seems counter to my understanding
of how this stuff is supposed to work. I very much dislike having to
use a non-default MTU on my network: It seems like I will inevitably
forget to set it on one or more machines and will run into bizarre
and intermittent network issues on that machine.
Some further conversation on the #ipv6 IRC channel suggests that I
should not have to do this at all. However, I've so far spent roughly
ten hours trying to debug the problem and am exhausted. Using a
non-standard MTU in my LAN(s) works around the issue for now, and
I'll re-examine the problem after my capacity for suffering has
been replenished.
Update (2018-02-23): IPv6 And Linux
I wanted to start moving all my projects to Java 9, but quickly discovered that a lot of Maven plugins I depend on aren't ready for Java 9 yet.
japicmp doesn't support Java 9 because javassist doesn't support Java 9 yet.
maven-bundle-plugin doesn't support Java 9 because BND doesn't support Java 9 yet.
Update (2017-10-03): John Poth has offered a workaround.
maven-dependency-plugin doesn't support Java 9. See this ticket.