November 12 2010 — plan of attack
78 years ago today, the Australian government approved a second attack on the enemy host — because the first one hadn't been particularly successful — in what was to become known as the "(Great) Emu War".
You read that correctly. The enemy that Australia was fighting was the large, silly relative of the ostrich known as the emu. About twenty thousand of them were causing enough trouble for Australian farmers that it was decided that military action was needed.
Guns against emus. I'm not making this up. Anyway, the first attack wasn't the amazing success one might expect when the Royal Australian Artillery with frigging machine guns moves against a group of flightless birds.
In the words of ornithologist Dominic Serventy:

> The machine-gunners' dreams of point blank fire into serried masses of Emus were soon dissipated. The Emu command had evidently ordered guerrilla tactics, and its unwieldy army soon split up into innumerable small units that made use of the military equipment uneconomic. A crestfallen field force therefore withdrew from the combat area after about a month.
Major Meredith also expresses his clear admiration for the birds' sturdy builds:

> If we had a military division with the bullet-carrying capacity of these birds it would face any army in the world… They can face machine guns with the invulnerability of tanks. They are like Zulus whom even dum-dum bullets could not stop.
I was going to say that in the century of gene modification, someone is bound to build an army of armed emus sooner or later. But I'm momentarily stunned by the comparison of emus to Zulus.
Anyway, the second attack was more successful, resulting in about a thousand direct kills, and about 2,500 birds dying from bullet injuries. Allegedly, 100 of the emu skins were collected, their feathers used to make hats for light horsemen.
❦
A decent test suite can help you catch simple programming errors. An excellent test suite helps you with the design.
A decent test suite tells you "you missed a spot". An excellent test suite tells you "no, that's not it — I remember more of the design than you do". And then, when you get it mostly right, it tells you "you missed a spot". Or, put differently, you're not done until both you and the tests say it's fine. It's bloody frustrating, but it really helps, too.
Today, I had that kind of battle with pls. The tests are pretty good, and thus they boss me around. Even when I want them to adhere to a new design, like I did today, they still drive me more than I drive them. Brings a slightly new ring to the term "test-driven".
Here's how I originally thought project installation would happen:
- Fetch recursively
- Build recursively
- Test recursively
- Install recursively
Simple, and fine.
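Sketched in Python (pls itself is Perl 6, and the names here — `deps_of`, the `actions` table — are hypothetical stand-ins, not pls's real API), that first model runs each phase over the whole dependency tree before starting the next phase:

```python
def walk(project, deps_of, action):
    """Apply `action` to a project, then recursively to its dependencies."""
    action(project)
    for dep in deps_of(project):
        walk(dep, deps_of, action)

def install_project(project, deps_of, actions):
    """Original model: fetch *everything*, then build everything,
    then test everything, then install everything."""
    for phase in ("fetch", "build", "test", "install"):
        walk(project, deps_of, actions[phase])
```

Note that in this model a project gets built and tested before its dependencies do, since each phase visits the tree top-down — which is part of what makes it feel unnatural.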
Then jnthn showed up (about half a year ago) and told me that that's not how people expect things to happen. So I got a new model, which looks more like this:
- Fetch
- Fetch-Build-Test-Install all dependencies (recursively)
- Build
- Test
- Install
Had to drag the tests, kicking and screaming, into this new model, but it was OK. They made sure I didn't botch things up. Things were fine again.
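A Python sketch of that revised model (again, `deps_of` and the `actions` table are hypothetical names for illustration, not pls's actual API): each dependency goes through its entire fetch-build-test-install cycle before the depending project moves past the fetch phase.

```python
def install_project(project, deps_of, actions):
    """Revised model: a dependency is completely processed --
    fetched, built, tested, installed -- before its dependent
    is built."""
    actions["fetch"](project)
    for dep in deps_of(project):
        install_project(dep, deps_of, actions)  # full cycle for the dep
    for phase in ("build", "test", "install"):
        actions[phase](project)
```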
In testing pls with real projects, though, it turned out that the model contained a heinous oversight. See, we only know the dependencies of a project after it's been fetched, but the tests had all their dependencies given up front, and (oops) assumed in some places that they would be.
So I changed that. And everything broke completely.
Why? Because I had used test-driven development, and the code was shaped after the tests. So the code also relied heavily on the faulty assumption. Specifically, the counter-measures for cyclic dependencies had assumed a static model of dependencies, and the cycle-detecting code ended up in infinite recursive nastiness once that assumption no longer held.
Making all of it work meant taking a good look at code that was perfect under the old model, trying to understand why it was less-than-perfect under the new. Ow; brain hurt.
The tests, and judicious debug output, helped me through it. Here's the commit. Now my model looks like this:
- Fetch
- Make sure there are no cycles
- Fetch recursively
- Build-Test-Install all dependencies (recursively)
- Build
- Test
- Install
And it seems to work. Both the tests and I are happy.
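The final model could be sketched like this (same caveat as before: this is a Python illustration with made-up names, not pls's Perl 6 code). The key point is that dependencies are only knowable after a fetch, so the cycle check has to ride along with the recursive fetch, by tracking the path of projects currently being fetched:

```python
def fetch_all(project, deps_of, fetch, path=(), fetched=None):
    """Fetch `project` and, recursively, its dependencies.
    Deps are only known *after* the fetch, so cycle detection
    happens here: a project reappearing on the current path
    means the dependency graph has a cycle."""
    if fetched is None:
        fetched = set()
    if project in path:
        raise ValueError("dependency cycle: " + " -> ".join(path + (project,)))
    if project not in fetched:
        fetch(project)
        fetched.add(project)
        for dep in deps_of(project):  # only valid now that it's fetched
            fetch_all(dep, deps_of, fetch, path + (project,), fetched)
    return fetched

def build_test_install(project, deps_of, actions, done=None):
    """Dependencies first, then the project itself."""
    if done is None:
        done = set()
    if project in done:
        return
    done.add(project)
    for dep in deps_of(project):
        build_test_install(dep, deps_of, actions, done)
    for phase in ("build", "test", "install"):
        actions[phase](project)

def install_project(project, deps_of, actions):
    fetch_all(project, deps_of, actions["fetch"])
    build_test_install(project, deps_of, actions)
```

Tracking the *path* (rather than a flat "seen" set) is what lets shared dependencies be visited without being mistaken for cycles.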