Strangely Consistent

Musings about programming, Perl 6, and programming Perl 6

Deep Git

I am not good at chess.

I mean... "I know how the pieces move". (That's the usual phrase, isn't it?) I've even tried to understand chess better at various points in my youth, trying to improve my game. I could probably beat some of you other self-identified "I know how the pieces move" folks out there. With a bit of luck. As long as you don't, like, cheat by having a strategy or something.

I guess what I'm getting at here is that I am not, secretly, an international chess master. OK, now that's off my chest. Phew!

Imagining what it's like to be really good at chess is very interesting, though. I can say with some confidence that a chess master never stops and asks herself "wait — how does the knight piece move, again?" Not even I do that! Obviously, the knight piece is the one that moves √5 distances on the board. Haha.

I can even get a sense of what terms a master-level player uses internally, by reading what master players wrote. They focus on tactics and strategy. Attacks and defenses. Material and piece values. Sacrifices and piece exchange. Space and control. Exploiting weaknesses. Initiative. Openings and endgames.

Such high-level concerns leave the basic mechanics of piece movements far behind. Sure, those movements are in there somewhere. They are not irrelevant, of course. They're just taken for granted and no longer interesting in themselves. Meanwhile, the list of master-player concerns above could almost equally well apply to a professional Go player. (s:g/piece/stone/ for Go.)

Master-level players have stopped looking at individual trees, and are now focusing on the forest.

The company that employs me (Edument) has a new slogan. We've put it on the backs of sweaters which we then wear to events and conferences:

We teach what you can't google.

I really like this new slogan. In particular, it feels like something we as a teaching company have been trending towards for a while. Some things are easy to find just by googling them, or by consulting a good cheat sheet. But that's not why you attend a course. We should position ourselves to teach the deep, tricky things that only emerge after you've used something for a while.

You're starting to see how this post comes together now, aren't you? 😄

2017 will be my ninth year with Git. I know it quite well by now, having learned it in depth and breadth along the way. I can safely say that I'm better at Git than I am at chess at this point.

Um. I'm most certainly not an international Git grandmaster — but largely that's because such a title does not exist. (If someone reads this post and goes on to start an international Git tournament, I will be very happy. I might sign up.)

No, my point is that the basic commands have taken on the role for me that I think basic piece movements have taken on for chess grandmasters. They don't really matter much; they're a means to an end, and it's the end that I'm focusing on when I type them.

(Yes, I still type them. There are some pretty decent GUIs out there, but none of them give me the control of the command line. Sorry-not-sorry.)

Under this analogy, what are the things I value with Git, if not the commands? What are the higher-level abstractions that I tend to think in terms of nowadays?

  1. Atomicity. A commit either goes in whole or not at all, and it should be one semantic unit of change.
  2. Consistency. After every commit, the code base should be in a working state, with the tests passing.
  3. Isolation. Branches let separate lines of development proceed without stepping on each other until they're merged.
  4. Durability. Once something is in the history, it's safe there; history doesn't get lost or silently rewritten.

(Yes, these are the ACID guarantees for database transactions, but made to work for Git instead.)

A colleague of mine talks a lot about "definition of done". It seems to be a Scrum thing. It's his favorite term more than mine, but I still like it for its attempt at "mechanizing" quality, which I believe can succeed in a large number of situations.

Another colleague of mine likes the Boy Scout Rule of "Always leave the campground cleaner than you found it". If you think of this in terms of code, it means something like refactoring a code base as you go, cleaning it up bit by bit and asymptotically approaching code perfection. But if you think of it in terms of process, it dovetails quite nicely with the "definition of done" above.

Instead of explaining how in the abstract, let's go through a concrete-enough example:

  1. Some regression is discovered. (Usually by some developer dogfooding the system.)
  2. If it's not immediately clear, we bisect and find the offending commit.
  3. ASAP, we revert that commit.
  4. We analyze the problematic part of the reverted commit until we understand it thoroughly. Typically, the root cause will be something that was not in our definition of done, but should've been.
  5. We write up a new commit/branch with the original (good) functionality restored, but without the discovered problem.
  6. (Possibly much later.) We attempt to add discovery of the problem to our growing set of static checks. The way we remember to do that is through a TODO list in a wiki. This list keeps growing and shrinking in fits and starts.
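
For concreteness, steps 2 and 3 might look something like this on the command line (the good tag, test script, and offending commit id here are made up):

    git bisect start
    git bisect bad HEAD              # the regression is present here
    git bisect good v1.4             # some commit known to be fine
    git bisect run ./run-tests.sh    # Git narrows it down automatically
                                     # (say it reports abc1234 as the first bad commit)
    git bisect reset                 # back to where we started
    git revert abc1234               # restore the known-good behavior ASAP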

Note in particular the interplay between process, quality and, yes, Git. Someone could've told me at the end of step 6 that I had used 29 or so basic Git commands along the way, and I would've believed them. But that's not what matters to us as a team. If we could do with magic pixie dust what we do with Git — keep historic snapshots of the code while ensuring quality and isolation — we might be satisfied magic pixie dust users instead.

Somewhere along the way, I also got a much more laid-back approach to conflicts. (And I stopped saying "merge conflicts", because there are also conflicts during rebase, revert, cherry-pick, and stash — and they are basically the same deal.) A conflict happens when a patch P needs to be applied in an environment which differs too much from the one in which P was created.
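
Concretely, when Git can't apply both changes to a region, it leaves the two versions in the file between conflict markers, and restoring the intent is up to you:

    <<<<<<< HEAD
    the line as it looks on the current branch
    =======
    the line as it looks in the patch being applied
    >>>>>>> feature-branch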

Aside: in response to this post, jast++ wrote this on #perl6: "one minor nitpick: git knows two different meanings for 'merge'. one is commit-level merge, one is file-level three-way merge. the latter is used in rebase, cherry-pick etc., too, so technically those conflicts can still be called merge conflicts. :)" — TIL.

But we actually don't care so much about conflicts. Git cares about conflicts, because it can't just apply the patch automatically. What we care about is whether the intent of the patch has survived. No software can check that for us. Since the (conflict ↔ no conflict) axis is independent of the (intent broken ↔ intent preserved) axis, we get four cases in total. Two of those are straightforward, because the (lack of) conflict corresponds to the (lack of) broken intent.

The remaining two cases happen rarely but are still worth thinking about:

  1. A conflict, even though the intent is preserved. The patch happens to touch lines that changed for unrelated reasons, so Git asks a human for help, and the resolution turns out to be mostly mechanical.
  2. No conflict, but the intent is broken. mst's example: branch A renames a function throughout the code base, while branch B adds a new call to the function under its old name. The merge applies cleanly, and the result is broken anyway.

If we care about quality, one lesson emerges from mst's example: always run the tests after you merge and after you've resolved conflicts. And another lesson from my example: try to introduce automatic checks for structures and relations in the code base that you care about. In this case, branch A could've put in a test or a linting step that failed as soon as it saw something according to the old naming convention.
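
As a sketch, such a linting step in Perl 6 could be as simple as this (the lib/ layout, and the old_ prefix standing in for "the old naming convention", are made up for illustration):

    use Test;

    # fail as soon as any module still uses the old naming convention
    for dir("lib").grep(*.extension eq "pm6") -> $file {
        my @offending = $file.lines.grep(/ 'old_' \w+ /);
        ok !@offending, "$file follows the new naming convention";
    }

    done-testing;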

A lot of the focus on quality also has to do with doggedly getting to the bottom of things. It's in the nature of failures and exceptional circumstances to clump together and happen at the same time. So you need to handle them one at a time, carefully unraveling one effect at a time, slowly disassembling the hex like a child's rod puzzle. Git sure helps with structuring and linearizing the mess that happens in everyday development, exploration, and debugging.

As I write this, I realize that even when I try to describe how Git has faded into the background as something important-but-uninteresting for me, I can barely keep the other concepts out of focus. Quality is chief among them. In my opinion, the focus on improving not just the code but the process, on leaving the campground cleaner than we found it, is what makes it meaningful for me to work as a developer even decades later. The feeling that code is a kind of poetry that punches you back — but as it does so, you learn something valuable for next time.

I still hear people say "We don't have time to write tests!" Well, in our teams, we don't have time not to write tests! Ditto with code review, linting, and writing descriptive commit messages.

No-one but Piet Hein deserves the last word of this post:

The road to wisdom? — Well, it's plain
and simple to express:

Err
and err
and err again
but less
and less
and less.

The curious case of the disappearing test

I've recently learned a few valuable things about testing. I outline this in my Bondcon talk — Bondcon is a fictional anti-conference running alongside YAPC::Europe 2016 in a non-corporeal location but unfortunately frozen in time due to a procrastination-related mishap, awaiting the only speaker's tuits — but I thought I might blog about it, too.

Those of us who use and rely on TDD know to test the software itself: the model, the behaviors, etc. But as a side effect of attaching TravisCI to the 007 repository, another aspect of testing came to light: testing your repository itself. Testing code-as-artifact, not code-as-effect.

Thanks to TravisCI, we currently test a lot of linter-like things that we care about, such as four spaces for indentation, no trailing whitespace, and that META.info parses as correct JSON. That in itself is not news — it's just using the test suite as a linter.
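
Sketched as a Perl 6 test file, those checks might look like this (assuming the JSON::Tiny module and a flat lib/ layout; 007's actual tests may differ in detail):

    use Test;
    use JSON::Tiny;

    for dir("lib").grep(*.extension eq "pm6") -> $file {
        my @lines = $file.lines;
        ok !@lines.grep(*.ends-with(" ")), "$file: no trailing whitespace";
        ok !@lines.grep(/^ \t /), "$file: indented with spaces, not tabs";
    }

    lives-ok { from-json slurp "META.info" }, "META.info parses as correct JSON";

    done-testing;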

But there are also various bits of consistency that we test, and this has turned out to be very useful. I definitely want to do more of this in the future in my projects. We often talk about coupling as something bad: if you change component A and some unrelated component B breaks, then they are coupled and things are bad.

But some types of coupling are necessary. For example, part of the point of the META.info is to declare what modules the project provides. Do you know how easy it is to forget to update META.info when you add a new module? (Hint: very.) Well, we have a test which double-checks.
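
The shape such a double-check can take, sketched (the recursive helper and the provides layout are illustrative; 007's actual test may be organized differently):

    use Test;
    use JSON::Tiny;

    # collect all module files under a directory, recursively
    sub module-files(IO::Path $dir) {
        gather for dir($dir) -> $entry {
            if $entry.d { take $_ for module-files($entry) }
            elsif $entry.extension eq "pm6" { take ~$entry }
        }
    }

    my %meta = from-json slurp "META.info";
    is-deeply module-files("lib".IO).sort.Array,
        %meta<provides>.values.sort.Array,
        "every module file is declared in META.info's provides, and vice versa";

    done-testing;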

We also have a consistency test that makes sure a method which does a certain resource-allocating thing also remembers to do the corresponding resource-deallocating thing. (Yes, there are still a few of those, even in memory-managed languages.) This test was added after a bug where allocations and deallocations were mismatched. The test immediately discovered another location in the code that needed fixing.
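
The test itself can be quite blunt; here's the shape of it, with hypothetical method names standing in for 007's actual resource pair:

    use Test;

    for dir("lib").grep(*.extension eq "pm6") -> $file {
        my $source = slurp $file;
        # every allocation call should be matched by a deallocation call
        is +$source.comb(/ "leave-scope" /), +$source.comb(/ "enter-scope" /),
            "$file: enter-scope and leave-scope calls are paired up";
    }

    done-testing;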

All of the consistency tests are basically programmatic ways for the test suite to send you a message from a future, smarter you that remembered to do some B action immediately after doing some A action. No wonder I love them. You could call it "managed coupling", perhaps: yes, there's non-ideal coupling, but the consistency test makes it manageable.

But the best consistency check is the reason I write this post. Here's the background. 007 has a bunch of builtins: subs, operators, but also all the types. These types need to be installed into the setting by the initialization code, so that when someone actually looks up Sub from a 007 program, it actually refers to the built-in Sub type.

Every now and again, we'd notice that we'd gotten a few new types in the language, but they hadn't been added as built-ins. So we added them manually and sighed a little.

Eventually the consistency-test craze caught up with the builtins, and we got a test for this, too. The test is text-based, which is very ironic considering the project itself; but hold that thought.

Following up on a comment by vendethiel, I realized we could do better than text-based comparison. On the Perl 6 types side, we could simply walk the appropriate module namespaces to find all the types.
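
The traversal itself is pleasantly short in Perl 6. Here's a self-contained sketch of the idea, with a made-up demo package standing in for 007's actual namespaces:

    # walk a package's stash, keeping only the type objects of classes
    sub types-in(Mu $package) {
        gather for $package.WHO.values -> Mu $symbol {
            take $symbol if !$symbol.defined
                         && $symbol.HOW ~~ Metamodel::ClassHOW;
        }
    }

    module Demo::Types {
        class Alpha {}
        class Beta {}
        our sub helper() {}    # an instance, not a type; correctly skipped
    }

    .say for types-in(Demo::Types)».^name.sort;
    # prints Demo::Types::Alpha, then Demo::Types::Beta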

There's a general rule at play here. The consistency tests are very valuable, and testing code-as-artifact is much better than nothing. But if there's a corresponding way to do it by running the program instead of just reading it, then that way invariably wins.

Anyway, the test started doing Stash traversal, and after a few more tweaks looked really nice.

And then the world paused a bit, like a good comedian, for maximal effect.

Yes, the test now contained an excellent implementation of finding all the types in Perl 6 land. This is exactly what the builtin initialization code needed to never be inconsistent in the first place. The tree walker moved into the builtins code itself. The test file vanished in the night, its job done forever.

And that is the story of the consistency check that got so good at its job that it disappeared. Because one thing that's better than managed coupling is... no coupling.

Where in the sky

Our 1.5yo has a favorite planet. By a wide margin, it's Mars.

I've told him he's currently on Earth, and shown him where, and he's OK with that. Mars is still the favorite. Earth's moon is OK too.

Such an interest does not occur — pun not intended — in a vacuum. We have a book at home (much like this one, but a different one, and in Swedish), and we open it sometimes to admire Mars (and then everything else).

But what really left its mark is the Space Room in our local Tekniska Museet — a completely dark room with projectors filling a wall with pictures of space. Using an Xbox controller, you decide where to fly in the solar system. If anything, this is what made Mars real for our son. In that room, we've orbited Mars. We've stood on its surface, looking at the Mars moons in the Mars sky. We've admired a Mars sunrise, standing quiet together in the red sands.

Inevitably, a couple of weeks ago, I got the first hard science question from him that I couldn't answer.

I really should have seen it coming. I was dropping him off at kindergarten. Before we went inside, I crouched next to him to point up at the moon above the city. He looked at it and said "Moon" in Swedish. Then he turned to me, eyes intent, and said "Mars?". It was a question.

He had put together that the planets we were admiring in the book and in the Space Room were actually up there somewhere. And now he just wanted me to point him towards Mars.

I had absolutely no idea. I told him it's not in the sky right now, but for all I knew, it might have been.

Now I really want to know how to find this out. Sure, there are calculations involved. I want to learn enough about them to be able to write a small, understandable computer program of my own. It should be able to answer a question such as "Where in the sky is Mars?". Being able to ask it about all the planets in our solar system seems like an easy generalization without much extra cost.

Looking at tutorials like this one with illustrations and this one with detailed calculations, I'm heartened to learn that it's not only possible to do this, but more or less the steps I hoped:

  1. Find the planet's position along its orbit for a given date, from its orbital elements.
  2. Convert that position to heliocentric ecliptic coordinates.
  3. Subtract Earth's corresponding position to get geocentric coordinates.
  4. Convert those to equatorial coordinates: right ascension and declination.
  5. Use the observer's location and the time of day to turn those into an azimuth and an altitude — that is, a direction to point in.

There are many complicating factors that together make a simple calculation merely approximate, and the underlying reasons are frankly fascinating, but it seems that if I just want to be able to point roughly in the right direction and have a hope of finding a planet there, a simple method will do.
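
To give a flavor of the trigonometry involved: the very last step, from equatorial coordinates to a direction in the sky, uses the standard spherical-astronomy formulas. A sketch in Perl 6, not yet the program itself (angles in degrees; the hour angle is the local sidereal time minus the right ascension):

    sub alt-az(:$hour-angle!, :$declination!, :$latitude!) {
        my sub rad($d) { $d * pi / 180 }
        my sub deg($r) { $r * 180 / pi }
        my $altitude = deg asin(
            sin(rad $declination) * sin(rad $latitude)
            + cos(rad $declination) * cos(rad $latitude) * cos(rad $hour-angle));
        my $azimuth = (180 + deg atan2(
            sin(rad $hour-angle),
            cos(rad $hour-angle) * sin(rad $latitude)
            - tan(rad $declination) * cos(rad $latitude))) % 360;
        # altitude: degrees above the horizon; azimuth: degrees clockwise from north
        return $altitude, $azimuth;
    }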

I haven't written any code yet, so consider this a kind of statement of intent. I know there must be oodles of "night sky" apps and desktop programs that already present this information... but my goal is to make the calculation myself, with a program, and to get it right. Lovingly handcrafted planetary positions.

It would also be nice to be able to ask "Where in the sky is the moon?" (that one's easy to double-check) or "Where in the sky is the International Space Station?". If anything, that ought to be a much simpler calculation, since these orbit Earth.

Once I can reliably calculate all the positions, being able to know at what time things rise and set would also be very useful.

I went outside to take out the garbage last night, and it turned out to be a cloudless late evening. I saw some of the brighter stars, even from the light-polluted vantage point of our yard. I may have been gazing at Mars without knowing it. It's a nice feeling to find out how to learn something. Even nicer when it's for your child, so you can show him his favorite planet in the sky.