Strangely Consistent

Musings about programming, Perl 6, and programming Perl 6

Send more money (in Perl 6)

In which I implement four different Perl 6 solutions to MJD's SEND + MORE = MONEY challenge.

I encourage you to read that post if you haven't already, but here's a short tl;dr: Haskell's do notation is wonderful in that it allows the author to cleanly express a backtracking algorithm without any "noise" such as explicit backtracking information, or indentation. Monads may be weird and frightening, but proponents of other languages should take heed: do notation is nice.

Can we do as nicely in Perl 6?

Version A: recursion

Here we're trying to get as close as possible to the original Haskell code without using any tricks. Basically trying to match the essence of the problem line by line. We're hampered by not having a do notation, of course, and no built-in backtracking in the main language. The program pretends to have no indentation, because the indentation isn't really relevant.

my @digits = 0..9;

choose @digits (-) 0, -> $s {
choose @digits (-) $s, -> $e {
choose @digits (-) ($s, $e), -> $n {
choose @digits (-) ($s, $e, $n), -> $d {
my $send = :10[$s, $e, $n, $d];

choose @digits (-) (0, $s, $e, $n, $d), -> $m {
choose @digits (-) ($s, $e, $n, $d, $m), -> $o {
choose @digits (-) ($s, $e, $n, $d, $m, $o), -> $r {
my $more = :10[$m, $o, $r, $e];

choose @digits (-) ($s, $e, $n, $d, $m, $o, $r), -> $y {
my $money = :10[$m, $o, $n, $e, $y];

guard $send + $more == $money, {
say "$send + $more == $money";
} } } } } } } } }

sub choose(Set $choices, &fn) {
    for @$choices -> $value {
        fn($value);
    }
}

sub guard($condition, &fn) {
    if $condition {
        fn();
    }
}

This takes about 26 minutes to run on my laptop. I despaired at this — the original Haskell version finishes in less than a second — but then I wrote an equivalent Perl 5 version, and it took 8 minutes. Paradoxically, that somehow made me feel less bad about Perl 6's performance. ("Wow, we're within an order of magnitude of Perl 5!")

(Update: Peter Sergeant sent me a faster Perl 5 version of the above script. His uses hashes instead of arrays. My Perl 6 port of this takes 15 minutes to run. That's two orders of magnitude slower — not cool.)


If you're new to Perl 6, you might not recognize (-) as set difference. I could also have used ∖ (U+2216 SET MINUS), but for once, the Texas version felt clearer.

I also like the clarity of $send = :10[$s, $e, $n, $d]. In the Perl 5 versions, I ended up with this helper sub that does the same.

sub base_10 {
    my (@digits) = @_;
    my $result = 0;
    while (@digits) {
        my $digit = shift @digits;
        $result *= 10;
        $result += $digit;
    }
    return $result;
}

Perl 6 just treats it as a variant of the base conversion syntax.

Version B: iteration

Where the previous version tried to stick close to the original, this version just dumps all such concerns and tries to go fast. It does so by spewing out explicit loops, checks, and native integers. Among all the solutions, this one fails MJD's criteria the hardest... all in the name of speed.

my int $s = -1;
while ++$s <= 9 {
    next if $s == 0;

    my int $e = -1;
    while ++$e <= 9 {
        next if $e == $s;

        my int $n = -1;
        while ++$n <= 9 {
            next if $n == $s;
            next if $n == $e;

            my int $d = -1;
            while ++$d <= 9 {
                next if $d == $s;
                next if $d == $e;
                next if $d == $n;

                my int $send = $s*1000 + $e*100 + $n*10 + $d;

                my int $m = -1;
                while ++$m <= 9 {
                    next if $m == 0;
                    next if $m == $s;
                    next if $m == $e;
                    next if $m == $n;
                    next if $m == $d;

                    my int $o = -1;
                    while ++$o <= 9 {
                        next if $o == $s;
                        next if $o == $e;
                        next if $o == $n;
                        next if $o == $d;
                        next if $o == $m;

                        my int $r = -1;
                        while ++$r <= 9 {
                            next if $r == $s;
                            next if $r == $e;
                            next if $r == $n;
                            next if $r == $d;
                            next if $r == $m;
                            next if $r == $o;

                            my int $more = $m*1000 + $o*100 + $r*10 + $e;

                            my int $y = -1;
                            while ++$y <= 9 {
                                next if $y == $s;
                                next if $y == $e;
                                next if $y == $n;
                                next if $y == $d;
                                next if $y == $m;
                                next if $y == $o;
                                next if $y == $r;

                                my int $money =
                                    $m*10000 + $o*1000 + $n*100 + $e*10 + $y;
                                next unless $send + $more == $money;

                                say "$send + $more == $money";
                            }
                        }
                    }
                }
            }
        }
    }
}

(cygz++ for suggesting many improvements to the above code, which eventually led to the fast version we have now.)

This version takes 22 seconds on my laptop. Certainly an improvement over version A. The corresponding Perl 5 code (which doesn't do natives) takes 1.3 seconds. An NQP version takes 0.69 seconds (beating even Haskell), which leads me to believe we can still be a lot faster in Perl 6, too.

(Update: Apparently, if you produce Perl 5, Perl 6 and NQP versions of the same script, then you will be approached by japhb++ who will invite you to become part of a loosely-knit group of heroes known as the Benchmarker initiative. I added my scripts to the growing number of benchmark scripts. Exciting!)

(Update: Woohoo! timotimo++ made some commits to Rakudo, and suddenly this Perl 6 script takes but 15 seconds to run! That's like a 30% time saving!)

Version C: regex engine

Now for a version that tries to capitalize on the regex engine having backtracking behavior. The basic idea (using amb) comes from Rosetta Code. I'm a teeny bit disappointed amb has to resort to building regex fragments as strings, which feels inelegant.

sub amb($var, @a) {
    # Build a regex fragment as a string: a group of code blocks,
    # one per candidate value, joined with backtracking alternation.
    "[" ~ @a.map({ "\{ $var = $_ \}" }).join(" || ") ~ "]";
}

sub infix:<except>(@lhs, @rhs) { (@lhs (-) @rhs).list }

my @digits = 0..9;

"" ~~ m/
    :my ($s, $e, $n, $d, $m, $o, $r, $y);
    :my ($send, $more, $money);

    <{ amb '$s', @digits except [0] }>
    <{ amb '$e', @digits except [$s] }>
    <{ amb '$n', @digits except [$s, $e] }>
    <{ amb '$d', @digits except [$s, $e, $n] }>
    { $send = :10[$s, $e, $n, $d] }
    <{ amb '$m', @digits except [0, $s, $e, $n, $d] }>
    <{ amb '$o', @digits except [$s, $e, $n, $d, $m] }>
    <{ amb '$r', @digits except [$s, $e, $n, $d, $m, $o] }>
    { $more = :10[$m, $o, $r, $e] }
    <{ amb '$y', @digits except [$s, $e, $n, $d, $m, $o, $r] }>
    { $money = :10[$m, $o, $n, $e, $y] }

    <?{ $send + $more == $money }>
    { say "$send + $more == $money" }
/;

On the plus side, this algorithm nails the linear code layout and gets fairly close to being nice and clean. There's a bit of noise along the fringes, what with all the { } and <{ }> and <?{ }>, but for a Perl 6 regex, this is good going.

Too bad it's so damn slow. Extrapolating from a shorter run, I estimate that the program would take around 100 minutes to finish. But it gets killed off on my system after 88 minutes because it leaks ridiculous quantities of memory (11 MB a second, or 660 MB a minute). I wonder if I could submit that as a rakudobug.

(Update: At the expense of the nice syntactic abstraction offered by amb, I managed to produce a version of the regex that actually completes before it runs out of memory. (And doesn't leak nearly as badly.) Here it is. It runs in a little over 6 minutes; worse than version B but better than version A.)

Version D: macros/speculation

Now, obviously, the solution that isn't burdened down by properly existing yet is also the cutest one.

use Hypothetical::Solver;

my @digits = 0..9;

solve {
    my $s = amb @digits (-) [0];
    my $e = amb @digits (-) [$s];
    my $n = amb @digits (-) [$s, $e];
    my $d = amb @digits (-) [$s, $e, $n];
    my $send = :10[$s, $e, $n, $d];
    my $m = amb @digits (-) [0, $s, $e, $n, $d];
    my $o = amb @digits (-) [$s, $e, $n, $d, $m];
    my $r = amb @digits (-) [$s, $e, $n, $d, $m, $o];
    my $more = :10[$m, $o, $r, $e];
    my $y = amb @digits (-) [$s, $e, $n, $d, $m, $o, $r];
    my $money = :10[$m, $o, $n, $e, $y];

    guard $send + $more == $money;
    say "$send + $more == $money";
}

Clearly, this won't even compile, as it's missing a dependency. Let's supply it with the smallest possible dependency, just honoring signatures:

module Hypothetical::Solver {
    sub solve(&block) is export {}
    sub amb($set) is export {}
    sub guard($condition) is export {}
}

Which... is useless, because now we have a program which looks pretty but does nothing.

So let's fix that. Here I have another program which eats the first program for breakfast. More exactly, it can parse the program and emit a new one that solves the problem. Be aware that the below is a bit of a hack (I'll get back to that), but at least each individual part is nice and self-contained.

grammar Solver::Syntax {
    token TOP { <statement>* }

    proto token statement {*}

    token statement:sym<use> {
        <sym> \s+ ([\w | '::']+) ';' \s*
    }

    token statement:sym<my> {
        <sym> \s+ \S+ \s* '=' \s* <!before 'amb'> <-[;]>+ ';' \s*
    }

    token statement:sym<solve> {
        <sym> \s+ ('{' \s*) <statement>* ('}' \s*)
    }

    token statement:sym<guard> {
        <sym> \s+ (<-[;]>+ ';' \s*)
    }

    token statement:sym<say> {
        <sym> \s+ <-[;]>+ ';' \s*
    }

    token statement:amb-my {
        'my' \s+ (\S+) \s* '=' \s* 'amb' \s+ (<-[;]>+) ';' \s*
        <statement>*
    }
}

class Solver::Actions {
    method TOP($/) {
        make $<statement>».ast.join;
    }

    method statement:sym<use>($/) {
        make $0 eq "Hypothetical::Solver" ?? "" !! ~$/;
    }

    method statement:sym<my>($/) {
        make ~$/;
    }

    method statement:sym<solve>($/) {
        make $0 ~ $<statement>».ast.join ~ $1;
    }

    method statement:sym<guard>($/) {
        make "next unless " ~ $0;
    }

    method statement:sym<say>($/) {
        make ~$/;
    }

    method statement:amb-my ($/) {
        make "for ($1).list -> $0 \{\n" ~ $<statement>».ast.join.indent(4) ~ "\}\n";
    }
}

(Entire script is here.)

The result is closest in spirit to version B above. But it doesn't try to be as optimized. As a result of this, it actually performs like version A, and finishes in 26 minutes.

Let me just conclude by making a few points.

Lately I've been nosing around languages that compile to JavaScript. Such languages allow us to state the program in a nicer, more fit-for-the-task language than JavaScript, but still get all the advantages of being able to run things in the browser.

The intended use of macros in Perl 6 is similar to this: express the problem in a "nicer way" (variant D), then massage it down to something that you could have written but would rather prefer not to (variant B). The big difference between macros and slangs (IMO) is that macros allow you to parse normally and then mess with the resulting Qtree, whereas slangs allow you to replace the parser with something else entirely (and then mess with the Qtree too, if required).

The fan on my laptop is relieved that I'm done running programs for this post. Ha ha.

Here be heredocs

I used to consider Perl 6 heredocs a clear-cut symptom of second-system syndrome: taking a useful feature and weighing it down with extravagant bells and whistles until it cannot move. I read the specification for them and would think, leave well enough alone. Don't fix it if it ain't broken.

Nowadays I've done a complete reversal on that opinion and really love Perl 6's heredocs, so I thought I would write up my take on them.


The most noticeable part of Perl 6 heredocs as compared to Perl 5 ones is that Perl 6 heredocs can be indented.

In Perl 5, you'd do this:

for my $n (reverse 1..99) {
    my $n1 = $n - 1;
    my $n_s = $n == 1 ? "" : "s";
    my $n1_s = $n1 == 1 ? "" : "s";

    print <<"VERSE";
$n bottle$n_s of beer on the wall!
$n bottle$n_s of beer!
Take one down, pass it around,
$n1 bottle$n1_s of beer on the wall!

VERSE
}

Heredocs could be said to be "anti-socially indented" compared to the rest of the program. If you quickly scan through a program, you'll see the general structure of it by how things are indented... plus a number of heredoc blocks of text completely ruining the picture.

I remember rationalizing this to myself. Thinking "oh well, heredocs are a bit special, they 'deserve' to stand out from the rest of the program the way they do". I no longer think that. In fact, I'd say the effect gets worse the bigger your application is. Bigger applications tend to rely more on nested blocks and indentation. So this is not likely to be a big problem for straightforward scripts with no control flow to speak of. But it is a problem as your application grows.

Here's how I would write the corresponding Perl 6 heredoc.

for reverse 1..99 -> $n {
    my $n1 = $n - 1;
    my $n_s = $n == 1 ?? "" !! "s";
    my $n1_s = $n1 == 1 ?? "" !! "s";

    say qq :to 'VERSE';
        $n bottle$n_s of beer on the wall!
        $n bottle$n_s of beer!
        Take one down, pass it around,
        $n1 bottle$n1_s of beer on the wall!

        VERSE
}

I'd say visually, that's a big improvement. S02 explains what's going on: "Leading whitespace equivalent to the indentation of the delimiter will be removed from all preceding lines."

I should note that even in Perl 5, you could conceivably do manual de-indenting. Something like this:

for my $n (reverse 1..99) {
    my $n1 = $n - 1;
    my $n_s = $n == 1 ? "" : "s";
    my $n1_s = $n1 == 1 ? "" : "s";

    print join "\n", map { substr $_, 8 } split /\n/, <<"VERSE";
        $n bottle$n_s of beer on the wall!
        $n bottle$n_s of beer!
        Take one down, pass it around,
        $n1 bottle$n1_s of beer on the wall!

VERSE
}

But (a) that still leaves the terminator in an awkward position, and (b) doing it manually is extra code to get right and to maintain. (Of course, as nine++ pointed out on the channel, you can put the right amount of indentation into the terminator symbol. But, ick. I much prefer not having to do that.)

The .indent method

As a nice unintended side effect of implementing heredocs, we ended up with an .indent method on strings in core.

> say "foo".indent(4 * $_) for ^5;
foo
    foo
        foo
            foo
                foo

The method works with a negative argument. In which case it (of course) removes indentation instead. If there's not enough to remove, it warns. (But it still gives you a sensible result.)

> say "Give up?\n  Never!".indent(12).indent(-4 * $_) for ^5;
            Give up?
              Never!
        Give up?
          Never!
    Give up?
      Never!
Give up?
  Never!
Asked to remove 16 spaces, but the shortest indent is 12 spaces
  in block <unit> at <unknown file>:1
Give up?
  Never!

As the final crowning touch, the method accepts a * (whatever) argument. Meaning, in this case, "(consistently) remove the biggest amount of indentation you can get away with". This is a nice default. Also, that's basically what heredocs end up doing, if for the purposes of de-indenting you consider the terminator to be part of the string.

The surface reason to add this method was that this was something heredocs needed to do internally anyway. But as it turns out, .indent is just a generally useful method to have around. It's possible to implement yourself (lines, map, join), but why bother? It's right there.
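For the curious, here is roughly what such a hand-rolled version looks like, sketched in Python (the function name is mine; it assumes spaces-only indentation and skips the tabs-and-spaces handling the real .indent does):

```python
def indent_whatever(text):
    """Mimic Perl 6's .indent(*): strip the largest indent that can be
    removed consistently from every non-blank line."""
    lines = text.split("\n")
    # Blank lines don't count when measuring the shortest indent.
    indents = [len(l) - len(l.lstrip(" ")) for l in lines if l.strip()]
    shortest = min(indents, default=0)
    return "\n".join(l[shortest:] if l.strip() else l for l in lines)

print(indent_whatever("    foo\n      bar"))
# prints:
# foo
#   bar
```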

I'm happy about the way .indent turned out. It's even fairly sane in how it handles different kinds of whitespace — to the degree anything involving mixing tabs and spaces can really be referred to as "sane".

The heredoc terminator

Heredocs all work by setting up a terminator symbol. When the parser sees this symbol alone on a line later, it'll know that the heredoc ends. The terminator itself is not part of the heredoc string.

Here's how I write it:

say qq :to 'VERSE';
    $n bottle$n_s of beer on the wall!
    $n bottle$n_s of beer!
    Take one down, pass it around,
    $n1 bottle$n1_s of beer on the wall!
    VERSE

In the few example heredocs in S02 and S03, the spec chooses to write things together a bit more, and to use slashes around the terminator symbol:

say qq:to/VERSE/;
    $n bottle$n_s of beer on the wall!
    $n bottle$n_s of beer!
    Take one down, pass it around,
    $n1 bottle$n1_s of beer on the wall!
    VERSE

I think the choice of slashes here is unfortunate, because it makes it look like /VERSE/ is a regular expression literal. It isn't, it's just a string with funny delimiters. I don't know that I can stop the inevitable and prevent others from using slashes there, but myself, I'm going to try to use string quotes.

The exact choice of quotes doesn't determine whether the heredoc interpolates or not. Only the q or qq operator determines that. Which feels more sane anyway.

A word of warning if, like me, you decide to use single quotes around your terminator: you have to put in some whitespace between the to and the single quote. This is because to'VERSE is a valid identifier in Perl 6. That's why I put in the whitespace both before and after the :to. And now that it's there, I think I like it better. I might not always put it in, for example when the heredoc operator is part of a bigger expression and needs to be more of a visual pill in itself. But here, qq :to 'VERSE' looks nice and laid-back to me.

Or you could use qq:to<VERSE>, which also conveys "this is a string" and doesn't have the above apostrophe problem.

Indentation, revisited

Let me tell you a story that conveys quite well the Perl 6 maxim of "tormenting the implementors" (to benefit the users).

It's about this kind of code:

my $description = "not exactly Sparta,\nbut still nice";

say qq :to 'TEXT';
    Wow, this is $description!
    TEXT

Before July 2013, this used to warn about not enough whitespace during de-indentation. Why? Because the $description string has two lines, and the second line does not have the required four spaces (or equivalent) of indentation that the terminator requires!

Yes, you read that right. Before the dust settled around exactly how to implement heredocs, you could get warnings because the interpolated strings in the heredoc were not properly indented. The heredoc itself would look fine in the code, but you'd still get the warning.

Arguably there are two "local optima" involved, though. Two possible ways in which we could view the process of interpolation/de-indentation. Let me lay them out for you:

  1. First, the heredoc string is finalized by interpolating all the variables into it. The resulting string is then de-indented.

  2. The heredoc is a sequence of constant strings separated by interpolations (either variables or expressions in {} blocks). The constant strings are de-indented, but the interpolations are left as-is.

The first model has fewer moving parts, and is easier to implement. The second model is easier to understand for the user, but more work for the implementor.
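To make the difference concrete, here is a toy Python model of the two strategies. The dedent helper is a stand-in of my own for Rakudo's de-indenter; it collects warnings rather than printing them:

```python
def dedent(text, n):
    """Strip n leading spaces per line; note lines that are too shallow."""
    out, warnings = [], []
    for line in text.split("\n"):
        if line.startswith(" " * n):
            out.append(line[n:])
        else:
            if line.strip():
                warnings.append("not enough indent: " + repr(line))
            out.append(line.lstrip(" "))
    return "\n".join(out), warnings

description = "not exactly Sparta,\nbut still nice"

# Model 1: interpolate first, then de-indent the finished string.
# The second line of $description triggers a spurious warning.
model1, warn1 = dedent("    Wow, this is " + description + "!\n", 4)

# Model 2: de-indent the constant template (possible at compile time),
# then splice in the interpolated value untouched.
template, warn2 = dedent("    Wow, this is {}!\n", 4)
model2 = template.format(description)

print(model1 == model2)             # same resulting text...
print(len(warn1), len(warn2))       # ...but only model 1 warned
```

Both models produce the same text here; the visible difference is the bogus warning model 1 emits about the un-indented second line of the interpolated value.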

We ended up having the first model in Rakudo for a while, and I argued for the second model. Eventually TimToady settled the matter:

<TimToady> the intent is that heredoc indent is a textual feature, not a shortcut for a run-time feature
<TimToady> yes it's harder, but STD does it that way (and so niecza, I think)
<TimToady> it's what the user wants, not the implementor

In the end, timotimo++ implemented the second model, and all was well. You won't get de-indentation warnings from inside your interpolated variables.

...Furthermore, as a nice bonus, the constant string fragments can actually be de-indented at compile time! Because they're constant, and known at compile time. Under the former model we couldn't do that, since the entire string wasn't known until that point in the code at runtime (in the general case).

The limits of sanity

One final thing. One niggling little bit of discomfort remains.

You currently can't do this:

constant INTRO = q :to 'INTRO';
    Space: the final frontier.
    These are the voyages of the starship Enterprise.
    Its continuing mission: to explore strange new worlds,
    to seek out new life and new civilizations,
    to boldly go where no one has gone before.
    INTRO

A constant declaration gets evaluated and assigned at parse time, as soon as the parser sees the semicolon at the end of the declaration. And that's the problem, because at that point the parser hasn't yet seen the promised heredoc, and so it has nothing to assign to the constant.

The problem is perhaps a little easier to see if the constant declaration is re-written as a BEGIN block instead. BEGIN blocks also run ASAP, as the closing } is being parsed. This also doesn't work:

my $INTRO;
BEGIN { $INTRO = q:to/INTRO/; }
    Space: the final frontier.
    These are the voyages of the starship Enterprise.
    Its continuing mission: to explore strange new worlds,
    to seek out new life and new civilizations,
    to boldly go where no one has gone before.
    INTRO

I get that there are good reasons for disallowing the above two constructions. It's just that... I don't know, it feels slightly unfair. Heredocs are very nice and work well, constants are very nice and work well. If you combine them, the result ought to be twice as nice and also work well. But it isn't, and it doesn't.

In the case of de-indentation of interpolated variables, it was more of a clear-cut case of writing a bit more compiler code to DWIM things for the user. In this case... the stakes are higher. Because if the above were to work, we'd have to create an exception to one-pass parsing, something that Perl 6 values very highly.

<TimToady> masak: the fundamental problem with making the
           constant lookahead for the heredoc is that it
           violates the one-pass parsing rule
<masak> TimToady: yes, I realize that.
<TimToady> in fact, STD used to do the lookahead like P5
           does, and I rewrote it to avoid that :)
<TimToady> in fact, the discussion at S02:4474 stems from
           that time

In spite of all this, I remain hopeful. Maaaaaybe we can deviate ever so slightly from the one-pass parsing rule just to be a bit more DWIM-y to users who like both constants and heredocs. Maybe, just maybe, there's a clean way to do that. There's an open RT ticket that keeps hoping there is.

In the meantime, there's also a workaround. If I just put the semicolon after the heredoc, things work out.

constant INTRO = q :to 'INTRO'
    Space: the final frontier.
    These are the voyages of the starship Enterprise.
    Its continuing mission: to explore strange new worlds,
    to seek out new life and new civilizations,
    to boldly go where no one has gone before.
    INTRO
;

It's not ideal, but it will do in the interim.

Update 2015-04-11: Not four hours after I published my post, TimToady++ had constants with heredocs patched (cleanly) into Rakudo. He also updated S02. So I'm officially out of discomforts. Ha ha!

Anyway, that was my story about heredocs. Here's to heredocs! 🍻

You're in a space of twisty little mazes, all alike

It started with a mini-challenge on the #perl6 channel.

<masak> today's mini-challenge: a 4x4 grid has 24 internal walls. in the
        power set of these, some of the members are "mazes", by definition
        grids with a unique path between every two squares. how many mazes
        are there?

For example, here are four randomly selected solution mazes:

I have a hidden reason for posing this problem, but that's the subject of a future blog post. For now, let me clarify by saying that I'm not just interested in counting the number of such mazes; I actually want to enumerate them — that is, I need to know what each maze looks like.

I have previous experience with building mazes, and I think it's a really fun domain. But the skills from that post teach us how to make a nice random maze. Here we want to make all mazes. Maybe the knowledge carries over somehow, but it's not clear to me how.

Upper bound

Instead, let's start over. The problem specification talks about a "power set" of 24 internal walls.

We could think of each internal wall being controlled by a bit of storage somewhere, a 0 indicating the absence of a wall and a 1 its presence. Together these 24 bits can be thought of as making up a bit string that completely determines the maze.

...which also means that we have an upper bound on the number of mazes, because there are only so many length-24 bit strings!

> 2 ** 24
16777216
> sub triples { $^s.flip.comb(/. ** 1..3/).join(" ").flip }; Nil
> triples 2 ** 24
16 777 216

Wherever we are going with this, we won't exceed 16.8 million! Phew!


But we already know that this upper bound is not attained; the bit string of all 1s (illustrated above) fails the "unique path between every two squares" criterion of the problem statement.

In fact, here, let me generate 10 random 24-bit strings and draw them out in grid form:

These all fail the "unique path" criterion!

Fortunately for us, the number of "nice" mazes is rather smaller than 2 ** 24. Since I know what the number is, I would expect there to be one correct maze, on average, for every 167 randomly generated grids. (I just tested this experimentally. In a sequence of randomly generated grids, the first correct maze was number 111.)

Ok, at this point in our investigations, we are at the "I don't know what a correct maze is, but I know one when I see one!" stage. It's a nice stage to be in. So many unexplored avenues! Our ignorance is like a fog, surrounding us, waiting for our sharp observational skills to dispel it! Let's dig beneath the surface.

Why do the above mazes fail the "unique path" criterion? Perhaps if we can describe the vices they commit, we can start laying down the law about how a maze must look to be a member of our club. I see two such vices: grids with too many walls seal off squares entirely, so some pairs of squares have no path between them at all; and grids with too few walls leave open cycles, so some pairs of squares have more than one path between them.


We might be tempted to conclude from the above that there is a certain Goldilocks number of internal walls that a grid must have in order to be a correct maze.

...and, in fact, there is! It must have 9 walls. As weak evidence of this, here are the four random mazes again. Yep, they all have 9 internal walls.

This is because a correct maze induces a spanning tree on the graph of the grid — and vice versa. The spanning tree is simply our branching structure of corridors that form between our walls.

Our whole problem can now be restated as "enumerate all possible spanning trees on a 4x4 grid". But Wikipedia also informs us that "for a connected graph with V vertices, any spanning tree will have V − 1 edges". We have 16 vertices, so 15 edges. But "edge" here is "edge in the spanning tree", which corresponds to "no wall" for us. So out of 24 possible walls, our mazes must have 24 - 15 = 9.
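The arithmetic checks out; as a quick sanity check in Python (just re-deriving the numbers above):

```python
# A 4x4 grid of cells: count vertices and possible internal walls.
rows = cols = 4
vertices = rows * cols                          # 16 cells
edges = rows * (cols - 1) + cols * (rows - 1)   # 12 + 12 = 24 internal walls
tree_edges = vertices - 1                       # any spanning tree: 15 edges
walls = edges - tree_edges                      # so a maze keeps 9 walls
print(vertices, edges, tree_edges, walls)       # 16 24 15 9
```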

This also means that, thanks to binomial coefficients, we've lowered our upper bound to 1.3 million.

> sub postfix:<!>($n) { [*] 2..$n }; Nil
> sub choose($n, $k) { $n! / ($k! * ($n - $k)!) }; Nil
> choose(24, 9)
1307504
> triples choose(24, 9)
1 307 504

While 9 walls is a necessary criterion, it's not a sufficient one. Here are 10 random grids with 9 walls:

Of these, only the upper right one is a correct maze. It's not just how many walls you have, it's how you use them, too!

(If we generate a random 9-walled grid, we're now down to a 1-in-13 chance that the grid is also accidentally a well-formed maze.)

At least we can be a bit clever when searching for bit strings. Instead of iterating through all 16.7 million of them, we can iterate through only the 1.3 million with nine 1s in them. Order-of-magnitude speedup? Yes, please! The below code for doing so is inspired by MJD's post, from nine years ago, about enumerating strings of things.

my $s = "0" x 15 ~ "1" x 9;
say $s;

repeat until $s eq "1" x 9 ~ "0" x 15 {
    $s ~~ s[ .* <( 01 (.*) ] = "10" ~ $0.comb.sort.join;
    # possibly do other things to verify that it's a maze
    say $s;
}

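The same trick, re-sketched in Python on a smaller alphabet so the whole sequence is easy to eyeball (helper name mine): find the rightmost "01", flip it to "10", and sort everything after it so the 1s sink back to the right end.

```python
def successor(s):
    """Next string with the same number of 1s, or None at the last one."""
    i = s.rfind("01")
    if i == -1:
        return None                  # reached "1...10...0": we're done
    return s[:i] + "10" + "".join(sorted(s[i + 2:]))

s = "00011"                          # 5 bits, two 1s
count = 1
while (s := successor(s)) is not None:
    count += 1
print(count)   # C(5, 2) == 10 strings in total
```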

It was somewhere around here that I got a promising insight, and thought I had solved the whole problem when I really hadn't.

Here was my (wrong) idea: in every correct maze, the walls form chains which protrude from the outer wall, and then just end. If we could take such a maze and start to "unravel" it at the fringes, peeling away walls which are at the end of such chains, and we do this over and over, eventually we'll be left with a grid with no inner walls.

Here, an example with a correct maze:

But in a grid with cycles, the process of unraveling would eventually stabilize with internal walls still remaining:

The (wrong) way I used to decide whether an edge was at the fringe was to ask whether it had fewer than two edges neighboring it. In a cycle, edges would constantly protect each other by being surrounded on two sides by a neighbor edge. But fringe edges would unravel, and eventually we'd be left with nothing. (This way of thinking treated the big outer wall as a single edge in terms of being able to neighbor other edges. Except that it never gets removed.)

Here is my first attempt. I've lightly sprinkled comments around the source; hopefully enough. The first script finds all the mazes. The second one removes duplicates in terms of rotation and reflection symmetries. The third one draws the mazes.

As you can see, it took more than two and a half hours to find all the solutions using this script. Which would be more acceptable if it didn't also arrive at the wrong answer.


After a day or so of these ideas sloshing around in my head, I came up with a better, faster algorithm: if we can find the correctness of the mazes by unraveling them back to the empty grid, we could also run this process in reverse, successively growing all correct mazes in generations from 1..9 (walls), each new generation adding all correct walls to all the mazes of the previous generation.

Here is the script to do that. It basically scavenges the old script for ideas. Because it would be much slower not to, this script now removes symmetric duplicates as part of generating the mazes. The idea of what constitutes a "correct" addition of a wall is exactly the same as before: a wall can be added if that spot has one neighboring wall.

I also uncovered a string-handling bug in MoarVM as part of writing this. Under certain circumstances, substr on a string built using x gives the wrong answer. Internally, MoarVM uses something called ropes to represent strings. Among other things, MoarVM ropes can encode repetitions such as the ones x creates, to reduce memory footprint and computation. But something throws substr off the scent in this particular case. And my toy problem helped discover it. ♥

Bug still open as of this writing, looking for someone willing to tackle a ropey problem.

<jnthn> Too bad it gets the wrong answer, rather than getting into an
        infinite loop
<jnthn> Then we coulda said it had enough rope to hang itself...

I didn't see the error of my ways until Mouq++ actually dug up a few mazes that my algorithm didn't produce. Common to all these mazes was that they had "corners" in the maze: points where a single wall chain branched out in two different directions. Here, have a few examples.

My algorithm would fail to produce any of these, for the simple reason that my 1-neighbor criterion was flawed.

Einstein has been remembered for the quote "Everything should be made as simple as possible, but no simpler." Actually, what he really said was "The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." (See Wikiquote.) But history seems to have applied the spirit of his own words to his quote and produced the former soundbite from Einstein's longer version.

<masak> oh, it's lovely to be wrong :>

It says something about my TV habits that I can't really think of the quote nowadays without Jesse Pinkman appearing in my mind and appending "...bitch" to the quote. (Either the long one or the short one.) Especially when I commit a modeling error like the one with the mazes.


When I arrived at my first two solutions, I was kind of relieved that I could phrase it only in terms of the internal walls of the maze. I didn't have to deal in the nine vertices between the walls — or so I thought. It turns out that this simplification was of the kind that made the solution a bit simpler than possible. (Bitch.)

(I'm sure this isn't the first time I've made this exact mistake. I come back to thinking about mazes and maze generation every few years, and I recognize this failure mode in previous code I've written. This is the first time I've taken the time to analyze it, though.)

So my third solution introduces the notion of points joining together the walls. The algorithm now turns into this: for an empty grid, all nine internal points are active. As we fill out the grid, one by one these points will toggle from active to inactive by having a wall touch the point. We still proceed in generations from an empty grid to a completed maze. But now, the criterion for where we may put a new wall is this: a wall goes between an inactive point (that already touches a wall) and an active point (that doesn't). (Again, the outer wall is treated a bit specially, this time as a tenth "point" that's always active no matter how many walls attach to it.)

Viewed a bit differently, the algorithm does a breadth-first search on the graph induced by the points (including the outer wall) and the internal walls. But it does the breadth-first search non-deterministically, in all possible ways. Each time it "finds a neighbor point" it hasn't visited yet, it leaves a wall between the old point and the new. Each maze ends up representing one possible BFS with the outer wall as starting point.
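The point-based criterion can be sketched in a few lines — Python here, not the author's Perl 6. A wall may only go between a point that already touches a wall (inactive) and one that doesn't (active), which is exactly growing a spanning tree of the point graph outward from the starting point; enumerating all orders of doing so yields every correct maze once:

```python
# Sketch of the non-deterministic BFS: grow spanning trees of a point
# graph by always adding an edge (wall) from a reached point to an
# unreached one, starting from the outer wall. Each complete tree is
# one correct maze. The graph representation is an assumption.

def spanning_trees(graph, start):
    trees = set()

    def extend(reached, edges):
        if len(reached) == len(graph):
            trees.add(edges)          # every point reached: one maze done
            return
        for a in reached:             # an inactive point (touches a wall)
            for b in graph[a]:
                if b not in reached:  # an active point (touches none yet)
                    extend(reached | {b}, edges | {frozenset((a, b))})

    extend(frozenset([start]), frozenset())
    return trees
```

On a toy four-point cycle graph this produces the four spanning trees of the cycle, one per omitted edge.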

The resulting algorithm runs in a little under 15 minutes. Which is quite an improvement in running time — besides being correct, I mean.

Kudos go to moritz++ for realizing that the whole thing runs faster if the qw lists of numbers are hyper-numified instead of being implicitly cast each time they're used.


So... how many correct 4x4 mazes are there? The short answer is 12600.

I don't know about you, but I didn't expect there to be that many. Yes, that's after removing rotation and reflection duplicates. For comparison, there's only one maze of size 1x1, only one maze of size 2x2, and only 28 mazes of size 3x3.

All of the 3x3-mazes, up to reflection and rotation.

But apparently there's a lot of new wiggle room for the walls once you increase it to 4x4.

I rendered an image of all 12600 4x4 mazes, but it would be foolish to try to put it here in the blog post. But I can link you to it. (Warning: 3.7 MB .png file. timotimo++ for helping shrink it from its original 4.4 MB.)

Of the 12600 mazes, 112 have a reflectional symmetry. Which means that if we count all reflections and rotations as distinct, we have (12600 - 112) * 8 + 112 * 4 = 100352 mazes. (Here's the script that finds the symmetries.)
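Spelling out that arithmetic (a maze with a reflectional symmetry has only 4 distinct images under the 8 symmetries of the square; every other maze has all 8):

```python
# Reproducing the count of all rotation/reflection variants.
total_canonical = 12600   # distinct 4x4 mazes up to symmetry
symmetric = 112           # those with a reflectional symmetry
all_variants = (total_canonical - symmetric) * 8 + symmetric * 4
```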


It seems that more and more often, I encounter these kinds of problems where the problem domain is essentially a "space" of objects, and any algorithm has to loop over the objects in some intelligent manner. The last time this happened to me in a toy problem setting, I was counting configurations on a hex cell board. But I find it increasingly happening to me in less hobby-oriented domains, too. Often enough, the challenge is being able to navigate a giant code base or data model, or to extract some subset of it, or to classify parts of it on the fly. My brain, perhaps addled by category theory at this point, increasingly thinks about these problems as happening in a "phase space", each point representing some object of interest.

I wonder if I could go back and describe to my 15-year-old self what it's like to discover and explore these spaces. I wonder if he'd understand and be as hooked on it as I am. The thrilling aspects are really in finding appropriate names for new entities and relationships in the artificial universe, and then building an understanding out of these named concepts, constructing ever-higher towers of composite concepts to explain ever-larger clusters of relations. I am Adam, my artificial universe is the garden of Eden, and the objects are species to be tagged and classified.

And here I could wax poetic about the suggestive power of language, and the importance of actually evolving a language for your domain, both nouns and verbs. But we'll save that too for some other post.


The solution I ended up with avoids cycles using "positive" filtering, creating partial results without cycles until we arrive at full mazes. It is possible to use "negative" filtering too, by first building a list of all the "fundamental cycles" in 4x4-space, and then finding all (9-wall) grids that don't contain one of the fundamental cycles. The promising thing about this is that the question "does it contain this cycle?" can be formulated as a bitwise +& and the maze bit vectors (and fundamental cycle bit masks) can be represented as native ints, making this potentially quite fast.
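The containment test itself is one line. Here is a sketch in Python (plain ints standing in for the native ints, `&` standing in for Perl 6's `+&`; the bit masks in the usage below are made up, not the real 4x4 wall encoding):

```python
# Sketch of the "negative filtering" check: encode a grid and each
# fundamental cycle as bit vectors over the internal wall positions.
# A grid contains a cycle exactly when masking the grid with the
# cycle's bits leaves the cycle intact.

def contains_any_cycle(grid, cycle_masks):
    return any(grid & cycle == cycle for cycle in cycle_masks)
```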

How can we build a list of fundamental cycles? Basically by doing one depth-first search at every internal point (and once for the outer wall), backtracking each time we find a cycle. I'm intrigued that when we're looking for (cycle-free) mazes we need to use BFS, but when we're looking for cycles we end up with DFS. I wonder if there's some really deep duality that can explain why that is. (If you know, get in touch.) There are 627 fundamental cycles on a 4x4 grid. (Here's another image that doesn't fit.)
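The DFS idea can be sketched like this (Python, with assumed toy graphs): build a DFS tree; every non-tree edge closes exactly one cycle with the tree path between its endpoints, giving E - V + 1 fundamental cycles for a connected graph:

```python
# Sketch of enumerating fundamental cycles via DFS. Each back edge
# (an edge to an already-discovered ancestor) yields one cycle,
# recorded here as the set of points on it.

def fundamental_cycles(graph, start):
    disc = {start: 0}                 # discovery order
    parent = {start: None}
    cycles = []

    def dfs(u):
        for v in graph[u]:
            if v not in disc:
                disc[v] = len(disc)
                parent[v] = u
                dfs(v)
            elif v != parent[u] and disc[v] < disc[u]:
                # back edge u-v: walk the tree path from u up to v
                path = [u]
                while path[-1] != v:
                    path.append(parent[path[-1]])
                cycles.append(frozenset(path))

    dfs(start)
    return cycles
```

A plain four-cycle has one fundamental cycle; adding a diagonal chord gives two (E - V + 1 = 5 - 4 + 1).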

Unfortunately, the fun ended there. Rakudo doesn't support native arrays yet, so I have nowhere fast to put my 627 native ints. Right now the algorithm runs in 38 minutes, which is not an improvement on the 14 minutes of solution three. We'll see if that balance changes once we get native arrays.

And now, I will consider mazes as being flushed out of my system, for now. Hopefully I'll be able to get back to blogging about macros.