rjbs forgot what he was saying


Notes from YAPC in Austin (body)

by rjbs, created 2013-07-01 10:34
tagged with: @markup:md journal perl yapc

I'm posting this much later than I started writing it. I thought I'd get back to it and fill in details, but that basically didn't happen. So it goes.

This year's YAPC was in Austin. A lot of people complained about the weather, but it was pretty much the same weather we had at home when I left home, so I wasn't bothered. This was good planning on the part of the YAPC organizers, and I thank them for thinking of me.

I'm just going to toss down notes on what I did, for future memory.

I landed on Sunday, having flown with Peter Martini and Andrew Rodland. Walt picked us up at the airport and we went to Rudy's for barbecue… but first we had to check in. I was worried, because it was after 19:00, and it sounded like nobody would be at the B&B to let me in. I called, and nobody was there. I wandered around the back of the building and found a note for me. It told me where to find my key and how to get in. "…and help yourself to the soda, lemonade, and wine in the fridge." Nice. I really liked the place, The Star of Texas Inn, and would stay there again, if the opportunity arose.

Rudy's was fantastic. I had some very, very good brisket and declared that I needed nothing else. I tried some of the turkey and pork, too, though, and they were superb. The jalapeño sausage, I could take or leave. The sides were great, too: creamed corn, new potatoes, potato salad. The bread was a distraction. I also had a Shiner bock, because I was in Texas.

From Rudy's we went to the DoubleTree, where lots of other attendees were staying, and I said hello to a ton of people. Eventually, though, Peter and I headed back to our respective lodgings. I worked a little on my slides and a lot on notes for the RPG that I planned to run on Thursday night.

Monday morning, I caught up with Thomas Sibley, who was staying at the same B&B. We had breakfast (which was fine) and headed to the conference. I attended:

  • Mark Keating's history of Perl, which was good, except that he seems to think that my name is "Ricky." I think he's been talking to my mom too much.
  • Sterling Hanenkamp's telecommuting panel discussion, on which I was a panel member. I think it went pretty well, although I wonder whether we needed an aggressive interviewer to push us harder.
  • John Anderson's automation talk, which was good, but to which I must admit I paid limited attention. I forget what I got distracted by, but something.

For lunch, we had a "p5p meetup" at the Red River Diner. The food was fine and the company was good, but we ended up with quite a few more people than I'd expected, and it sort of became a generic conference lunch. Jim Keenan presented me with a copy of the Vertigo score, which is sitting on my desk waiting for a good 45-minute slot in which to be played. Sawyer was keen to get anything with blueberries in it. "We don't have these things in Israel, man! They're incredible!" I was tickled.

In the next slot, I spent most of my time in the hallway, talking to people who were interested in the state of Perl 5 development. The big questions that arose in these discussions, and similar ones later in the week: how can Perl 5 get more regular core contributors, and how can interested people start helping? For the second one, I need to boil things down to a really tight answer with a memorable URL. I'm not sure it will help, but it might.

I attended the MoarVM talk, which was interesting, but which I can't judge very well. At any rate, I'm excited to see the Rakudo team doing more cool stuff. After that, Larry spoke. It was good, and I was glad to be there. The lightning talks were good, and then there was the "VIP mixer." That's basically free drinks and an opportunity to meet all kinds of new people. I did! I would've met more, but it was loud in there. If we could've done it outside, I bet I would've stayed much longer, but I was losing my voice within the hour.

After that, we were off to Torchy's Tacos. Walt had previously described their tacos as "a revelation." They were definitely the two best tacos I'd ever eaten. Especially amazing was the "tuk tuk," a Thai-inspired delicacy. I went back to Torchy's twice more before I left town, and regret nothing. I'll definitely go again, if I go back to Austin.

Tuesday, in fact, Walt, Tom and I headed to Torchy's for breakfast. It was a good plan. We got to the venue in time for Walt to give his talk about OS X hackery (phew!). I saw a live demo of Tim Bunce's memory profiler, which is clearly something I'll be using in the future, though it looks like it will take significant wisdom to apply effectively. Before lunch, I took in Mark Allen's talk on DTrace, which provided more incentive for me to finally learn how to use the thing. I've been working on the giant DTrace book since YAPC. I also managed, during the talk, to predict and find a stupid little bug when DTrace and lexical subs interact.

For lunch, Walt suggested we eat Bacon, so we did. Peter, Walt, and I piled into his rental and got over there. The Cobb salad was very good; the bacon fries were just okay. I was very glad to have a selection of local beers beyond Shiner and Austin Amber, and the guy behind the counter suggested Fire Eagle, which I enjoyed.

After lunch, Reini's talk on p2, Karen's TPF update, Matt S Trout on automation, and then Stevan's talk about the state of Perl. Despite calling Perl "the Detroit of scripting languages," he made no mention of RoboCop, nor did he liken anyone to The Old Man, Clarence Boddicker, or Dick Jones. It was a good talk, but I was understandably let down.

For dinner, the whole conference (or much of it) headed out for barbecue. The barbecue was made by The Salt Lick, and while good, it did not beat out Rudy's. Dinner ended with "game night," and I ran a hacked up D&D game, set in what I'm calling Trekpocalypse. More on that in another entry.

Wednesday started with my last trip to Torchy's. It was good.

We took our time getting to the conference, and then I killed a bunch of time in the hallway track. The first talk I got to was Sawyer's talk about async. The talk was good, and one to learn from, as a speaker. I think he did a great job keeping people involved, especially with a hunk of "spot the error" slides in the middle. By the end, he had built a program that did a bunch of parallel queries against IMDB, and then showed results to the audience. He spent a fair hunk of time just commenting on the character actors he dug up, and this went over well, as he'd paid for the digression with strong technical content until that point. I was pleased!

After that, I was obliged to go to Andrew Rodland's talk on StatsD, as I knew that I needed to start using it at work. It was useful, and I've been graphing more stuff, now, which has also been cool. In fact, this talk led to me finding a bug in py-statsd, which has now been fixed. Woo!

After that, it was time for me to give my talk on perl 5. I think it went quite well! I had been worried about it, since I was editing and reworking it until the last day. I was happy with it, though, and will not be making major changes before giving it at OSCON. I look forward to seeing the attendee feedback, once it's in. After that, it was a lot of post-talk chat in the hallways, then Peter Martini's talk about his work on adding subroutine signatures to Perl. Everyone was excited, and rightly so.

After that, Matt S Trout and the lightning talks brought things to a conclusion, and I spent some time saying goodbyes. With everyone heading his or her own way, Tom Sibley and I decided that our way was "toward cocktails." I'd identified a place on Yelp that looked good, the Firehouse Lounge, and we headed down. The drinks were okay, but it was amazingly loud, so we headed out. Actually, I should qualify the drink review: my drink was pretty good. I ordered the drinks for both of us, yelling the order across the bar, and I ordered the wrong thing for Tom, forgetting which one he'd settled on. I was mortified. Tom graciously let it go.

We hadn't eaten yet, and we were still hoping for some more drinks, so I consulted Yelp and it suggested Bar Congress, which was probably the best food and drink advice I've ever gotten from a website. I wrote a review which I won't repeat here, but: I would go there again in a heartbeat. If I get back to Austin, for some reason, I will make a point of getting there.

After dinner, we headed back to the inn and I turned in. I'd have to get up early for my flight, so I packed and went right to sleep. In the morning, I used the "Catch a Cab in Austin" iOS app that people at YAPC had been talking about. It worked, and I got to the airport with plenty of time, and my flights home were uneventful. As always, I'd had a great time, but I was ready to get home to my family.

Next year's YAPC::NA will be in Orlando, and although it won't be easy to be as good as this year's, I'm pretty sure it will do just fine.

print-ing to UDP sockets: not so good (body)

by rjbs, created 2013-06-27 20:21

We've been rolling out more and more metrics at work using Graphite and StatsD. I am in heaven. I'm not very good at doing data analysis, but fortunately there are some very, very obvious things I can pick out from our current visualizations, and I'm finding all kinds of things to improve based on these.

I'm using Net::Statsd::Client, as it looked convenient. Under the hood, for now, it uses Etsy::StatsD. I found a very confusing bug and when I told the author of Net::Statsd::Client, he confirmed that he'd seen it. I've worked out the details, and it has made me grumpy! The moral of this story will be: don't use print to send to a UDP socket. (I doubt I'll print to a socket again, after this.)

As a rule, I was sending very simple measurements to StatsD. They'd look like this (a counter increment in StatsD's line protocol; the metric name here is a placeholder):

  some.counter:1|c

This means: increment the counter with the given name.

StatsD listens for UDP. In theory, you can send a bunch of these in one datagram, and they're separated by newlines. In practice, though, I was sending exactly one measurement per datagram. Sometimes, though, the server was receiving mangled data. The metric names would be wrong, or the whole string would be mangled. I fired up a network sniffer and saw things like this:

  er:1|c
  some.counter:1|c
  some.counter:1|c

Okay, it's a bunch of +1 operations run together… but what's up with the first one being truncated? And, more importantly, what was sending them in one datagram!? A review of the StatsD libraries will show that they don't do any buffering. All that Etsy::StatsD does is open a UDP socket and print to it. You can send multiple metrics at once, if you want, but you have to go out of your way to do it, and I wasn't.

Further, sockets don't buffer their output in Perl! When you connect to a socket, it's set to auto-flush. Why was there buffering happening? Andrew Rodland, author of Net::Statsd::Client said that it only happened while the StatsD server was local and unavailable. Immediately, things fell into place.

If you're running Linux, you can try this fun experiment. First, run this server:

  use IO::Socket::INET;

  my $sock = IO::Socket::INET->new(
    Proto      => 'udp',
    LocalHost  => 0,
    LocalPort  => 3030,
    MultiHomed => 1,
  );

  while (1) {
    my $data;
    my $addr = $sock->recv($data, 1024);

    print "<<$data>>\n";
  }

Then, run this client:

  use IO::Socket::INET;
  use Time::HiRes qw(sleep);

  my $sock = IO::Socket::INET->new(
    Proto    => 'udp',
    PeerHost => 'localhost',
    PeerPort => 3030,
  );

  for (1 .. 20) {
    print $sock "1234567890";
    sleep 0.5;
  }
You'll see the server print out the datagrams it's receiving. It all looks good.

If you start the server after the client, though, or kill it and restart it during the client's run, you'll see the server receive datagrams with the number sequence more than once. This is bad enough. My belief, which I haven't put to the test, is that when the buffer to send is full, data is lost from the left. Even if your data were capable of being safely concatenated, it wouldn't be safe.

This is, at least in part, a product of the fact that Linux tries much harder to deliver UDP datagrams to the local interface. They are, to some extent, guaranteed. I'm not yet sure whether the behavior of print in Perl with such a socket is a bug, or merely a horrible behavior emerging from the circumstances around it. Fortunately, no matter which, it's easy to avoid: just replace print with send:

  send $sock, "0123456789", 0;

With Etsy::StatsD patched to do that, my problems have gone away.
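The same datagram-boundary behavior is easy to demonstrate outside Perl, too. Here's a small Python sketch (the metric names are invented) showing that with send on a UDP socket, one call is one datagram, and the receiver sees exactly the same boundaries:

```python
import socket

# A bound receiver; the OS picks a free port on the loopback interface.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5)
port = recv_sock.getsockname()[1]

# A connected sender, as a StatsD client would use.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.connect(("127.0.0.1", port))

# Each send() call is exactly one datagram; nothing buffers or coalesces.
send_sock.send(b"some.counter:1|c")
send_sock.send(b"other.counter:1|c")

first = recv_sock.recv(1024)   # b'some.counter:1|c'
second = recv_sock.recv(1024)  # b'other.counter:1|c'
```

The point is that the datagram boundaries, not any separator characters, are what keep the measurements apart; anything that quietly buffers and concatenates writes destroys that.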

the 2013 Perl QA Hackathon in Lancaster (body)

by rjbs, created 2013-04-15 19:34
last modified 2014-03-18 17:46

I got home from Lancaster this morning. I'd been there for the sixth annual Perl QA Hackathon. As usual, it was a success and made me feel pretty productive. Here's an account of most of the things I did:


Meetings!

There were many discussant-hours spent in room C-7C hashing out a bunch of things that needed hashing out. Although I was very interested in the outcome, and had some strong opinions about one or two things, I didn't want to get too tangled up in the discussion, so I split my attention between (mostly) coding and (a little) joining in. Others will surely write up the decisions of these meetings, so I won't, but in the end I felt that it was useful for me to be there and that I wasn't unhappy with any of the resolutions.


PAUSE!

At my first two QA Hackathons, I worked mostly on the CPAN Testers Metabase. Since then, my most common recurring project has been PAUSE. More specifically, I've mostly worked on the indexer. That's the program that looks at new files to decide whether it should put their contents into the 02packages file used by CPAN clients to install packages by name. It's a really important program, and I remain very interested in improving its maintainability. Once again, though, I wasn't just adding tests, but also doing some tweaks to how the PAUSE indexer works.

A few months ago at the NY Perl Hackathon, David Golden and I got to work fixing letter case behavior in PAUSE. The problem was that PAUSE treated package permissions case-insensitively even though not all supported Perl platforms would. The most commonly discussed problem was the conflict between File::stat, a core module, and File::Stat, a non-core module. If a user ran cpan File::Stat on Win32, he or she would end up with File/Stat.pm installed, and use File::stat would pick that up instead of the core module.

We'd done about three quarters of the work for this in New York, but I didn't send a pull request. I knew that we had an untested case: what happens when someone who owns Foo::Bar now uploads Foo::bar? We decided that it would replace the old entry. I wrote tests, which showed it didn't work, then David made it work. We also wrote a few more tests for other edge cases, and were pleased to find them all handled the way we wanted.

We also discussed problems with the non-uniqueness of distribution names on the CPAN. In short, a non-maintainer of Text::Soundex should not be able to upload a dist called Text-Soundex and have it indexed. I implemented this, which ended up being a bit trickier to do than I expected, although the code changes weren't too bad. It was just getting there that took time. Unfortunately, over 1,000 distributions on the CPAN have names that don't match a contained package, so those had to be grandfathered in. I may send out a "consider changing your dist name" email, but I haven't decided. It isn't really such a big deal, but it nags at me.

I also did some work on the code that generates 03modlist.data, the registered module list. The future of this feature is unclear, and will have to get sorted out soon, probably over the next month or two.


Pod::Checker!

The thing I came to the hackathon knowing that I had to work on was Pod::Checker. In fact, most of the PAUSE work I did had to wait until I finished dealing with Pod::Checker. I was really not looking forward to the work, but I didn't want it to continue to languish, undone. I'm glad I started with it, because it only took about a day, and finishing it made me feel excited for the rest of my time: I'd be able to work on other things!

In 2011, there was a Perl project in the Google Summer of Code whose goal was to replace all core uses of Pod::Parser with Pod::Simple. Pod::Html was overhauled, but Pod::Usage and Pod::Checker weren't completed. Pod::Checker was mostly done, but not quite, and unfortunately languished in that state for some time. Since I'm keen to get Pod::Parser out of the core distribution, and since I know that nobody really wants to do this work, I decided it would be a good task to force myself to do while stuck in a room with nothing but my laptop and a bunch of sympathetic ears.

There were several kinds of Pod::Checker checks that needed to be implemented, reimplemented, or moved to Pod::Simple itself:

  • tests needed updating for the new mismatched =item type check from Pod::Simple
  • the totally broken "unescaped <" warning had to go
  • a warning for "no closing =back" got put into Pod::Simple, eliminating use of Pod-Parser's Pod::List
  • warnings for ambiguous constructs in L<> like leading spaces, unquoted slashes, and so on
  • the check for internal hyperlink resolution had to be reimplemented

...and a few other little things, like hash randomization bugs. I've filed a pull request with Pod-Simple for the patches that would go there, and my branch of Pod-Checker based on Marc Green's original work is also now on GitHub, waiting to get a trial release.

Once this is done, we'll get Pod::Usage converted, then we're done with everything but the actual warnings and subsequent removals!

Other Stuff!

I made a new release of CPAN::Mini, closing quite a few very old tickets. I also went ahead and made --remote optional. Maybe in the future, I might make --local optional, too! The biggest outstanding question is whether I will add any alternate configuration filename and location for Win32, rather than ~/.minicpanrc. I'm still conflicted.

I applied some patches to Router::Dumb, exposing (I think) an annoying missing behavior in Test::Deep. I'd like to figure out how to fix that Test::Deep problem soon, but it didn't happen this weekend.

I made a few other releases, including a release to Dist::Zilla that will make it always upload to PAUSE using HTTPS. I decided not to try to tackle anything bigger at the hackathon.

Free-Floating Helping!

This year, I think I spent less time than ever looking at other people's code to be a fresh set of eyes. On the other hand, I spent more time answering questions related to coordinating changes with blead and other future releases. "Is this a blocker?" was asked quite a few times as the rest of the room found some interesting bugs in bleadperl. "Shall I commit this to 5.19.0?" came up often, too.

The End!

I'm hoping to get some more work done on the Pod and PAUSE fronts, hopefully very soon, but maybe at YAPC. I'm looking forward to seeing the fruits of all the labors performed by the other hackers at the hackathon, too. (I started, here, to list "especially abc, xyz, …" but the list got far too long. Lots of good stuff is coming!) I also clearly found plenty of things I'd like to do, but not just yet. In other words, I'm ready for next year already!

I might write up some of the social bits of the trip a bit later. The short version of that is that I had a great time, enjoyed seeing old friends and making some new ones, and ate four servings of black pudding.

DKIM, Email::Simple, and heartache (body)

by rjbs, created 2013-04-09 14:06
last modified 2013-04-09 14:11

Header folding

These email headers are all supposed to be equivalent:

1 Foo: bar baz

2 Foo:  bar baz

3 Foo: bar
      baz

4 Foo: bar
   baz
It's part of the "folding white space" thing that is just one of the reasons that email is such an irritating format. In any of these cases, when your program has received the message and you ask, "What's in the Foo header?" the answer should generally be "bar baz" and not anything else.

Representing headers in memory

Email::Simple uses a pretty simple mechanism for representing headers: a list of pairs. When a message is built in memory, its header object stores a list of name/value pairs. Since the above forms are all equivalent, they are reduced to the first form when parsed. If you read in form 4, above, your header will store:

@pair = (Foo => 'bar baz')
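The reduction to that first form is just RFC 5322 unfolding. A rough sketch in Python (not Email::Simple's actual code): split on the first colon, replace each line break plus its leading white space with a single space, and trim the ends.

```python
import re

def read_header(raw):
    """Reduce any of the equivalent folded forms to a (name, value) pair.

    Per RFC 5322, a line break followed by white space is "folding white
    space" and stands for a single space in the logical header line.
    """
    name, _, value = raw.partition(":")
    value = re.sub(r"\r?\n[ \t]+", " ", value)  # unfold continuation lines
    return name.strip(), value.strip()          # trim surrounding space

print(read_header("Foo: bar baz"))     # ('Foo', 'bar baz')
print(read_header("Foo:  bar baz"))    # ('Foo', 'bar baz')
print(read_header("Foo: bar\n  baz"))  # ('Foo', 'bar baz')
```

All the folded variants collapse to the same pair, which is exactly the normalization Email::Simple performs at parse time.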


DKIM

DKIM is a mechanism for digitally signing parts of an email message to provide reliable evidence that some sender has taken responsibility for the post. Parts of the message are digested and signed by a private key. The public key can be found in DNS, and the message's recipient can verify the signature.

Here's a DKIM signature from a message I got from Twitter recently:

1 DKIM-Signature: v=1; a=rsa-sha1; c=simple/simple; d=twitter.com; s=dkim;
2     t=1365174665; i=@twitter.com; bh=LmB+XG63ICs3ubpceGdSzYEPG4o=;
3     h=Date:From:Reply-To:To:Subject:Mime-Version:Content-Type;
4     b=BRBSHqSmznpsZOEC1tbOtGdZu+YX20jL9NiEIAsepmOaazRCpzTYVfUMC9oEoomok
5      /X0HVHkBgrkYfp9sWTGcCrDHr+7zntfykKwDWNrgTx9+t64wTrASvcBlUD4lGxTw1T
6      +JPJtdI17YtTg7pvpsHYYOMmbNZCLNSFTpClo0RQ=

Sometimes messages change in flight. For example, "Received" headers are just about always added. For that reason, only some of the headers are signed. If the whole header were signed, the signature would be guaranteed to fail once somebody added a Received header (or did lots of other things). The "h" attribute in the signature says which headers were signed.

There are other ways to break a header. For example, the DKIM-Signature above references the Reply-To header. Twitter might set a header to:

To: "Twitter Customer Support"

...and then later someone emits the header as:

To: "Twitter Customer
  Support"

There's a change in the representation of the header, even if not the effective value. Does it matter? Maybe.

The "c" attribute in the signature says how values were "canonicalized" before signing. In relaxed canonicalization, changes to things like whitespace won't affect the signature. In simple canonicalization, they will. If the header is re-wrapped, the signature will be broken.

Simple canonicalization is the default canonicalization.
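For contrast, here's a rough sketch of the "relaxed" header canonicalization from RFC 6376 (illustrative, not code from any particular DKIM library), which is precisely the variant that shrugs off white space changes:

```python
import re

def relaxed_header(name, value):
    # RFC 6376 relaxed header canonicalization: lowercase the field name,
    # unfold the value, collapse every run of white space to one space,
    # and strip white space from the ends before hashing.
    value = re.sub(r"\r?\n[ \t]+", " ", value)    # unfold
    value = re.sub(r"[ \t]+", " ", value).strip() # collapse and trim
    return f"{name.lower().strip()}:{value}\r\n"

# Re-folding and extra white space no longer change the canonical form:
print(relaxed_header("Foo", "bar baz"))      # 'foo:bar baz\r\n'
print(relaxed_header("Foo", " bar\n\t baz")) # 'foo:bar baz\r\n'
```

Under relaxed canonicalization, both spellings hash identically, so whitespace rewrites in transit don't break the signature.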

Finally, the DMARC standard is providing senders with a way to assert that DKIM signatures are a reliable test of a message's legitimacy. If a DKIM signature is broken, the message is not trusted. Breaking DKIM matters now more than it did before, because DKIM is taken more seriously.

Email::Simple and DKIM

Email::Simple stores the normalized form. When round-tripping a message, unless the header is folded exactly how Email::Simple would fold it, Email::Simple will re-fold the header. In other words, Email::Simple breaks DKIM signatures pretty often, even in the simplest pass-through program that doesn't try to affect the message's content at all.

It amused and frustrated me, in fact, that with the Twitter message I posted above, the only change effected in the message was to the DKIM-Signature itself. The signature was rewritten to:

1 DKIM-Signature: v=1; a=rsa-sha1; c=simple/simple; d=twitter.com; s=dkim;
2  t=1365174665; i=@twitter.com; bh=LmB+XG63ICs3ubpceGdSzYEPG4o=;
3  h=Date:From:Reply-To:To:Subject:Mime-Version:Content-Type;
4  b=BRBSHqSmznpsZOEC1tbOtGdZu+YX20jL9NiEIAsepmOaazRCpzTYVfUMC9oEoomok
5  /X0HVHkBgrkYfp9sWTGcCrDHr+7zntfykKwDWNrgTx9+t64wTrASvcBlUD4lGxTw1T
6  +JPJtdI17YtTg7pvpsHYYOMmbNZCLNSFTpClo0RQ=

The only difference? Five tabs replaced with spaces and the omission of two extra spaces. This broke the signature.
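It's easy to see why: under simple canonicalization, the signer and verifier hash the raw header bytes, so a whitespace-only rewrite yields a different digest. A toy illustration (sha1 over made-up header bytes, not a real DKIM computation):

```python
import hashlib

# Invented header bytes standing in for a signed header line.
original = b"DKIM-Signature: v=1; a=rsa-sha1; c=simple/simple;\r\n\tt=1365174665;\r\n"
refolded = original.replace(b"\t", b" ")  # tabs swapped for spaces in flight

# Simple canonicalization hashes the bytes as-is, so even this
# whitespace-only rewrite produces a different digest.
print(hashlib.sha1(original).hexdigest() == hashlib.sha1(refolded).hexdigest())
# False
```

The signature was computed over the original bytes, so the verifier's recomputed hash can no longer match.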

Fixing Email::Simple

Email::Simple 2.200_01 is now on the CPAN, and it fixes this problem. When it reads a message in from a string, it keeps track of the exact lines that were used, and it will emit them again, unless the header is deleted or changed. If a header is set from within the program, it will be folded however Email::Simple likes. If you need to set a header field within your program and specify how it will be folded, you'll need to use another library.

If you're using Email::Simple (or Email::MIME) for forwarding or delivering mail (and that includes using Email::Filter), you should test with this new trial release right now. It will become a stable release soon. Probably as soon as I'm back from the QA Hackathon next week.

money into code: Perl 5 code bounties (body)

by rjbs, created 2013-02-24 22:44
tagged with: @markup:md journal perl

If there's money earmarked to be spent improving Perl 5, one seemingly obvious thing to do is to try to use it directly to improve perl. In other words, the mission is to "turn money into code." The most successful expression of this strategy, I think, has been in Nick and Dave's grants. On the other hand, it's an expression that succeeded because of very specific and felicitous circumstances. Dave and Nick were both well-established, trusted participants in perl's development, known as experts and conscientious workers. They were given, by and large, free rein to pick the topics on which they would work. The foundation trusted them to pick things of value, though with a means for TPF to call shenanigans if needed. That trust has been well-placed, in my opinion.

For whom else, though, would the foundation be willing to do this? I began to write "precious few" before deciding that it's probably more accurate to say "nobody." Or, rather, nobody who has any chance of applying. What we need is a confirmed expert at working on the core, with a broad knowledge of many of its pieces, including its build options, portability concerns, and so on. Then there would be the question of picking topics for work: they'd have to be things on which the candidate could work, since no one seems quite comfortable with every single piece of the codebase. They'd also have to be things with value, since they'd have to get approval from grant managers, and possibly code reviewers.

I think I speak for most, if not all, of the regular contributors when I say that we'd love to see such a person apply for and receive a grant to work on core improvements. It just doesn't seem likely, in part because it hasn't already happened.

If we can't fund the work of a highly skilled, self-starting factotum, what's next? One common refrain goes something like, "Now that TPF has such-and-such quantity of money, they should use it to get such-and-such feature added." This is something that we haven't really tried doing in recent memory, in part because it's very unclear how we'd do it.

First, we'd need to make a list of things we want done. That part is pretty easy, especially if we don't scruple to list even the very difficult things like "untangle PerlIO" or "replace string exceptions with objects without breaking things." In a sense, we have had this for some time in the perltodo document, but its placement does have a bit of "beware of the leopard" to it.

With a list of tasks that we'd like to see done, the next step would be to post bounties or request bids. That is, either we'd attach a prize to seeing the task completed or we'd let each applicant name his or her price. Either way, we'd end up (assuming any interest) with a list matching up people, tasks, and dollar amounts. Through some selection criteria, applications for grants would be approved and work would begin.

In fact, we already have a mechanism for some of this in the grants committee of the Perl Foundation. Four times a year, they post a public call for grant applications. Lately, there have been precious few applications, though. It's not clear why this is, but one reason that's been cited in the past is that the amount paid for the grants is not sufficient to warrant the time required, at least for many. The maximum value paid for a grant from the committee is $3,000 by their rules (although the most recent call for proposals set a $2,000 cap). This is about a week and a half at $50 per hour, the rate used for Dave Mitchell and Nicholas Clark's grants.

Being able to accept a $3,000 grant and work on it steadily probably means that an applicant must already be unemployed, working as a contractor, or willing and able to take off the stretch of time from one's day job. With those conditions met, the next question is whether any of the problems we'd really like to see sorted out can be done in ten days of work. It seems quite unlikely, from here.

What we see, instead, from most grants is that the work gets performed in off hours: evenings and weekends, while the grantee continues on his or her day job. The grant money isn't necessary income, but a bonus, and is now at odds with the grantee's enjoyment of leisure time — hardly a great motivator.

I think that in order to achieve results on any task that can't already be done in someone's leisure time, any bounty or grant is probably going to have to be for a much greater amount, sufficient to become the grantee's livelihood or chief income for the duration. Too many grants have lingered on for too long, often fizzling out, because they were hobby projects with an optional prize at the end. For the grantee to abandon the grant cost nothing but face.

The next problem is keeping the work moving. This isn't necessarily a question of keeping the grantee on task. After all, we've tried to make it possible for this work to be the primary employment of the grantee, so there should be some built-in incentive. The problem here is removing roadblocks to the work getting done and getting accepted. That means that the programmer needs access to code review to answer the questions, "why doesn't this work?" and "does this changeset look good?"

That last one is particularly important because at some point any work being done will have to land on perl5.git's blead branch, and if it's problematic, it could be very problematic. Several years ago, for example, a grant was paid out to have perl produce an AST. The work was done and the grant paid, but the code was rejected for inclusion in the core for numerous reasons that would not have occurred had there been regular code review.

Code review and expert advice are insufficiently available on the perl5-porters mailing list. The reasons for this include, but are not limited to, the strictly low number of experts on the core and the low availability of those experts who exist. Some potential reviewers have gone so far as to say that providing code review can be depressing, because they have sometimes spent a lot of time trying to transfer knowledge to would-be contributors, only to have the contributors later go away, never to contribute again. (This, after both successful and failed attempts to get changes into the core. Difficulty in retaining contributors is a problem for another day.)

If availability of code review is critical to making the money-to-code machine work, how do we improve that availability? One option is to pay reviewers, but I think this is no good. It means that potential aid might be withheld because the potential reviewer knows that someone else could be getting paid to do what he or she would be doing for free. In other words: if somebody is supposed to be getting paid to do this, why should I do it for free? Further: if I'm getting paid to perform review for this set of changes, why should I do it for free for that set of changes?

This concern applies, really, to the grant itself. Why should anyone contribute to the core "for fun" when others are doing it "for profit"? I don't quite think that this argument is a reason to avoid cash-for-code altogether, but I do think it's a reason to limit grant-funded work to problems that are clearly understood to be very difficult and long-languishing. Dave's work on (??{}) is probably the paragon of this sort of problem.

Of course, fixing (??{}) is a big task, plausibly on the order of "untangle PerlIO" or "replace string exceptions with objects without breaking things." It took time, code review, design review, and more time. It was possible, in part, because Dave was on a fairly open-ended grant and could work on it until it was finished. If coding grants are given based on bids or bounties, and if the grant is meant to be the primary work of the grantee, the time in which they must be concluded is much more strictly bounded.

For me, on the topic of turning money into code, I'm left with a few thoughts:

  1. Grants like Dave's and Nick's amount in many practical ways to simple employment contracts, and work well because of the freedom afforded by that arrangement. Finding more suitable people willing to take on such a commission would be a boon. Of course, this might mean minting (read: mentoring) new experts in the perl internals, and so far, this seems to be the real mystery of the age.

  2. Grants for thankless, short tasks likely to take two weeks or less might be suitable for bounties. One example from the top of my head would be "convert Pod::Usage to Pod::Simple." These would (presumably) require less code review than longer-scale tasks. If the best way to try to turn money into code is to pay for short, awful, thankless tasks that will benefit future development, I think the best thing that can be done is for such tasks to be clearly listed and described somewhere, possibly rt.perl.org, and to point at them from any future call for grant proposals. I think the foundation should probably try to offer $3000 (or more) grants again, to try its best to make grants "real income" rather than a bonus for work in spare time.

  3. Code review needs to become more universally available — whether it's wanted or not, in some cases. It probably can't be brought about with the direct application of money, and I don't see how money can indirectly help just yet.

naming and numbering perl (body)

by rjbs, created 2013-02-19 22:21
tagged with: @markup:md journal perl

Matt S Trout wrote a very reasonable suggestion to brand the current Perl 5 implementation as Pumpkin Perl. The gist is something like, "take the emphasis off of 5, which sounds like one less than 6, and put it on the thing itself: the really nice language, all plump and ready to be used in a pie." I can get behind that.

The part that starts to rankle me is this:

So what if next year's release, instead of saying

  perl revision 5, version 20

it said

  pumpkin perl, version 20

It's not that I think Matt's proposal makes dropping the five its first, key point. It's a bit more: "Look, everybody knows this is five. Focus on the thing that does keep changing and marking nice improvements!" (I will state for the record that I do not want to remove the five from many places, although moving it around a little might happen.) The problem is the huge influx of expectation that this really is about dropping the five and using it as some sort of breaking point with history. This frustrates me for a few reasons.

There's an occasionally repeated refrain that "if only we could break backward compatibility," Perl 5 would surge forward with new innovations. "We'd finally throw off the yoke of some feature the speaker doesn't use and be free!" The problem, of course, is that somebody else uses that feature. Pretty often the speaker is his own somebody else, and just doesn't realize it. Prototypes? Test::More. Tied variables? DBI and Config. AUTOLOAD? CPAN and Encode. Typeglobs? Much of the Net namespace (not to mention anything that exports). Other times, the feature is old and goofy, but not really in the way of anything.

So there's one blocker to breaking backward compatibility: you'll make it a nice language in which you'll get to reimplement all the stuff you love about using the language. Whoops!

That's not the most important blocker, either. The more important blocker is that nobody is actually coming forward and saying, "If we can break X, we can get a big improvement to Y." Maybe this is because there is a feeling that backcompat is so deeply entrenched, and extends so pervasively to the smallest foibles of the language, that there is no point. I think this would be a shame, because I can pretty confidently say that we can break backward compatibility for the right win. How much, for what? I don't know. We'd need to see an offer, and then a patch.

Of course, there are limits. Perl is used to power multi-billion-dollar businesses. This constrains its paths forward. It won't cease to exist, nor will it be abandoned, but it can't break the code bases of those businesses. Also, note that I'm speaking in the plural. If there were one massive enterprise that owned Perl and drove it forward, there would be a very clear set of guidelines for what could break: anything but the code making billions of dollars for the sponsors. Instead, there are a bunch of enterprises, and upgrading them all in sync and keeping all their code working forever is not a simple matter.

This is the problem with success. As a language grows successful, it loses agility. That's one of Perl 6's strengths: it hasn't yet become a big success, so it can change anything it wants whenever a design flaw is found. If we want to regain that kind of agility, all we have to do is agree to give up our success.

That's what forking a project is about: you get a whole bunch of code (warts and all) for free, without the burden of success. Then again, maybe after a brief and extremely liberating romp through the free prairies of obscurity, you can try to steal the success of your ancestor. Remember: you'll be giving up that agility again.

This is where I get back to liking Pumpkin Perl.

If you want to break backward compatibility, you can sketch out your plan and say, "Hey, I figured out that if we drop support for reset, we can get a 4% speedup to local!" This will result in a response of "no, never!" or "yes, surely!" or "hm, show us a patch?" If it goes in, great. There's a deprecation period to ease everybody off of reset, and then local gets faster. If it doesn't, what do you do?

Either you grin and bear it, or you go work on another Perl. The Perl you work on doesn't care about reset. Heck, it doesn't care about dump, either, so you can save another 2% on something there. You won't be working on Pumpkin Perl, of course, but on Antelope Perl, or Hubbard Perl, or Kurila Perl.

Are these forks viable? Of course. They are viable as long as they have people using them, just like Pumpkin Perl. Perl is free software, so anybody's fork can continue to incorporate changes from the mainline, while it's possible, and changes determined to be massive improvements can be brought back to Pumpkin Perl after a proving period. GitHub showed us all that forking is good, because a fork is just a branch. That works here, too, and naming "the" perl5 is a way of saying, "This is one branch: the most conservative, commonly relied-upon one." The distinction it creates from Perl 6 is, to me, a minor side benefit.

Matt's posts have all been very clearly trying to avoid talking about forking perl and breaking backcompat, so I hope he isn't bothered to see me going directly to those two topics in this post. A lot of other responses went there, though, and I think those topics really need to be addressed.

If "Pumpkin Perl" is going to be a thing, it's going to be a very low-key change: we'll call the thing at perl5.git.perl.org "pumpkin perl," and it will answer to use 5.x.y, and it will still say "revision 5, version X," more or less. The freedom we get is a freedom of expression, granted by having a clearer way to refer to one branch of perl as an equal amongst others. By giving the first fork a name, we make room for future forks to exist and have their own names, without having to "break" this one.

spending somebody else's money… for Perl! (body)

by rjbs, created 2013-02-17 22:21
last modified 2013-02-18 09:54
tagged with: @markup:md journal perl

Over the past few years, the Perl Foundation received a bunch of nice big donations to be used for Perl 5. Some of this money has been used to pay Dave Mitchell and Nicholas Clark to work on difficult problems in the Perl 5 core. This has, in my opinion, been money well spent. Dave and Nick know the Perl core very well, and they've worked seemingly tirelessly to make progress where progress is not easy, and to fix things that nobody else wanted to touch.

There are problems with this kind of spending, though. One of them is that Dave and Nick are human resources, and not permanent assets. We can keep spending money on them for as long as they let us, and it will almost certainly keep being a good investment, but it can't go on forever. Another problem is that the rate at which we can fund Nick and Dave is limited, and we're not going to burn through all the money any time soon doing that.

Do we want to be in a rush to spend all that money? Well, maybe not a big rush, and maybe not all of it, but I think it sends a bad signal to donors when we don't spend the money we're given. Specifically, it sends the signal that we don't have any need for money, because we're not even really using what we have.

Then again, maybe we don't. Maybe the only things we should be spending money on are the YAPCs, legal issues, and some service hosting. There's an argument to be made for that, too. It's been said that when TPF spends money on some coding, it indicates that there are multiple strata of people in the Perl community: those whose work is blessed by "the powers that be" and those whose work isn't. Does this create a real disincentive for "outsiders" to contribute?

It's a big complicated question, all of which boils down to something like, "What ought TPF to do?" Maybe the answer is, "just what it's doing now." If that's the case, though, I want to feel convinced of it. Right now, I'm not.

I think I'm going to write down a bunch of ideas for how TPF could spend money other than conferences and paying for Dave and Nick. Implicit or explicit in these ideas will be my internal list of problems that seem worth solving but without obvious solutions that can be carried out with just some free time and good intentions.

The Great Infocom Replay: Deadline (body)

by rjbs, created 2013-02-16 17:24
last modified 2013-09-15 07:17

Last weekend, I went to Las Vegas for a one-day trip to attend a party in honor of my father's many years coaching rugby. It was a great event, but the travel was, as usual, not a big bowl of cherries. I decided to make the most of it, though, and play Deadline on the plane.

I loaded the story file into Frotz on my iPad and put the manual into GoodReader. Using my iPad to play was a great plan, once I also started using my wireless keyboard (presumably in violation of some sort of regulation). Using my laptop would've been impossible in that cramped seat, but the iPad worked a charm, apart from occasional glitches in Frotz.

I was looking forward to Deadline, because it was the first Infocom game I'd be "replaying" that I hadn't actually played before. I had probably started it up and poked around for a few minutes, but I hadn't played the game in earnest. This was my chance to pretend that I was living in 1982, playing a brand new Infocom game ... in flight, on a touchscreen tablet. Well, the illusion of novelty wasn't the important thing.

Spoilers follow.

I enjoy a good murder mystery, and I decided I'd really try to solve this one. The Deadline feelies include a number of witness interviews. From those, I produced a timeline of events, noting what was probably fact and what was only testimony. Arriving at the scene of the crime, I made a survey of the grounds (once again cursing the asymmetry of the map exits) and figured out which windows led to which part of the estate. This would prove useful later.

I intercepted mail, rifled through medicine cabinets, conducted thorough interviews, snooped on phone calls, and tried to figure out where characters were sneaking off to. I felt like a real detective… almost. There were problems. For one, it was clear that I'd have to follow some characters around to see what they were doing, but it wasn't clear whom to follow. It meant that I had to do that most boring of adventure game chores: start over, over and over and over.

I didn't mind all that much, actually, because in many ways it reminded me of Suspended, a game I love. As I played, I expanded my timeline of events to include events that would happen during the day. Nine fifteen, a phone call. Ten o'clock, the mail arrives. Eleven thirty, Angus gets angry about holes in his garden. Noon, a reading of the will. As I built up more of this timeline, by playing and failing over and over, I began to find the critical path to being everywhere I needed to be. Every time I played, I could do every required action to learn every important fact. This was satisfying.

Learning these facts, on the other hand, was often very dissatisfying. I was often correct in my assumptions about what was going on. I often knew what I had to do. I just didn't know how to do it. It was a "guess the verb" puzzle on a very frustrating level.

  dig in dirt
  Although everything is coming up roses, you haven't found anything unusual except for a few pieces of a hard substance which fall back to the ground.

  look in holes
  There are two holes here, each about two inches by four inches. They are at least three inches deep and the soil is compacted around them.

  look near holes
  Ouch! You cut your finger on a sharp edge as you dig. You search carefully in the dirt, now that you are sure something is there, and pull up a piece of porcelain, covered with dirt and dried mud.

There were a number of drugs around the house, and many of them were labeled with warnings about drug interactions. They also had the names of the dispensing pharmacy. I couldn't call the pharmacist to ask, "How would Allergone interact with Ebullion?" I couldn't even ask the coroner to check for these drugs in the victim's system:

  analyze Mr. Robner for loblo
  Duffy appears in an instant. "Well, I might be able to analyze the Mr. Robner, but you don't even have it with you!" With that, he discreetly leaves.

Well, Duffy, ask at the morgue!

Later, I'd find traces of a drug on a piece of ceramic. Even if I'd already had that drug in particular analyzed, the lab would just say "well, it's not a common medication, anyway."

Later, I would resort to InvisiClues. This was almost as frustrating as trying to piece things together myself, at least when it came to figuring out why on earth George would stand around outside doing nothing for an hour. The clue urged me to do something, but I couldn't guess the verb.

In the end, I played the game until I knew who had killed Mr. Robner, and why, and how. It's possible I could have solved the game, had I stuck with it for another few hours. I realize that a single five hour flight is not how one might usually expect to play Deadline. Then again, it did get five hours of my time, which seemed like quite a lot. More importantly, giving it more time wasn't going to lessen my frustration, but only increase it.

I think that an IF game could be a very good way to present a murder mystery, and I hope to find another one that I love. This one, though, is not one I'll recommend to friends. Maybe Witness...

the great new Email::Sender (body)

by rjbs, created 2013-02-07 09:33
last modified 2013-02-07 11:56
Yesterday I released Email::Sender 1.300003. This is a pretty important release!

First, it is the first production release of Email::Sender to use Moo instead of Moose. This doesn't affect my code much, because I use Moose all over the place already. On the other hand, your code might speed right up. On my laptop, for example, the test suite now runs in 20% of the time it used to. I'm hoping this helps people feel freer to move to Email::Sender, which really does make writing email-related code easier than previous email-sending libraries did.

So, I'm delighted to have the work of Justin Hunter and Christian Walde in place, making the Moo-ification possible, not to mention that of Dagfinn Ilmari Mannsåker, Frew Schmidt, and Matt S Trout on porting Throwable to Moo, which was essential to making the rest of the conversion possible. Thanks!

Like I said, though, the Moo-ification doesn't change most of my code very much (yet?), but I'm still very excited. Why is that?

Right now, I've got a large stack of "technical debt payment" tasks scheduled at work. Many of these are quite old, and many are in the form "we switched 90% of our code to some new system, but 10% remains on the old system; convert the final 10% so we can reduce our total code complexity." We talk about this at the office as "killing snowflakes." Look, that subsystem is unique and special and not like anything else! It is a delicate, beautiful snowflake! Kill it!

Amongst those snowflakes being killed is our use of Email::Send. (Note the lack of the -er. This is the precursor to Email::Sender.) We don't actually use Email::Send, exactly, because it's so broken. Instead, we use an internal fork, the design of which strongly influenced the eventual creation of Email::Sender. It really had to go!

Unfortunately, it wasn't quite possible yet.

One of the great strengths of Email::Sender::Simple is that you can redirect all mail by setting an environment variable. This is very useful in tests, where you can say something like:



$ENV{EMAIL_SENDER_TRANSPORT} = 'Test';

my @deliveries = Email::Sender::Simple->default_transport->deliveries;

...and then inspect what mail would have been sent. Of course, doing this via an environment variable within one process isn't that compelling. You could just assign a global. On the other hand, this is a big deal:



$ENV{EMAIL_SENDER_TRANSPORT}         = 'SQLite';
$ENV{EMAIL_SENDER_TRANSPORT_db_file} = 'email.db';

my @deliveries = deliveries_from_db('email.db');

This is useful for testing something like an exploder that forks to do its work. The next step is testing how code behaves when the email transport fails. This has always been possible with the Failable transport, which wraps another transport and forces failures however the programmer likes. Unfortunately, it works via code references, which can't be easily passed in the environment. What's worse is that it turned out that no configuration could be passed to a wrapped transport via the environment. Oops!

This has been fixed! So, imagine this extremely simple (but quite useful) wrapper:

package Test::FailMailer;
use Moo;
extends 'Email::Sender::Transport::Wrapper';

use MooX::Types::MooseLike::Base qw(Int);

has fail_every    => (is => 'ro', isa => Int, required => 1);
has current_count => (is => 'rw', isa => Int, default  => sub { 0 });

around send_email => sub {
  my ($orig, $self, $email, $env, @rest) = @_;

  # count this send attempt and remember the count
  my $count = $self->current_count + 1;
  $self->current_count($count);

  my $f = $self->fail_every;
  if ($f and $count % $f == 0) {
    Email::Sender::Failure->throw("programmed to fail every $f message(s)");
  }

  return $self->$orig($email, $env, @rest);
};

1;

This makes it easy to say "every third message fails." Then, to make your test configure it for spawned processes:

$ENV{EMAIL_SENDER_TRANSPORT}                 = 'Test::FailMailer';
$ENV{EMAIL_SENDER_TRANSPORT_fail_every}      = '3';
$ENV{EMAIL_SENDER_TRANSPORT_transport_class} = 'SQLite';
$ENV{EMAIL_SENDER_TRANSPORT_transport_arg_db_file} = 'failtest.db';

my @deliveries = deliveries_from_db('failtest.db');


Using these tools together (in their internal-Email::Send-like version) was instrumental to allowing us to confidently refactor our code, because we could test it in ways we never could before. Now that everything has been moved to one email library, it's even easier to rely on these testing systems. I'm looking forward to improving them even more.
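The counting trick in the wrapper above is easy to test in isolation. Here is a dependency-free sketch of just that logic, with a closure standing in for the transport (nothing below is Email::Sender API; the names are made up for the example):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Return a "send" routine that succeeds (1) except on every Nth call,
# where it fails (0), mirroring the wrapper's modulo check.
sub make_fail_every {
  my ($n) = @_;
  my $count = 0;
  return sub {
    $count++;
    return 0 if $n and $count % $n == 0;  # the Nth send fails
    return 1;                             # otherwise it succeeds
  };
}

my $send = make_fail_every(3);
my @results = map { $send->() } 1 .. 6;
print "@results\n";  # 1 1 0 1 1 0
```

In the real transport, of course, the failure branch throws an Email::Sender::Failure rather than returning a value.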

The Great Infocom Replay: Zork Ⅱ (body)

by rjbs, created 2013-02-02 22:15
last modified 2013-09-15 07:17

In my memory, before I "came back" to adventure games in the mid-nineties, Zork Ⅰ was an important classic and Zork Ⅲ was the serious, thoughtful capstone to the trilogy. Zork Ⅱ was, in my mind, an afterthought. It was something I had to get through before I could get to the trilogy's endgame.

Having played through Zork Ⅰ and Ⅱ in the last week, I can say that my childhood view was dumb.

The first thing I found was that the map made much more sense. I could keep the whole thing, more or less, in my head. Only a bit of the northern ledge confused me reliably. The Carousel Room made for a really useful hub, and helped me keep things divided into memorable segments. I found its puzzles much clearer, except for the widely hated two: the Bank of Zork and the Oddly-Angled Room. Even the Bank of Zork didn't bother me so much, except for the really lousy description of the Small Room.

The Wizard was much less annoying than the thief, but he still irritated me. Actually, it wasn't that he irritated me; it was that his spells were more trouble than I felt they should've been. More than once, I thought I'd made it through the duration of a spell, only to trip and fall, fatally. Oops? Still, he didn't scatter the contents of the dungeon far and wide, and that's worth something. I liked that the Wizard had more personality than the thief, and I liked the eventual interaction with the demon, who was also fun. It made the whole game feel a bit more story-oriented than Zork Ⅰ.

I enjoyed the robot, although every part of the robot puzzle was of the "you will have to die to figure this out" variety. The button puzzle, like the button puzzle in Zork Ⅰ, was tedious, and to be solved much in the same way as a maze. Its pay-off was much more enjoyable, though. I felt cleverer, and making the Carousel Room behave normally was a real win.

One thing I never understood in Zork Ⅱ: what's up with the underground volcano? What does that look like? It's still pretty gonzo, and my brain rejects it. I know this is my problem and not the game's, and it's silly since I enjoy old-school D&D, which is pretty rife with gonzo nonsense, but there it is.

Finally, I think the princess is a pretty underrated character in this game. Who is she? How long has she been stuck in that dragon's lair? Where does she go? Who are her parents? Why does the Wizard of Frobozz care about her?

I hope she gets her own game, someday.

The Great Infocom Replay: Zork Ⅰ (body)

by rjbs, created 2013-02-01 22:23
last modified 2013-09-15 07:17

Zork Ⅰ is a really important game to work through, if you're going to try to understand where interactive fiction came from. It's not the first, but it's a major early milestone of the golden age of commercial IF, and it is alluded to repeatedly throughout later works. I'm really glad that I've played Zork Ⅰ, but my feeling after playing it again is that once was probably enough.

In fact, I didn't really replay the entire game. I remember it fairly well, despite having played it over twenty-five years ago. I played as much of it as I could from memory, plus the puzzles I could solve again, and I skipped the parts that I knew I would find painful. I'll list them as I go.

I'd forgotten, before this replay, that so many of the early games were very, very sparse of text. Zork is actually quite wordy compared to some, but it's still quite minimal. One of its most memorable locations, Flood Control Dam #3, is described like this:

  You are standing on the top of the Flood Control Dam #3, which was quite a tourist attraction in times far distant. There are paths to the north, south, and west, and a scramble down.
  The sluice gates on the dam are closed. Behind the dam, there can be seen a wide reservoir. Water is pouring over the top of the now abandoned dam.
  There is a control panel here, on which a large metal bolt is mounted.
  Directly above the bolt is a small green plastic bubble.

This led me to go play the first few rooms of one or two later games, as well as some popular amateur games, and I decided that I have been failing to appreciate the economy of prose in many of these games. I also went to look at some of my never-completed projects and was not surprised to decide that I probably had too much text. (This realization, I must admit, came after feeling some surprise at Jon Ingold's remark in "Thinking Into the Box" that "players are not often in it for the prose.")

I was amazed by how much of the map was still in my memory. I didn't remember how the zones fit together, but I remembered them. I remembered how to solve the most hateful puzzles (like the awful Loud Room) and most importantly, I remembered many secret passages and exits to the surface. I knew to make my first task: get to the temple, get the torch, and pray my way out.

I managed to collect quite a few treasures, but I knew I didn't want to shoot for everything. My secret internal goal was to get the thief to open the egg, and then to retrieve it. This meant getting enough other treasure to have a chance at defeating the thief. I managed to do it all in one night, skipping only two things: the maze and the river. Skipping the maze was a no-brainer. I remember mapping the maze the first time. I realized how to do it, I felt like a genius, and then I never, ever, ever wanted to do it again. I also knew that I wouldn't need to go in there to accomplish my goals. I do not regret skipping the maze.

As for the river, I knew exactly what I would have to do, and executing all the steps just didn't seem like it would be worth it.

Finally, I also didn't drain the reservoir. The "trial and error save, push button, restore, repeat" puzzle drains my will to play faster than it would drain the reservoir. I used the magic word to get the platinum bar out of the Loud Room. As a side note, I think that is my least favorite non-maze puzzle in the game. My favorite is probably the coal mine, except for the part where it's also a maze.

I enjoyed making a map of the game, as I thought I would, but I'd forgotten how often rooms were not joined symmetrically. That is, very often you'd leave a room by walking east, but to get back, you'd walk south. The official maps make it very clear why the exits looked that way, but it made everything much more tedious. Just to draw a new room, I'd have to walk through many of its exits, figuring out the angle of the passage. That would waste battery life on the lamp. I ended up pursuing a strategy that felt like it hurt my enjoyment of the game. I worked on each zone's map and puzzles without concern for treasure or time. Once I had a plan of attack, I restarted the game and went through my checklist.

Even this would not have been much of a pain, if not for random aspects of the game. At one point, when things were going quite well, the thief wandered by and stole my torch. Oops. Fighting the thief also led to some "restore and try again" moments. I don't mind randomness that provides atmosphere, but randomness that can ruin my game tends to, well, ruin my game.

Finally, I had a hard time forming a mental picture of the dungeon. Once again, the official map helps, but it doesn't help enough for me. The problem here is mostly with me. Zork is a gonzo setting, where anything that will be neat is allowed to exist. I kept trying to figure out how it fit together, and why there's an Egyptian temple bordering a huge dam and a tiny coal mine. It was mostly a collection of puzzles, which is what I knew to expect.

In the end, I'm glad I replayed Zork Ⅰ, but my replay was really tainted by the fact that I had so much of the game still in my memory. I blew through many puzzles that I might have enjoyed, and there were puzzles that I might have enjoyed if I wasn't prejudiced by my previous play. I'm hoping I get more out of later games.

The Great Infocom Replay: Foreword (body)

by rjbs, created 2013-02-01 09:48
last modified 2013-09-15 07:17

Quite a while ago, I decided that I had too many petty interests, and that I should pick one and pursue it. I thought I'd work on running some really good D&D, but basically it hasn't worked out. I have been unable to establish a regular-enough group of players, and have not felt entirely compatible with some of the players who I was able to attract. It has been a big letdown.

Something recently reminded me of Suspended, and I thought I'd give it another play. I found that I could still beat it from memory, which made me happy, and I played around with the custom and advanced scenarios, too. I also finally learned how to get a good score. (I don't think I ever got a better score than 7 in the past.) Then I started looking at my big pile of IF games to play, and my big pile of IF games to write. (I count seven abandoned projects in my code tree, each with a CVS directory in it.)

So, I'm puttering about again and trying to determine whether I can get motivated enough to do any real work on this. I finally wrote ZMachine::ZSCII, getting me one more letter in The CPAN Alphabet Game, and I've been poking at hand-assembling Z-Machine programs. I want to get a better handle on how to construct IF, though, and part of that means playing more. (I'm also really enjoying the IF Theory Reader.)

I've gone back and played a bunch of the games I liked, and I want to play a bunch of the games I never did. I gave some thought to trying to organize an interactive fiction "book club," but I decided it would probably not pan out. (If you think I am wrong, you can make a comment or something. I am not made of stone.)

Included in the games I want to play (or play again): the Infocom canon. I've only played about a third of it, and some of it I barely remember. I've decided to try playing all of it. To avoid any indecision, I'm going to do them in order. I haven't decided how to limit my play time, but I think it will be something like "at least three sessions, unless I finish the game earlier than that."


  • Zork I ✓
  • Zork II ✓
  • Deadline
  • Zork III ✓
  • Starcross
  • Suspended ✓
  • The Witness
  • Planetfall ✓
  • Enchanter
  • Infidel
  • Sorcerer
  • Seastalker
  • Cutthroats
  • HHGG ✓
  • Suspect
  • Wishbringer
  • A Mind Forever Voyaging ✓
  • Spellbreaker
  • Ballyhoo
  • Trinity
  • Leather Goddesses of Phobos
  • Moonmist
  • Hollywood Hijinx
  • Bureaucracy ✓
  • Stationfall
  • The Lurking Horror ✓
  • Nord and Bert Couldn't Make Head or Tail of It ✓
  • Plundered Hearts
  • Beyond Zork ✓
  • Sherlock

Not found in my collection, although I'll see if I can get them:

  • Zork Zero
  • Shōgun
  • Journey
  • Arthur

Checkmarks, above, are games I've played in the past, although I haven't completed all of them.

Wish me luck!

more on the speed of file finding (body)

by rjbs, created 2013-01-28 22:33
last modified 2013-01-30 10:23

Last week I wrote about the speed of Perl file finders, including a somewhat difficult to read chart of their relative speeds in a pretty contrived race. My intent wasn't really to compare the "good" ones, but to call out the "bad" ones. Looking at that graph, it should be clear that you "never" want to use Path::Class::Rule or File::Find::Rule. Their behavior is vastly worse than their competition's.

There were some complaints that some lines were obscured. That's okay! For example, you can't see File::Next because it's obscured perfectly by File::Find, because they almost always take nearly exactly the same amount of time. Heck, I don't even mind the relatively vast difference in speed between the "slow" configuration of Path::Iterator::Rule given there and the "fast" default behavior of File::Next. I figure that in any real work, the few minutes' difference between their performance at million-file scale is unlikely to be worth noticing for my purposes.

Beyond that, there are so many variables to consider when trying to actually understand these speed differences. I think that some of the reactions I got, both privately and publicly, were interesting because they demonstrated to me the danger of publishing benchmarks. It was very clear to me what I was measuring, and presenting, and why. Readers, on the other hand, seemed likely to read more into things than I'd meant to present. Most often, benchmarks are seen as passing judgement on whether something is good or bad. Heck, I said that myself, didn't I? I wanted to "call out the difference between the good ones and the bad ones."


Well, what I wanted to do was to see how fast the different libraries could possibly be, given the fastest possible bare iteration over a bunch of entities, since that's the starting point for any real work I'd do. For example, this was the test code for File::Next:

  my $i    = 0;
  my $iter = File::Next::everything({}, $root);
  while (defined ( my $file = $iter->() )) {
    last if $i++ > $max;
  }

Let's face it: that's not much of a real world use case. In real life, you're going to want to look at the filename, maybe its size or type, and quite likely you'll even (gasp!) open it. All that stuff is going to stomp all over the raw speed of iteration, unless the iteration is itself really out of line with what it could be.

David Golden read my report and got to work figuring out why Path::Iterator::Rule was lagging behind File::Next, and he sped it up. Some of that speedup was just plain speedup, but much of it comes from providing the iter_fast iterator constructor:

  sub iter_fast {
    my $self     = shift;
    my %defaults = (
      follow_symlinks => 1,
      depthfirst      => -1,
      sorted          => 0,
      loop_safe       => 0,
      error_handler   => undef,
    );
    $self->_iter( \%defaults, @_ );
  }

Why is this one faster? Well, it stops worrying about symlinks, so there's no -l involved. It doesn't sort names, which makes it unpredictable between runs over the same corpus. It doesn't worry about noticing one directory being scanned multiple times (because of symlinks). It doesn't handle exceptions when testing files. File::Next acts like this by default, too, but in the end, you very often want these safeguards.

It's great if your program can iterate over 1,000,000 distinct files in four minutes, but you'll feel pretty foolish when it takes forty minutes because you forgot that you had a few symlinks in there that caused repeated scans of subtrees!

Then there's the change to depthfirst. There, we start to get into even more situational optimization. What tree-walking strategy is best for your tree? Probably you should learn how to decide for yourself, and then make the right decision.
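For reference, these options can also be set per call rather than by picking a constructor. Here's a sketch (untested, with a hypothetical `$root`) of making each trade-off explicitly when building the iterator, instead of taking iter_fast's bundle of defaults:

  use Path::Iterator::Rule;

  my $rule = Path::Iterator::Rule->new->file;
  my $iter = $rule->iter(
    $root,
    {
      follow_symlinks => 1,   # keep following links...
      loop_safe       => 1,   # ...but keep the repeated-scan guard, too
      sorted          => 0,   # skip sorting we don't need
      depthfirst      => 1,   # pick the strategy that suits your tree
    },
  );

Each knob corresponds to one of the safeguards or strategies discussed above, so you can decide which ones your program actually needs.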

Still, the availability of these options is definitely a great thing. Not only does it make the library more flexible, but it allows one to compare File::Next with Path::Iterator::Rule as apples to apples.

Finally, there are plenty of things to judge more than the raw speed of the library's fastest base configuration. Path::Iterator::Rule, for example, has a very pleasant interface, in my opinion:

  my $rule = Path::Iterator::Rule->new;
  my $iter = $rule->or( $rule->new->name('*.txt'),
                        $rule->new->name('*.html') )
                  ->file
                  ->size('<10240')
                  ->iter($root);
  # (the tail of this example is reconstructed to match the
  # File::Next version below)

That's going to build about a half dozen subs that will test each file to decide whether to include it in the results, and three of them will be called for each kick of the iterator (I think!). That's fine, it's still really fast, but you can start to imagine how this can get microöptimized. After all, with File::Next:

  my $iter = File::Next::files($root);
  while (defined (my $file = $iter->())) {
    next unless $file =~ /\.(?:html|txt)\z/;
    next unless -f -r $file;
    next unless -s $file < 10240;
    # ...do the real work with $file...
  }

Will this be faster? I bet it will. Will I ever notice the difference…? I'm not so sure. If I thought it would matter, I'd perform another benchmark, of my actual code. I'd profile my program and figure out what was fast or slow.

Again: beware of taking very specific benchmark tests as meaning anything much about your real work without careful review.
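If I did want to measure my actual code, the core Benchmark module would do the job. A sketch, in which `build_file_next_iter`, `build_pir_iter`, and `do_real_work` are hypothetical stand-ins for the real program:

  use Benchmark qw(cmpthese);

  # negative count means "run each for at least 5 CPU-seconds"
  cmpthese(-5, {
    'File::Next' => sub { do_real_work( build_file_next_iter($root) ) },
    'PIR'        => sub { do_real_work( build_pir_iter($root) )       },
  });

The point being: benchmark the real task, including the opens and stats, not bare iteration.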

Finally, I mention just to scare the reader: did you know that File::Find::Rule, though it looks outwardly just like Path::Iterator::Rule, does not have that 3-6 subroutine call overhead? See, it takes all the rules that you build up and, just in time to iterate, builds them into a string of Perl code that is then evaled into a subroutine to execute as a single aggregate test.
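To illustrate the technique — and this is a toy, not File::Find::Rule's actual code — each rule contributes a string of Perl, the strings are concatenated, and the result is evaled into one aggregate test sub with no per-rule call overhead:

  my @snippets = (
    q{ return 0 unless $_[0] =~ /\.txt\z/; },
    q{ return 0 unless length $_[0] < 20;  },
  );

  my $code = 'sub { ' . join(' ', @snippets) . ' return 1; }';
  my $test = eval $code or die $@;

  print $test->('notes.txt') ? "match\n" : "no match\n";   # match

It's fast, but you can see why code built this way is no fun to maintain.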

It's the nastiness of that code that prevented me from ever making good on my threats to add lazy iteration to File::Find::Rule. I'm glad, too, because now Andy and David have provided me with better tools, and all I had to do to get them was whine and make a chart.

The free software community is fantastic!

the speed of Perl file finders (body)

by rjbs, created 2013-01-22 22:16
last modified 2013-01-28 12:38

Sometimes you need to walk a directory tree, pick out files, and do stuff. If you're working in the shell, you can use find — at least if you have GNU find. Those other finds… shudder.

If you're in Perl, of course, there's more than one way to do it! Today's question is, how do they perform?

This charts the time required to find every entity under a root directory. The y axis is the amount of time it took. The x axis is the number of files the iterator was allowed to find before being told it was done. Note: the x axis is logarithmic. The first tick is finding 1 file, the next 10, then 100, then 1,000, and so on.

So, what's going on? Here's a brief run down.

First, there was File::Find. It came with Perl 5, and it has just the kind of interface you'd expect. It's totally workable, but weird. As you can see, though, it's among the fastest things we've got.
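For anyone who hasn't seen it, the workable-but-weird interface looks something like this sketch (with a hypothetical `$root`):

  use File::Find;

  my @found;
  find(sub {
    # $_ is the basename and $File::Find::name the full path; there is
    # no iterator, and no early exit short of die-ing your way out
    push @found, $File::Find::name if /\.txt\z/ && -f;
  }, $root);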

Around 2002, Richard Clamp released File::Find::Rule, which has a really nice, convenient interface. (Maybe it'd be even better if its objects were immutable, but I'm pretty happy with it as it stands!) There's just one problem. It implements its iterator in terms of slurp instead of the other way around. Oops. That's why it's got that horizontal line. (Confession: when regenerating these data, I only generated a few of FFR's points, to save time. I promise: the fluctuations in previous runs were uninteresting and tiny.) No matter how many files you are actually going to pull out of its iterator, it gets them all first. This has made it totally unusable for most of the things I'd use it for, which is a bummer.
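The trap is easy to fall into, because File::Find::Rule does offer an iterator-shaped interface. A sketch, with a hypothetical `$root`:

  use File::Find::Rule;

  my $iter  = File::Find::Rule->file->name('*.txt')->start($root);
  my $first = $iter->match;   # looks lazy, but the whole tree was
                              # already slurped before you got one file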

I complained about this, probably way too much, in earshot of David Golden, and this may have contributed to his creation of Path::Class::Rule, which has a very File::Find::Rule-like interface, but finds files lazily, rather than eagerly, and so it's much more efficient at searches across large trees that might terminate early. It even gives back results as Path::Class objects, which are nice and convenient. Unfortunately, they're also really slow for a number of reasons. David Golden and Vincent Pit and Zefram have all done bits of work that could really speed them up, but until that's all released, Path::Class::Rule is a dog on larger finds. Once it hits a bit over 100k files, it's slower than File::Find::Rule, and gets even slower at an alarming rate.

When he found out how slow Path::Class::Rule could get, David Golden went back to the drawing board and produced Path::Iterator::Rule, which is just like Path::Class::Rule, but gives you strings instead of objects. It's much faster than Path::Class::Rule. There's clearly some room for improvement, because its curve doesn't quite match File::Find's yet, but that may be at least in part because of its default options. I didn't spend much time profiling with custom options.

Finally, I've also included Andy Lester's File::Next, which I also like quite a bit. I tend to prefer the Rule modules because it's so easy to set up a search, but File::Next is what I've used for years when File::Find::Rule's eager slurping would make it impossible for me to use. It may be that now I'm a Path::Iterator::Rule user forever, but File::Next is still A-OK with me. If you can't see File::Next's line, it's because it's almost exactly the same data as File::Find.


I regenerated the chart on 2013-01-23 13:35, using data at 1e0 through 1e6 rather than the previous odd set. I also added two new libraries: File::Find::Object and File::Find::Iterator, of which I have used neither.

I did not time File::Find::Declare. It doesn't provide lazy iteration, so I would never use it.

I did not time File::Find::Match. It doesn't provide lazy iteration, so I would never use it.

I did not time File::Find::Node. It doesn't provide lazy iteration, so I would never use it. It does allow forking of children to handle subtrees, which is nice, though.

I did not time File::Find::Wanted. It's just a thin bit of sugar for File::Find.

The Source

You can find the source for generating most of this, and some of the states of my result files in my perl-file-finder-speed repo.

workspaces in Google Chrome (body)

by rjbs, created 2013-01-21 22:35
last modified 2013-01-21 22:36
tagged with: @markup:md chrome journal

I really liked using OmniWeb. Back before Safari existed, OmniWeb was, for me, a much better option than Firefox. It was very fast, did a good job saving my session, had per-site preferences, and had workspaces. I am stymied, deeply and daily, by the lack of good workspace support in every other browser.

I feel like I've spent hours, today, trying to figure out how to replicate the basic features of OmniWeb workspaces in Chrome. My spec is something like this:

  1. A workspace is a bunch of tabs. Maybe windows, too. I'm okay with tabs.
  2. I can quickly switch from Workspace A to B. All the tabs from A go away and the ones from B appear.
  3. If I change the tabs I'm working on while in WS B, then go work in A, then come back to B, I come back to the state of B before I left it.
  4. I can easily move tabs from A to B. (This is vital, because URLs opened from external programs will presumably go into whatever workspace is currently active, and might not be topical.)

I looked at Chrome Workspaces, which was promising, but:

  • had numerous reports of data loss
  • didn't provide any clear way to move a tab from one WS to another; you can copy the URL and re-open it in the other WS, but that requires re-opening every tab in the workspace, and loses your tab's history, and if you have to switch back, you've got to reload all your tabs in your original workspace, too!

Session Buddy seems great. It saves all your windows and tabs, and can keep multiple snapshots, both by time and named snapshots. Snapshots can be restored, deleted, and tweaked in place.

Unfortunately what you can't do is say "save my current session and swap out all the current windows for the ones in some other session." In other words, it isn't a workspace system at all, even though it seems to have almost everything it would need to be one. Well, I guess it doesn't claim to be one, so I shouldn't be grumpy.

While writing this, I was suddenly inspired to run Firefox, because I felt like I'd found a solution there that I could no longer recall. I was right! Panorama! Tab groups! Panorama is the best workspace thing ever. Forget about OmniWeb. What was I thinking? Man! Panorama is awesome. Why did I stop using Firefox, again?

Oh, right. It was because it was achingly slow.

Well, Panorama came out in mid-2010. Surely it's been ripped off by now, right? Yes!

Tab Sugar was a pure copy of Panorama to Google Chrome.

Unfortunately, the developer gave up because Chrome couldn't offer the data he needed. Augh!

Having spent all this time suffering through trying to find a solution in Chrome, I may just give up for now. Either I'll continue to try to enjoy Chrome without tab groups, or I'll move back to Firefox and see whether it's gotten any faster in the last year.

Or maybe tomorrow morning I'll try Tabs Outliner… or Sidewise Tree Style Tabs

Echohawk's Complete D&D Monster Index

by rjbs, created 2012-12-13 23:48
last modified 2012-12-13 23:48
tagged with: dnd reference

if you only read one blog post about the upcoming perl 5.18.0... (body)

by rjbs, created 2012-12-05 10:39
last modified 2012-12-05 11:18

...it probably shouldn't be this one. Instead, go read Breno G. de Oliveira's piece on hash randomization.

There are lots of important changes in 5.17 right now, and there may be some more between now and 5.18.0, which is due in Spring 2013. The one that you need to care about right now, though, is the change to hash randomization. The changes to hash randomization are still themselves changing, but the bottom line is this: The order of things coming out of hashes is going to seem more random.

It's a pretty common mistake to believe that hash ordering is in some way deterministic. The best guarantee you get from perl is that if you call keys and values on a hash, the results correlate with one another. Relying on any more ordering than that is a bug.
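That guarantee, concretely (core Perl, any version):

  my %h = map { $_ => ord } 'a' .. 'e';

  my @k = keys %h;
  my @v = values %h;

  # Whatever order the keys came out in, $v[$i] is always $h{ $k[$i] }.
  # Any stronger assumption — like getting the same order between runs,
  # or between hashes — is exactly the kind of bug 5.18 will expose.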

The changes in 5.18.0 will expose these bugs where they had been hidden before.

So, rather than restate the things that Breno said so well, I will once again tell you: Go read Breno's post.

If you want to be really helpful, you could look at the list of CPAN distributions affected by this change and help make sure they are patched before 5.18.0 arrives. Heck, maybe your own code is in there.

iTunes 11 displeases me (body)

by rjbs, created 2012-11-29 23:00

Okay, look, it's really nice that iTunes is so much faster, now. I mean, it's really nice. Most of the visual changes are nice. A lot of it, as usual, is just little changes that I don't care about. The new context menus are okay. Super, okay? They're great.

What, though, is up with this Up Next thing? It is junk.

I used to use iTunes DJ all the time. I used it every day. When I fired up iTunes 11 and saw how fast it was, I was pleased, then I panicked while I searched for iTunes DJ, then despaired when I saw what had replaced it.

The new "Up Next" feature shows you what is going to be played next, but it isn't treated like a real playlist. You can't do everything you could do to a normal playlist, and when you can, it's not through the same means. It's just as annoying as the horrible "play queue" in Spotify, and that's saying something.

Anyway, I'm not in the business of posting angry rants, so I thought I'd back this one up by explaining how Up Next actually is an obvious downgrade.

Almost every time that I listened to music in iTunes, I used iTunes DJ. It looked like a short playlist with something like 20 entries. I could see what was up next, dominating the whole iTunes window, with all the same controls. If I didn't like an entry, I could tap delete and it would go away and get replaced by something at the bottom. I could reorder the upcoming stuff. If I didn't like it at all, I could just click "refresh" and have a new set of tracks pulled in to replace everything that was coming up.

The playlist I listened to quite often in iTunes DJ is called Queuelet. It's a live-updating playlist of music that has no rating. I'd listen to it while working and rate the music as it went. The problem was, it wouldn't work to listen to Queuelet directly. As soon as I'd rate a song, it would be yanked from the playlist and iTunes would stop. By using iTunes DJ, the song would finish normally, because it was only removed from the DJ source, not the DJ's playlist. Now that mechanism is gone, and I'm stuck playing songs from the smart playlist.

I can make the playlist not live update, but then I have to regenerate the playlist manually.

I am unhappy with this.

Horror Movie Month 2012 (body)

by rjbs, created 2012-11-01 10:43
last modified 2012-11-01 10:43
tagged with: @markup:md journal movies

Horror Movie Month 2012 is done! Thirty-one movies in thirty-one days! (We missed one day but watched two on another.)

I think this year's crop was much better than last year's, although Gloria doesn't seem convinced. Of the batch, I'd only recommend a few:

  • Stake Land was nothing special, but I'm a sucker for post-apocalypse.
  • Brain Damage was a weird exploitation movie that I really enjoyed for no very clear reason. It was made by the maker of the Basket Case movies, which I also liked.
  • The Stepfather wasn't great, but I liked Terry O'Quinn — and why wouldn't I?
  • Phase 7 was enjoyable, mostly because the protagonist was such a normal schmoe.
  • Apollo 18 had its merits, and I enjoyed it, but it really didn't buy into its own "found footage" conceit.
  • Murder Party was almost certainly the best of the bunch. It was pure fun.
  • Silent Hill: Revelation was good, if you like that sort of thing. I do.
  • Doghouse was fun, too, but made me think of Shaun of the Dead too often.

I think the worst of show goes to Transylvania 6-5000. Ugh.

The Whole List

October 1: Zombies of Mass Destruction
October 2: Stake Land
October 3: Paranormal Activity 3
October 4: Brain Damage
October 5: The Messengers
October 6: Ghoulies
October 7: House
October 8: Ghoulies Ⅱ
October 9: Duel
October 10: Diagnosis: Death
October 11: No movie! I was at a concert by the Mountain Goats.
October 12: The Stepfather
October 13: Ginger Snaps
October 14: Rabid
October 15: Phase 7
October 16: Transylvania 6-5000
October 17: Apollo 18
October 18: Murder Party
October 19: The Collector, but it was so bad that we switched to the American Horror Story: Asylum premiere
October 20: The Monster Squad
October 21: The Car
October 22: Dead Alive
October 23: Brainscan
October 24: The Woman in Black
October 25: Messengers 2
October 26: The Burning
October 27: Hatchet Ⅱ
October 28: Silent Hill: Revelation 2D, Doghouse
October 29: Dance of the Dead
October 30: The Substitute
October 31: ThanksKilling

RPG Recap: Beyond the Temple of the Abyss, 2012-10 (body)

by rjbs, created 2012-10-23 00:21

Still exploring the museum under the statue of Cosativa, the troop poked around the counter at the east end of the room, where they found an old green backpack. They catalogued its contents, argued a bit over who should take what, shuffled gear around, and then moved on to inspect the featureless black sphere floating over one of the display stands. Watching it, Red was transfixed and treated to a strange display proclaiming the broad scope of the now-defunct empire and the meaninglessness of his life. He kept it to himself.

In the middle of the room, the group found a detailed map of the museum, but when Red attempted to copy it down, it vanished, replaced by an admonition to buy one instead. No sales personnel could be found. Red declared that he would commit the three-dimensional rotating blueprint to memory, no matter the time cost. While he did so, the rest of the group huddled within the light of one torch and walked over to have a look at the brass doors at the north end of the chamber.

As they approached, the doors opened and out clomped three robots, each something like the centurion troopers, but larger and better armed. Everybody stood and stared — except for Red, who kept on concentrating on the map. The robots tried to issue orders, but couldn't manage anything but an incomprehensible barking noise. While the group tried to figure out what to do, Rob-E circled around behind the bots, entered the room behind them, and vanished behind the closing doors.

At this, two of the well-armed bots stormed off toward the southwest. The group cleared out of the way, and was unmolested. Unfortunately, the bots passed close enough to the remaining dormant ooze to wake it, and a free-for-all melee immediately ensued. The ooze flowed upright and attacked the robots, which fought back. Group Leader commanded his troopers to join the fight against the ooze, and several of the fleshy adventurers joined the fray.

Eventually, the ooze was reduced to a pool of fluid and the bots immediately continued on their way down the ramp to level two. The one that remained behind went back to barking and gesturing at the party, and Yuki remarked in irritation that clearly they were all on the same side, given what had just happened. It answered by firing a searing bolt at Yuki and setting his shield ablaze. The mercenaries sprang to Yuki's defense, along with the troopers. Delian knocked the thing over and the rest of the group hacked and smashed at it until it ceased to move.

Meanwhile, Rob-E waited for the doors on the elevator to re-open. When they did, he found himself outside a long room with a few tables in it, most of them displaying complex metal objects ranging from cat- to rhino-sized. He threw a hunk of metal into the room, saw nothing happen, and commanded the elevator to take him back where he'd come from. He was worried that the robots might come chase him down and find him alone.

Upstairs, everyone piled onto the elevator to go exploring — except for Red and two troopers who stayed to defend him while he continued to study the map. The elevator displayed a list of destinations to Rob-E:

  1. The Welcome Area
  2. The Map Room
  3. The Labyrinth
  4. The Halls of Victory and Defeat
  5. The Shrines
  6. Administration
  7. Maintenance

The group decided to head all the way down to maintenance and see what they could see.

They exited the elevator into a long empty room lined with shelves. At the far end, two small lights glowed on the wall: one red, one green. The group spread out through the room, looking for loot on the shelves. As soon as they'd split up, though, the far door opened and in floated a massive metal sphere set all around by egg-shaped crystals.

As it moved into the room, twisting beams of light into multi-colored blasts, the group took various strategies: Yuki called for the mercenaries to surround him while he fired crossbow bolts, Rob-E inched back into the elevator door, pondering flight, and Delian led the troopers in a charge on the thing.

Delian and the troopers proved more or less unable to harm the sphere at all, as did Yuki. Group Leader was possessed and turned against the party. Another centurion was slowed. Things looked bad.

Rob-E boarded the elevator and went to get Red. By the time he returned, Yuki was unconscious on the ground, nearly finished off by the sphere's light beams. Centurion Delta's metal body was set aflame and burned to a cinder. Red called for everyone to get into the elevator to flee, while the sphere continued to pick them off one by one. Delian, who continued to fight, was thrown across the room and vaporized by a beam of light. Rob-E circled far around the sphere to attack from the rear, but without effect. He fled back to the elevator along with the rest of the group, including Yuki, who was dragged along by Centurion Beta. While Yuki and Rob-E debated about their next port of call, the sphere fired another blast of light into the elevator, disintegrating Rob-E. The elevator, reflecting Rob-E's last wishes, began to head for the Administration level. Yuki ordered Beta to have it take them, instead, to the entrance.

Back in the Welcome Area, the group decided they'd retrieve Centurion Alpha from his entangled position in the plinth and then head out. Eloise, Yuki, and Exeter climbed, one by one, up the narrow shaft. Exeter tied off a length of rope to Alpha and Yuki and Eloise gave it a few good yanks. The trooper was pulled free of the silver webbing and crashed down to the Welcome Area below, taking Exeter with him. Exeter lived, but became entangled in the webbing, had his left arm sliced off, and was knocked out. Centurion Gamma tried to free Alpha from the webbing, but he too was entangled and shortly ceased to function.

With the rope leading back to the exit now in a heap at the bottom of the shaft, Red decided that the best way to get out was to cast Floating Disc long-form from his spell book. During the hour required, the group was interrupted by a stumbling, panicked man emerging from the darkness carrying a glowing stick. They ignored him and watched as he tried to climb the sheer sides of the shaft, desperate to escape.

Some minutes later, the elevator doors opened and the floating sphere emerged. Red continued his ritual, with only twenty minutes left in its performance, but Garth was having none of it: he shoved his way in, taking the rope from Red's shoulder and casting it up to climb out, which he then did, followed by Red and Michael, who hauled Exeter's unconscious body.

The unknown panicked man and the remaining centurions were left behind to whatever fate might befall them, and the rope was hauled back out of the shaft.

In Memoriam

R.I.P., Delian the Elf
R.I.P., Rob-E the Automaton
R.I.P., Western Spear Centurion Α
R.I.P., Western Spear Centurion Β
R.I.P., Western Spear Centurion Δ
R.I.P., That Weird Guy with the Glowing Rod

Missing in Action

Western Spear Centurion Group Leader
Western Spear Centurion Γ
Western Spear Centurion Ε


Yuki    =  137 + 701 = 838
Red     =    0 + 733 = 733
Exeter  =    0 + 637 = 637
Eloise  =    0 + 669 = 669

Experience is awarded based on treasure successfully removed from the site, monsters defeated (or survived), locations visited, use of class skills, and exceptional ability scores.

RPG Recap: Beyond the Temple of the Abyss, 2012-09-22 (body)

by rjbs, created 2012-09-28 18:27

Sometime during the Red Moon, 937

With their numbers bolstered by centurions and sellswords, the tripart party of Red, Yuki, and Delian made good on their threat to seek their fortune inside the statue of Cosativa. They paid Egrin to arrange their entrance, and were saved considerable expense when he pointed out that the centurions would likely be admitted without tickets.

Inside the plinth's brass doors, whatever means of travel once took visitors from the statue's base to its top was gone, or at least reduced to a scrap of twisted metal. Inside, the ascent was lit only by shafts of light coming through the walls and by spots of light that slid across the interior surface with no clear pattern or source. Centurion Α was sent up the metal cabling that hung inside. He only made it about fifty feet before he reported becoming entangled in an unnatural spiderweb. When ordered to free himself, he dropped his sword and stopped responding.

The group decided to try going down, instead.

With a rope secured to the metal debris in the shaft, the group descended one by one and found themselves in a huge, dark round room. Slowly, they established a loose perimeter around their point of entry and tossed torches out into the room to light it up.

The room was about 100' across, with four ramps leading down. Each ramp was surrounded by a mural and numbered by a plaque:

  1. to the southeast, its mural depicting an elaborate map
  2. to the southwest, depicting two armed men, one triumphant and one defeated
  3. to the northwest, its mural a complex pattern of curves and angles
  4. to the northeast, again depicting two men, both amputees

A number of small displays were set around the room, as well: two statues facing each other, a floating black sphere, a model city, a dead man on a patch of dead earth, and an abstract metal sculpture overgrown with a pale blue thicket. Vines from the thicket spread out far and wide across the floor, and down the ramps.

The first statue was some sort of fortune-telling construct, which interpreted the significance of Yuki's life for him for a gold coin. The reliability of its interpretation was unclear.

Before investigating the rest of the room, the group looked into the scattered remains of five centurions near the west wall. The centurions' group leader retrieved an energy weapon from them, restored it to working order, and at Red's suggestion vaporized the thicket across the room. Beside the destroyed centurions were two purple blobs, one of which, when carefully observed for several minutes, was seen to extend a pseudopod and extinguish the torch that had landed near it. Assuming that the thing didn't like the light, Rob-E suggested he could focus the light of several nearby torches onto a hand mirror and bathe the blob in the focused light. He did, and got quite a reaction: the blob flowed up into a pillar, sputtered, and began to roil toward the nearest living things: the mercenaries.

Rob-E panicked and retreated.

The rest of the group scrambled to set fire to the shambling ooze. Delian lobbed a flaming vial of oil, Yuki fired a flaming bolt from his crossbow, and Red threw just about anything he could find, culminating in the throwing of a hacked-up piece of vine. The ooze was angered, but not stopped, and with its caustic touch it dissolved most of Len and engulfed Hester whole. The rest of the mercenaries and centurions looked for throwable debris, but came up empty-handed — except for centurion Ε, who picked up the body from the patch of dead soil, set it alight, and threw it into the blob, which seemed to suffer greatly. Rob-E, who had gotten behind the group to cower, was cajoled into resuming his light attack.

In the end, the thing was dead, and dissolved into a puddle. Red set a torch to much of its surface area, just to be sure. Everyone agreed to try to not disturb the other blob, which had remained dormant.

While everyone else resumed their positions around the entrance, Red and Group Leader moved to investigate the model city. Group Leader identified it as Alithica (possibly during the preceptorship of Astagath), and saw nothing out of the ordinary about it. Red the Medium, on the other hand, saw quite a bit out of the ordinary. Looking through one of the magnifying lenses mounted around its border, he saw hundreds of tiny people engaged in a general panic. Some sat catatonic, staring into the distance, but most of the others ran: some to attack and defile, others only to flee. Occasionally, he caught sight of a sinister shape moving through the crowd. Rob-E reported seeing nothing, and while Delian saw people moving through the city's streets, he didn't see anything unusual in their activities. Finally, Red called for one of the mercenaries, who gasped in horror: she had seen the same thing.

In Memoriam

R.I.P., Hester the Harrier
R.I.P., Len the Linkboy

setting global expectations with Test::Routine (body)

by rjbs, created 2012-09-11 18:25

Today I finished a bunch of work that significantly improved the test suite of our billing system. It all sprang from two of my favorite things coming together: adding more print statements and Test::Routine.

I'm going to simplify the truth a little bit, but here's what happened.

Sometimes, a customer pays for a year in advance at $20 per year. Six months later, he or she upgrades to a more expensive tier of service — let's say $40 per year. Now, instead of six months left, the customer only has three months, because the rate at which the paid balance is being drawn down has doubled. The customer now has two options:

  1. live with the three month difference
  2. pay $10 to get back to the original expiration date

We call option 2 "psyncing" for historical reasons.
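The arithmetic, sketched with the illustrative numbers above:

  my $old_rate = 20;    # dollars per year
  my $new_rate = 40;
  my $elapsed  = 0.5;   # half of the paid year has gone by

  my $balance   = $old_rate * (1 - $elapsed);            # $10 not yet consumed
  my $time_left = $balance / $new_rate;                  # 0.25 years: three months
  my $psync     = $new_rate * (1 - $elapsed) - $balance; # $10 restores the date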

When the billing system notices that the customer's expiration date has changed, it generates a quote to restore the original expiration date and sends it to the customer. The customer can ignore it (because it's just a quote) or can pay it.

A problem showed up not too long ago, where a customer received a quote saying something like, "If you pay this bill, your expiration date will be January 1, 2020." Unfortunately, it was wrong. It should have said 2019. The problem had to do with how expiration dates were calculated, and was fascinating, but only to me, so I won't relate it here. The point was: I had to fix this.

The psync code is pretty complicated, and I really didn't want to break anything. We had tests, but they largely tested that the things we expected did happen. Not enough tested that things we didn't expect didn't happen. That's sort of hard to test for! Still, I wanted to make sure that when I changed the psync code, we didn't start sending lots of psync notices where we hadn't before.

I went into the psync code and had it print out a big YOU JUST PSYNCED! every time it sent a notice. Unfortunately, I saw a bunch of notices in tests that shouldn't have caused any. I spent some time trying to figure out what I'd broken, but I had no luck. Finally I went back to the deployed branch and added the same notice... and there it was! The tests were doing psyncing in places we'd never expected, and because it didn't affect the limited expectations we had established, we never noticed.

I slowly went through the tests on the master branch, figured out why they were psyncing, and either added a "this should psync" assertion or fixed the code to avoid it. Adding those assertions was easy because of Test::Routine.

With Test::Routine, we write our tests as roles that can be composed before running. Our billing system is based around customer records called "ledgers," and one of the most commonly-used Test::Routine roles in the system is LedgerTester. It provides useful behavior for testing all kinds of ledger-related stuff. I added this method:

  sub assert_n_deliveries {
    my ($self, $n, $desc) = @_;
    my @deliveries = $self->get_and_clear_deliveries;
    is(@deliveries, $n, $desc // "expected $n deliveries");
    return @deliveries;
  }
Then, in any test where I wanted to say "by this point in the test I expect to see 1 message," I'd just call that method. I went through the code with the errant psyncs and added expectations. Unfortunately, they didn't always help! I knew that the test was sending a psync, because of my big blinking red notice, so I'd add an assertion that there was one delivery. What I didn't know was that it was also sending 22 invoices for unrelated reasons! I wanted a way to make sure that I'd accounted for every single delivery from every single test and I didn't want to have to copy and paste a lot of crap all over the place.

So, back to LedgerTester! Among other things, it does something like this:

  around run_test => sub {
    my ($orig, $self, @rest) = @_;

    local $ENV{MOONPIG_STORAGE_ROOT} = $self->tempdir;

    $self->$orig(@rest);
  };
This means that every individual test in any LedgerTester routine will get its own temporary storage space for the persistence layer. Incidentally, if you can't tell why that storage space would be different between runs, it's because LedgerTester composes another Test::Routine role, HasTempdir:

  has tempdir => (
    is      => 'ro',
    isa     => 'Str',
    lazy    => 1,
    default => sub { tempdir(CLEANUP => 1) },
    clearer => 'clear_tempdir',
  );

  before run_test => sub { $_[0]->clear_tempdir };

Thanks, roles!

Anyway, all I did was update that run_test advice:

  around run_test => sub {
    my ($orig, $self, @rest) = @_;

    local $ENV{MOONPIG_STORAGE_ROOT} = $self->tempdir;

    $self->$orig(@rest);

    my @deliveries = $self->assert_n_deliveries(0, "no unexpected mail");
    for (map {; $_->{email} } @deliveries) {
      diag "-- Date: " . $_->header('Date');
      diag "   Subj: " . $_->header('Subject');
    }
  };
Now every time a test method ran, if its assertions didn't add up to the total mail count, there would be a failing test telling me how many unexpected messages I got and what they were. I reran my test suite and about 25 files started failing. I went through each file without looking at its test results, added assertions where I thought they belonged, and I got quite a bit closer to the right results. Some places, though, things were really surprising!

For example, sometimes instead of paying a virtual invoice to get funds onto a customer's ledger, we'd just shove money into the account in a way that could never happen in production. This meant that the invoice would go unpaid and dun endlessly. Oops! Then fixing that would expose further problems in the invoicing schedule in the test. It also exposed a serious bug in the way that coupons were being handled. This was a bug that I'd theorized might exist, but hadn't yet proved, and adding those few lines above had done it for me.

Doing this update revealed a number of other minor problems, too, as refactoring often does. More importantly, though, it demonstrated that not only should we try to set up expectations about every side effect in our tests, we can do so, and pretty easily too. Without Test::Routine, this would've been a much bigger hassle.

RPG Recap: Beyond the Temple of the Abyss, 2012-09-08 (body)

by rjbs, created 2012-09-09 19:31
last modified 2012-09-09 19:32

Sometime during the Red Moon, 937

Continuing on their trek home, following the direction given by Find the Path, Delian, Rob-E, Heron and Thordis soon caught sight of a distant — and massive — statue on the horizon. They carried on toward it, and soon enough were in its shadow.

An area around the plinth was cleared of trees and showed signs of use by quite a few campers over the last few weeks. At the moment, though, there were only four groups: a group of twelve men and women were camped together near the statue's base, each of them wearing a white chasuble and sword over his or her clothes. Six metallic skeletons stood, quite still, in formation beneath a high canvas tent. A lone wizard (judging by the flowing purple robes) sat apart, as did a lone barbarian of the northern woods.

One of the older devout waved to the arriving group and introduced them to Cosativa, the goddess depicted in the statue: defender of the realm, patron of those who have or are lost. He suggested that it was quite felicitous that Delian, himself quite lost, had come under her shadow.

The arrival of this motley crew also rekindled the interest of the idle wizard and druid, who got back to investigating the site before they could be beaten to the punch. The plinth, only three feet wide, was nearly half door at its base, and pretty soon everyone was overcome with a keen interest in getting the doors open — everyone but Thordis the Soldier and Heron the Medium, who watched dubiously from the rear.

The groups seemed to introduce themselves to one another with a series of shoves, each adventurer deciding he had a better idea than the one currently trying something. Yuki the Aspirant tried to pry the doors open with an iron spike, then Delian with a sword. Red the Medium tried to understand the ticket-selling kiosk nearby, and Yuki shoved him away to try his own luck at the buttons. Rob-E stayed out of it.

In the midst of all this, two of the white-clad devout came trotting over and tried to derail the mischief with calm explanations. They were only partially successful. The nervous young man explained that the door would open for ticket-holders, and that the kiosk would sell tickets, but only for an obsolete and rare form of specie. Their leader would be happy to sell some of those notes for the right price, and to aid the group in getting inside for no cost at all.

"I must urge you," he added, "not to take us up on this offer. While the view from Our Lady's eyes is known to be astounding, there are … threats … residing within the statue. Oh, and as for the temple below, none have ever returned from its depths."

Delian asked how much the tickets were going to run them and, getting an answer (five pounds of gold per head), declared that he'd fund an expedition.

There was tentative enthusiasm about this idea from everyone but the hirelings Heron and Thordis, who were wholly enthusiastic about waiting outside to see whether anybody would come out alive. "We'll watch your horse," Heron promised. Thordis wondered whether Delian would pay for his promised replacement sword up front. Just in case.

The elder devotee of Cosativa, after telling some tales of his own experiences within the tower, and with octopodes, and with voluntary amputation, agreed to help the group get into the tower. He also urged them to reconsider. Instead, Red considered trying to forge some of the banknotes needed for entry. After having a good look at one, though, he decided it would have to wait.

To prepare for the next day's expedition, the group split up. Red went to work recruiting the metal men. This proved to be surprisingly easy. After extracting a promise that the site would be neither defaced nor looted, the "Group Leader" of the "Western Spear Centurion Squad" agreed to join Red with no compensation. (Group Leader did make some unusual requests for equipment, but was willing to accept swords in place of death rays or nanoplague dispersal grenades.)

Meanwhile, everyone else made for Allman, a nearby town. While they were able to purchase nearly everything they'd hoped for, the trip was not entirely without incident. Walking through the woods, they were startled to find an elephant-sized bag of floating flesh several stories above them. As they stood and stared, the thing turned toward them. Yuki felled the monstrosity with a few well-placed arrows, but not before it had fired a massive five-foot quill at the group, impaling Heron and killing him instantly.

He was briefly mourned, carefully looted, and unceremoniously buried beneath a small cairn.

In Memoriam

R.I.P., Heron the Medium


Yuki  = 0 + 137 = 137

RPG Recap: Alar, 2012-08-27 (body)

by rjbs, created 2012-09-04 17:44
last modified 2012-09-28 18:29

Year 28 of the 7th Imperator, 20th of Declarations

Still exploring and graverobbing beneath the scenic abandoned temple of Mexias, the group left some crypts unopened and instead headed west along a long corridor. Slits through the thick stone to the south afforded a view of a large room with a grey stone statue of an immense man wracked and bent. A nearby passage led to a ruined kitchen and a larder, unexplored, lay beyond that.

The gang headed south through a dining room and into another row of crypts, which they began to loot, finding precious little plunder. Calliope, though, found a strange black fiend hiding in one vault. Once found, it leapt at her and clawed her viciously. Before it could escape, it was done in, but its keening attracted the attention of a handful of hulking bug-men who attacked with huge hooked arms. These, too, were laid low, as was a bandaged corpse that rose from within one of the vaults. The party felt quite worn down by the fight and made a quick exit, slowing only when they saw the huge scorpion watching them from a side passage. It didn't bother them, though, and they made it to the surface. Calliope returned Sek's key to his person and the six vandals slunk off into the night.


Ayla     = 205 + 113      = 318
Calliope = 205 + 113 + 23 = 331
Helga    = 197 + 113      = 310
Ignatius = 205 + 113      = 318
Redorus  = 195 + 113 + 23 = 321
Tilton   = 167 + 113 + 11 = 291

RPG Recap: Beyond the Temple of the Abyss, 2012-07-28 (body)

by rjbs, created 2012-09-03 11:16
last modified 2012-09-09 16:47

Sometime during the Red Moon, 937

Having only barely escaped the gator attack with their lives, Rigby and Delian continued their steady flight from the zoo sewers, passing by a few strange sights before coming to a tunnel entirely blocked off by a throbbing violet fleshy growth. An old man stood nearby considering it, but the pair ignored him and tried to hack their way through. The wall immediately sprouted pseudopods and attacked back, but Delian and Rigby got the better of it.

They tried to extort a toll from the old man, but he suggested that they should think again. Reluctantly, they agreed to let him pass unmolested.

After more walking through the sewers, the two found an exit: a set of metal rungs led up to a covered opening. Testing the cover, they found it partially blocked. They could only open it an inch or two, and there was movement and talking outside. Rather than call for help immediately, they decided to spend the night down below to rest up. All their torches exhausted, they set up their blinking, beeping tent and settled down for the night. Another small group of torchless adventurers cautiously negotiated passage, and a tiny glowing humanoid flew past them, but on the whole they were untroubled until hours later, when an earth-shaking noise from somewhere to the south sent hundreds of rats scurrying toward the campsite.

Delian casually negotiated with someone aboveground to get the door unblocked. When he returned down the ladder to get Rigby, the two were both swarmed. Only Delian made it out. When he reached the top of the ladder, he was hauled out, still being bitten, and found himself in the midst of a dozen or more caravans, parked near a large round structure filled with colorful statues of insects. Just as he took in the sights, though, he passed out.

When he came to, he was lying in a heap in the old market, his load lighter by quite a bit. Fortunately, not all his valuables were gone. He still had some of the magic seeds from the Gladwell place, and he worked out a sale of them to one of the city's many wizards for over ten pounds of platinum. Suddenly fantastically wealthy, he then negotiated a tight-fisted rate of pay for three new hirelings: Heron the Medium, Thordis the Soldier, and Rob-E the Automaton. (They got 10 gp per day, a week up front, and would split a third of all the treasure recovered.)

He also made a donation to a cleric of Haarg in return for her services casting Find the Path from their scroll. It led them southeast from the city toward Edgwold, and when the spell's duration ended, they continued alone and on foot.

After not too long, though, they were set upon by small but vicious wolves, and might have been killed to a man, had Heron not produced a pouch of wolfsbane and chased them off. Delian was badly injured and unable to walk, so Rob-E was sent back to the city to buy a (single) horse. This was a success, but more trouble arose after another hour: the group came to a broad river and was unsure how to get the horse across. They were sure that a floating disc wouldn't be enough to carry him, and struggled to come up with other ideas.

Finally, they decided to send Thordis back to the distant trees to fell one and bring it back to make a raft. After some strenuous objections about being forced to use his sword for such a task, Thordis was promised a replacement and grudgingly accepted. Rob-E returned to the city (again) and bought more rope. Eventually, with Heron's expert help, a raft was constructed. Thordis and Rob-E crossed the river and prepared to haul the raft across, on which Heron and Delian would be maintaining two floating discs, each bearing half the horse's weight.

Things went fairly well until they'd gotten a bit more than halfway across. The raft rocked more violently and the spellcasters, unable to retain their footing, fell. The discs dematerialized, and the horse fell into the water, where he splashed for a few minutes and then swam to the other side.

The group soldiered on to the southeast…


  Delian  = 1852 + 1115 = 2967 (Level up at 4000)
  Heron   =    0 +    5 =    5
  Thordis =    0 +    5 =    5