rjbs forgot what he was saying


I went to Tokyo!

by rjbs, created 2013-09-26 13:30
tagged with: @markup:md journal perl yapc
I must have done something right when I attended YAPC::Asia 2011, because they
invited me back this year. I was *delighted* to accept the invitation, and I'm
glad I did.

night in Tokyo

I said I'd give a talk on the state of things in Perl 5, which I'd done at YAPC::NA and OSCON, and which had gone well. It seemed like the topic to cover, given that I was presumably being invited over in large part due to my current work as pumpking. I only realized at the last minute that I was giving the talk as a keynote to a plenary session. This is probably good. If I'd known further in advance, I might have been tempted to do more editing, which would likely have been a mistake.

Closer to the conference, I was asked whether I could pick up an empty slot and do something, and of course I agreed. I had some pipe dreams of making a new talk on the spot, but cooler heads prevailed and I did a long-shelved talk about Dist::Zilla.

Both talks went acceptably, although I was unhappy with the Dist::Zilla talk. I think there's probably a reason I shelved it. If I do talk about Dist::Zilla again, I'll write new material. The keynote went very well, although it wasn't quite a full house. I wasn't opposite any speaker, sure, but I was competing with the iPhone 5s/5c launch. Ah, well! I got laughs and questions, both of which were not guaranteed. I also think I got played off about ten minutes early, so I rushed through the end when I didn't need to.

This wouldn't have happened if I'd stuck to my usual practices. Normally when I give a talk, I put my iPhone on the podium with a clock or timer on it, and I time myself. I had been using an iOS app called Night Stand for this for the last few years, but on Friday I couldn't. I had, for no very good reason, decided to upgrade my iPhone and iPad to iOS 7 on the morning before the conference. Despite briefly bricking both devices, I only encountered one real problem: Night Stand was no longer installing. After my keynote, I went and installed a replacement app, and chastised myself for not sticking to my usual routine.

By the time I was giving that keynote, I'd been in town for four days. A lot of activity tends to follow YAPCs, so it would've been nice to stick around afterward instead, but I was concerned about getting my body at least somewhat onto Tokyo time beforehand. Showing up to give a presentation half dead didn't seem like a good plan.

The trip wasn't great. I left home around 5:30 in the morning and headed to the bus stop. Even though it was going to be 80°F most of my time in Tokyo, it was only 40°F that morning, and I traveled in long pants. I agonized over this, and had thought about wearing sweat pants over shorts, or changing once I got to the airport. I decided this was ridiculous, though. It turned out, later, that I was wrong.

I flew out of Newark, which was just the way it always is. I avoided eating anything much because prices there are insane, but when my flight was delayed for three hours, I broke down and had a slice of pizza and an Orangina. I also used the time to complete my "learn a new game each week" goal by learning backgammon. I killed a lot of time over the next few days with that app. It didn't take long to get bored of my AI opponent, but I haven't yet played against a human being.

The flight was pretty lousy. I'd been unable to get an aisle seat, so I wasn't able to get up and move around as much as I wanted. Worse, the plane was hot. I've always found planes to be a little too warm on the ground and a little too cool in the air. The sun was constantly baking my side of the plane, though, so it was nearly hot to the touch. I was sweating and gross, and I wished I had switched to shorts. The food was below average. I chose a bad movie to watch. When we finally landed, immigration took about an hour. I began to despair. It would be 24 hours of travel by the time I reached the Pauleys', where I would stay. Was I really going to endure another awful 24 hours in just six days?

My spirits were lifted once I got out of the airport. (Isn't that always the way?) I changed my dollars to yen, bought a tiny bottle of some form of Pepsi, and went to squint at a subway map.

pepsi nex: king of zero

On my previous trip, I had been utterly defeated by the map of the subway at Narita. It looks a lot like any other subway map, but at each station are two numbers, each 3-4 digits. Were they time? Station numbers? Did I need to specify these to buy a ticket? The ticketing machines, though they spoke English, were also baffling. I was lost and finally just asked the station agent for help getting to Ueno Station.

This time, I felt like an old hand. I had forgotten all about the sign, but its meaning was immediately clear. They were prices for travel to each station, in yen, for each of the two lines that serviced the route. I fed ¥1200 into a ticket machine, bought a ticket, and got on the Keisei line toward Ueno. I probably could've done it with the machine's Japanese interface! I felt like a champ. Later, of course, I'd realize that the Keisei line takes a lot longer than the Skyliner, so maybe it wasn't the best choice… but I still felt good. Also, that long ride gave me time to finally finish reading It Can't Happen Here. Good riddance to that book!

My sense of accomplishment continued as I remembered the way to Marty and Karen's place. When I got in, I called home and confirmed that I was alive. I said that before we did anything else, I needed a shower. Then we chatted about this and that for a few hours and I decided that I didn't need to eat, just sleep. When I woke up, the sun was already up! It was a great victory over jet lag! Then I realized that it was 5:30 a.m., and the sun just gets up quite early in Tokyo. Land of the rising sun, indeed!

I got some work done and called home again. (Every time I travel, FaceTime grows more excellent!) Eventually, Karen and I headed out to check out the things I'd put onto my "check out while in Tokyo" list. First up, the Meiji Shrine!

Meiji Shrine entrance

We went to shops, did some wandering, and did not eat at Joël Robuchon's place in Roppongi. (Drat!) We got soba and retired for the night. The next day, we met up with Shawn Moore for more adventures. We went to Yoyogi Park, got izakaya with Keith Bawden, Daisuke Maki, et al., and Shawn and I ended our night with our Japanese Perl Monger hosts. We had a variety of izakaya food, but nothing compared, for me, to a plate of sauteed cabbage and anchovy. I could've eaten that all night. I also learned, I think, that I don't like uni. Good to know!

The next day, Shawn, Karen, and I headed down to Yokohama. Shawn and I had to get checked into our hotel. We planned to get to Kamakura to see the statue of the Amida Buddha, but got too late of a start. They both shrugged it off, but I felt I was to blame: we had to wait while I un-bricked my iPhone after my first attempt to upgrade its OS. Sorry! (Of course, they got to go later, so I'm not that sorry!)

Before leaving Minami-Senju, though, we got curry. Shawn had been very excited for CoCo Curry on our 2011 trip, and I was excited for it this time. Their curry comes in ten levels of hotness. I'd gotten level five, last time, and this time got six. In theory, you have to provide proof that you've had level five before (and, you know, lived) in order to get level six. I didn't have my proof, though, and I thought I might need Shawn to badger the waitress for me. Nope! I got served without being carded. I had found level five to be fairly bland, and so I expected six to be just a bit spicy. It was hot! I didn't get a photo! I really enjoyed it, and would definitely order it regularly if we had a CoCo Curry place in Pennsylvania.

If I go back to Tokyo, I will eat level seven CoCo Curry. This is my promise to you, future YAPC::Asia organizer. Yes, you may watch to see if I cry.

Our hotel was just fine. The only room I could get was a smoking room (yuck) but that was the only complaint I had, and I knew what I was getting into there. For some reason we turned on the television, and sumo was on. We stared at this for a while, transfixed. It didn't last long, though. The spectacle was interesting, but the sport much less so, at least to me. Karen hit the road, Shawn and I worked on slides in earnest, and then we headed out to look for food. I put Shawn in charge (this was a common theme of my trip) and he found an excellent yakiniku place. We ordered a bunch of stuff with no idea what it was, except the tongue, and were not disappointed. (Shawn warned me at the outset: "I don't know a lot of food words.")

"What did I just eat?"

After some more slide wrangling, we crashed and, the next morning, were off to the conference.

YAPC::Asia is a strange conference for me. On both of my trips there, I've been an invited speaker, and felt very welcome… but feeling welcome isn't the same as feeling like a part of things. The language barrier is very difficult to get past. It's frustrating, because you find yourself in a room full of brilliant, funny, interesting people, but you can't quite participate. It's sort of like being a young child again.

Of course, that's what happens when the room is full of Japanese-speakers listening to another Japanese-speaker. It certainly need not be the case in one-on-one conversation. I chatted with Daisuke Maki, Kenichi Ishigaki, Hiroaki Kobayashi, and some others, but it was far too few and too infrequent. It was much easier to stick to talking to the people I already knew. In retrospect, this was pretty stupid. While it's true that I don't see (say) Paul and Shawn and Karen very often, I can talk to them whenever I want, and I know what topics to ask them about and so on.

This year, YAPC::Asia had eleven hundred people. So, that's something like a dozen that I knew and 1088 that I didn't. Heck, there were even a few westerners I didn't go pester, where there'd be no language issue. I wanted to try to convince more of the amazing talent in the Japanese Perl community to come hack on perl5.git, and for the most part, I did not do this outside of my in-talk exhortation. In that sense, my YAPC::Asia was a failure of my own making, and I regret my timidity.

In every other aspect, the conference was an amazing success as far as I could tell. It was extremely friendly, professional, energetic, and informative. I sat through a number of talks in Japanese, and they were really interesting. People sometimes talk about how there's "CPAN" and "Darkpan" and that's that. You're either working with "the community" or you're not. The reality is that there are multiple groups. Of course "we" know that in "the" community. How much crossover is there between the Dancer community and the Perl 5 Porters? Some. Well, the Japanese Perl community — or, rather, the community in Japan that made YAPC::Asia happen — has some crossover with the community that makes YAPC::NA happen, but there are large disjunct segments, and they're solving problems differently, and it's ridiculous to imagine that we can't learn from each other. Even if it wasn't self-evident, it was evident in the presentations that were given.

look at all those volunteers

After attending the largest YAPC ever, by quite a lot (at 1100 people!), it was also sad to learn that this may be the last YAPC::Asia in Tokyo for some time. The organizers, Daisuke Maki and the enigmatic "941", have been doing it for years, and have declared that they're done with it. It seems unlikely that anyone will step in and run the conference in their stead, at least in Tokyo. There may be, they suggested, a shift toward regional Perl workshops: one in Sapporo, one in Osaka, and so on. Perl workshops are great, but will I make it to the Osaka Perl Workshop? Well, we'll see.

If I do, though, I'm going to do my best Paul Fenwick impression and force everyone there to talk to me all the time.

When the conference was over, Karen, Marty, Paul, Shawn and I headed to dinner (with Marcel, Reini, and Mirjam) and then to… karaoke! At first, Marty was reticent and not sure he'd stick around. Paul's opening number changed his mind, though, and we sang ridiculous songs for ninety minutes. I drank a Zima. A Zima! I thought this was pretty ridiculous, but Paul one-upped me, or perhaps million-upped me, by ordering a cocktail made with pig placenta. I declined to sample it.

The next day, after a final FaceTime chat with Gloria and a final high five for Paul, I headed out to the airport. In 2011, I cut it incredibly close and nearly missed my plane, and I wasn't going to do that this time. Miyagawa pointed me toward the Musashikosugi JR line and warned me that the ticket terminals there were confusing. He was right, too. I wasted ten minutes trying to figure them out before finally asking the station agent for help. If I'd just started there, I would've made an earlier train and not ended up sitting on a bench for forty minutes. So I ended my last train ride in Tokyo much as I began my first one: baffled by the system, reduced to pleading for help. I didn't mind, really. I'd just finished an excellent trip and was feeling great. (I also felt pretty good about blaming the computer and not myself, but that's another matter.)

Hello Kitty plane

Narita was fine. Great, even! The airline staff treated me like a king. I got moved to an aisle seat with nobody beside me! I killed time in the United lounge, had a few free beers, and transferred some movies to my iPad. In short order, we were aboard and headed home. The flight was only eleven hours, customs was quick, and soon (finally!) I was reunited with my family and off to Cracker Barrel for a "welcome back to America" dinner.

It was a great YAPC, and the most important thing I learned was the same as always: I'm there to talk to the people, not listen to the talks. I'll do better next time!

lexical subroutines in perl 5

by rjbs, created 2013-09-25 19:50
last modified 2013-09-26 20:22

One of the big new experimental features in Perl 5.18.0 is lexical subroutines. In other words, you can write this:

my sub quickly { ... }
my @sorted = sort quickly @list;

my sub greppy (&@) { ... }
my @grepped = greppy { ... } @input;

These two examples highlight cases where lexical references to anonymous subroutines would not have worked. The first argument to sort must be a block or a subroutine name, which leads to awful code like this:

sort { $subref->($a, $b) } @list

With our greppy, above, we get to benefit from the parser-affecting behaviors of subroutine prototypes. Although you can write sub (&@) { ... }, it has no effect unless you install that into a named subroutine, and it needs to be done early enough.
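To see that parser effect in action, here's a minimal sketch of my own (not from the post), assuming perl 5.18 with the experimental feature enabled:

use v5.18;
use feature 'lexical_subs';
no warnings 'experimental::lexical_subs';

# The (&@) prototype is visible at compile time, so callers can pass a
# bare block with no "sub" keyword and no trailing comma.
my sub first_where (&@) {
  my ($test, @list) = @_;
  for (@list) { return $_ if $test->() }
  return undef;
}

say first_where { $_ > 10 } 3, 9, 12, 20;   # prints 12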

On the other hand, lexical subroutines aren't just drop-in replacements for code refs. You can't pass them around and have them retain their named-sub behavior, because you'll still just have a reference to them. They won't be "really named." So if you can't use them as parameters, what are their benefits over named subs?

First of all, privacy. Sometimes, I see code like this:

package Abulafia;

our $Counter = 0;

...

Why isn't $Counter lexical? Is it part of the interface? Is it useful to have it shared? Would my code be safer if that was lexical, and thus hidden from casual accidents or stupid ideas? In general, I make all those sorts of variables lexical, just to make myself think harder before messing around with their values. If I need to be able to change them, after all, it's only a one word diff!
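Here's the sort of change I mean, as a hedged sketch (the accessor name is my invention, not anything from a real Abulafia module):

package Abulafia;

# Lexical: invisible outside this file, so nothing can poke at it by
# accident. Exposing a single accessor makes the interface deliberate.
my $Counter = 0;

sub next_count { ++$Counter }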

Well, named subroutines are, like our variables, global in scope. If you think you should be using lexical variables for things that aren't API, maybe you should be using lexical subroutines, too. Then again, you may have to be careful in thinking about what "aren't API" means. Consider this:

package Service::Client;
sub _ua { LWP::UserAgent->new(...) }

In testing, you've been making a subclass of Service::Client that overrides _ua to use a test UA. If you make that subroutine lexical, you can't override it in the subclass. In fact, if it's lexical, it won't participate in method dispatch at all, which means you're probably breaking your main class, too! After all, method dispatch starts in the package on which a method was invoked, then works its way up the packages in @ISA. Well, package dispatch means package subroutines, and that excludes lexical subroutines.

So, it may be worth doing, but it means more thinking (about whether or not to lexicalize each non-public sub), which is something I try to avoid when coding.

So when is it useful? I see two scenarios.

The first is when you want to build a closure that's only used in one subroutine. You could make a big stretch, here, and talk about creating a DSL within your subroutine. I wouldn't, though.

# Please forgive this extremely contrived example. -- rjbs, 2013-09-25
sub dothings {
  my ($x, $y, @rest) = @_;

  my sub with_rest (&) { map $_[0]->(), @rest; }

  my @to_x = with_rest { $_ ** $x };
  my @to_y = with_rest { $_ ** $y };

  ...
}

I have no doubt that I will end up using this pattern someday. Why do I know this? Because I have written Python, and this is how named functions work there, and I use them!

There's another form, though, which I find even more interesting.

In my tests, I often make a bunch of little packages or classes in one file.

package Tester {
  sub do_testing {
    ...
  }
}

package Targeter {
  sub get_targets {
    ...
  }
}

Tester->do_testing($_) for Targeter->get_targets(%param);

Sometimes, I want to have some helper that they can all use, which I might write like this:

sub logger { diag shift; diag explain(shift) }

package Tester {
  sub do_testing {
    logger(testing => \@_);
    ...
  }
}

package Targeter {
  sub get_targets {
    logger(targeting => \@_);
    ...
  }
}

Tester->do_testing($_) for Targeter->get_targets;

Well… I might write it like that, but it won't work. logger is defined in one package (presumably main::) and then called from two different packages. Subroutine lookup is per-package, so you won't find logger. What you need is a name lookup that isn't package based, but, well, what's the word? Lexical!

So, you could make that a lexical subroutine by sticking my in front of the subroutine declaration (and adding use feature 'lexical_subs' and, for now, no warnings 'experimental::lexical_subs'). There are problems, though, like the fact that caller doesn't give great answers, yet. And we can't really monkeypatch that subroutine, if we wanted, which we might. (Strangely abusing stuff is more acceptable in tests than in the production code, in my book.) What we might want instead is a lexical name for a package subroutine. We have that already! We just write this:

our sub logger { ... }
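Here's the logger example again as a complete, runnable sketch (my own arrangement of the post's pieces, assuming perl 5.18):

use v5.18;
use feature 'lexical_subs';
no warnings 'experimental::lexical_subs';
use Test::More;

# "our sub" installs logger into the current package *and* gives it a
# lexical name, so packages later in the file still resolve it.
our sub logger { diag shift; diag explain(shift) }

package Tester {
  sub do_testing {
    logger(testing => \@_);   # found lexically, not through Tester::
    Test::More::pass('tested');
  }
}

Tester->do_testing(1, 2);
done_testing;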

I'm not using lexical subs much, yet, but I'm pretty sure I will use them a good bit more in the future!

The Great Infocom Replay: Starcross

by rjbs, created 2013-09-22 19:51

Having finished the Zork trilogy, it was time for me to continue on into the great post-Zork canon. I was excited for this, because it means lots of games that I haven't played yet. First up: Starcross. I was especially excited for Starcross! It's the first of Infocom's sci-fi games, and I only remembered hearing good things. I'd meant to get started on the flight to YAPC::Asia, but didn't manage until I'd begun coming home. On the train to Narita, things got off to a weird start.

First, I realized I needed to consult the game's manual to get started. I'm not sure if this was done for fun or as copy protection, but fortunately I had a scan of the file I needed. After getting into the meat of the game, it was time to get mapping. Mapping Starcross took a while to get right, but it was fun. The game takes place on a huge space station, a rotating cylinder, in which some of the hallways are endless rings. I liked the idea, but I think that up/down, port/starboard, and fore/aft were used in a pretty confusing way. I'm not sure the map really made sense, but was a nice change of pace without being totally incomprehensible.

The game's puzzles had a lot going for them. It was clear when there was a puzzle to solve, and it was often clear what had to be done, but not quite how. Some objects had multiple uses, and some puzzles had multiple solutions. Unfortunately, it has a ton of the classic text adventure problems, and they drained the fun from the game at nearly every turn.

The game can silently enter an unwinnable state, which you don't work out until you can't solve the next puzzle. (It occurs to me that an interpreter with its own UNDO would be a big help here, since I don't save enough.)

There are tasks that need to be repeated, despite appearances. Something like this happens:

> SEARCH CONTAINER
You root around but don't find anything.

> SEARCH CONTAINER
You still don't find anything.

> SEARCH CONTAINER
Hey, look, a vital object for solving the game!
[ Your score has gone up 25 points. ]

…and my head explodes.

There are guess-the-verb puzzles, which far too often have as the "right" verb a really strange option. For example, there's a long-dead spaceman, now just a skeleton in a space suit.

> LOOK IN SUIT
It's a space suit with a dead alien in it.

> SEARCH SKELETON
You don't see anything special.

> EXAMINE SKELETON
It sure is dead.

> TOUCH SKELETON
Something falls out of the sleeve of the suit!

Argh!

There's a "thief" character that picks up objects and moves them around. It's used to good effect (as was the thief in Zork Ⅰ) but it wastes time. Wasting time wouldn't be a problem, if there wasn't a part of a time limit built into the game. The time limit can be worked around, but it means you need to play the game in the right order, which might mean going back to an early save once you work that out. (Why is it that I love figuring out the best play order in Suspended, but not anything else?) Even that wouldn't be so bad, in part because I happily I had started by solving a number of puzzles that can be solved in any order, but there was a problem. Most of the game's puzzles center around collecting keys, so by the end of the game you're carrying a bunch of keys, not to mention a few objects key to getting the remaining keys… and there's an inventory limit. It's not even a good inventory limit, where the game just says "you can't carry anything more." Instead, it's the kind where, when you're carrying too much, you start dropping random things.

Argh!

It did lead to one amusing thing, at least, when I tried to pick up a key and accidentally dropped the space suit I was wearing.

Still, the game is good. I particularly like the representational puzzles, like the solar system and repair room. Its prose is good, though neither as economical as the earlier games' nor as rich as the later ones'. As in earlier games, I'm frustrated by the number of things mentioned but not examinable. Getting "I don't know that word [which I just used]" is worse than "you won't need to refer to that." I'm hoping that the larger dictionaries of v5 games will allow for better messages like that. I've got a good dozen games until I get to those, though.

Next up will be Suspended. I'm not sure how that will go, since I've played that game many times every year for the past decade or so. After that, The Witness, about which I know nearly nothing!

the Zork Standard Code for Information Interchange

by rjbs, created 2013-09-15 11:33
last modified 2013-09-16 12:15

I always feel a little amazed when I realize how many of the things that really interest me, today, are things that I was introduced to by my father. Often, they're not even things that I think he's passionate about. They're just things we did together, and that was enough.

One of the things I really enjoyed doing with him was playing text adventures. It's strange, because I think we only did three (the Zork trilogy) and I was not very good at them. I got in trouble for sneaking out the Invisi-Clues hint book at one point and looking up answers for problems we hadn't seen yet. What was I thinking?

Still, it's stuck with me, and I'm glad, because I still enjoy replaying those games, trying to write my own, and reading about the craft. Most of my (lousy, unfinished) attempts to make good text adventures have been about making the game using existing tools. (Generally, Inform 6. Inform 7 looks amazing, but also like it's not for me.) Sometimes, though, I've felt like dabbling in the technical side of things, and that usually means playing around with the Z-Machine.

Most recently, I was thinking about writing an assembler to build Z-Machine code, and my thinking was that I'd write it in Perl 6. It didn't go too badly, at first. I wrote a Perl 6 program that built a very simple Z-Machine executable, I learned more Perl 6, and I even got my first commit into the Rakudo project. The very simple program was basically "Hello, World!" but it was just a bit more complicated than it might sound, because the Z-Machine has its own text encoding format called ZSCII, the Zork Standard Code for Information Interchange, and dealing with ZSCII took up about a third of my program. Almost all the rest was boilerplate to output required fields of the output binary, so really the ZSCII code was most of the significant code in this program. I wanted to write about ZSCII, how it works, and my experience writing (in Perl 5) ZMachine::ZSCII.

First, a quick refresher on some terminology, at least as I'll be using it:

  • a character set maps abstract characters to numbers (called code points) and back
  • an encoding maps from those numbers to octets and back, making it possible to store them in memory

We often hear people talking about how Latin-1 is both of these things, but in Unicode they are distinct. That is: there are fewer than 256 characters in Latin-1, so we can always store a character's code point in a single octet. In Unicode, there are vastly more than 256 characters, so we must use a non-identity encoding scheme. UTF-8 is very common, and uses variable-length sequences of bytes. UTF-16 is also common, and uses different variable-length byte sequences. There are plenty of other encodings for Unicode characters, too.

The Z-Machine's text representation has distinct character set and encoding layers, and they are weird.

The Z-Machine Character Set

Let's start with the character set. The Z-Machine character set is not one character set, but a per-program set. The basic mapping looks something like this:

+-----------+---------------------------------------------------------+
| 000 - 01F | unassigned, save for (␀, ␡, ␉, ␤, and "sentence space") |
| 020 - 07E | same as ASCII                                           |
| 07F - 080 | unassigned                                              |
| 081 - 09A | control characters                                      |
| 09B - 0FB | extra characters                                        |
| 0FC - 0FE | control characters                                      |
| 0FF - 3FF | unassigned                                              |
+-----------+---------------------------------------------------------+

There are a few things of note: first, the overlap with ASCII is great if you're American:

20-2F: ␠ ! " # $ % & ' ( ) * + , - . /
30-39: 0 1 2 3 4 5 6 7 8 9
3A-40: : ; < = > ? @
41-5A: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
5B-60: [ \ ] ^ _ `
61-7A: a b c d e f g h i j k l m n o p q r s t u v w x y z
7B-7E: { | } ~

The next thing to note is the "extra characters," which is where you'll be headed if you're not just speaking English. Those 96 code points can be defined by the programmer. Most of the time, they basically extend the character repertoire to cover Latin-1. When that's not useful, though, the Z-Machine executable may provide its own mapping of these extra characters by providing an array of words called the Unicode translation table. Each position in the array maps to one extra character, and each value maps to a Unicode codepoint in the basic multilingual plane. In other words, the Z-Machine does not support Emoji.
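Concretely, decoding an extra character might look like this sketch of mine (with a toy three-entry table rather than a real game's 96):

# Extra characters start at ZSCII 0x9B; each entry in the translation
# table is a Unicode code point in the basic multilingual plane.
my @translation = (0x00E4, 0x00F6, 0x00FC);   # ä, ö, ü
my %unicode_for;
$unicode_for{ 0x9B + $_ } = chr $translation[$_] for 0 .. $#translation;

# $unicode_for{0x9B} is now "ä", and $unicode_for{0x9D} is "ü".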

So: ZSCII is not actually a character set, but a vast family of many possible user-defined character sets.

Finally, you may have noticed that the basic mapping table listed code points from 0x0FF to 0x3FF as unassigned. Why's that? Well, the encoding mechanism, which we'll get to soon, lets you decode to 10-bit codepoints. My understanding, though, is that the only possible uses for this would be extremely esoteric. They can't form useful sentinel values because, as best as I can tell, there is no way to read a sequence of decoded codepoints from memory. Instead, they're always printed, and presumably the best output you'll get from one of these codepoints will be �.

Here's a string of text: Queensrÿche

Assuming the default Unicode translation table, here are the codepoints:

Unicode: 51 75 65 65 6E 73 72 FF 63 68 65

ZSCII  : 51 75 65 65 6E 73 72 A6 63 68 65

This all seems pretty simple so far, I think. The per-program table of extra characters is a bit weird, and the set of control characters (which I didn't discuss) is sometimes a bit weird. Mostly, though, it's all simple and reasonable. That's good, because things will get weirder as we try putting this into octets.

Z-Machine Character Encoding

The first thing you need to know is that we encode in two layers to get to octets. We're starting with ZSCII text. Any given piece of text is a sequence of ZSCII code points, each between 0 and 1023 (really 255) inclusive. Before we can get to octets, we first build pentets. I just made that word up. I hope you like it. It's a five-bit value, meaning it ranges from 0 to 31, inclusive.

What we actually talk about in Z-Machine jargon isn't pentets, but Z-characters. Keep that in mind: a character in ZSCII is distinct from a Z-character!

Obviously, we can't fit a ZSCII character, which ranges over 255 points, into a Z-character. We can't even fit the range of the ZSCII/ASCII intersection into five bits. What's going on?

We start by looking up Z-characters in this table:

  0                               1
  0 1 2 3 4 5 6 7 8 9 A B C D E F 0 1 2 3 4 5 6 7 8 9 A B C D E F
  ␠       ␏ ␏ a b c d e f g h i j k l m n o p q r s t u v w x y z

In all cases, the value at the bottom is a ZSCII character, so you can represent a space (␠) with ZSCII character 0x020, and encode that to the Z-character 0x00. So, where's everything else? It's got to be in that range from 0x00 to 0x1F, somehow! The answer lies with those funny little "shift in" glyphs under 0x04 and 0x05. The table above was incomplete. It is only the first of the three "alphabets" of available Z-characters. The full table would look like this:

      0                               1
      0 1 2 3 4 5 6 7 8 9 A B C D E F 0 1 2 3 4 5 6 7 8 9 A B C D E F
  A0  ␠       ␏ ␏ a b c d e f g h i j k l m n o p q r s t u v w x y z
  A1  ␠       ␏ ␏ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
  A2  ␠       ␏ ␏ … ␤ 0 1 2 3 4 5 6 7 8 9 . , ! ? _ # ' " / \ - : ( )

Strings always begin in alphabet 0. Z-characters 0x04 and 0x05 mark the next character as being in alphabet 1 or alphabet 2, respectively. After that character, the shift is over, so there's no shift character to get to alphabet 0. You won't need it.

So, this gets us all the ZSCII/ASCII intersection characters… almost. The percent sign, for example, is missing. Beyond that, there's no sign of the "extra characters." Now what?

We get to the next layer of mapping via A2-06, represented above as an ellipsis. When we encounter A2-06, we read two more Z-characters, join the two pentets, and interpret the resulting dectet as a big-endian 10-bit integer: the first pentet supplies the top five bits, the second the bottom five. That's the ZSCII character being represented; there's a short sketch of this after the list below. So, in a given string of Z-characters, any given ZSCII character might take up:

  • one Z-character (a lowercase ASCII letter)
  • two Z-characters (an uppercase ASCII letter or one of the symbols in A2)
  • four Z-characters (anything else as 0x05 0x06 X Y, where X-Y points to ZSCII)
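Here's that escape as a tiny Perl sketch of my own:

sub zscii_from_pentets {
  my ($hi, $lo) = @_;          # two Z-characters, each 0..31
  return ($hi << 5) | $lo;     # top five bits first: (0x05 << 5) | 0x02 == 0xA2, "»"
}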

So, now that we know how to convert a ZSCII character to Z-characters without fail, how do we store that in octets? Easy. Let's encode this string:

»Gruß Gott!«

That maps to these twenty-four Z-characters:

»   05 06 05 02
G   04 0C
r   17
u   1A
ß   05 06 05 01
␠   00
G   04 0C
o   14
t   19
t   19
!   05 14
«   05 06 05 03

We start off with a four Z-character sequence, then a two Z-character sequence, then a few single Z-characters. The whole string of Z-characters should be fairly straightforward. We could just encode each Z-character as an octet, but that would be pretty wasteful. We'd have three unused bits per Z-character, and in 1979 every byte of memory was (in theory) precious. Instead, we'll pack three Z-characters into every word, saving the word's high bit for later. That means we can fit "!«" into two words like so:

!   05 14         0b00101 0b10100
«   05 06 05 03   0b00101 0b00110 0b00101 0b00011

…so…

0001 0110   1000 0101
1001 1000   1010 0011

In each word, the low fifteen bits hold three complete five-bit Z-characters, and the high bit is a flag. That flag bit is always zero, except for the last word in a packed string. If we're given a pointer to a packed string in memory (this, for example, is the argument to the print_addr opcode in the Z-Machine instruction set) we know when to stop reading from memory because we encounter a word with the high bit set.
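In code, packing comes out something like this sketch (mine; ZMachine::ZSCII's real pack_zchars differs in detail):

# Pack 5-bit Z-characters three to a 16-bit big-endian word, setting
# the high bit on the final word to mark the end of the string.
sub pack_zchars {
  my (@zchars) = @_;
  push @zchars, 0x05 while @zchars % 3;   # pad the last word; 5 is the customary filler
  my $packed = '';
  while (my @triple = splice @zchars, 0, 3) {
    my $word = ($triple[0] << 10) | ($triple[1] << 5) | $triple[2];
    $word |= 0x8000 unless @zchars;       # the stop bit on the final word
    $packed .= pack 'n', $word;           # 'n' is a 16-bit big-endian word
  }
  return $packed;
}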

Okay! Now we can take a string of text, represent it as ZSCII characters, convert those to Z-characters, and then pack the whole thing into pairs of octets. Are we done?

Not quite. There are just two things I think are still worth mentioning.

The first is that the three alphabet tables that I named above are not constant. Just like the Unicode translation table, they can be overridden on a per-program basis. Some things are constant, like shift bits and the use of A2-06 as the leader for a four Z-character sequence, but most of the alphabet is up for grabs. The alphabet tables are stored as 78 bytes in memory, with each byte referring to a ZSCII code point. (Once again we see code points between 0x100 and 0x3FF getting snubbed!)

The other thing is abbreviations.

Abbreviations make use of the Z-characters I ignored above: 0x01 through 0x03. When one of these Z-characters is seen, the next Z-character is consumed too, and together they pick an abbreviation:

if ($just_saw >= 1 && $just_saw <= 3) {
  my $next   = read_another();
  my $offset = 32 * ($just_saw - 1) + $next;
}

$offset is the offset into the "abbreviations table." Values in that table are pointers to memory locations of strings. When the Z-Machine is printing a string of Z-characters and encounters an abbreviation, it looks up the memory address and prints the string there before continuing on with the original string. Abbreviation expansion does not recurse. This can save you a lot of storage if you keep referring to the "localized chronosynclastic infundibulum" throughout your program.

ZMachine::ZSCII

The encode method of ZMachine::ZSCII should make good sense now:

sub encode {
  my ($self, $string) = @_;

  $string =~ s/\n/\x0D/g; # so we can just use \n instead of \r

  my $zscii  = $self->unicode_to_zscii($string);
  my $zchars = $self->zscii_to_zchars($zscii);

  return $self->pack_zchars($zchars);
}

First we fix up newlines. Then we map the Unicode string's characters to a string of ZSCII characters. Then we map the ZSCII characters into a sequence of Z-characters. Then we pack the Z-characters into words.

At every point, we're dealing with Perl strings, which are just sequences of code points. That is, they're like arrays of non-negative integers. It doesn't matter that $zscii is neither a string of Unicode text nor a string of octets to be printed or stored. After all, if someone has figured out that esoteric use of Z+03FF, then $zscii will contain what Perl calls "wide characters." Printing it will print the internal ("utf8") representation, which won't do anybody a lick of good. Nonetheless, using Perl strings keeps the code simple. Everything uses one abstraction (strings) instead of two (strings and arrays).

Originally, I wrote my ZSCII code in Perl 6, but the Perl 6 implementation was very crude, barely supporting the basics of ASCII-only ZSCII. I'm looking forward to (someday) bringing all the features in my Perl 5 code to the Perl 6 implementation, where I'll get to use distinct types (Str and Buf) for the text and non-text strings, sharing some, but not all, of the abstractions as appropriate.

Until then, I'm not sure what, if anything, I'll use this library for. Writing more of that Z-Machine assembler is tempting, or I might just add abbreviation support. First, though, I think it's time for me to make some more progress on my Great Infocom Replay…

random tables with Roland

by rjbs, created 2013-09-10 08:42
last modified 2013-09-10 08:43

This post is tagged programming and dnd. I don't get to do that often, and I am pleased.

For quite a while, I've been using random tables to avoid responsibility for the things that happen in my D&D games. Instead of deciding on the events that occur at every turn, I create tables that describe the general feeling of a region and then let the dice decide what aspects are visible at any given moment. It has been extremely freeing. There's definitely a different kind of skill needed to get things right and to deal with what the random number gods decide, but I really enjoy it. Among other things, it means that I can do more planning well in advance and have more options at any moment. I don't need to plan a specific adventure or module each week, but instead prepare general ideas of regions on different scales, depending on the amount of time likely to be spent in each place.

Initially, I put these charts in Numbers, which worked pretty well.

Random Encounters spreadsheet

I was happy with some stupid little gimmicks. I color-coded tables to remind me which dice they'd need. The color codes matched up to colored boxes that showed me the distribution of probability on those dice, so I could build the tables with a bit more confidence. It was easy, but I found myself wanting to be able to drill further and further down. What would happen is this: I'd start with an encounter table with 19 entries, using 1d12+1d8 as the number generator. This would do pretty well for a while, but after you've gotten "goblin" a few times, you need more variety. So, next up "goblin" would stop being a result and would start being a redirection. "Go roll on the goblin encounter table."

As these tables multiplied, they became impossible to deal with in Numbers. Beyond that, I wanted more detail to be readily available. The encounter entry might have originally been "2d4 goblins," but now I wanted it to pick between twelve possible kinds of goblin encounters, each with their own number appearing, hit dice, treasure types, reaction modifiers, and so on. I'd be flipping through pages like a lunatic. It would have been possible to inch a bit closer to planning the adventure by pre-rolling all the tables to set up the encounter beforehand and fleshing it out with time to spare, but I wasn't interested in that. Even if I had been, it would have been a lot of boring rolling of dice. That's not what I want out of a D&D game. I want exciting rolling of dice!

I started a program for random encounters in the simplest way I could. A table might look something like this:

type: list
pick: 1
items:
  - Cat
  - Dog
  - Wolf

When that table is consulted, one of its entries is picked at random, all with equal probability. If I wanted to stack the odds, I could put an entry in there multiple times. If I wanted to add new options, I'd just add them to the list. If I wanted to make the table more dice-like, I'd write this:

dice: 2d4
results:
  2: Robot
  3: Hovering Squid
  4: Goblin
  5: Weredog
  6: Quarter-giant
  7: Rival adventurers
  8: Census-taking drone

As you'd expect, it rolls 2d4 to pick from the list.
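Rolling a spec like 2d4 only takes a few lines; here's a sketch of my own (not Roland's actual code):

# Roll dice given a spec like "2d4": N independent M-sided dice, summed.
sub roll {
  my ($spec) = @_;
  my ($count, $sides) = $spec =~ /\A(\d+)d(\d+)\z/
    or die "bad dice spec: $spec";
  my $total = 0;
  $total += 1 + int rand $sides for 1 .. $count;
  return $total;
}

my $result = roll('2d4');   # 2 .. 8, weighted toward the middle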

This was fine for replacing the first, very simple set of tables, but I wanted more, and it was easy to add by making this all nest. For example, this is a table from my test/example files:

dice: 1d4
times: 2
results:
  1: Robot
  2: Hovering Squid
  3:
    dice: 1d4
    times: 1d4
    results:
      1: Childhood friend
      2-3: Kitten
      4: Goblin
  4: Ghost

This rolls a d4 to get a result, then rolls it again for another result, and gives both. If either of the results is a 3, then it rolls 1-4 more times for additional options. The output looks like this:

~/code/Roland$ roland eg/dungeon/encounters-L3
Robot
Ghost
~/code/Roland$ roland eg/dungeon/encounters-L3
  Goblin
  Kitten
  Kitten
  Kitten
~/code/Roland$ roland eg/dungeon/encounters-L3
Hovering Squid
  Childhood friend
  Kitten
  Goblin
  Kitten

Why are some of those things indented? Because the whole presentation of results stinks, because it's just good enough to get the point across. Oh well.

In the end, in the above examples, the final result is always a string. This isn't really all that useful. There are a bunch of other kinds of results that would be useful. For example, when rolling for an encounter on the first level of a dungeon, it's nice to have a result that says "actually, go roll on the second level, because something decided to come upstairs and look around." It's also great to be able to say, "the encounter is goblins; go use the goblin encounter generator."

Here's a much more full-featured table:

dice: 1d4 + 1d4
results:
  2:  Instant death
  3:  { file: eg/dungeon/encounters-L2 }
  4:  { file: eg/monster/zombie }
  5:
    - { file: [ eg/monster/man, { num: 1 } ] }
    - { file: eg/plan }
    -
      type: list
      pick: 1
      items:
        - canal
        - creek
        - river
        - stream
    - Panama
  6:
    dice: 1d6
    results:
      1-2: Tinker
      3-4: Tailor
      5: Soldier
      6:
        type: dict
        entries:
          - Actually: Spy
          - Cover: { file: eg/job }
  7: { times: 2 }
  8: ~

(No, this is not from an actual campaign. "Instant death" is a bit much, even for me.)

Here, we see a few of Roland's other features. The mapping with file in it tells us to go roll the table found in another file, sometimes (as in the case of the first result under result 5) with extra parameters. We can mix table types. The top-level table is a die-rolling table, but result 5 is not. It's a list table, meaning we get each thing it includes. One of those things is a list table with a pick option, meaning we get that many things picked randomly from the list. Result 7 says "roll again on this table two more times and keep both results." Result 8 says, "nothing happens after all."

Result 6 under result 6 is one I've used pretty rarely. It returns a hash of data. In this case, the encounter is with a spy, but he has a cover job, found by consulting the job table.

Sometimes, in tables like this, I know that I need to force a given result. If I haven't factored all the tables into their own results, I can pass -m to Roland to tell it to let me manually pick the die results, but to let each result have a default-random value. If I want to force result six on the above table, but want its details to be random, I can enter 6 manually and then hit enter until it's done:

~/code/Roland$ roland -m eg/dungeon/encounters-L1
rolling 1d4 + 1d4 for eg/dungeon/encounters-L1 [4]: 6
rolling 1d6 for eg/dungeon/encounters-L1, result 6 [3]: 
Tailor

Finally, there are the monster-type results. We had this line:

- { file: [ eg/monster/man, { num: 1 } ] }

What's in that file?

type: monster
name: Man
ac: 9
hd: 1
mv: 120'
attacks: 1
damage: 1d4
num: 2d4
save: N
morale: 7
loot: ~
alignment: Lawful
xp-bonuses:
description: Just this guy, you know?
per-unit:
- label: Is Zombie?
  dice: 1d100
  results:
    1: { replace: { file: [ eg/monster/zombie, { num: 1 } ] } }
    2: Infected
    3-100: No

In other words, it's basically a YAML-ified version of a Basic D&D monster block. There are a few additional fields that can be put on here, and we see some of them. For example, per-unit can decorate each unit. (We're expecting 2d4 men, because of the num field, but if you look up at the previous encounter table, you'll see that we can override this to do things like force an encounter with a single creature.) In this case, we'll get a bunch of men, some of whom may be infected or zombified.

Not every value is treated the same way. The number encountered is rolled and used to generate units, and the hd value is used to produce hit points for each one. Even though it looks like a dice specification, damage is left verbatim, since it will get rolled during combat. It's all a bit too special-casey for my tastes, but it works, and that's what matters.

~/code/Roland$ roland eg/monster/man

Man (wandering)
  No. Appearing: 5
  Hit Dice: 1
  Stats: [ AC 9, Mv 120', Dmg 1d4 ]
  Total XP: 50 (5 x 10 xp)
- Hit points: 6
  Is Zombie?: No
- Hit points: 1
  Is Zombie?: No
- Hit points: 2
  Is Zombie?: No
- Hit points: 5
  Is Zombie?: No
- Hit points: 5
  Is Zombie?: Infected

(Notice the "wandering" up top? You can specify different bits of stats for encountered-in-lair, as described in the old monster blocks.)

In the encounter we just rolled, there were no zombies. If there had been, this line would've come into play:

1: { replace: { file: [ eg/monster/zombie, { num: 1 } ] } }

This replaces the unit with the results of that roll. Let's force the issue:

~/code/Roland$ roland -m eg/monster/man
rolling 2d4 for number of Man [5]: 4
rolling 1d8 for Man #1 hp [7]: 
rolling 1d100 for unit-extra::Is Zombie? [38]: 1
rolling 2d8 for Zombie #1 hp [4]: 
rolling 1d8 for Man #2 hp [8]: 
rolling 1d100 for unit-extra::Is Zombie? [10]: 
rolling 1d8 for Man #3 hp [7]: 
rolling 1d100 for unit-extra::Is Zombie? [90]: 
rolling 1d8 for Man #4 hp [2]: 
rolling 1d100 for unit-extra::Is Zombie? [13]: 

Man (wandering)
  No. Appearing: 3
  Hit Dice: 1
  Stats: [ AC 9, Mv 120', Dmg 1d4 ]
  Total XP: 30 (3 x 10 xp)
- Hit points: 8
  Is Zombie?: No
- Hit points: 7
  Is Zombie?: No
- Hit points: 2
  Is Zombie?: No


Zombie (wandering)
  No. Appearing: 1
  Hit Dice: 2
  Stats: [ AC 7, Mv 90', Dmg 1d6 ]
  Total XP: 20 (1 x 20 xp)
- Hit points: 4

Note that I only supplied overrides for two of the rolls.

You can specify encounter extras, which act like per-unit extras, but for the whole group:

extras:
  Hug Price:
    dice: 1d10
    results:
      1-9: Free
      10:  10 cp

Finally, sometimes one kind of encounter implies another:

extras:
  Monolith?:
    dice: 1d10
    results:
      1-9: ~
      10:  { append: Monolith }

Here, one time out of ten, roboelfs are encountered with a Monolith. That could've been a redirect to describe a monolith, but for now I've just used a string. Later, I can write up a monolith table using whatever form I want. (Most likely, this kind of thing would become a dict with different properties all having embedded subtables.)

Right now, I'm really happy with Roland. Even though it's sort of a mess on many levels, it's good enough to let me get the job done. I think the problem I'm trying to solve is inherently wobbly, and trying to have an extremely robust model for it is going to be a big pain. Even though it goes against my impulses, I'm trying to leave things sort of a mess so that I can keep up with my real goal: making cool random tables.

Roland is on GitHub.

Tabs Outliner for Chrome revisited

by rjbs, created 2013-09-08 12:29
last modified 2013-09-09 17:40
tagged with: @markup:md journal

Earlier this year, I lamented the state of "workspaces" in Chrome. I said that I'd settled on using Tabs Outliner, but that I basically didn't like it. The author of the plugin asked me to elaborate, and I said I would. It has been sitting in my todo list for months and I have felt bad about that. Today, Gregory Meyers commented on that blog post, and it's gotten me motivated enough to want to elaborate.

I agree with everything Gregory said, although not everything he says is very important to me. For example, the non-reuse of windows doesn't bother me all that much. On the other hand, this nails it:

Panorama is intuitive. I didn't have to read a manual to understand how to use it. TO comes with an extensive list of instructions... because it is not intuitive. Now, supplying good instructions is better than leaving me totally lost. But it's better to not need instructions at all. I have to work much harder to use TO.

I wanted to use Tabs Outliner as a way to file tabs into folders or groups and then bring up those groups wholesale. For example, any time I was looking at some blog post about D&D that I couldn't read now, I'd put it in a D&D tab group. It's not just about topical read-it-later, though. If I was doing research on implementations of some standard, I might build a tab group with many pages about them, along with a link to edit my own notes, and so on. The difference is that for "read it later," traditional bookmarks are enough. I'd likely only bring them back up one at a time. I could use Instapaper for this, too. For a group of research tabs (or other similar things), though, I want to bring them all up at once and have changes to the group saved together.

This just doesn't seem like something Tabs Outliner is good at.

Let's look at how it works. This is a capture of the Tabs Outliner window:

tabs outliner plugin

It's an outliner, just like its name implies. Each of the second-level elements is a window, and the third level elements are tabs.

At the top, you can see the topical tab groups I had created to act like workgroups that I could restore and save. I can double-click on, say, Pobox and have that window re-appear with its six tabs. If I open or close tabs in the window, then close the whole window, the outliner will be up to date. If this was all that Tab Outliner did, it might be okay. Unfortunately, there are problems.

First, and least of all, when I open a tab group that had been closed, the window is created at a totally unworkable size. I think it's based on the amount of my screen not taken up by the Tab Outliner window, but whatever the case, it's way, way too small. The first thing I do upon restoring a group is to fix the window size. There's an option to remember the window's original size, but it doesn't seem to work. Or, at least, it only seems to work on tab groups you've created after setting the preference, which means that to fix your old tab groups, you have to create a new one and move all the tabs over by hand, or something like that. It's a pain.

Also in the screenshot, you'll see a bunch of items like "Window (crashed Aug 17)". What are those? They're all the windows I had open the last time I quit. Any time you quit Chrome, all your open windows, as near as I can tell, stay in Tab Outliner, as "crashes." Meanwhile, Chrome re-opens your previous session's windows, which will become "crashes" again next time you quit. If you have three open windows, then every time you quit and restart, you have three more bogus entries in the Tab Outliner. How do you clean these up? Your instinct may be to click the trash can icon on the tab group, but don't! If you do that, it will delete the container, but not the contents, and the tabs will now all have to be deleted individually from the outliner. Instead first collapse the group and then delete it with the trash icon.

Every once in a while, I do a big cleanup of these items.

Here's what I really want from a minimal plugin: I want to be able to give a window a name. All of its tabs get saved into a list. Any time I update them by adding a tab, closing a tab, or re-ordering tabs, the list is updated immediately. If I close the window, the name is still somewhere I can click to restore the whole window. If I have a named window open when I quit Chrome, then when I restart Chrome, it's still open and still named. Other open unnamed windows are still open, and still unnamed.

This would get me almost everything I want from tab groups, even if they don't get the really nice interface of Panorama / Tab Exposé.

managing GitHub Issues labels

by rjbs, created 2013-09-06 19:14

I've been slowly switching all my code projects to use GitHub's bug tracking (GitHub Issues) in addition to their code hosting. So far I'm pretty happy with it. It's not perfect, but it's good enough. It's got a tagging system so that you can categorize your issues according to whatever set of tags you want. The tags are called labels.

Once you figure out what set of labels you want, you realize that you then have to go set the labels up over and over on all your repos. Okay, I guess. How long could that take, right?

Well, when you have a few hundred repositories, it can take long enough.

And what happens when you decide that blazing red was a stupid color for "non-critical bugs" and maybe you shouldn't have spelled "tests" with three t's?

Fortunately, GitHub has a really good API, and Pithub seems, so far, to be a very nice library for dealing with the GitHub API!

I wrote a little program to update labels on all my repos for me.
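The shape of that program is simple. Here's a hedged sketch of the approach (my own reconstruction, not the actual program; it assumes a GITHUB_TOKEN environment variable, hard-codes my username, and uses Pithub's list/create/update calls as I understand them from its documentation):

use Pithub;

my %want = (bug => 'fc2929', tests => '5319e7');   # label name => hex color

my $gh = Pithub->new(token => $ENV{GITHUB_TOKEN});

my $repos = $gh->repos->list(user => 'rjbs');
while (my $repo = $repos->next) {
  my $labels = $gh->issues->labels;
  for my $name (keys %want) {
    # Try to create the label; if it already exists, update its color.
    my $result = $labels->create(
      user => 'rjbs',
      repo => $repo->{name},
      data => { name => $name, color => $want{$name} },
    );
    $labels->update(
      user  => 'rjbs',
      repo  => $repo->{name},
      label => $name,
      data  => { name => $name, color => $want{$name} },
    ) unless $result->success;
  }
}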

my new daily agenda emails

by rjbs, created 2013-09-03 21:27
last modified 2014-02-17 18:34

I've finally finished (for now, anyway) another hunk of code in my ever-growing suite of half-baked productivity tools. I'm tentatively calling the whole mess "Ywar", and would paste the dictionary entry here, but all it says is "obsolete form of 'aware'". So, there you go. I may change that later. Anyway, a rose by any other name, or something, right?

Every morning, I get two sets of notices. The Daily Practice sends me an overview of my calendar, which makes it easy to see my streaks and when they end. Remember the Milk sends me a list of all the tasks due that day. These messages can both be tweaked in a few ways, but only within certain parameters. Even if I could really tweak the heck out of them, I'd still be getting two messages. Bah!

Fortunately, both TDP and RTM let me connect and query my data, and that's just what I do in my new cronjob. I get a listing of all my goals in TDP and figure out which ones are expiring on what day, then group them by date. Anything that isn't currently safe shows up as something to do today. I also get all my RTM tasks for the next month and put them in the same listing. That means that each email is a summary of the days of the upcoming month, along with the things that I need to get done on or before that date. That means I can (and should) start with the tasks listed first and, when I finish them, keep working my way down the list.

In theory, I could tweak the order in which I worked based on time estimates and priorities on my tasks. In practice, that's not going to happen. This whole system is held together by bubblegum, paperclips, and laziness.

Finally, the emails implement a nagging feature. If I've tagged an RTM task with "nag," it will show up on today's agenda if I haven't made any notes on it in the last two weeks. I think I'm better off doing this than using RTM's own recurring task system, but I'm not sure yet. This way, all my notes are on one task, anyway. I wanted this "automatic nagging" feature for tasks that I can't complete myself, but have to remind others about, and this was actually the first thing I implemented. In fact, it was only after I got my autonag program done that I saw how easily I could throw the rest of the behavior onto the program.

Here's what one of the messages looks like:

Ywar Notice

Right now it's plaintext only. Eventually, I might make a nice HTML version, but I'm in no rush. I do most of my mail reading in mutt, and the email looks okay in Apple Mail (although I think I found a bug in their implementation of quoted-printable!). The only annoyance, so far, is the goofy line wrapping of my per-day boundary lines. In HTML, those'd look a lot better.

I'll post the source code for this and some of my other TDP/RTM code at some point in the future. I'm not ashamed of the sloppy code, but right now my API keys are just sitting right there in the source!

The Great Infocom Replay: Zork Ⅲ

by rjbs, created 2013-09-01 22:38
last modified 2013-09-15 07:17

It's been over six months since my last bit of progress on The Great Infocom Replay, but I have not given up. In fact, I've put "make progress on the Replay" into my daily practice, so maybe I'll keep making progress from here on out.

As with the other Zork games, my enjoyment of Zork Ⅲ was affected by the fact that I played it when I was young. Quite a few of the puzzles stuck with me, and that helped me work out answers quickly in cases where I might otherwise have remained stumped for too long. I'm not sure whether I should read anything into this, so I won't.

I liked the general feel of the game. It was just a bit elegiac, but not pretentiously so. The prose is still (mostly) very spare, which is something I want to try to improve in my next attempt to make a text game. There's still some good humor, too. The writing is good.

I liked most of the puzzles, too. Most especially, I liked that the game subverts, several times, the idea that The Adventurer in Zork is a murder hobo. Sure, you can kill and steal, but you'll never become Dungeon Master if you do. The game makes it pretty clear, too, that you're a horrible person if you act like you did in Zork Ⅰ:

The hooded figure, fatally wounded, slumps to the ground. It gazes up at you once, and you catch a brief glimpse of deep and sorrowful eyes. Before you can react, the figure vanishes in a cloud of fetid vapor.

I'm hoping that I'll find the Enchanter trilogy to be a good follow-up to the Zork games, because I never played more than a few turns of those, and I'll be forced to pay more attention to detail and give more thought to solving puzzles.

Of the puzzles in Zork Ⅲ, I think that the Scenic View puzzle and the mirror box may be my favorites. They were interesting, unusual, and solving them made me feel clever. A few of the puzzles were not so great. The cliff puzzle is well known for being annoying: why would you think to just hang around in a room for no reason? You wouldn't. The Royal Museum puzzle is just great, but how are you supposed to tell that the gold machine moves? Or why would LOOK UNDER SEAT differ from EXAMINE SEAT? It's these little details that remind you that the Infocom games were still figuring out how to stump the player without annoying the player.

The Dungeon Master was a good ending for the Zork trilogy. I'm not sure whether it's the best of the three games, but I think that they form a nice set. After feeling sort of let down by Deadline, Zork Ⅲ has me feeling reinvigorated. Next up: Starcross!

THAC0, armor class, saving throws, skill, and other applications of the d20 (body)

by rjbs, created 2013-08-30 23:15
last modified 2013-08-30 23:16
tagged with: @markup:md dnd journal

More than a few times, when I've told people that I play an older version of D&D, I've gotten a slightly horrified look and the question, "Is that the one with THAC0?" What's so awful about THAC0? I ask, but the answers are vague. "It doesn't make any sense! It's bizarre!"

I think that most of the time, "the one with THAC0" means the second edition of AD&D, but pretty much every D&D before third edition had THAC0. It just means that your character has a certain target number to hit something with armor class zero: To Hit Armor Class 0.

So, you roll a twenty-sided die, add your target's armor class, and see whether your result is the target number or better. This is basically how every single roll works in third edition, so why is it weird?

  d20 + modifier >= target

I think people get confused because it's tied to descending armor class, where it's better to have a low armor class. Of course it is! If your AC is a bonus to your enemy's to-hit roll, you want it to be low. The third edition system is mathematically identical. It just swaps the modifier and target. The enemy now determines the target, not the modifier, because the target is the enemy's armor class. The character's modifier is now (basically) constant based on his level.
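
To make the equivalence concrete, here's one attack worked out both ways (the numbers are invented):

  my ($roll, $thac0, $desc_ac) = (11, 16, 5);

  # old school: hit if your roll plus the target's (descending) AC
  # meets your THAC0
  my $hit_oldschool = ($roll + $desc_ac) >= $thac0;         # 16 >= 16: hit

  # third edition: the same roll, with the modifier and target swapped
  my $attack_bonus  = 20 - $thac0;                          # +4
  my $asc_ac        = 20 - $desc_ac;                        # AC 15
  my $hit_3e        = ($roll + $attack_bonus) >= $asc_ac;   # 15 >= 15: hit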

So stop complaining that THAC0 is confusing!

One thing I like about THAC0 is that it is a little different than what people are used to. I think later editions of D&D try to boil everything down to one universal mechanic, which makes it harder to simply drop one optional bolt-on (like the seafaring rules from Cook's expert rules) or add other optional rules (like psionics). If there's a universal mechanic, everything needs to fit into it. If everything has its own simple, self-contained set of rules, you can monkey around without breaking the whole game.

I'm still thinking about non-combat challenges in this context. (This is where a "skill system" would often come into play, but I don't like the implications of enumerating all a character's skills.) A common mechanic is to try to roll at or under the relevant attribute. So, if you've got a 14 Charisma and you're trying to intimidate the town guard, you've got to roll a 14 or under. A natural 20 is the worst you can do, so it's a critical failure. A natural 1 is a great success. On a tie, the character with the higher attribute wins.

Building from that, some systems say that instead of 1 being perfect, you shoot for your attribute's value. That way, in a Strength contest between two characters, the winner is the one who rolls highest without exceeding his or her score. Attribute scores don't need to be revealed or compared. Zak S. wrote about this, and points out the obvious (if silly) problem: it's not very exciting for everyone else at the table to see the die roll land on 13, even if that is the critical success value. Everybody wants to make a big noise at a 1 or a 20.
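
Here's that contest mechanic as a quick sketch, with invented attribute scores; over your attribute is a bust, your exact attribute is the critical success, and otherwise the higher roll wins:

  # returns the contest score for one side: -1 on a bust, else the roll
  sub contest_score {
    my ($attribute) = @_;
    my $roll = 1 + int rand 20;
    return $roll > $attribute ? -1 : $roll;
  }

  my $fighter = contest_score(16);
  my $thief   = contest_score(13);

  print $fighter > $thief ? "fighter wins\n"
      : $thief > $fighter ? "thief wins\n"
      :                     "tie; higher attribute wins\n";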

I've been thinking about this a lot lately as I prepare to switch my 4E campaign to a hacked B/X. I'm thinking about extending my list of saving throws to add a few more categories and just using that. I'm also tempted to just say "we're gonna use FUDGE dice" and bring in FAE-style Approaches, because those seem pretty awesome.

As usual, what I really need is more table time to just test a bunch of bad ideas and see which is the least worst.

This post has been sort of rambling and pointless. Please allow me to pay the Joesky tax:

The high priests of Boccob are granted knowledge of secrets and portents, but often at great price. Some of these powers (initially for 4E) are granted to the highest orders while they undertake holy quests:

Boundless Knowledge. Can learn any fact through a turn of meditation. The cost:

  • widely known mundane fact: 1 awesome point
  • secret (secret keeper present): 10 awesome points
  • secret (secret keeper not present): takes 8 hours, 10 temporary wisdom, recover 1/day
  • arcane secret: 1 permanent wisdom
  • mystery of the universe: takes 1d4 days, costs 1d4 permanent wisdom

Can't recover awesome points if not used once per day. Begin losing 2 temporary Wisdom per week after a week without learning an arcane secret.

Subtle Stars. Every night, the PC can consult the stars and learn two facts and one lie. Failure to consult the stars once a day leads to a -2 cumulative penalty to Will defense.

Curiosity. Every time the priest asks a question that goes unanswered (even in soliloquy), he must roll a d20. If it is more than his Wisdom + 2, he gets the answer and loses a point of Wisdom.

personal code review practices, mark Ⅱ (body)

by rjbs, created 2013-08-28 10:34
last modified 2013-08-28 10:51

Just about two months ago, I posted about my revived "work through some tickets in each queue then rotate" strategy. When I had first tried to do it, I hadn't had enough discipline, and it failed. After a month, it seemed to be going very well, because of two minor changes:

  1. I replaced "remember where I was in the list" with "keep the list in a text file."
  2. I used The Daily Practice to keep track of whether I was actually getting work done on the list regularly.

About a month later, I automated step 2. I just had my cron job keep track of the SHA1 of the file in git. If it changed, I must have done some work.
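
That check is tiny. Here's a sketch of the idea; the state file is invented, and the part that reports to The Daily Practice is left out:

  # git hash-object computes the blob SHA1 git would store for the file;
  # if it differs from last time, some review work got recorded
  my $file  = 'cpan-review.mkdn';
  my $state = "$ENV{HOME}/.last-review-sha";

  chomp(my $sha = `git hash-object $file`);

  my $last = '';
  if (open my $fh, '<', $state) { chomp($last = <$fh> // '') }

  if ($sha ne $last) {
    # ...tell The Daily Practice that a review happened today...
    open my $out, '>', $state or die "can't write $state: $!";
    print {$out} "$sha\n";
  }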

Yesterday, as I started month three of the regimen, I invested a bit more time in improving it, and I expect this to pay big dividends.

My process is something like this, if you don't want to go read old posts:

  1. keep a list of all my projects
  2. group them into "bug queue cleared" and "project probably complete" and "work still to do"
  3. sort the lists by last-review date, oldest first; fall back to alphabetical order
  4. every time I sit down to work on projects, start with the first project on the "work to do" list, which has been touched least recently
  5. when new bugs show up for the other two lists, put them into the "work to do" list at the right position

This was not a big problem. I kept the data in a Markdown table, and when I'd finish a review, I'd delete a line from the top, add it to the bottom, and add today's date. The step that looked like it would be irritating was #5. I'd have to keep an eye on incoming bug reports, reorder lists, and do stupid maintenance work. Clearly this is something a computer should be doing for me.

So, the first question was: can I get the count of open issues in GitHub? Answer: yes, trivially. That wasn't enough, though. Sometimes, I have older projects with their tickets still in rt.cpan.org. Could I find out which projects used which bugtracker? Yes, trivially. What if the project uses GitHub Issues, but has tickets left in its RT queues? Yes, I can get that.
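
The GitHub half really is trivial: the repo resource carries an open issue count. A sketch with Pithub, assuming its documented get signature (and note that open_issues_count lumps issues and pull requests together):

  use Pithub;

  my $gh     = Pithub->new(user => 'rjbs', token => $ENV{GITHUB_TOKEN});
  my $result = $gh->repos->get(user => 'rjbs', repo => 'Email-MIME');

  die "couldn't fetch repo data" unless $result->success;

  printf "%d open issues\n", $result->content->{open_issues_count};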

Those are the big things, but once you pick up the data you need for figuring them out, there are other things that you can check almost for free: is my GitHub repo case-flattened? If so, I want to fix it. Is the project a CPAN dist, but not built by Dist::Zilla? Did I forget to enable Issues at GitHub? Am I missing any "Kwalitee point" on the CPANTS game scoreboard?

code-review

Writing the whole program took an hour, or maybe two, and it will clearly save me a fair bit of time whenever I do project review. I even added a --project switch so that I can say "I just did a round of work on Some::Project, please update my last reviewed date." It rebuilds the YAML file and commits the change to the repo. Since it's making a commit, I also added -m so I can specify my own commit message, in case there's something more to say than "I did some work."

This leaves my Markdown file in the lurch. That wouldn't bother me, really, except that I've been pointing people at the Markdown file to keep track of when I might get to that not-very-urgent bug report they filed. (I work on urgent stuff immediately, but not much is urgent.) Well, no problem here: I just have the program also regenerate the Markdown file. This eliminates the grouping of projects into those three groups, above. This is good! I only did that so I could avoid wasting time checking whether there were any bugs to review. Now that my program checks for me, there's no cost, so I might as well check every time it comes up in the queue. (Right now, it will still prompt me to review things with absolutely no improvements to be made. I doubt this will actually happen, but if it does, I'll deal with it then.)

The only part of the list that mattered to me was the list of "stuff I don't really plan to look at at all." With the automation done, the list shrinks from "a bunch of inherited or Acme modules" to one thing: Email-Store. I just marked it as "never review" and I'm done.

So, finally, this is my new routine:

  1. If The Daily Practice tells me that I have to do a code review session…
  2. …or I just feel like doing one…
  3. …I ask code-review what to work on next.
  4. It tells me what to work on, and what work to do.
  5. I go and do that work.
  6. When I'm done, I run code-review --project That-Project and push to github.
  7. Later, a cron job notices that I've done a review and updates my daily score.

Note that the only part of this where I have to make any decisions is #5, where I'm actually doing work. My code-review program (a mere 200 lines) is doing the same thing for me that Dist::Zilla did. It's taking care of all the stuff that doesn't actually engage my brain, so that I can focus on things that are interesting and useful applications of my brain!

My code-review program is on GitHub.

the stupidest profiler I could write (body)

by rjbs, created 2013-08-23 23:10
last modified 2013-08-23 23:11

There's a stupid program I rewrite every few months. It goes like this:

perl -MTime::HiRes=time -pe 'BEGIN{$t=time} $_ = sprintf "%0.4f: %s", time - $t, $_; $t = time;'

It prints every line of the input, along with how long it had to wait to get it. It can be useful for tailing a log file, for example. I wanted to write something similar, but to just tell me how long each line of my super-simple program took to run. I decided it would be fun to do this with a Devel module that would get loaded by the -d switch to perl.

I wrote one, and it's pretty dumb, but it was useful and it did, in the end, do the job I wanted.

When you pass -d, perl sets $^P to a certain value (on my perl it's 0x073F) and loads perl5db.pl. That library is the default perl debugger. You can replace it with your own "debugger," though, by providing an argument to -d like this:

$ perl -d:SomeThing ...

When you do that, perl loads Devel::SomeThing instead of perl5db.pl. That module can do all kinds of weird stuff, but the simplest thing for it to do is define a subroutine in the DB package called DB. &DB::DB is then called just before each statement runs, and can get information about just what is being run by looking at caller's return values.

One of the bits set on $^P tells perl to make the contents of each loaded file available in a global array with a funky name. For example, the contents of foo.pl are in @{"::_<foo.pl"}. Woah.

My stupid timer keeps track of the amount of time taken between statements and prints your program back at you, telling you how long was spent on each line, without measuring the breakdown of time spent calling subroutines loaded from elsewhere. It expects an incredibly simple program. If you execute code on any line more than once, it will screw up.

Still, it was a fun little exercise, and maybe demonstrative of how things work. The code documentation for this stuff is a bit lacking, and I hope to fix that.

use strict;
use warnings;
package Devel::LineTimer;
use Time::HiRes;

my %next;
my %seen;
my $code;
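# emit_trace prints every source line from the end of the previous hunk up
# to the statement about to run, charging the elapsed time to the hunk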
sub emit_trace {
  my ($filename, $line) = @_;
  $code ||= do { no strict; \@{"::_<$filename"}; };
  my $now = Time::HiRes::time();

  $line = @$code if $line == -1;

  warn "Program has run line $line more than once.  Output will be weird.\n"
    if $seen{$line}++ == 1;

  unless (keys %next) {
    %next = (start => $now, line => 1, hunk => 0);
  }

  my @code = @$code[ $next{line} .. $line - 1 ];

  my $dur = $now - $next{start};

  printf STDERR "%03u %04u %8.04f %s",
    $next{hunk}, $next{line}, $dur, shift @code;

  my $n = $next{line};
  printf STDERR "%03u %04u %8s %s",
    $next{hunk}, ++$n, '.' x 8, $_ for @code;

  %next = (start => $now, line => $line, hunk => $next{hunk}+1);
}

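# perl calls &DB::DB before each statement because -d set the debugging
# bits in $^P; skip any statement that isn't in the main program file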
package DB {
  sub DB {
    my ($package, $filename, $line) = caller;
    return unless $filename eq $0;
    Devel::LineTimer::emit_trace($filename, $line);
  }
}

END { Devel::LineTimer::emit_trace($0, -1) }

1;

With the module above installed in @INC somewhere, you can then run:

$ perl -d:LineTimer my-program

...and get output like...

000 0001   0.0000 #!perl
000 0002 ........ use 5.16.0;
000 0003 ........ use warnings;
000 0004 ........ use Email::MessageID;
000 0005 ........ use Email::MIME;
000 0006 ........ use Email::Sender::Transport::SMTP;
000 0007 ........
001 0008   0.0093 my $email = Email::MIME->create(
001 0009 ........   header_str => [
001 0010 ........     From => 'Ricardo <rjbs@cpan.org>',
001 0011 ........     To   => 'Mr. Signes <devnull@pobox.com>',
001 0012 ........     Subject => 'This is a speed test.',
001 0013 ........     'Message-Id' => Email::MessageID->new->in_brackets,
001 0014 ........   ],
001 0015 ........   body => "There is nothing much to say here.\n"
001 0016 ........ );
001 0017 ........
002 0018   0.0028 my $smtp = Email::Sender::Transport::SMTP->new({
002 0019 ........   host => 'mx-all.pobox.com',
002 0020 ........ });
002 0021 ........
003 0022   1.7395 $smtp->send($email, {
003 0023 ........   to   => 'devnull@pobox.com',
003 0024 ........   from => 'rjbs@cpan.org',
003 0025 ........ });

Yahoo!'s family accounts still stink (body)

by rjbs, created 2013-08-22 16:57
tagged with: @markup:md journal

It is amazing how bad Yahoo!'s "family account" experience is. I want to make an account for my six year old daughter to use to upload her photos to Flickr. Googling for Yahoo! family accounts and flickr finds this text:

Yahoo! Family Accounts allow a parent or legal guardian to give consent before their child under 13 creates an account with Yahoo!. A child is someone who indicates to us that they are under the age of 13. [...] Your child may have access to and use of all of Yahoo!'s products and services, including Mail, Messenger, Answers, mobile apps, *Flickr*, Search, Groups, Games, and others. To learn more about our privacy practices for specific products, please visit the Products page of our Privacy Policy.

(Emphasis mine, of course.)

I had to search around to find where to sign up. I had to use the normal signup page. I filled the form out and kept getting rejected. "The alternate email address you provided is invalid." After trying and trying, I finally realized that they have proscribed the use of "yahoo" inside the local part. So, com.yahoo.mykid@mydomain.com was not going to work. Fine, I replaced yahoo with ybang. Idiotic, but fine.

After I hit submit, I was, utterly without explanation, given a sign-in page. I tried to sign in several times with the account I'd just requested, but was told "no such account exists."

Instead, I tried to log in with my own account, and that worked. I was taken to a page saying, "Do you want to create a Family Account for your child?" Yes, I do! Unfortunately, the CAPTCHA test that Yahoo! uses is utterly awful. It took me half a dozen tries to get one right, and I've been human since birth. Worse, the form lost data when I'd resubmit. It lost my credit card number — which is excusable — but also my state. Actually, it was worse: it kept my state but said "this data is required!" next to it. I had to change my country to UK, then back to USA, then re-pick my state. Then it was satisfied.

Finally, I got the account set up. I was dropped to the Yahoo! home page, mostly showing me the daily news. (To Yahoo!'s credit, none of this was horrible scandal rag stuff for my six year old. Less to their credit, the sidebar offered me Dating.) I verified my email address and went to log her in to Flickr. Result?

We're sorry, you need to be at least 13 years of age to share your photos and videos on Flickr.

So, what now? Now I create a second account as an adult, upload all her photos there, and give her the account when she's older, I guess. Or maybe I'll use something other than Flickr, since right now I'm pretty sick of the many ways that Yahoo! has continued to make Flickr worse.

getting stuff accomplished, next steps (body)

by rjbs, created 2013-08-16 22:57
last modified 2014-02-17 18:35

I've got nearly every goal on my big board lit up. So, now I'm getting into a routine of getting all the regular things done. Next up, I'm going to try to get better at doing the one-off tasks I have to do, like file my expenses, arrange a piano tuning, and that sort of thing. For this, I'm going to try using Remember the Milk. I've used it in the past and liked it fine, but I didn't stick with it. I think that if I integrate it into my new routine, it'll work.

I've put in a few tasks already, and gotten some done. Today, I added some tasks with the "nag" tag, telling me that they're things I need to bug other people about until they get done. Other tasks, I'm creating with due dates. Yet others are just general tasks.

My next step will be to use the Remember the Milk API (with WebService::RTMAgent, probably) to help deal with these in three ways:

  1. require that I never have any task more than one day overdue (I'm cutting myself a little slack, okay?)
  2. require a new note on any "nag" task every once in a while
  3. require that ... well, I'm not sure

That lousy #3 needs to be something about getting tasks done. I think it will be something like "one task in the oldest 25% of tasks has to get done every week." I think I won't know how to tweak it until I get more tasks into RTM.

Maybe I'll do that on vacation next week. That sounds relaxing, right?

the perils of the Daily Practice (body)

by rjbs, created 2013-08-09 20:26

Warning: This is sort of rambling.

I have no doubt that automating a bunch of my goals on The Daily Practice has helped me keep up with doing them. As I keep working to stay on top of my goals, though, I'm finding that the effects of TDP on my activity are more complex and subtle than I had anticipated.

The goals that are getting the most activity are:

  • automated
  • already started
  • achievable with a small amount of work performed frequently

My best streak, for example, is "review p5p commits." All I have to do, each day, is not have any unread commit notifications more than a week old. Every day, we have under two dozen notices, generally, so I can just read the ones that come in each day and I'm okay. If I miss a day, I'm still good for a while. After that comes "catch up with p5p," which is the same.

The next goals are in the form "do work on things which you will then record by making commits in git." For example, I try to keep more on top of bug reports lately. So far, so good. These goals are still going strong, and have been for as long as my other automated goals. The score is lower, though, because they don't show up as done each day, but only on days I do the work. Despite that, the structure of the goals is the same: make sure the work is done before each safe period is over. This suggests an improvement to TDP: I'd like my goals' scores to be their streak lengths, in days, rather than the number of times I've performed something. This seems obvious to me, in retrospect.

The goal that trails all of these is "spend an hour on technical reading." I didn't get started on that immediately. Once I did, though, I was motivated to keep the chain going. My strong suspicion is that I only felt motivated because I had already established streaks with my easier-to-perform, automatically-measured goals. Still, my intuition here is that it's much easier to get going once at least a single instance is on the big board. Until there's a streak at all, there's no streak to break. This suggests another improvement, though a more minor one. Right now, scores are only displayed for streaks with more than one completion. You don't see a score until you've done something twice. I think it would be better to keep the streaks looking visually similar, to give them all equal value. After all, the value isn't that I did something 100 times in a row, but that for 100 days, it was getting done.

Then come the goals that I haven't started at all. These goals are just sitting there, waiting for me to start a streak. Once I do start, I think I'll probably stick to it, but I have to overcome my initial inertia. Once I get it started, I get my nice solid line, and then I have a reason to keep it going. On the other hand, if I have no streak, there is no incentive to get started. I think this is a place to make improvements: just like I'd rather see scoring mean "every day in the streak is worth one point," I'd like to see "every day that a goal is not safe counts as a cumulative negative point." Now I can't just put in goals that I might start eventually. Leaving a goal undone for a long time costs me. I think there's something more to be found here, around the idea that something done irregularly, even if not meeting the goal, is better than something utterly ignored. Right now, that isn't reflected. Maybe that's for the best, though.

These aren't the real "dangers" to my productivity that I've seen in using TDP. There are two main things that I've worried about.

First, TDP sometimes squashes the value of doing more than one's goal. For example, my bug-fixing task says I have to do an hour of work every three days. On a certain day, I might feel motivated to do more than one hour of work. I may feel like I'm on a roll. I will not be rewarded for doing so. In theory, I could get two points for the day instead of one, but it won't actually extend my streak, which is what really counts. That is: if my streak is extended, I'm earning a day off from the task, so I have more time to do other work. This is what should happen if I do extra work today. It isn't what happens, though, which makes it a strange economy.

A related phenomenon is that if I were to write two journal entries today, I would benefit from saving one to publish later, because then the streak would extend from that day. It feels like a disincentive to actually do the extra work today, although this may be a problem with me that I need to work out on my own. In fact, there is a flip-side to this problem: if I do extra work now to extend my streak beyond its usual length, I'm breaking the regularity of my schedule, which might not fit in with the idea of getting into a schedule.

I don't really buy that, though.

The other problem is that once you buy into the idea that you must keep your streaks going — which is a pretty motivating idea — you're prioritizing things for which goals have been created over things for which they have not. Possibly you're heavily prioritizing them. It's important to remain aware of this fact, because there's a danger that any other work will be neglected only because you haven't thought to put it on the big board.

There are categories of tasks, too, that I've been struggling not to unconsciously deprioritize because they can't be usefully made into long-term goals. I'm trying to learn new Decktet games, to make plans to see friends more often, to work on spare time code projects, and so on. These are more "to do" items, and TDP is not a todo list. I think I'm going to end up having to write automation to connect it to a todo list manager, much as I did for my code review todo. Otherwise, I'll chug along with my current routine, but will stagnate by never doing new things.

These are good problems to have. They're the problems I get to have once I'm making reliable progress at keeping up with clear responsibilities or promises. Nonetheless, they are problems, and I need to recognize them, keep them in mind, and figure out how to overcome them.

the Random Wizard's Top 10 Troll Questions (body)

by rjbs, created 2013-08-02 14:15
last modified 2013-08-02 17:47
tagged with: @markup:md dnd journal rpg

I'm not usually a big fan of blog-propagated questionnaires, but this one looked good, because it will force me to articulate a few of my thoughts on my D&D game. These are Random Wizard's Top 10 Troll Questions. I already posted answers to the 20 Quick Questions on Rules that he mentions; those answers are in the game's GitHub repo.

1. Race (Elf, Dwarf, Halfling) as a class? Yes or no?

Yes. I like the idea of matching classes up with monster manual entries. I have a Fighter class for now, but will probably break it up into Bandit and Soldier, eventually, to match my game's monster manual. So, I match elf or dwarf up with the monster manual entry. Goblins, in my manual, break down into a number of very distinct groups. If someone were to play a goblin, I'd make a per-group class, or at least have rules for specialization within the goblin class, the same way I customize clerics per-church.

2. Do demi-humans have souls?

The nature of the personal life force of a sentient creature is sort of blurry in my game, but the rough answer is, "Yes, but some demi-humans have different kinds of souls." Elves are often unable to interact with the technology of the ancient empire, for example, because it doesn't consider them to be alive at all.

3. Ascending or descending armor class?

Descending. I really like using THAC0; I find it very easy to do the math in my head. In fact, everyone I've played with does, once I get them to stop reacting violently to "this stupid thing I hate from 2E" and see how simple the rule is.

4. Demi-human level limits?

Probably, but it hasn't come up. In fact, human level limits, too. I don't see any PC breaking past level 12-14 in my game, ever.

5. Should Thief be a class?

Yes. Actually, a few classes. When I get around to it, I want to break Thief into Assassin, Burglar, and Dungeoneer. Or, the other way to put this is: I do like the idea of classes for skill-based archetypes, but I think that Thief, as written, is not a very good such class. I'm not sure who it best represents in the fiction. With its d4 hit dice, it's neither the Grey Mouser nor Conan, both of whom would otherwise be decent candidates.

6. Do characters get non-weapon skills?

Kinda. I should really codify it. Basically, I assume that characters are good at the stuff related to their archetype. (This is part of why I like more specialized classes than "Fighter.") If the player wants to declare that his or her character has an unusual skill for some reason, I'll allow it at least a few times.

I don't like skill lists.

7. Are magic-users more powerful than fighters (and, if yes, what level do they take the lead)?

We're using pretty basic fighter and magic-user classes, most of the time. Even the tweaks I'd like to make won't change the balance much, I think. So, at low levels, the fighters are more powerful. So far, we haven't seen any magic-user survive long enough to overtake the fighters.

I've been slowly tweaking the rules to try to change the balance just a little.

8. Do you use alignment languages?

No.

I have publicly stated my bafflement at alignment languages before, and although I was glad to get a pretty clear answer as to why they existed, I didn't think they were really justified. When different cults have secret languages, they're just secret languages.

9. XP for gold, or XP for objectives (thieves disarming traps, etc...)?

Yeah, sure, XP for all kinds of stuff. Gold, monsters, traps, fast-talking, whatever. I wrote about gold as experience before.

10. Which is the best edition?

Heh.

Right now, I use the Moldvay Basic Set as the go-to reference, with plenty of stuff from Cook's Expert Set. I'd like to read Holmes, as I have read good things, and it looks like I should at least steal some of its rules for stuff, but I don't have a copy. I stole some of the psionics rules from 2E, and 1E has tons of tables and stuff to steal. I'm hacking in something like Action Points when I backport my 4E campaign to Basic. They're all fun, but I think Moldvay is a great framework from which to start hacking, and that's what I've done.

Bonus Question: Unified XP level tables or individual XP level tables for each class?

Individual tables. I really like unified XP, in theory, because it can make multiclassing a lot simpler. In practice, I've never really liked how it works.

being a polyglot programmer (barely) (body)

by rjbs, created 2013-07-31 22:03
tagged with: @markup:md journal programming

I like learning new programming languages. Unfortunately, I rarely make the time to get any good at them. I'm hoping to figure out how to force myself to write something non-trivial in something at least relatively unlike what I do all day.

I did some hacking on a Python implementation of statsd, and I started on a tool to build Z-machine programs in Perl 6. I took an online course in Scala and got this:

I know Scala!

That was fun, and I did write some things more complex than I might have written if I were working through a book. Speaking of working through books, I read a bunch of Introducing Erlang and Erlang Programming, and of Haskell: the Craft of FP. Now I'm back to working through Starting Forth — thankfully I have a print edition, as the web one is dreadful. I really enjoyed Programming in Prolog, too, and hope to someday get around to The Craft of Prolog. There are a number of other languages or books I could mention, but it all comes down to the same thing: I'm very good at dabbling, but I've found it very challenging to become proficient at foreign languages, because I've found no motivation in the form of code I want to write. Or, more specifically, code that I want to write but can't write faster in Perl, or where I don't mind suffering through getting it done more laboriously.

What I need to do is look for existing programs I like, and want to hack on, and then do so. I think it will probably be really painful, but worth it.

What I wish I could do is become good friends with somebody expert in these languages, interested in helping me learn, possibly while hanging out over chips and salsa. Actually, it seems to turn out that part of the reason I'm not getting as much Erlang programming as I'd like is the same reason I'm not playing as much D&D as I like. Making friends isn't so hard, but finding friends with the exact set of skills and interests you hope they'll have can be a pretty tough challenge!

Dream Codex : Fonts

more old school computer fonts, including TI99/4A
by rjbs, created 2013-07-29 22:16
tagged with: fonts

Apple II Fonts

old school computer fonts
by rjbs, created 2013-07-29 22:16
tagged with: fonts

I get points for blogging this! (body)

by rjbs, created 2013-07-25 12:47
last modified 2014-02-17 18:36

I feel like I'm always struggling with productivity. I don't get the things done that I want to get done, and I'm never sure where I lost my momentum, or why, or how I can keep at it. I've tried a bunch of productivity tools, and most of them have failed. For a while, now, I've had an on-again-off-again relationship with The Daily Practice, which I think is great. Even though I think it's great, I don't always manage to keep up with it, which means it doesn't actually do me much good.

I'm on-again, though, and I'm trying to use it to combine some of my other recent changes to my routine. For example, I wrote about using a big queue to manage all my projects' bug queues and forcing a reading bottleneck to avoid reading too many books at once, and thus getting none read. I also want to try to get sustained momentum on keeping my email read and replied-to. I'm not sure whether TDP will help me stay on target, but I think it will help me have a single place to see whether I'm on a roll.

The Daily Practice is a calendar for your goals. You tell it what things you want to do, and how often. Then, when you've done the thing, you tell TDP that you did. As long as you keep doing things often enough, you rack up points every time that you extend the chain. If you fail to keep it going, you lose all your points. It looks like this:

The Daily Practice

I started to think about what kind of goals would be useful to demonstrate momentum. My list looked something like:

  • get some email-reading done
  • clear out old mail that's marked for reply
  • spend time working on RT tickets and GH issues
  • catch up with reading p5p posts
  • review commits to perl.git
  • keep processing my long-lived perl issues list
  • write blog posts
  • read books
  • read things long-ago marked "to read" in Instapaper
  • keep up with RSS reader
  • keep losing weight

I started to think about how I'd track these. It was easy to track my email catch-up on my last big push. I was headed to Massachusetts with my family, and while Gloria drove, I read email. Every state or so, I'd tweet my new email count. Doing this from day to day sounded annoying, though. If I just finished reading two hundred email messages, I want to reward myself with a candy. I don't want to go log into my productivity site and say that I've done it.

Fortunately, I realized that just about all the goals I had could be measured for me. I just needed to write the code and post results to TDP. I made doe eyes at TDP support and was given access to the beta API. Then I didn't do anything about it for a while… until getting to OSCON. I've been trying to use the conference as a time to work through some outstanding tasks. So far, so good.

I've written a program, review-life-stats, which measures the things I care about, when possible, and reports progress to TDP. Some of the measurements are a little iffy, or ripe for gaming, or don't measure exactly what I pretend they do. I think it will be okay. The program keeps track of the last measurement for each thing, storing them in a single-table SQLite file. It will only do a new measurement every 24 hours.
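
The bookkeeping side is simple enough to sketch. Here's roughly the "only measure every 24 hours" part, with an invented one-table schema; the real program surely differs:

  use DBI;

  my $dbh = DBI->connect('dbi:SQLite:dbname=ywar.sqlite', '', '',
                         { RaiseError => 1 });

  # one row per metric: its name, last value, and when it was measured
  $dbh->do(
    'CREATE TABLE IF NOT EXISTS measurement (
       thing       TEXT PRIMARY KEY,
       value       INTEGER NOT NULL,
       measured_at INTEGER NOT NULL
     )'
  );

  sub last_measurement {
    my ($thing) = @_;
    return $dbh->selectrow_array(
      'SELECT value, measured_at FROM measurement WHERE thing = ?',
      undef, $thing,
    );
  }

  sub record_measurement {
    my ($thing, $value) = @_;
    my (undef, $at) = last_measurement($thing);
    return if $at and time - $at < 86_400;  # measured within the last day

    $dbh->do(
      'INSERT OR REPLACE INTO measurement (thing, value, measured_at)
       VALUES (?, ?, ?)',
      undef, $thing, $value, time,
    );
    # ...and report to TDP here, if the new value counts as success...
  }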

The way to think of these monitors is that they only report success. It has to say "today was a good day" or nothing. There is no reason (or means) to say "today was bad," and my monitors don't consider whether there's a streak going on (but they could be made to if it would help).

Here are the ones I've written so far:

  • the total count of flagged messages must be below the last measurement (or 10, whichever is higher)
  • the total count of unread messages must be below the last measurement (or 25, whichever is higher)
  • I must have written a new journal entry
  • I must have made a new commit to perlball.git; any commit will do. I check this by seeing whether the SHA1 of the master branch has changed, using the GitHub API. (There's a sketch of this check after the list.)
  • There must be no unread messages from the perl.git commit bot over three days old.
  • The count of unread perl5-porters messages over two weeks old must be below the last measurement or zero.
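
For the perlball check, no API client is even needed; the public branches endpoint returns the SHA1 to compare against last time. A sketch, with the state-keeping elided:

  use HTTP::Tiny;
  use JSON::PP qw(decode_json);

  my $url = 'https://api.github.com/repos/rjbs/perlball/branches/master';
  my $res = HTTP::Tiny->new->get($url);
  die "GitHub returned $res->{status}" unless $res->{success};

  # a new SHA1 on master means at least one commit since the last check
  my $sha = decode_json($res->{content})->{commit}{sha};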

I won't be able to automate "did I spend time reading?" as far as I can predict. I'm also probably going to have to do something stupid to track whether I'm catching up on my bug backlog. Probably I'll have to make sure that the SHA1 of cpan-review.mkdn has changed. I'm also not sure about my tasks to keep reviewing smoke reports or to plan my upcoming D&D games.

The other goals that I can automate, somehow, are going to be doing semi-regular exercise, keeping my weight descending, and working through my backlog of Instapaper stuff. Those will require talking to the RunKeeper, Withings, and Instapaper APIs, none of which look trivial. Hopefully I can get to each one, in time, though, and hopefully they'll all turn out to be simple enough to be worth doing.

dealing with my half-read book problem (body)

by rjbs, created 2013-07-05 22:47
last modified 2013-07-05 22:48
tagged with: @markup:md journal reading

I just recently wrote about trying to deal with my backlog of bug reports and feature requests. It is not, sad to say, the only backlog of stuff I've been meaning, but failing, to do. There's also my backlog of reading.

my overgrown reading queue

(and that's only the physical books)

I need to get through these! Lately, I'm only getting any reading done on new books, and I'm buying a lot of new books. Also, my birthday is coming up soon, and my wishlist is full of books. I don't want to stop wishing for books… I just want to feel like I have a plan to read the books I get!

Too often, I start reading a book, then put it aside to read something else, then never get back to the thing I was reading to start with. No more! Or, at least, not as often anymore! I don't mind giving up on a book, or even saying, explicitly, that I'll try again in a year. I just don't like the idea that I'm putting books aside "for a little while" and then never getting back to them. Perhaps ironically, the book with which I've done this the most often is The Magic Mountain, which I've been reading off and on since 1997. For that book, I think it is a good plan. For almost any other book I've been trying to read, not so much.

So, the new plan is this: I keep a big list of all the books I'm really meaning to read. This doesn't mean everything I own but haven't read. It's all the stuff I've asked for or purchased in order to read "soon." It's not a queue, either, because I'll pick what to read as I go. Who wants to pick, a year in advance, what book to read around Christmas 2014? Not me, man. Not me.

I categorized each book in the list as either fiction, humanities, or technical. (I was sorely tempted to replace "technical" with "quadrivium" just to keep the tables nicely aligned without spaces, but I resisted.) I will work on no more than one book of each category at a time. When I finish one, I won't pick its replacement until I feel like it. Once I do, I will either finish reading the replacement or clearly decide to give up on it or put it away for a good long while, while I read something else.

To start, I picked the most recent story collection I was working on, a nice short technical book, and a thick and difficult book of history that I was really, really enjoying (in a way) when I was making real progress in it.

 Literature: Moon Moth and Other Stories, The
 Technical : Starting Forth
 Humanities: Rise and Fall of the Third Reich

I have no idea whether this plan will work, but I feel like it's better than continuing along the random non-plan that I've been following. I put the big list of books on GitHub along with my previous big list of software libraries. Hopefully keeping them in one place will help me tie them together as "things to keep updated" in my head.

once again trying to keep up with the tickets (body)

by rjbs, created 2013-07-03 22:42
last modified 2013-07-27 12:39

I maintain a bunch of published code. I probably wrote more than half of it, and I've been the sole maintainer for years on most of the rest. I inherited a lot of bug reports, I get new bug reports, and I get feature requests. I used to try to respond to everything immediately, or at least within a few days.

This doesn't happen anymore.

Now, I've got piles and piles of suggestions and patches, some of which are obviously good, some of which I'd like to see happen but don't want to write, some of which are okay but need work, and some of which just aren't a good idea. Any time I want to try to clear out some of these tickets, I have to pick which ones. This takes too much time, especially because I'm often feeling sort of run down and like I want to cruise through some easy work, so I spend a lot of time looking at lists of tickets, burning up the time I could be spending just doing something.

A while back — maybe six months or a year ago — I thought that what I'd do was work alphabetically through my module list. I'd keep working on the next target until every ticket was stalled or closed. Eventually I got burned out or distracted.

I also didn't announce this plan publicly enough, so failing to keep at it brought me insufficient shame.

Now I'm back to the plan, roughly as follows:

I made a big list of all my code. (I'm sure I missed some stuff, but probably not much. Hopefully those omissions will become clear over time.) I started with everything in alphabetical order. Every time that I want to get some work done on my backlog, I go to the list, start at the top, and clear as many tickets as I can. If I clear the queue, or when I'm done doing work for the evening (or afternoon, or whatever) I file the dist away.

If there are still tickets left, I put it at the bottom of the "main sequence," where I keep working on stuff over and over. Otherwise it goes into either "maintenance," meaning that it'll pop to the top of the queue once it actually gets bugs, or "deep freeze," meaning that I seriously doubt any future development will happen. The deep freeze is also where I put code that lives primarily in the perl core but gets CPAN releases.

While I'm going through my very first pass through the main sequence, things that have never been reviewed are special. If a bug shows up in a "maintenance" item, it will go to the top of the already-reviewed stuff in the main sequence, but below everything still requiring review. I'm also checking each module to make sure that it points to GitHub Issues as its bug tracker and that it doesn't use Module::Install.

What I should do next is write a few tools:

  • something to check the bugtracker and build tool of all my dists at once
  • something to check whether there are tickets left in the rt.cpan.org queue
  • something to check whether there are non-stalled tickets for dists in maintenance or deep freeze

I don't know whether or when I'll get to those.

In the meantime, I'm making some progress for now. I'm hoping that once I finish my first pass, I'll be able to do a better job of clearing the backlog and keeping up, and then I'll feel pretty awesome!

I may try to publish more interesting data, but for now my maintenance schedule is published, and I'm keeping it up to date as I go.

Template Toolkit 2: still making me crazy (body)

by rjbs, created 2013-07-03 12:19

Template Toolkit 2, aka TT2, has long been a thorn in my side. Once upon a time, I really liked it, but the more I used it, the more it frustrated me. In almost every case, my real frustrations stem from the following set of facts:

  • TT2 is a templating system for Perl.
  • TT2 provides a language for use when adding logic to the templates.
  • The language is inferior to Perl. It may be useful to be inferior in some ways, to encourage programmers to move complex logic out of templates, but…
  • The language has significant conceptual mismatches with Perl.

I'll start with this object:

  package Thing {
    sub new      { bless { state => 0 } }
    sub name     { my $self = shift; $self->{name} // 'Default' }
    sub set_name { my $self = shift; $self->{name} = shift }

    sub next  { my $self = shift; return $self->{state}++ }
    sub error { my $self = shift; $self->{error} }
  }

…and I'll write this code…

  use Template;

  my $tt2   = Template->new;
  my $thing = Thing->new;

  my $template = <<'END';
  Got  : [% thing.next  %]
  Error: [% thing.error %]
  State: [% thing.state %]
  END

  $tt2->process(\$template, { thing => $thing })
    or die $tt2->error;

The output I get is:

  Got  : 0
  Error: 
  State: 1

We've got one problem already: I was able to look at the object's guts, not because I explicitly dereferenced the reference as a hash, but because I forgot that state was not a method. There is, as far as I can tell, no way to prohibit fallback from methods to hash dereference by configuring TT2.

There's another problem: we stringified an undef, where we might have wanted some kind of default to display. In Perl we'd get a warning, but we don't here. We probably wanted to write:

Error: [% thing.error.defined ? thing.error : "(none)" %]

That works. That also calls error twice, so maybe:

Error: [% SET error = thing.error; error.defined ? error : "(none)" %]

…but that won't work, because it sets error to an empty string, which is defined. Why? Because TT2 doesn't really have a concept of an undefined value. This can really screw you up if you need to pass undef to an object API that was designed for use by Perl code.

This should be obvious:

Name : [% thing.name %]

You get "Default" as the name.

Name : [% CALL thing.set_name("Bob"); thing.name %]

…and we get Bob. If we ever needed to clear it again, though,

Name : [% CALL thing.set_name("Bob"); CALL thing.set_name(UNDEF); thing.name %]

Well, this won't work, because UNDEF isn't really a thing. It isn't declared, so it defaults to meaning an empty string. I thought you could, once upon a time, do something like this:

[% CALL thing.set_name( thing.error ) %]

…and that the undef returned by error would be passed as an argument. I may be mistaken. It doesn't work now, anyway.

We need to detect these errors, anyway, right? In Perl, we'd have use warnings 'uninitialized' to tell us that we did print $undef. In TT2, there's STRICT. We update our $tt2 to look like:

my $tt2   = Template->new(STRICT => 1);

Now, undefs in our templates are fatal. It's important to note that the error isn't stringifying undef, but evaluating something that results in undef. Our original template:

  my $template = <<'END';
  Got  : [% thing.next  %]
  Error: [% thing.error %]
  State: [% thing.state %]
  END

…now fails to process. The error is: var.undef error - undefined variable: thing.error. In other words, thing.error is undefined, so we can't use it. If we try to use our earlier solution:

  my $template = <<'END';
  Got  : [% thing.next  %]
  Error: [% thing.error.defined ? thing.error : "(none)" %]
  State: [% thing.state %]
  END

We still get an error:

  var.undef error - undefined variable: thing.error.defined

So, we can't check whether anything is defined, because if it isn't, it would've been illegal to evaluate it that far. You can always pass in a helper:

  my $undef_or = sub {
    my ($obj, $method, $default) = @_;
    $obj->$method // $default;
  };

  my $template = <<'END';
  Got  : [% thing.next  %]
  Error: [% undef_or(thing, "error", "(none)") %]
  State: [% thing.state %]
  END

  $tt2->process(\$template, { thing => Thing->new, undef_or => $undef_or })
    or die $tt2->error;

This, of course, still doesn't solve the inability to pass an undefined value to a Perl interface. In fact, it doesn't deal with any kind of variable passing.

I like the idea of discouraging templates from including too much program logic. On the other hand, I loathe the idea of providing a large and complex language in the templates that can still be used to put too much logic in there, but without making as much sense as Perl or working well with existing Perl interfaces.

I'll take Text::Template or HTML::Mason any day of the week, instead.

Notes from YAPC in Austin (body)

by rjbs, created 2013-07-01 10:34
tagged with: @markup:md journal perl yapc

I'm posting this much later than I started writing it. I thought I'd get back to it and fill in details, but that basically didn't happen. So it goes.

This year's YAPC was in Austin. A lot of people complained about the weather, but it was pretty much the same weather we had at home when I left home, so I wasn't bothered. This was good planning on the part of the YAPC organizers, and I thank them for thinking of me.

I'm just going to toss down notes on what I did, for future memory.

I landed on Sunday, having flown with Peter Martini and Andrew Rodland. Walt picked us up at the airport and we went to Rudy's for barbecue… but first we had to check in. I was worried, because it was after 19:00, and it sounded like nobody would be at the B&B to let me in. I called, and nobody was there. I wandered around the back of the building and found a note for me. It told me where to find my key and how to get in. "…and help yourself to the soda, lemonade, and wine in the fridge." Nice. I really liked the place, The Star of Texas Inn, and would stay there again, if the opportunity arose.

Rudy's was fantastic. I had some very, very good brisket and declared that I needed nothing else. I tried some of the turkey and pork, too, though, and they were superb. The jalapeño sausage, I could take or leave. The sides were great, too: creamed corn, new potatoes, potato salad. The bread was a distraction. I also had a Shiner bock, because I was in Texas.

From Rudy's we went to the DoubleTree, where lots of other attendees were staying, and I said hello to a ton of people. Eventually, though, Peter and I headed back to our respective lodgings. I worked a little on my slides and a lot on notes for the RPG that I planned to run on Thursday night.

Monday morning, I caught up with Thomas Sibley, who was staying at the same B&B. We had breakfast (which was fine) and headed to the conference. I attended:

  • Mark Keating's history of Perl, which was good, except that he seems to think that my name is "Ricky." I think he's been talking to my mom too much.
  • Sterling Hanenkamp's telecommuting panel discussion, on which I was a panel member. I think it went pretty well, although I wonder whether we needed an aggressive interviewer to push us harder.
  • John Anderson's automation talk, which was good, but to which I must admit I paid limited attention. I forget what I got distracted by, but something.

For lunch, we had a "p5p meetup" at the Red River Diner. The food was fine and the company was good, but we ended up with quite a few more people than I'd expected, and it sort of became a generic conference lunch. Jim Keenan presented me with a copy of the Vertigo score, which is sitting on my desk waiting for a good 45-minute slot in which to be played. Sawyer was keen to get anything with blueberries in it. "We don't have these things in Israel, man! They're incredible!" I was tickled.

In the next slot, I spent most of my time in the hallway, talking to people who were interested in the state of Perl 5 development. The big questions that arose in these discussions, and similar ones later in the week: how can Perl 5 get more regular core contributors, and how can interested people start helping? For the second one, I need to boil things down to a really tight answer with a memorable URL. I'm not sure it will help, but it might.

I attended the MoarVM talk, which was interesting, but which I can't judge very well. At any rate, I'm excited to see the Rakudo team doing more cool stuff. After that, Larry spoke. It was good, and I was glad to be there. The lightning talks were good, and then there was the "VIP mixer." That's basically free drinks and an opportunity to meet all kinds of new people. I did! I would've met more, but it was loud in there. If we could've done it outside, I bet I would've stayed much longer, but I was losing my voice within the hour.

After that, we were off to Torchy's Tacos. Walt had previously described their tacos as "a revelation." They were definitely the two best tacos I'd ever eaten. Especially amazing was the "tuk tuk," a Thai-inspired delicacy. I went back to Torchy's twice more before I left town, and regret nothing. I'll definitely go again, if I go back to Austin.

Tuesday, in fact, Walt, Tom and I headed to Torchy's for breakfast. It was a good plan. We got to the venue in time for Walt to give his talk about OS X hackery (phew!). I saw a live demo of Tim Bunce's memory profiler, which is clearly something I'll be using in the future, though it looks like it will take significant wisdom to apply effectively. Before lunch, I took in Mark Allen's talk on DTrace, which provided more incentive for me to finally learn how to use the thing. I've been working on the giant DTrace book since YAPC. I also managed, during the talk, to predict and find a stupid little bug when DTrace and lexical subs interact.

For lunch, Walt suggested we eat Bacon, so we did. Peter, Walt, and I piled into his rental and got over there. The Cobb salad was very good, the bacon fries okay. I was very glad to have a selection of local beers beyond Shiner and Austin Amber, and the guy behind the counter suggested Fire Eagle, which I enjoyed.

After lunch, Reini's talk on p2, Karen's TPF update, Matt S Trout on automation, and then Stevan's talk about the state of Perl. Despite calling Perl "the Detroit of scripting languages," it made no mention of RoboCop, nor did it liken anyone to The Old Man, Clarence Boddicker, or Dick Jones. It was a good talk, but I was understandably let down.

For dinner, the whole conference (or much of it) headed out for barbecue. The barbecue was made by The Salt Lick, and while good, it did not beat out Rudy's. Dinner ended with "game night," and I ran a hacked up D&D game, set in what I'm calling Trekpocalypse. More on that in another entry.

Wednesday started with my last trip to Torchy's. It was good.

We took our time getting to the conference, and then I killed a bunch of time in the hallway track. The first talk I got to was Sawyer's talk about async. The talk was good, and one to learn from, as a speaker. I think he did a great job keeping people involved, especially with a hunk of "spot the error" slides in the middle. By the end, he had built a program that did a bunch of parallel queries against IMDB, and then showed results to the audience. He spent a fair hunk of time just commenting on the character actors he dug up, and this went over well, as he'd paid for the digression with strong technical content until that point. I was pleased!

After that, I was obliged to go to Andrew Rodland's talk on StatsD, as I knew that I needed to start using it at work. It was useful, and I've been graphing more stuff, now, which has also been cool. In fact, this talk led to me finding a bug in py-statsd, which has now been fixed. Woo!

After that, it was time for me to give my talk on perl 5. I think it went quite well! I had been worried about it, since I was editing and reworking it until the last day. I was happy with it, though, and will not be making major changes before giving it at OSCON. I look forward to seeing the attendee feedback, once it's in. After that, it was a lot of post-talk chat in the hallways, then Peter Martini's talk about his work on adding subroutine signatures to Perl. Everyone was excited, and rightly so.

After that, Matt S Trout and the lightning talks brought things to a conclusion, and I spent some time saying goodbyes. With everyone heading his or her own way, Tom Sibley and I decided that our way was "toward cocktails." I'd identified a place on Yelp that looked good, the Firehouse Lounge, and we headed down. The drinks were okay, but it was amazingly loud, so we headed out. Actually, I should qualify the drink review: my drink was pretty good. I ordered the drinks for both of us, yelling the order across the bar, and I ordered the wrong thing for Tom, forgetting which one he'd settled on. I was mortified. Tom graciously let it go.

We hadn't eaten yet, and we were still hoping for some more drinks, so I consulted Yelp and it suggested Bar Congress, which was probably the best food and drink advice I've ever gotten from a website. I wrote a review which I won't repeat here, but: I would go there again in a heartbeat. If I get back to Austin, for some reason, I will make a point of getting there.

After dinner, we headed back to the inn and I turned in. I'd have to get up early for my flight, so I packed and went right to sleep. In the morning, I used the "Catch a Cab in Austin" iOS app that people at YAPC had been talking about. It worked, and I got to the airport with plenty of time, and my flights home were uneventful. As always, I'd had a great time, but I was ready to get home to my family.

Next year's YAPC::NA will be in Orlando, and although it won't be easy to top this year's, I'm pretty sure it will do just fine.

print-ing to UDP sockets: not so good (body)

by rjbs, created 2013-06-27 20:21

We've been rolling out more and more metrics at work using Graphite and StatsD. I am in heaven. I'm not very good at doing data analysis, but fortunately there are some very, very obvious things I can pick out from our current visualizations, and I'm finding all kinds of things to improve based on these.

I'm using Net::Statsd::Client, as it looked convenient. Under the hood, for now, it uses Etsy::StatsD. I found a very confusing bug, and when I told the author of Net::Statsd::Client, he confirmed that he'd seen it too. I've worked out the details, and what I found made me grumpy! The moral of this story will be: don't use print to send to a UDP socket. (I doubt I'll print to a socket again, after this.)

As a rule, I was sending very simple measurements to StatsD. They'd look like this:

  pobox.host.mx-1.messages:1|c

This means: increment the counter with the given name.
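
For context, here's a minimal sketch of how one of these counters gets sent from Perl. The metric name is the one from the example above; the host and port are my guesses at a typical setup, not anything from my actual config:

  use Net::Statsd::Client;

  # Point the client at the StatsD daemon; 8125 is StatsD's usual port.
  my $statsd = Net::Statsd::Client->new(
    host => 'localhost',
    port => 8125,
  );

  # Sends a single UDP datagram: "pobox.host.mx-1.messages:1|c"
  $statsd->increment('pobox.host.mx-1.messages');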

StatsD listens for UDP. In theory, you can send a bunch of these measurements in one datagram, separated by newlines. In practice, I was sending exactly one measurement per datagram. Sometimes, though, the server was receiving mangled data: the metric names would be wrong, or the whole string would be garbled. I fired up a network sniffer and saw things like this:

  ost.mx-1.messages:1|c␤pobox.host.mx-1.messages:1|c␤pobox.host.
  mx-1.messages:1|c␤pobox.host.mx-1.messages:1|c␤pobox.host.mx-1.
  messages:1|c␤pobox.host.mx-1.messages:1|c␤pobox.host.mx-1.
  messages:1|c␤

Okay, it's a bunch of +1 operations run together… but what's up with the first one being truncated? And, more importantly, what was sending them in one datagram!? A review of the StatsD libraries will show that they don't do any buffering. All that Etsy::StatsD does is open a UDP socket and print to it. You can send multiple metrics at once, if you want, but you have to go out of your way to do it, and I wasn't.
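
For the curious, the send path in Etsy::StatsD amounts to something like this. This is a paraphrase of the idea, not the module's actual source, and the host and port are placeholders:

  use IO::Socket::INET;

  # One connected UDP socket per client object...
  my $sock = IO::Socket::INET->new(
    Proto    => 'udp',
    PeerAddr => 'localhost',
    PeerPort => 8125,
  );

  # ...and a plain print for each newline-terminated measurement.
  print $sock "pobox.host.mx-1.messages:1|c\n";

There's no buffering layer in sight: one measurement, one print.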

Further, sockets don't buffer their output in Perl! When you connect to a socket, it's set to auto-flush. So why was there buffering happening? Andrew Rodland, the author of Net::Statsd::Client, said that it only happened while the StatsD server was local and unavailable. Immediately, things fell into place.

If you're running Linux, you can try this fun experiment. First, run this server:

  use IO::Socket::INET;

  # Listen for UDP datagrams on port 3030.
  my $sock = IO::Socket::INET->new(
    Proto      => 'udp',
    LocalHost  => 0,
    LocalPort  => 3030,
    MultiHomed => 1,
  );

  # Print each datagram as it arrives.
  while (1) {
    my $data;
    my $addr = $sock->recv($data, 1024);

    print "<<$data>>\n";
  }

Then, run this client:

  use IO::Socket::INET;
  use Time::HiRes qw(sleep);

  # Connect a UDP socket to the server above.
  my $sock = IO::Socket::INET->new(
    Proto    => 'udp',
    PeerHost => 'localhost',
    PeerPort => 3030,
  );

  # Print one ten-byte payload every half second.
  for (1 .. 20) {
    print $sock "1234567890";
    sleep 0.5;
  }

You'll see the server print out the datagrams it's receiving. It all looks good.
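
That is, each payload arrives as its own datagram, and the server prints twenty lines like this:

  <<1234567890>>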

If you start the server after the client, though, or kill it and restart it during the client's run, you'll see the server receive datagrams containing the number sequence more than once. That's bad enough on its own. My belief, which I haven't rigorously tested, is that when the outgoing buffer fills up, data is lost from the left. So even if your measurements could be safely concatenated, this still wouldn't be safe.
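
In other words, instead of the clean one-payload-per-datagram output above, you'll see output along these lines (the exact repetition count will vary):

  <<12345678901234567890>>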

This is, at least in part, a product of the fact that Linux tries much harder to deliver UDP datagrams sent to a local interface; delivery there is, to some extent, guaranteed. I'm not yet sure whether the behavior of print in Perl with such a socket is a bug, or merely a horrible behavior emerging from the circumstances around it. Fortunately, either way, it's easy to avoid: just replace print with send:

  send $sock, "1234567890", 0;

With Etsy::StatsD patched to do that, my problems have gone away.
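
If you want to go a little further than the one-line swap, checking send's return value makes the failure visible instead of silent. This is my own elaboration, not the actual patch; note that on Linux, a connected UDP socket can report an earlier ICMP port-unreachable as an error on a later send:

  # send() hands the kernel exactly one datagram per call; it returns
  # undef on failure, with the reason in $!.
  my $sent = send $sock, "pobox.host.mx-1.messages:1|c", 0;
  warn "statsd send failed: $!" unless defined $sent;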
