rjbs forgot what he was saying


I bought a Wii U!

by rjbs, created 2014-04-10 23:11
last modified 2014-04-10 23:11
tagged with: @markup:md journal videogame wii

On Tuesday, Gloria and I celebrated our 14th anniversary! We went out to Tulum (yum!) and Vegan Treats (yum!) and it wasn't quite late enough that we wanted to go pick up the kid, so we decided to go walk around Target. I said I'd been thinking about buying a Wii U, and Gloria said I should. (Or maybe she just didn't say "I strongly object." I'm not splitting hairs, here.)

I bought one. Today, we went looking for some games on which to use the Target credit we got by buying the console, taking advantage of this week's buy-2-get-1 promotion on video games. In the end, Gloria drove to Quakertown to buy video games for me while I sat at home poking at security code. I owe her big time — nothing new there!

So, now I've played a handful of Wii U games, done a Wii-to-Wii-U transfer, used the Wii eShop, and tried out a few of the non-game features on the Wii U. This is my preliminary report.

I bought the Wii U because I wanted to play Nintendo games. I have almost no interest in playing non-Nintendo games on it. (I kind of want to play ZombiU, though.) Eventually, there were enough games for the Wii U from Nintendo that I thought it would be worth the investment. We now own:

  • Super Mario Bros. Wii U
  • Super Mario 3-D World
  • NES Remix
  • Super Luigi Bros. Wii U ☺
  • Pikmin 3
  • Donkey Kong Country: Tropical Freeze
  • NintendoLand
  • Scribblenauts Unmasked (not Nintendo, but I really wanted it)
  • Scribblenauts Unlimited (which I got because it was effectively free)

I've played the first three, although none of them very much. They are all excellent in the way that I expect from Nintendo. It amazes me how they are able to produce such consistently great games! The only major franchise Nintendo game I remember disliking in the last ten years is Metroid: Other M, which was outsourced. (By the way, I loathed that game!)

The big problem so far is the controller. The Wii U gamepad is way cool, but as a controller it's just a little weird. I think I'm very used to my hands being at a bit of an angle when playing games, and the gamepad makes me hold them straight. I'm not sure whether the distance between them is really an issue.

On the other hand, I can play those games with the Wiimote, too. It's not bad, especially NES Remix. I'm left feeling, though, that the Wii U gamepad is a poor replacement for a "normal" gamepad, and the Wiimote is a poor replacement for an NES controller. I think I'll probably much prefer playing both the Mario games with the Wii U "pro" controller, which is much more like an Xbox or PS3 controller. As for NES Remix, I think I'll stick with the Wiimote, but I'd like something a bit more substantial. The height:width ratio on the Wiimote isn't quite right, and I feel like I'm holding the thing the wrong way. The Wii U menu doesn't help with that: it assumes that you can use the cross pad and buttons like you're holding the remote vertically, even when you're deep in playing a game that uses it horizontally.

Still, I bought the Wii U for the games, and so far they're just great. I expect that trend to continue, so I'm sure I'll be delighted with the purchase over time.

Finally, there's the matter of everything that is neither the hardware nor the games. For example, the menu system, the social network, the system setup, and so on. In short: it's all bizarre.

There's this pervasive idea in Wii U that everybody who plays Wii U is your buddy, and you want them to post sticky notes on your game. When you reach a new level in any game, you might see notes from other players, including typed or hand-drawn notes. These range from the relevant to the insipid to the bizarre. They can be turned off, but they're weird. Weirder, these occur on the main screen. Instead of a menu like the Wii had, showing me all my options, the default is to show a swarming mass of Mii avatars who chatter amongst themselves about nothing much. Can you imagine if you were using Windows, and little random speech bubbles popped up here and there talking about cool new programs that were going to be released soon? It's just weird.

On the other hand, it seems like shutting these off shuts off some kind of avenue to receiving news and advance information. I'll probably do it anyway.

What would be cool, though, would be to get this chatter from just my actual circle of friends. I'd love to be able to use the Wii main menu and "Miiverse" as a sort of bulletin board with friends. With the whole Internet, though? Not so much.

I haven't yet set up any friendships, but I will. I'm looking forward to a Nintendo gaming experience where I don't need to tell my friends a 16-digit code to be befriended. My assumption is that Nintendo is still several years behind the curve, but hopefully they're at last on the acceptable side of it. The Wii code experience was indefensible.

My goal is to play through Super Mario Bros. Wii U first, or at least first-ish. After that, Mario 3-D World. I'm not too worried about playing in order, though. Everything looks good. I mostly bought Scribblenauts: Unmasked to see whether I can stump it. My daughter demanded that I start it up to see if it could make Raven, and was delighted to see that it could. I'm looking forward to seeing whether it can make Doctor Phosphorus.

lazyweb request tracker

by rjbs, created 2014-04-03 23:22
last modified 2014-04-03 23:23
tagged with: @markup:md journal

I like using Remember the Milk. It's a to do list tracker. I use it for lots of little one-off tasks (blog ideas, games to try) and for simple projects that don't have GitHub repositories. It's got an API (which is kind of weird) and an iOS app (which is very good) and a bunch of other interesting little services.

I'd like to use it for more things. Most of them would be a little tough to do, because of the particulars of RTM. Today, though, I realized a useful thing to do: every time I think, "I wonder how I can do XYZ?" and tweet it, I'll also put it in a list in RTM. Then if I get an answer, I can record it, and if I don't, I can remember what I was looking for and ask again later. Or maybe figure out a solution on my own!

so long, module list!

by rjbs, created 2014-03-26 10:48
tagged with: @markup:md cpan journal perl

There's a file in every CPAN mirror called 03modlist.data that contains the "registered module list." It's got no indenting, but if it did, it would look something like this:

sub data {
  my $result  = {};
  my $primary = "modid";
  for (@$CPAN::Modulelist::data) {
    my %hash;
    @hash{@$CPAN::Modulelist::cols} = @$_;
    $result->{ $hash{$primary} } = \%hash;
  }
  $result;
}
$CPAN::Modulelist::cols = [
  'modid',       'statd', 'stats', 'statl', 'stati', 'statp',
  'description',
  'userid',      'chapterid',
];
$CPAN::Modulelist::data = [
  [
    'ACL::Regex', 'b', 'd', 'p', 'O', 'b',
    'Validation of actions via regular expression',
    'PBLAIR', '11'
  ],
  ...
];

It's an index of some of the stuff on the CPAN, broken down into categories, sort of like the original Yahoo index. Or dmoz, which is apparently still out there! Just like those indices, it's only a subset of the total available content. Unlike those, things only appear on the module list when the author requests it. Over the years, authors have become less and less likely to register their modules, so the list became less relevant to finding the best answer, which meant authors would be even less likely to bother using it.

Some things that don't appear in the module list: DBIx::Class, Moose, Plack, Dancer, Mojolicious, cpanminus, and plenty of other things you have heard of and use.

Rather than keep the feature around, languishing, it's being shut off so that we can, eventually, delete a bunch of code from PAUSE.

The steps are something like this:

  • stop putting any actual module data into 03modlist
  • stop regenerating 03modlist altogether, leaving it a static file
  • convert module registration permissions to normal permissions
  • delete all the code for handling module registration stuff
  • Caribbean vacation

There's a pull request to make 03modlist empty already, just waiting to be applied… and it should be applied pretty soon. Be prepared!

the 2014 Perl QA Hackathon in Lyon: the work

by rjbs, created 2014-03-18 22:37
last modified 2014-03-19 11:09

Today is my first full day back in Pennsylvania after the Perl QA Hackathon in Lyon, and I'm feeling remarkably recovered from four long days of hacking and conferring followed by a long day of travel. I can only credit my quick recovery to my significantly increased intake of Chartreuse over the last week.

The QA Hackathon is a small, tightly-focused event where the programmers currently doing work on the CPAN toolchain get together to get a lot of work done and to collaborate in ways that simply aren't possible for most of the rest of the year. I always come out of it feeling refreshed and invigorated, partly because I feel like I get so much done and partly because it's such an inspiration to see so many other people getting even more done all in one place.

I'm not going to recount the work I did this year in the order that I did it. You might be able to reconstruct this by looking at my git logs, but I'll leave that up to you. Also, I'm sticking to technical stuff here. I might make a second post later about non-code topics.

For this year's hackathon, I wasn't sure exactly what my agenda would be. I thought I might be working on the conversion of the perl 5 core's podcheck.t to Pod::Checker 1.70. In the end, though, I spent most of my time on PAUSE. PAUSE, the Perl Authors Upload SErver, is a cluster of programs and services, mostly thought of as two parts:

  • a web site for managing user accounts and receiving archive uploads
  • a program that scans for new uploads and decides whether to put their contents into the CPAN indexes

I didn't do any work on the web site this year. (I would like to do that next, though!) Instead, I worked entirely on the indexer. The indexer is responsible for:

  • deciding whether to index new uploads at all
  • deciding what packages, at what versions, a new upload contains
  • checking the uploading user's authorization to update the index for those packages
  • actually updating the master database and the on-disk flatfile indexes

In the summer of 2011, David Golden and I got together for a one-day micro-hackathon. Our goal was to make it possible to write tests for the indexer's behavior, and I think we succeeded. I'm proud of what we accomplished that day, and it's made all my subsequent work on PAUSE possible.

I had also worked on PAUSE at last year's hackathon, and a few of the things we'd done then had never quite been finished. I decided that my first course of action would be to try to get those sorted out.

PAUSE Work

03modlist

The "module list" isn't talked about a lot these days. It's a header-and-body format file where the body is a "safe to eval" (ha ha) Perl document that describes "registered" modules and their properties. You can see it here: http://www.cpan.org/modules/03modlist.data

Most modules on the CPAN are only in "the index," also known as "02packages." There's very little information indexed for these. Given a package name, you can find out which file seems to have the latest version of it, and you can find who has permission to update it. That's it. Registered modules, on the other hand, list a description, a category, a programming style, and other things. The module list isn't much used anymore, and the kinds of data that it reports are now found, instead, in the META.json files meant to be included with every distribution.
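
For contrast, the body of 02packages records just three columns per package. A couple of made-up lines, to show the shape of the thing:

Foo::Bar                        1.002   A/AU/AUTHORID/Foo-Bar-1.002.tar.gz
Foo::Bar::Baz                   undef   A/AU/AUTHORID/Foo-Bar-1.002.tar.gz

(The version column says "undef" when no version could be parsed out of the package.)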

I had filed a pull request to produce an empty 03modlist in 2013, but it wasn't in place. Andreas, David, and I all remembered that there was a reason we hadn't put it in place, but we couldn't remember specifics or find any evidence. We decided to push forward. I got in touch with a few people who I knew would be affected, rebased my work, and got a schedule in place. There wasn't much more to do on this front, but it will happen soon. The remaining steps are:

  1. write an announcement
  2. apply the patch
  3. post the announcement that it's been done

I expect this to be done by April.

After that's done, and a month or two have passed with no trouble, we'll be able to start deleting indexer code, converting "m" permissions to "f" or "c" permissions (more on that later), and eliminating unneeded user interface.

dist name permissions

Generally, if I'm going to release a module called Foo::Bar at version 1.002, it will get uploaded in a file called Foo-Bar-1.002.tar.gz. In that filename, Foo-Bar is the "dist name." Sometimes people name their files differently. For example, one might upload LWP::UserAgent in lwp-perl-5.123.tar.gz. This shouldn't matter, but does. The PAUSE indexer only checks permissions on packages, and nothing else. Unfortunately, some tools work based on dist names. One of these is the CPAN Request Tracker instance. It would allow distribution queues to clash and merge because of the lax (read: entire lack of) permissions around distribution names.

Last year, I began work to address this. The idea was that you may only use a distribution name if you have permissions on the matching module name. If you want to call your distribution Pie-Eater, you need permissions on Pie::Eater. We didn't get the work merged last year, because only at the last minute did we realize that there were over 1,000 cases where this wasn't satisfied. It was far more than we'd suspected. (This year, when I reminded Andreas of this, he was pretty dubious. I wasn't: I remembered the stunned disbelief I'd already worked through last year!)

A small group of us discussed the situation and realized that about 99% of the cases could be solved easily: we'd just give module permissions out as needed. A few other cases could be fixed automatically or were not, actually, problematic. The rest were so convoluted that we left them to be fixed as needed. Some of them dated to the 1990's, so it seemed unlikely that it would come up.

I filed a pull request to make this change, in large part based on the work from last year. It was merged and deployed.

Unfortunately, there was a big problem!

PAUSE does not (yet!) have a very robust transaction model, and its database updates were done one by one, with AutoCommit enabled. There was no way to entirely reject a file after starting work, prior to this commit, and I thought the simplest thing to do would be to wrap the indexing of each dist in a transaction. It made it quite easy to write the new check safely, although it required some jiggery-pokery with $dbh disconnect times. In the end, all the tests were successful.

Unfortunately, the tests and production behaved differently, using different database systems. Andreas and I spent about an hour on things before rolling back the changes and having dinner. The next morning, everything was clear. We knew that a child process was disconnecting from the database, but couldn't find out where. We'd set InactiveDestroy on the handle, so it shouldn't have been disconnecting… but it turned out that another object in the system had its own DESTROY method which disconnected explicitly. That fixed things, and after nearly a year, the feature was in place!
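
For the curious, here is roughly the shape of the bug, as hypothetical code rather than PAUSE's own. The wrapper class, DSN, and credentials are all invented:

use strict;
use warnings;
use DBI;

# A made-up wrapper in the spirit of the bug: it disconnects in DESTROY.
package HandleKeeper {
  sub new     { my ($class, $dbh) = @_; bless { dbh => $dbh }, $class }
  sub DESTROY { $_[0]{dbh}->disconnect }
}

# Placeholder connection details; the real thing talks to PAUSE's MySQL.
my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'secret',
  { RaiseError => 1, AutoCommit => 0 });

my $keeper = HandleKeeper->new($dbh);

my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
  # The child marks the handle so DBI's own DESTROY leaves the shared
  # connection alone...
  $dbh->{InactiveDestroy} = 1;
  exit 0;   # ...but the child's copy of $keeper still runs its DESTROY here,
            # explicitly disconnecting and yanking the connection out from
            # under the parent.
}
waitpid $pid, 0;

# In the buggy setup, the parent's handle is now dead and this fails.
my ($ok) = $dbh->selectrow_array('SELECT 1');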

package name case-changing

Last year, we did a fair bit of work to make permission checks case-insensitive. The goal was that if "Foo" was already registered, nobody else could claim "foo". We wanted to prevent case-insensitive filesystems from screwing up where case-sensitive filesystems would work. Of course, this isn't a real solution, but it helps discourage the problem.

When we did this, we had to decide what to do when someone who had permissions on Foo tried to switch to using "foo". We decided that, hey, it's your package and you can change it however you like. This turned out to be a mistake, best demonstrated by some recent trouble with the Perl ElasticSearch client. We decided that if you want to change case, you have to be very deliberate about it. Right now, that means dropping permissions and re-indexing. In the future, I hope to make it a bit simpler, but I'm in no rush. This is not a common thing to want to do. I filed a pull request to forbid case-mismatching updates.

I also filed a pull request to issue a warning when package and module names case-mismatch. That is, if you upload a dist containing lib/Foo/Bar.pm with package foo::bar in it, you'll get a warning. In the future, we may start rejecting cases like this, but for now the check isn't good enough for that: it only catches some of the ways the mismatch can show up, though probably most of them.

Indexing warnings are a new thing. I'm not sure what warnings we might add in the future, but it's easy to do so. Given the kinds of strictness we've talked about adding, being able to warn about it first will probably come in useful later.

fixing bizarro permissions

In the middle of some of the work above, during some other discussion, somebody leaned over and said, "Hey, did you see the blog post about how to steal permissions on PAUSE distributions?" I blanched. I read the post, which seemed to describe something that should definitely not be possible, and decided it was now my top priority. What luck to have this posted during the hackathon!

In PAUSE, there are three kinds of permission:

  • first-come permission, given to the first person to upload a package
  • co-maintainer permission, handed out by the first-come user
  • module list permission, given to the registered owner in the module list

Let's ignore the last one for now, since they're going to go away.

The bug was that when nobody had first-come permissions on a package, the PAUSE code assumed that nobody could have any permissions on it, and would re-issue first-come. It wasn't the only bug that inspection turned up, though.

It might sound, from above, like a given package owner would only need either first-come or co-maint, but actually you always need co-maint. First-come is meant to be granted in addition to that. This was required, but not enforced, and if a user ended up with only f permissions, they'd sometimes seem not to exist, and permissions could be mangled. I filed a pull request to prevent dist hijacking along with some tests.

While running the tests, I started seeing something really bizarre. Normally, permission lines in the permissions index test file would look like this:

Some::Package,USER1,f
Some::Package,USER2,c

...but in the tests, I was sometimes seeing this:

Some::Package,USER1,1
Some::Package,USER2,2

Waaaah? I was baffled for a while until something nagged at me. I noticed that the SQL generating the data to output was using double-quote characters for string literals, rather than standard single-quotes. This is fine in MySQL, which is used in production, but not in SQLite, which is used in the tests. I filed a pull request to switch the quotes. I'll probably file more of those in the future. Really, it would be good to test with the same system as is used in production, but that's further off.

package NAME VERSION

Almost a year ago, Thomas Sibley reported that PAUSE didn't handle new-style package declaration. That is, it only worked with packages like this:

package Foo::Bar;
our $VERSION = '1.001';

...but not any of these:

package Foo::Bar 1.001;

package Foo::Bar 1.001 { ... }

package Foo::Bar {
  our $VERSION = '1.001';
}

I strongly prefer package NAME VERSION when possible, but "possible" didn't include "anything released to the CPAN" because of this bug. I filed a pull request to support all forms of package. I'm really happy about this one, and look forward to making it possible for more of my dists to use the newer forms of package!

respecting release_status in the META.json file

The META.json file has a field called release_status. It's meant to be a way to put a definitive statement in the distribution itself that it's a trial release, not meant for production use. Right now, there are two chief ways to indicate this, both related only to the name of the file uploaded to the CPAN. That file doesn't stick around, and we want a way to decide what to do based on the contents of the dist, not the archive name.
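
For reference, the field sits at the top level of META.json; a trial release's metadata includes a fragment like this (the dist name is invented):

{
   "name"           : "Foo-Bar",
   "version"        : "1.001",
   "release_status" : "testing",
   ...
}

The spec allows "stable", "testing", and "unstable"; anything other than "stable" marks the release as not meant for ordinary production use.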

Unfortunately, PAUSE totally ignored this field. I filed a pull request to respect the release_status field. Andreas suggested that we should inform users why we've done this, so I filed a pull request to add "why we skipped your dist" reports. I used that facility for the "dist name must match module name" feature above, and I suspect we'll start issuing those reports for more situations in the future, too.

spreading the joy of testing

Neil Bowers was at the hackathon, and had asked a question or two about how the indexer did stuff. I took this as a request for me to pester him mercilessly about learning how to write tests with the indexer's testing code. Eventually, and presumably to shut me up, he stopped by and I walked him through the code. In the process of doing so, we realized that half the tests — while all seemingly correct — had been mislabeled. I filed a pull request to fix all the test names.

I'm hoping to file some other related pulls to refactor the test file to make it easier to write new indexer tests in their own files. Right now, the single file is just a bit too long.

fixes of opportunity

Lots of the other work exposed little bugs to fix.

Because I was doing all my testing on perl 5.19.9, one of our new warnings picked up a precedence error in the code. I filed a pull request to replace an or with a ||.
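
This isn't the actual PAUSE line, but the bug class in miniature:

use strict;
use warnings;

my %opt = (limit => 0);

# Buggy: "or" binds more loosely than "=", so this parses as
# (my $buggy = $opt{limit}) or 10; the fallback never applies.
# Newer perls warn about the precedence issue on this line.
my $buggy = $opt{limit} or 10;

# Fixed: "||" binds tighter than "=", so the fallback works as intended.
my $fixed = $opt{limit} || 10;    # 10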

Every time I ran the tests, I got an obnoxious flood of logging output. Sometimes it was useful. Usually, it was a distraction. I filed a pull request to shut up the noise unless I was running the tests in verbose mode.

Peter Rabbitson had noticed that when PAUSE skips a "dev release" because of the word TRIAL in the release filename, it was happy for that string to appear anywhere in the name. For example, MISTRIAL-1.234.tar.gz would have been skipped. I filed a pull request to better anchor the substring. I filed a matching pull request with CPAN::DistnameInfo that fixed the same bug, plus some other little things. I'm glad I did this (it was David Golden's idea!) because Graham Barr pointed out that historically people have used not just ...-TRIAL.tar.gz but also ...-TRIAL1.tar.gz.
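
Roughly the difference, with illustrative patterns rather than the exact ones now in PAUSE and CPAN::DistnameInfo:

use strict;
use warnings;

for my $file (qw( MISTRIAL-1.234.tar.gz Foo-Bar-1.234-TRIAL.tar.gz Foo-Bar-1.234-TRIAL2.tar.gz )) {
  my $old = $file =~ /TRIAL/                              ? 1 : 0;  # bare substring: MISTRIAL matches too
  my $new = $file =~ /-TRIAL[0-9]*\.(?:tar\.gz|tgz|zip)$/ ? 1 : 0;  # anchored just before the extension
  printf "%-30s old:%d new:%d\n", $file, $old, $new;
}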

I found some cases where we were interpolating undef instead of … anything else. I filed a pull request to use a default string when no module owner could be found.

PAUSE has a one-second sleep after each newly-indexed distribution. I'm not sure why, and assume it's because of some hopefully long-dead race condition. Still, in testing, I knew it wouldn't be needed, and it slowed the test suite down quite a lot every time I added a new test run of the indexer. I filed a pull request to update the TestPAUSE system to skip the sleep, shaving off a good 90% of the indexer tests' runtime.

While testing something unrelated, Andreas and I simultaneously noticed a very weird alignment issue with some otherwise nicely-formatted text. I filed a pull request to eliminate some accidental indenting.

Dist::Zilla

I had hoped to spend the last day plowing through relevant tickets in the Dist::Zilla queue, but it just didn't happen. I did get to merge and tweak some work from David Golden to make it easier to run test suites in parallel. With the latest Dist::Zilla and @RJBS bundle, my test suites run nine jobs at once, which should speed up both testing and releasing.

Version Numbers

One night, Graham Knop, Peter Rabbitson, David Golden, Leon Timmermans, Karen Etheridge, and I sat down over an enormous steak to discuss how Perl 5's abysmal handling of module versioning could be fixed. I hope that we can make some forward movement on some of the ideas we hammered out. They can all get presented later, once they're better transcribed. I have a lot of them on the back of a huge paper place-mat, right now.

perl5.git

I did almost nothing on the perl core, which is as I expected. On Friday morning, though, I was on the train to and from the Chartreuse distillery, with no network access, so I wanted to work on something requiring nothing but my editor and git. I knew just what to do!

Perl's lexical warnings are documented in two places: warnings, which documents a few things about the warnings pragma, and perllexwarn, which documents other stuff about using lexical warnings. There really didn't seem to be any reason to divide the content, and it has led, over and over, to people being unable to find useful documentation. I merged everything from perllexwarn into warnings. Normally, this would have been trivial, but warnings.pm is a generated file and perllexwarn.pod was an auto-updated file, so I had to update the program that did this work. It was not very hard, but it kept me busy on the train so that I was still working even while off to do something a bit more tourist-y.

Is that all?

I know there was some more to all this, and it might come back to me. I certainly had plenty of interesting discussions about a huge range of topics with many different groups of attendees. They ranged from the wildly entertaining to the technically valuable. I'll probably recount some of them in a future post. As for this post, meant only to recount the work that I did, I think I've gotten the great majority of it.

Thanks!

I was able to attend the 2014 Perl QA Hackathon because of the donations of the generous sponsors and the many donors to The Perl Foundation, which paid for my travel. Those subsidies, though, would not have been very useful if there hadn't been a conference, so I also want to thank Philippe "BooK" Bruhat and Laurent Boivin, who took on the organization of the hackathon. Finally, thanks to Wendy van Dijk, who began each day with a run to the market for fresh lunch foods. I had plenty of good food while in Lyon, but the best was the daily spread of bread and cheese. (Wendy also brought an enormous collection of excellent liquor, on which I will write more another day.)

I'm looking forward to next year's hackathon already. I hope that it will stick to the same size as this year, which was back to the excellent and intimate group of the first few years. Until then, I will just have to stay productive through other means.

today's timezone rant

by rjbs, created 2014-03-07 19:02
last modified 2014-03-08 20:52

Everybody knows, I hope, that you have to be really careful when dealing with time in programs. This isn't a problem only in Perl. Things are bad all over. If you know what you're doing when you start, you can avoid many, many problems. Unfortunately, not all our code is being built anew by our present selves. Quite a lot of it exists already, written by other, less experienced programmers, and often (to our great shame) by our younger selves.

Every morning, I look at any unusual exceptions that have been reported overnight. Last night, I saw a few complaining about "invalid datetime values," and I saw that they were about times around two something in the morning. A chill went up my spine. I knew what was going to be the case. I checked with MySQL:

mysql> update my_table set expires = '20140309015959' where id = 134866408;
Query OK, 1 row affected (0.00 sec)

mysql> update my_table set expires = '20140309030000' where id = 134866408;
Query OK, 1 row affected (0.00 sec)

mysql> update my_table set expires = '20140309020000' where id = 134866408;
ERROR: Invalid TIMESTAMP value in column 'expires' at row 1

So, 01:59:59 is okay. 03:00:00 is okay. 02:00:00 through 02:59:59 is not okay. Why? Time zones! Daylight saving time causes that hour to not exist in America/New_York, and the field in question is storing local times. You can't store March 9th, 2014 2:00 in the field because no such moment in time exists. The lesson here is that you shouldn't be storing your time in a local format. Obviously! I tend to store timestamps as integers, but storing them as universal time would have avoided this problem.

Of course, since there's a lot of data already stored in local times, and it can't always be "just fixed," we also have a bunch of tools that work with times, being careful to avoid time zone problems. Unfortunately, that's not always easy. This problem, though, came from a dumb little routine that looks something like this:

sub plusdays {
  Date::Calc::Add_Delta_YMDHMS( Now, 0, 0, 0, 0, 0, 86_400 * $_[0]);
}

So, you want a time a week in the future? plusdays(7)! You want a time 12 hours from now? plusdays(0.5). Crude, but effective and useful. Unfortunately, when it's currently 2014-03-08 02:30 and you ask for one day later, you get 2014-03-09 02:30 — a non-time.

The solution to this should have been trivial. We already use DateTime extensively. It just hadn't been applied to this one little piece of code. I wrote this:

sub plusdays {
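  # local_now is presumably from the in-house DateTime subclass mentioned
  # below; with stock DateTime this would be DateTime->now(time_zone => 'local').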
  DateTime->local_now->add(seconds => $_[0] * 86_400)
}

It's a good thing that we did this in terms of seconds. See, this does what we want:

my $dt = DateTime->new(
  time_zone => 'America/New_York',
  year => 2014, month  => 3, day => 8, hour => 2, minute => 30,
);

say $dt->clone->add(seconds => 86_400);

It prints 2014-03-09T03:30:00.

On the other hand, if we replace the last line with

say $dt->clone->add(days => 1);

then we get this fatal error:

Invalid local time for date in time zone: America/New_York

This is totally understandable. It's the kind of thing that lets us distinguish between adding "a month" and adding "30 days," which are obviously distinct operations. Not all calendar days are 86,400 seconds long, for example.

Actually, this problem wouldn't have affected us, because we don't use DateTime. We use a subclass of DateTime that avoids these problems by doing its math in UTC. Unfortunately, this has other bizarre effects.

While I was doing the above edit, I saw some other code that was also using Date::Calc when it could've been using DateTime. (Or, as above, our internal subclass of DateTime.) This code generated months in a span, so if you say:

my @months = month_range('200106', '200208');

You get:

('200106', '200107', '200108', '200109', ..., '200208')

Great! Somewhere in there, I ended up writing this code:

my $next_month = $curr_month->clone->add(months => 1);

...and something bizarre happened! The test suite entered an infinite loop as it tried to get from the starting month to the ending month. I added more print statements and got this:

CURRENTLY (2001-10-01 00:00) PLUS ONE MONTH: (2001-10-31 23:00)

What??

Well, as I said above, our internal subclass does its date math in UTC to avoid one kind of problem, but it creates another kind. Because the offset to UTC changes over the course of October, the endpoint seems one hour off when it's converted back to local time. The month in local time, effectively, is an hour shorter than the month in UTC. So, in this instance, I opted not to use our internal subclass.
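
You can reproduce that hour shift with stock DateTime by doing by hand what the subclass effectively does (a sketch, not our actual code):

use v5.10;
use DateTime;

my $dt = DateTime->new(
  time_zone => 'America/New_York',
  year => 2001, month => 10, day => 1,
);

# Do the month math in UTC, then come back to local time.
$dt->set_time_zone('UTC');
$dt->add(months => 1);
$dt->set_time_zone('America/New_York');

say $dt;   # 2001-10-31T23:00:00 -- DST ended during October, so the
           # "one month later" endpoint lands an hour before November 1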

Now, the real problem here isn't DateTime being hard to use or date problems being intractably hard. The problem is that when not handled properly from the start, date representations can become a colossal pain. We're only stuck with most of the stupid problems above because the code in question started with a few innocent-seeming-but-actually-terrible decisions which then metastasized throughout the code base. If all of the time representations had been in universal time, with localization only done as needed, these problems could have been avoided.

Of course, you probably knew that, so in the end, I guess I'm just venting. I feel better now.

that might be good enough for production...

by rjbs, created 2014-02-27 10:30
last modified 2014-02-27 11:29

Sometimes, the example code in documentation or teaching material is really bad. When the code's dead wrong, that might not be the worst. The worst may be code that's misleading without being wrong. The code does just what it says it does, but it doesn't keep its concepts clear, and students get annoyed and write frustrated blog posts. This code might be good enough for production, but not for pedagogy.

I'm back to learning Forth, which I'm really enjoying. The final example in the chapter on variables is to write a tic-tac-toe board. (By the way, more evidence that Forth is strange: variables aren't introduced until chapter nine, more than halfway through the book.)

The exercise calls for the board state to be stored in a byte array, initialized to zeroes, with 1 used for X and -1 used for O. I thought nothing of this and got to work, but no matter what, when I played an "O" I would get a "?" in my output board — an indication that my code was finding none of -1, 0, or 1 in the byte in memory. Why?

Well, bytes are 0 to 255, so -1 isn't a natural value, but "everybody knows" the convention is that -1 is a way of writing 255. I wrote this code which, given a number on the stack, returns the character that should display it:

: BOARD.CHAR
  DUP 0 =  IF '- ELSE
  DUP 1  = IF 'X ELSE
  DUP -1 = IF 'O ELSE '? THEN THEN THEN SWAP DROP ;

The -1 there is a cell value, not a byte value, so on my Forth it's not 255 but 18,446,744,073,709,551,615. Oops.

The answer should be easy: I want a way to say CHAR -1 or something. We didn't see that yet in the book. How does the author do it? At this point, I'm already a little annoyed that I'm going to have to look at the author's answer, but that's life. My guess is that either he's using something he didn't show us, or he's using a literal 255.

It's neither. His factoring of the problem's a bit different, but:

: .BOX  ( square# -- )
  SQUARE C@  DUP 0= IF  2 SPACES
      ELSE  DUP 1 = IF ." X "
            ELSE ." O "
            THEN
            THEN
  DROP ;

He totally punts! If there's anything in a cell other than 0 or 1, he displays an O. Bah!

I found absolutely no value in this use of -1, so I stored all of O's moves as 2. All tests successful.

fixing my accidentally strict mail rules… or not

by rjbs, created 2014-02-24 20:36

I recently made some changes to Ywar, my personal goal tracker, and I couldn't be happier! Mostly.

Ywar is configured with a list of "checks." Each check looks up some datum, compares it to previous measurements, decides whether the goal state was met, and saves the current measurement. The checks used to run once a day, at 23:00. This meant that, for the most part, the feedback I got was the next morning in my daily agenda mail. I could hit refresh at 23:05, if I wanted, and if I was awake. If I did something at 8:00, I'd just have to remember. For the most part, this wasn't a big problem, but I wanted to be able to run things more often.

Last week, when I was working on my Goodreads/Ywar integration, I also made the changes needed to run ywar update more often. There were two main changes: every measurement now carries a log of whether it resulted in goal completion, and checks don't get the last measured value, but the "last state," which contains both the last value measured and the value measured at the last completion.

While I was at it, I added Pushover notifications. Now, when I get up in the morning, I step on my scale. A few minutes later, my phone bleeps, telling me, "Good job! You stepped on the scale!" Over breakfast, I might read an article I've saved to Instapaper. While I do the dishes, or maybe while I read a second article, my iPad bleeps. "Good job! You read something from Instapaper!"

This is surprisingly motivating. I'm completing goals much more often than I used to, now. (The Goodreads integration has also been really motivating.)

This change also inadvertently introduced a pretty significant change in my email rules. Most of them follow the same pattern, which is something like this:

  • at least once every five days, have less unread mail than the previous day

Some of them say "flagged" instead of "unread," or limit their checks to specific folders, but the pattern is pretty much always like the one above. When I started passing each check both the "last measured" and "last completion" values, I had to decide which they'd use for computing whether the goal was completed. In every case, I chose "last completion." That means that the difference checked is always between the now and the last time we met our goal. This has a massive impact here.

It used to be that all I had to do to keep my "keep reading email" goal alive was to reduce my unread mail count from the previous date. Imagine the following set of end-of-day unread message counts:

  • Sunday: 50
  • Monday: 100
  • Tuesday: 70
  • Wednesday: 100
  • Thursday: 75
  • Friday: 80
  • Saturday: 70

Under the old rules, I would get three completions in that period. On each of Tuesday, Thursday, and Saturday, the count of unread messages goes down from the previous day.

Under the new rules, I would get only one completion: Tuesday. After that, the only way, ever, to get another completion is to get down to below 70 unread messages. Maybe in a few days, I get to 60, and now that's my target. This gets pretty unforgiving pretty fast! My current low water mark for unread mail is 28, and I get an average of 126 new messages each day. These goals actually have a minimum threshold, so that anything under the threshold counts, even if last time I was further below it. Right now, it's set at 10 for my unread mail goal.
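
In code, the difference between the two behaviors is roughly this (a sketch with invented names, not Ywar's actual check interface):

# Old behavior: any improvement over yesterday's measurement counts.
sub completed_old {
  my ($today, $yesterday) = @_;
  return $today < $yesterday;
}

# New behavior: you have to beat the count recorded at the last completion,
# unless you're already under the "good enough" minimum threshold.
sub completed_new {
  my ($today, $at_last_completion, $threshold) = @_;
  return 1 if $today <= $threshold;
  return $today < $at_last_completion;
}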

It would be pretty easy to fix this to work like it used to work. I'd get the latest measurement made yesterday and compare to that. I'm just not sure that I should restore that behavior. The old behavior made it very easy to read the easy mail and ignore the stuff that really needed my time. I could let some mail pile up on Wednesday, read the light stuff on Thursday, and I'd still get my point. I kept thinking that I needed something "more forgiving," but I don't think that's true. I don't even think it makes sense. What would "more forgiving" mean? I'm not sure.

One thing to consider is that if I can never keep a streak alive, I won't bother trying. It can't be too difficult. It has to seem possible, and to be possible, without being a huge chore. It just shouldn't be so easy that no progress is really being made.

Also, I need to make sure that once I've broken my streak, any progress starts me up again. If I lose my streak and end up with 2000 messages, having to get back to 25 is going to be a nightmare. My original design was written with this in mind: any progress was progress enough. The new behavior ratchets the absolute maximum down, so that once I've gotten rid of most of those 2000 messages, I can't let them pile back up by ignoring 5 one day, 5 the next, and then reading six the third. Maybe the real solution will be to keep exactly the behavior I have, but to fiddle with the minimum threshold.

The other thing I want to think about, eventually, is message age. I don't want to ding myself for mail received "just now." If a message hasn't been read for a week, I should really read it immediately. If it's just come in this afternoon, though, it should basically be ignored in my counts. For now, though, I think I can ignore that. After all, my goal here is to read email, not to spend my email reading time on writing tools to remind me of that goal!

freemium: the abomination of desolation

by rjbs, created 2014-02-24 10:38
tagged with: @markup:md games journal

I've never been a fan of "freemium," although I understand that game developers need to get paid. It often feels like the way freemium games are developed goes something like this:

  • design a good game
  • focus on making the player want to keep playing
  • insert arbitrary points at which the player must stop playing for hours
  • allow the user to pay money to continue playing immediately

This model drives me batty. It's taking a game and making it worse to encourage the user to pay more. It is, in my mind, the opposite of making a good game that you can make better by paying more. I gladly fork over money for add-on content on games that were good to start with. I never, ever pay to repair a game that has been broken on purpose.

The whole thing reminds me of the 486SX processor, where you could buy a disabled 486 processor now and later upgrade it with a completely new processor that was pinned to only fit into the add-on slot. At least the 486SX could be somewhat explained away as a means to make some money on processors that didn't pass post-production inspection. These freemium games are just broken on purpose from the start.

I think the deciding factor for me is whether I can play the game as much as I want without hitting the pay screen. Years ago, everyone at work was playing Travian. It's a simple browser-based nation-building game, something like a very simplified Civilization. Your workers collect resources and you use them to build cities, troops, and so on. The game is multiplayer and online, so you are in competition with other nations with whom you may eventually go to war or with whom you may establish trade routes. You can keep playing as long as you have resources to spend and free workers. By paying money, you could speed up work or acquire more resources, but the game didn't throw up a barrier every half hour forcing you to wait. It was all a natural part of the game's design, and made sense to have in a simultaneous-play multiplayer game. (Of course, the problem here is that players willing to spend more money have a tactical advantage. That's a different kind of problem, though.)

I used to play an iOS game called Puzzle Craft. The basic game play is tile-matching, and it's all built around the idea that you're the founder of a village that you want to grow into a thriving kingdom. At first, you tile-match to grow crops. Over time, new kinds of tiles are added, and you can respond by developing new tools and by changing the matching rules. You can also build a mine, for a similar but not identical tile matching game. You'll need to deal with both resources to progress along your quest.

I was very excited to see that the makers of Puzzle Craft released a new game this week, Another Case Solved. It's a tile-matcher built in a larger framework, just like Puzzle Craft, but this game is a silly hard boiled detective game. Matching tiles helps you solve mysteries. The game is fun to look at and listen to, but playing it has made me angrier and angrier.

Unlocking major cases requires solving minor cases. Solving minor cases requires a newspaper in which to find them. Newspapers are delivered every fifteen minutes, and you can't have more than three or four of them at a time. In other words, if you want to play more than four (very short) games an hour, you have to spend "candy" to get more newspapers, and you get a piece or two of candy every 12 hours. Also, after a little while, the minor cases become extremely difficult to solve, meaning that every hour you're allowed to play the game three or four times, and that you will probably lose most of them, because there is a low turn limit in each game. Of course, you can keep playing after the turn limit by paying candy.

The whole setup makes it completely transparent that the time and turn limits are there to cajole the player into paying to be allowed to play the free game. It sticks in my craw! I like the game. It is fun. I would pay for it, were it something I could buy at a fixed price. Microtransactions to continue playing the game, though, burn me up.

Maybe I should keep telling myself that I pumped a lot of quarters into Gauntlet when I was a kid. How different is this?

I think it's pretty different. I've seen people play for a very, very long time on one quarter.

integrating Ywar with Goodreads

by rjbs, created 2014-02-17 20:37

Ywar is a little piece of productivity software that I wrote. I've written about Ywar before, so I won't rehash it much. The idea is that I use The Daily Practice to track whether I'm doing things that I've decided to do. I track a lot of these things already, and Ywar connects up my existing tracking with The Daily Practice so that I don't have to do more work on top of the work I'm already doing. In other words, it's an attempt to make my data work for me, rather than just sit there.

For quite a while now, only a few of my TDP goals needed manual entry, and most of them could clearly be automated. It wasn't clear, though, how to automate my "keep reading books" tasks. I knew Goodreads existed, but it seemed like using Goodreads would be just as much work as using TDP. Either way, I have to go to a site and click something for each book. I kept thinking about how to make my reading goals more motivating and more interesting, but nothing occurred to me until this weekend.

I was thinking about how it's hard for me to tell how long it will take me to finish a book. Lately, I'm taking an age to read anything. Catch-22 is about 500 pages and I've been working on it since January 2. Should I be able to do more? I'm not sure. My current reading goals have been very vague. I thought of them as, "spend 'enough time' reading a book from each shelf once every five days." This makes it easy to decide sloppily whether I've read enough, but it's always an all-at-once decision.

In Goodreads, I can keep track of my progress over several days. That means I can change my goal to "get at least 50 pages read a week." There's no fuzzy logic there, just simple page count. It might not be right for every book, but I can adjust it as needed. If it's too low or high, I can fix that too. It seemed like a marked improvement, and it also gave me a reason to consider looking at Goodreads a bit more, where I've seen some interesting recommendations.

With my mind made up, all I had to do was write the code. Almost every time that I've wanted to write code to talk to the developer API of a service that's primarily addressed not via the API, it's been sort of a mess that's usable, but weird and a little annoying. So it was with Goodreads. The code for my Goodreads/Ywar integration is on GitHub. Below is just some of the weirdness I got to encounter.

This request gets the books on my "currently reading" shelf as XML.

sprintf 'https://www.goodreads.com/review/list?format=xml&v=2&id=%s&key=%s&shelf=currently-reading',
  $user_id,
  $api_key;

The resource is review/list because it's a list of reviews. Go figure! That doesn't mean that there are actually any reviews, though. In Goodreads, a review represents the intersection of a user and a book. If it's on your shelf, it has a review. If there's no review in the usual sense of the word, it just means that the review's body is empty.

The XML document that you get in reply has a little bit of uninteresting data, followed by a <reviews> element that contains all the reviews for the page of results. Here's a review:

<review>
  <id>774476430</id>
  <book>
    <id type="integer">168668</id>
    <isbn>0684833395</isbn>
    <isbn13>9780684833392</isbn13>
    <text_reviews_count type="integer">7875</text_reviews_count>
    <title>Catch-22 (Catch-22, #1)</title>
    <image_url>https://d202m5krfqbpi5.cloudfront.net/books/1359882576m/168668.jpg</image_url>
    <small_image_url>https://d202m5krfqbpi5.cloudfront.net/books/1359882576s/168668.jpg</small_image_url>
    <link>https://www.goodreads.com/book/show/168668.Catch_22</link>
    <num_pages>463</num_pages>
    <format></format>
    <edition_information/>
    <publisher>Simon &amp; Schuster </publisher>
    <publication_day>4</publication_day>
    <publication_year>2004</publication_year>
    <publication_month>9</publication_month>
    <average_rating>3.96</average_rating>
    <ratings_count>355544</ratings_count>
    <description>...omitted by rjbs...</description>
    <authors>
      <author>
        <id>3167</id>
        <name>Joseph Heller</name>
        <image_url><![CDATA[https://d202m5krfqbpi5.cloudfront.net/authors/1197308614p5/3167.jpg]]></image_url>
        <small_image_url><![CDATA[https://d202m5krfqbpi5.cloudfront.net/authors/1197308614p2/3167.jpg]]></small_image_url>
        <link><![CDATA[https://www.goodreads.com/author/show/3167.Joseph_Heller]]></link>
        <average_rating>3.94</average_rating>
        <ratings_count>368314</ratings_count>
        <text_reviews_count>9588</text_reviews_count>
      </author>
    </authors>
    <published>2004</published>
  </book>

  <rating>5</rating>
  <votes>0</votes>
  <spoiler_flag>false</spoiler_flag>
  <spoilers_state>none</spoilers_state>
  <shelves>
    <shelf name="currently-reading" />
    <shelf name="literature" />
  </shelves>
  <recommended_for></recommended_for>
  <recommended_by></recommended_by>
  <started_at>Thu Jan 02 17:04:20 -0800 2014</started_at>
  <read_at></read_at>
  <date_added>Tue Nov 26 08:37:09 -0800 2013</date_added>
  <date_updated>Thu Jan 02 17:04:20 -0800 2014</date_updated>
  <read_count></read_count>
  <body>

  </body>
  <comments_count>review_comments_count</comments_count>
  <url><![CDATA[https://www.goodreads.com/review/show/774476430]]></url>
  <link><![CDATA[https://www.goodreads.com/review/show/774476430]]></link>
  <owned>0</owned>
</review>

It's XML. It's not really that bad, either. One problem, though, was that it didn't include my current position. My current position in the book is not a function of my review, but of my status updates. I'll need to get those, too.

I was intrigued, though, by the format=xml in the URL. Maybe I could get it as JSON! I tried, and I got this:

  [...,
  {"id":774476430,"isbn":"0684833395","isbn13":"9780684833392",
  "shelf":"currently-reading","updated_at":"2014-01-02T17:04:20-08:00"}
  ...]

Well! That's certainly briefer. It's also, obviously, missing a ton of data. It doesn't include book titles, total page count, or any shelves other than the one that I requested. That is: note that in the XML you can see that the book is on both currently-reading and literature. In the JSON, only currently-reading is listed. Still, it turns out that this is all I need, so it's all I fetch. I get the JSON contents of my books in progress, and then once I have them, I can get each review in full from this resource:

  sprintf 'https://www.goodreads.com/review/show.xml?key=%s&id=%s',
    $api_key,
    $review_id;

Why does that help? I mean, what I got in the first request was a review, too, right? Well, yes, but when you get the review via review/show.xml, you get a very slightly different set of data. In fact, almost the only difference is the inclusion of comment and user_status items. It's a bit frustrating, because in both cases you're getting a review element, and their ids are the same, but their contents are not. It makes it a bit less straightforward to write an XML-to-object mapper.

When I get review 774476430, which is my copy of Catch-22, this is the first user status in the review:

  <user_status>
    <chapter type="integer" nil="true"/>
    <comments_count type="integer">0</comments_count>
    <created_at type="datetime">2014-02-16T12:47:14+00:00</created_at>
    <id type="integer">39382590</id>
    <last_comment_at type="datetime" nil="true"/>
    <note_updated_at type="datetime" nil="true"/>
    <note_uri nil="true"/>
    <page type="integer" nil="true"/>
    <percent type="integer">68</percent>
    <ratings_count type="integer">0</ratings_count>
    <updated_at type="datetime">2014-02-16T12:47:14+00:00</updated_at>
    <work_id type="integer">814330</work_id>
    <body/>
  </user_status>

By the way, the XML you get back isn't nicely indented as above. It's not entirely unindented, either. It's sometimes properly indented and sometimes just weird. I think I'd be less weirded out if it just stuck to being a long string of XML with no indentation at all, but mostly libxml2 reads the XML, not me, so I should shut up.

The important things above are the page and percent items. They tell me how far through the book I am as of that status update. If I gave a page number when updating, the page element won't have "true" as its nil attribute, and the text content of the element will be a number. If I gave a percentage when updating, as I did here, you get what you see above. I can convert a percentage to a page count by using the num_pages found on the book record. The whole book record is present in the review, as it was the first time, so I just get all the data I need this time via XML.
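
In code, that works out to something like this sketch. It isn't Ywar's real code (the element names come from the samples above, and $review_xml is just assumed to hold the body of the review/show.xml response), but it shows the page-or-percent logic:

  # A rough sketch (not Ywar's actual code) of digging position out of the
  # review XML with XML::LibXML.  Element names come from the samples above.
  use XML::LibXML;

  my $doc   = XML::LibXML->load_xml(string => $review_xml);
  my $pages = $doc->findvalue('//book/num_pages');

  my $page;
  if (my ($status) = $doc->findnodes('//user_status')) {
    $page = $status->findvalue('page[not(@nil="true")]');

    unless (length $page) {
      my $percent = $status->findvalue('percent');
      $page = int($pages * $percent / 100) if length $percent && $pages;
    }
  }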

Actually, though, there's a reason to get the XML the first time. Each time that I do this check, it's for in-progress books on a certain shelf. If I start by getting the XML, I can then proceed only with books that are also on the right shelf, like, above, "literature." Although you can specify multiple shelves to the review/list endpoint, only one of them is respected. If there are four books on my "currently reading" shelf, but only one is "literature," then by getting XML first, I'll do two queries instead of five.

So I guess I should go back and start with the XML.
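
That plan looks roughly like the sketch below. The URLs and element names are the ones shown earlier; $list_doc (the parsed review/list XML), $ua (an LWP::UserAgent), and the shelf name are assumptions:

  # A sketch of the "XML list first, then one show.xml call per matching
  # book" flow.  $list_doc is the parsed review/list response; $ua is
  # assumed to be an LWP::UserAgent.
  my @on_shelf = grep { $_->exists('shelves/shelf[@name="literature"]') }
                 $list_doc->findnodes('//reviews/review');

  for my $review (@on_shelf) {
    my $url = sprintf 'https://www.goodreads.com/review/show.xml?key=%s&id=%s',
      $api_key, $review->findvalue('id');

    my $full = XML::LibXML->load_xml(
      string => $ua->get($url)->decoded_content,
    );
    # ...then pull page and percent out of its user_status, as in the
    # earlier sketch...
  }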

By the way, did you notice that review/list takes a query parameter called format, which can be either XML or JSON, and maybe other things... but that review/show.xml includes the type in the path? You can't change the xml to json and get JSON. You just get a 404 instead.

In the end, making Ywar get data from Goodreads wasn't so bad. It had some annoying moments, as is often the case when using a mostly-browser-based web service's API. It made me finally use XML::LibXML for some real work, and hopefully it will lead to me using Goodreads more and getting some value out of that.

games I've played lately (body)

by rjbs, created 2014-02-03 21:51
tagged with: @markup:md games journal

About a year ago, I told Mark Dominus that I wanted to learn to play bridge, but that it was tough to find friends who were also interested. (I'd rather play with physically-present people whom I know than online with strangers.) He said, "Sackson's Gamut of Games has a two-player bridge variant." I had never heard of Sackson, A Gamut of Games, or the two-player variant. I said, "oh, cool," and went off to look into it all.

So, A Gamut of Games is a fantastic little book that you can get at Amazon for about ten bucks. (That's a kickback-to-rjbs link, by the way.) It's got about thirty games in it, most of which you've probably never heard of. I've played only about a fifth of them so far, or less, and so far they've been a lot of fun. The first game in the book is called Mate, and is meant to feel a bit like chess, which it does. It's a game of pure skill, which is quite unusual for a card game. When I first got the book, I would teach the game to everybody. It was easy to learn and play, fun, and a novelty. I also taught Martha how to play, using a Bavarian deck of cards, which made the game more fun for her. We'd go to the neighborhood bar, get some chicken tenders and beer (root and otherwise) and play a few hands.

I got to wondering why more of the games in Gamut weren't available electronically. I found a Lines of Action app for iOS, but it was a bit half-baked, on the network side. Then, through Board Game Geek, I found a site called Super Duper Games offering online Mate, in the guise of "Chess Cards." It's a really mixed experience, but I am a big fan.

The site's sort of ugly, and incredibly slow. There are some weird display issues and some things that, if not bugs, are darn close. On the other hand, it's got dozens of cool board games that you haven't played before, and you can play them online, against your friends or strangers. If you join, challenge me to something. I'm rjbs.

So far, I've only played a small number of the available games.

One of my favorites is Abande. Like many SDG games, I think it would be even better played in real time, and I'm hoping to produce a board for playing it. Another great one is Alfred's Wyke, which should be easy to play at a table with minimal equipment. I think I'll play it with wooden coins, which I bought in bulk several years ago.

There's also Amazons, Archimedes, Aries, and many more.

In many cases, the games would be improved by realtime play, I think, but only one game has struck me as greatly hampered by its electronic form. Tumblewords is a cross between Scrabble and Connect Four, which sounds pretty great. It seems like it probably is pretty great, too, but it's got a problem. In some games, like cribbage, part of the goal is to correctly identify what you've just scored. Similarly, in Tumblewords, part of the challenge should be noting all the words that each move introduces. On Super Duper Games, the computer does this for you, using a dictionary. It means you get points for all sorts of words that you'd never have noticed otherwise. I think I may have to play this one in real space before anything else!

Check out Super Duper Games, even if only to read the rules for the games there and play them. Or, maybe try playing something! If you don't want to challenge me, there are dozens of open challenges sitting around at any time.

Dist::Zilla is for lovers (body)

by rjbs, created 2014-01-25 11:21
last modified 2014-01-25 14:20
tagged with: @markup:md journal perl

I don't like getting into the occasional arguments about whether Dist::Zilla is a bad thing or not. Tempers often seem to run strangely high around this point, and my position should, at least along some axes, be implicitly clear. I wrote it and I still use it and I still find it to have been worth the relatively limited time I spent doing it. Nonetheless, as David Golden said, "Dist::Zilla seems to rub some people the wrong way." These people complain, either politely or not, and that rubs people who are using Dist::Zilla the wrong way, and as people get irritated with one another, their arguments become oversimplified. "What you're doing shows that you don't care about users!" or "Users aren't inconvenienced at all because there are instructions in the repo!" or some other bad over-distillation.

The most important thing I've ever said on this front, or probably ever will, is that Dist::Zilla is a tool for adjusting the trade-offs in maintaining software projects. In many ways, it was born as a replacement for Module::Install, which was the same sort of thing, adjusting trade-offs from the baseline of ExtUtils::MakeMaker. I didn't like either of those, so I built something that could make things easier for me without making install-time weird or problematic. This meant that contributing to my repository would get weird or problematic for some people. That was obvious, and it was something I weighed and thought about and decided I was okay with. It also meant, for me, that if somebody wanted to contribute and was confused, it would be up to me to help them, because I wanted, personally, to make them feel like I was interested in working with them¹. At any rate, of course it's one more thing to know, to know what the heck to do when you look at a git repository and see no Makefile.PL or Build.PL, and having to know one more thing is a drag. Dist::Zilla imposes that drag on outsiders (at least in its most common configurations), and it has to be used with that in mind.

Another thing I've often said is that Dist::Zilla is something to be used thoughtfully. If it was a physical tool, it would be yellow with black stripes, with a big high voltage glyph on it. It's a force multiplier, and it lets you multiply all kinds of force, even force applied in the wrong direction. You have to aim really carefully before pulling the trigger, or you might shoot quite a lot of feet, a surprising number of which will belong to you.

If everybody who was using Dist::Zilla thought carefully about the ways that it's shifting around who gets inconvenienced by what, I like to imagine that there would be fewer inconsiderate straw man arguments about how nobody's really being inconvenienced. Similarly, if everybody who felt inconvenienced by an author's choice in build tools started from the idea that the author has written and given away their software to try and help other users, there might be fewer ungracious complaints that the author's behavior is antisocial and hostile.

Hopefully my next post will be about some fun code or maybe D&D.

1: My big failure on this front, I think, is replying promptly, rather than not being a big jerk. I must improve, I must improve, I must improve...

Dist::Zilla and line numbering (body)

by rjbs, created 2014-01-14 11:22

brian d foy wrote a few times recently about potential annoyances distributed across various parties through the use of Dist::Zilla. I agree that Dist::Zilla can shuffle around the usual distribution of annoyances; I'm happy with the trade-offs that I think I'm making, and other people want different trade-offs. What I don't like, though, is adding annoyance for no gain, or when it can be easily eliminated. Most of the time, if I write software that does something annoying and leave it that way for a long time, it's actually a sign that it doesn't annoy me. That's been the case, basically forever, with the fact that my Dist::Zilla configuration builds distributions where the .pm files' line numbers don't match the line numbers in my git repo. That means that when someone says "I get a warning from line 10," I have to compare the released version to the version in git. Sometimes, that someone is me. Either way, it's a cost I decided was worth the convenience.

Last week, just before heading out for dinner with ABE.pm, I had the sudden realization that I could probably avoid the line number differences in my shipped dists. The realization was sparked by a little coincidence: I was reminded of the problem just after having to make some unrelated changes to an unsung bit of code responsible for creating most of the problem.

Pod::Elemental::PerlMunger

Pod::Weaver is the tool I use to rewrite my sort-of-Pod into actual-Pod and to add boilerplate. I really don't like working with Pod::Simple or Pod::Parser, nor did I like a few of the other tools I looked at, so when building Pod::Weaver, I decided to also write my own lower-level Pod-munging tool. It's something like HTML::Tree, although much lousier, and it stops at the paragraph level. Formatting codes (aka "interior sequences") are not handled. Still, I've found it very useful in helping me build other Pod tools quickly, and I don't regret building it. (I sure would like to give it a better DAG-like abstraction, though!)

The library is Pod::Elemental, and there's a tool called Pod::Elemental::PerlMunger that bridges the gap between Dist::Zilla::Plugin::PodWeaver and Pod::Weaver. Given some Perl source code, it does this:

  1. make a PPI::Document from the source code
  2. extract the Pod elements from the PPI::Document
  3. build a Pod::Elemental::Document from the Pod
  4. pass the Pod and (Pod-free) PPI document to an arbitrary piece of code, which is expected to alter the documents
  5. recombine the two documents, generally by putting the Pod at the end of the Perl

The issue was that step two, extracting Pod, was deleting all the Pod from the source code. Given this document:

package X;

=head1 OVERVIEW

X is the best!

=cut

sub do_things { ... }

...we would rewrite it to look like this:

package X;

sub do_things { ... }
__END__
=head1 OVERVIEW

X is the best!

=cut

...we'd see do_things as being line 9 in the pre-munging Perl, but line 3 in the post-munging Perl. Given a more realistic piece of code with interleaved Pod, you'd expect the difference in line numbers to increase as you got further into the munged copy.

I heard the suggestion, many times, to insert # line directives to keep the reported line numbers matching. I loathed this idea. Not only would it be an accounting nightmare in case anything else wanted to rewrite the file, but it meant that the line numbers in errors wouldn't match the file that the user would have installed! It would make it harder to debug problems in an emergency, which is never okay with me.

There was a much simpler solution, which occurred to me out of the blue and made me feel foolish for not having thought of it when writing the original code. I'd rewrite the document to look like this:

package X;

# =head1 OVERVIEW
#
# X is the best!
#
# =cut

sub do_things { ... }
__END__
=head1 OVERVIEW

X is the best!

=cut

Actually, my initial idea was to insert stretches of blank lines. David Golden suggested just commenting out the Pod. I implemented both and started off using blank lines myself. After a little while, it became clear that all that whitespace was going to drive me nuts. I switched my code to producing comments, instead. It's not the default, though. The default is to keep doing what it has been doing.

It works like this: PerlMunger now has an attribute called replacer, which refers to a subroutine or method name. It's passed the Pod token that's about to be removed, and it returns a list of tokens to put in its place. The default replacer returns nothing. Other replacers are built in to return blank lines or commented-out Pod. It's easy to write your own, if you can think of something you'd like better.
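
For example, here's a sketch of a custom replacer, following the calling convention just described. The name and exact behavior are made up; the point is just the shape of the thing:

# A made-up replacer, just to show the shape: it's handed the Pod token
# being removed and returns the tokens to put in its place.  This one emits
# one comment token per Pod line, so the line count stays the same.
sub replace_with_hash_comments {
  my ($self, $pod_token) = @_;
  return map { PPI::Token::Comment->new("# $_\n") }
         split /\n/, $pod_token->content;
}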

Karen Etheridge suggested another little twist, which I also implemented. It may be the case that you've got Pod interleaved with your code, and that some of it ends up after the last bits of code. Or, maybe in some documents, you've got all your Pod after the code, but in others, you don't. If your concern is just keeping the line numbers of code the same, who cares about the Pod that won't affect those line numbers? You can specify a separate post_code_replacer for replacing the Pod tokens that come after any relevant code. I decided not to use that, though. I just comment it all out.

PkgVersion

Pod rewriting wasn't the only thing affecting my line numbers. The other thing was the insertion of a $VERSION assignment, carried out by the core plugin PkgVersion. Its rules are simple:

  1. look for each package statement in each Perl file
  2. skip it if it's private (i.e., there's a line break between package and the package name)
  3. insert a version assignment on the line after the package statement

...and a version assignment looked like this:

{
  $My::Package::VERSION = '1.234';
}

Another version-assignment-inserter exists, OurPkgVersion. It works like this:

  1. look for each comment like # VERSION
  2. put, on the same line: our $VERSION = '1.234';

I had two objections to just switching to OurPkgVersion. First, the idea of adding a magic comment that conveyed no information, and served only as a marker, bugged me. This is not entirely rational, but it bugged me, and I knew myself well enough to know that it would keep bugging me forever.

The other objection is more practical. Because the version assignment uses our and does not wrap itself in a bare block, it means that the lexical environment of the rest of the code differs between production and test. This is not likely to cause big problems, but when it does cause problems, I think they'll be bizarre. Best to avoid that.

Of course, I could have written a patch to OurPkgVersion to insert braces around the assignment, but I didn't, because of that comment thing. Instead, I changed PkgVersion. First off, I changed its assignment to look like this:

$My::Package::VERSION = '1.234';

Note: no enclosing braces. They were an artifact of an earlier time, and served no purpose.

Then, I updated its rules of operation:

  1. look for each package statement in each Perl file
  2. skip it if it's private (i.e., there's a line break between package and the package name)
  3. skip forward past any full-line comments following the package statement
  4. if you ended up at a blank line, put the version assignment there
  5. otherwise, insert a new line

This means that as long as you leave a blank line after your package statement, your code's line numbers won't change. I'm now leaving a blank line after the # ABSTRACT comment that follows my package statements. (Why do the VERSION comments bug me, but not the ABSTRACT comments? The ABSTRACT comments contain more data — the abstract — that can't be computed from elsewhere.) Now, this can still fall back to inserting lines, but that's okay, because what I didn't include in the rules above is this: if configured with die_on_line_insertion = 1, PkgVersion will throw an exception rather than insert lines. This means that as I release the next version of all my dists, I'll hit cases once in a while where I can't build because I haven't made room for a version assignment. That's okay with me!
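
To make that concrete, here's the shape of the new behavior, with a made-up package and version:

# Before building, the source leaves a blank line after the comments:
#
#   package My::Package;
#   # ABSTRACT: a thing that does stuff
#
#   sub do_stuff { ... }
#
# After munging, the blank line holds the assignment, so do_stuff sits on
# the same line in the shipped dist as in the repository:
package My::Package;
# ABSTRACT: a thing that does stuff
$My::Package::VERSION = '1.234';

sub do_stuff { ... }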

I'm very happy to have made these changes. I might never notice the way in which I benefit from them, because they're mostly going to prevent me from having occasional annoyances in the future, but I feel good about that. I'm so sure that they're going to reduce my annoyance, that I'll just enjoy the idea of it now, and then forget, later, that I ever did this work.

making my daemon share more memory (body)

by rjbs, created 2014-01-10 19:45
last modified 2014-01-10 19:45
Quick refresher: when you've got a unix process and it forks, the new fork can share memory with its parent, unless it starts making changes. Lots of stuff is in memory, including your program's code. This means that if you're going to `require` a lot of Perl modules, you should strongly consider loading them early, rather than later. Although a runtime `require` statement can make a program start faster, it's often a big loss for a forking daemon: the module gets re-compiled for every forked child, multiplying both the time and memory cost.

Today I noticed that one of the daemons I care for was loading some code post-fork, and I thought to myself, "You know, I've never audited that program to see whether it does a good job at loading everything pre-fork." I realized that it might be a place to quickly get a lot of benefit, assuming I could figure out what was getting loaded post-fork. So I wrote this:

use strict;
use warnings;
package PostForkINC;

sub import {
  my ($self, $code) = @_;

  my $pid = $$;

  my $callback = sub {
    return if $pid == $$;
    my (undef, $filename) = @_;
    $code->($filename);
    return;
  };

  unshift @INC, $callback;
};

When loaded, PostForkINC puts a callback at the head of @INC so that any subsequent attempt to load a module hits the callback. As long as the process hasn't forked (that is, $$ is still what it was when PostForkINC was loaded), nothing happens. If it has forked, though, something happens. That "something" is left up to the user.
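
Wiring it up in the daemon looks something like this; the callback here just warns, but it could do anything:

# A hypothetical way to load PostForkINC in the daemon's startup code.  The
# callback receives the filename passed to require, like "Foo/Bar.pm".
use PostForkINC (sub {
  my ($filename) = @_;
  warn "post-fork load of $filename in pid $$\n";
});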

Sometimes I find a branch of code that I don't think is being traversed anymore. I love deleting code, so my first instinct is to just delete it… but of course that might be a mistake. It may be that the code is being run but that I don't see how. I could try to figure it out through testing or inspection, but it's easier to just put a little wiretap in the code to tell me when it runs. I built a little system called Alive. When called, it sends a UDP datagram about what code was called, where, and by whom (and what). A server receives the datagram (usually) and makes a note of it. By using UDP, we keep the impact on the code being inspected very low. This system has helped find a bunch of code being run that was thought long dead.
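
That isn't Alive's real interface, but the idea is simple enough to sketch; the collector host and port here are invented:

# Not Alive's real code, just a sketch of the idea: a fire-and-forget UDP
# datagram carrying caller() info.  The host and port are made up.
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
  PeerAddr => 'collector.example.com',
  PeerPort => 20000,
  Proto    => 'udp',
);

sub mark {
  my ($package, $file, $line) = caller;
  $sock->send("still alive: $package at $file line $line, pid $$") if $sock;
  return;
}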

I combined PostForkINC with Alive and restarted the daemon. Within seconds, I had dozens of reports of libraries — often quite heavy ones — being loaded after the fork.

This is great! I now have lots of little improvements to make to my daemon.

There is one place where it's not as straightforward as it might have been. Sometimes, a program tries to load an "optional" module. If it fails, no problem. PostForkINC can seem to produce a false positive, here, because it says that Optional::Module is being loaded post-fork. In reality, though, no new code is being added to the process.

When I told David Golden what I was up to, he predicted this edge case and said, "but you might not care." I didn't, and said so. Once I saw that this was happening in my program, though, I started to care. Even if I wasn't using more memory, I was looking all over @INC to try to find files that I knew couldn't exist. Loading them pre-fork wasn't going to work, but there are ways around it. I could put something in %INC to mark them as already loaded, but instead I opted to fix the code that was looking for them, avoiding the whole idea of optional modules, which was a pretty poor fit for the program in question, anyway.
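
(For the record, the %INC trick would have been a one-liner along these lines, using the path form of the module name.)

# Pretend the optional module is already loaded, so require() short-circuits.
$INC{'Optional/Module.pm'} = 'marked as loaded pre-fork, never really loaded';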

I've still got a bunch of tweaking to do before I've fixed all the post-fork loading, but I got quite a lot of it already, and I'm definitely going to apply this library to more daemons in the very near future.

todo for 2014 (body)

by rjbs, created 2014-01-07 11:33
tagged with: @markup:md journal

Huh. It looks like I haven't written a todo list for the year since 2008. I don't know whether I wish I had, but I'm a bit surprised. I'm going to list some things from my lists 2005-2008 that I did not accomplish and start there.

write more prose and/or poetry

I keep meaning to write more, and I don't. In 2013, I wrote 145 opening sentences as part of my daily routine, but that's less a success than it is a reminder of how easy it is to produce an opening sentence to keep that Daily Practice streak going. I need to find some other set of assignments to do, and actually do them.

I've also changed my Daily Practice goal to give me a larger set of exercises to do, including writing synopses for things to try writing later. Then later, I hope to make a second goal for trying to flesh out some of those ideas.

write a Cocoa application

I just didn't do it. I need to find a project that's doable. I think what I want to do is instead first work on a little web service I have been wanting to tackle. Then I can write a client as a Cocoa project. Like a number of other project ideas, I think part of what blocks me is solitude. I'd like to work on these projects with someone, for motivation, for company, and for second opinions. I just don't have anybody local who's interested, and doing this sort of thing remotely can be harder to sort out.

write more programs for fun

I spend a lot of time maintaining code that, for my purposes, already works. I want to spend less time doing this. I've been doing a good job at doing parts of it steadily, which hopefully means I can keep up with the existing work, but I don't want to do more. If anything, I want to do less. Handling feature requests for a bunch of features that you don't need isn't very interesting, but on the other hand just accepting anything without regard for its impact on design quality isn't any good, either.

Instead of spending more time on existing working code, I want to write more new (presumably pretty broken) code. I have a ton of ideas for things to do, and I just need to set aside the time to do it — but see above, regarding solitude.

spend more time with friends

I tend to go out for a beer with a friend only once every few months. This seems ridiculous. I also have a growing collection of unplayed board games. I just need to make more of an effort.

get my driver's license

Yeah.

cook and bake more often

I've gotten pretty good at making pancakes and fried eggs, and that's it. I want to keep working on the basics, and Martha likes helping, so I need to get her more involved in helping pick projects, and then doing them. I don't know what we should try next, though. Maybe roasts.

I got an Arduino! (body)

by rjbs, created 2013-12-30 09:34
last modified 2013-12-30 09:35

For Christmas, Gloria gave me an Arduino Starter Kit! It's got an Arduino Uno, a bunch of wires, some resistors and LEDs and stuff, a motor, and I don't know what else yet. I hadn't been very interested in Arduino until Rob Blackwell gave a pretty neat demo at the "Quack and Hack" at DuckDuckGo last year. Still, I knew it would just be another thing to eat up my time, and I decided to stay away. Finally, though, I started having ideas of things that might be fun, but not too ambitious. I put the starter kit on my Christmas wish list and I got the Arduino Workshop book for cheap from O'Reilly.

The starter kit comes with its own book, but there are quite a few passages that appear verbatim in both books. I'm not sure of their relationship, but they're different enough that I've been reading both. I've gotten a decent idea of how to accomplish simple things, but I don't really understand the underlying ideas, yet. As I sat, squinting at a schematic, I wondered: Is this what beginning programmers feel like? "If I write these magic words, I know what will happen, but not why!" I already had a lot of sympathy for that kind of thinking, but it has been strengthened by this experience.

For example, I know that I can put a resistor on either side of a device in my circuit and it works, and I generally understand why, but then I don't understand how a rectifying diode helps prevent problems with a spike caused by a closing relay? I need to find a good elementary course on electricity and electronics and I need to really let it sink in. This is one of those topics, like special relativity, that I've often understood for a few minutes, but not longer. (Special relativity finally sunk in once I wrote some programs to compute time dilation.)

I'm going to keep working through the books, because there's clearly a lot more to learn. I'm not sure, though, what I'm hoping to do after I get through the whole thing. Even if I don't keep using it after I finish the work, though, I think it will have been a good experience and worth having done.

My favorite project so far was one of my own. The Arduino Workshop has a project where you build a KITT-like scanner with five LEDs using pulse width modulation to make them scan left to right. That looked like it would be neat, but I skipped the software for left to right scanning and instead wrote a little program to make it count to 2⁶-1 over and over on its LED fingers.

Here's the program:

void setup() {
  for (int pin = 2; pin < 7; pin++)  pinMode(pin, OUTPUT);
}

void loop() {
  for (int a = 0; a < 64; a++) {
    for (int pin = 2; pin < 7; pin++) {
      digitalWrite(pin, (a & ( 2 << pin-2 )) ? HIGH : LOW);
    }
    delay(250);
  }
}

I originally got stuck on the 2<<pin-2 expression because I wanted to use exponentiation, which introduces some minor type complications. I was about to sort them out when I remembered that I could just use bit shifting. That was a nice (if tiny) relief.

Here's what the device looks like in action:

Working with hardware is different from software in ways that are easy to imagine, but that don't really bug you until you're experiencing them. If I have a good idea about how to rebuild a circuit to make it simpler, I can't take a snapshot of the current state for a quick restore in case I was wrong. Or, I can, but it's an actual snapshot on my camera, and I'll have to rebuild by hand later.

If I make a particularly bad mistake, I can destroy a piece of hardware permanently. Given the very small amounts of power I'm using, this probably means "I can burn out an LED," but it's still a real problem. I've been surprised that there's no "reset your device memory between projects" advice. I keep imagining that my old program will somehow cause harm to my new project's circuits. Also, plugging the whole thing into my laptop makes me nervous every time. It's a little silly, but it does.

My next project involves a servo motor. That should be fun.

Trekpocalypse Now (body)

by rjbs, created 2013-12-22 21:02
last modified 2013-12-23 07:54
tagged with: @markup:md dnd journal rpg yapc

At YAPC::NA in Austin this year, I ran a sorta-D&D game on game night. I have been meaning to write it up nicely, but I think it's just not going to happen, so I'm going to write it up badly. Here we go…

The Ward

The PCs are wardens, residents of the Ward, a strange stronghold of almost perpetual night, along with two hundred others. The elite protect and provide for the rest of the populace. Sometimes a few venture out of the ward, but rarely and often never to return. Beyond the ward are the Sunlit Lands, as far as most wardens ever travel.

The Golden Circle of the Ward maintains communication with distant cells, including one at the War Sanctum. The War Sanctum is a small chamber where The Voice (the wardens' oft-witnessed deity) is much more responsive and where terrible powers are accessible to those who know the right secrets. After reports of an upcoming celestial event of great importance, the War Sanctum was attacked by the Alexandrians, an aggressive group of Mogh. With no other cells of the Golden Circle able to respond, the PCs are assembled and sent. They have 3 hours.

From the Ward, the players must first pass through the Sunlit Lands.

The exact terrain of the Sunlit Lands changes from time to time, although there are a limited number of configurations it takes on. Now, it is an expansive field in a bay, surrounded by cliffs. There are huts built here and there. The sun is warm and bright. A few dozen people are scattered around, picnicking. The PCs exit the Ward through a steel blast door set into a cliff wall.

The inhabitants of the Sunlit Lands are the Hollow Men. They are pleasant, somewhat vapid people, with very few worries. They cannot leave the Sunlit Lands, having lived on its food for too long. The same is true of anyone who lives on their food for more than a few days.

The PCs know the next step is London. London is currently accessible only through an undersea tunnel — one of its less helpful locations. ("Can't you wait a few weeks?" ask the Hollow Men, if asked) Someone will have to wade out, dive under, and get the hatch open. This leads to an expanse of metal tubing long enough for everyone to get in, but it will fill with water, though the water will dissolve as it fills. It fills fast enough that this isn't a big help, but once the entrance is closed, things will dry off.

London is a twilit metropolis of narrow alleys and dreary weather. The streets are crowded with anxious citizens of all ranks. The PCs will, unless disguised quickly, draw attention.

The vast majority of citizens of London just want to be left alone. They will react to weird stuff by gasping or acting scandalized, but will not raise an alarm. The people to fear are the League of Yellow Men and their henchmen. The League rules the city with an iron fist, opposed only by the network of thieves and urchins run by Abdul Amir.

The game didn't get much further than that, due to a TPK, but a few more areas stood between London and the War Sanctum.

Trekpocalypse!

The game was set aboard Enterprise (NCC-1701-D) several generations after a catastrophic space catastrophe. The challenge was to make it from Ten Forward to the battle bridge, making it through decks controlled by several of the factions now struggling for control of the ship. The Alexandrian faction of the Mogh were the final boss in this scenario, but the Yellow Men, Machine Men, and others were around. I'd started to scribble down ideas for what else I'd do if I had time. The Cult of the Immortal seemed like they'd be fun, and the nanite-possessed Skull Crushers led me to thinking about even more ridiculous backstory.

Eventually, I forced myself to stop thinking of new ideas, since I already had way too many, and get to work on some rules.

I started with the Moldvay rules, which is a pretty nice simple set of rules onto which to bolt hacks. I wanted to have distinct classes that felt like classes, weren't too complicated, and captured at least some of the Trek flavor. I looked at stealing from Hulks & Horrors and Starships & Spacemen but I decided I didn't quite like them.

Instead, I sketched out six (or seven) distinct classes and, rather than come up with complete rules up through level four, I just threw together characters at that level. The character sheet image, above, links to the full set of PC pregens that I made.

Attributes

PCs have six attributes:

  • Strength
  • Dexterity
  • Intelligence
  • Technology
  • Command
  • Empathy

Classes

The Crimson are the leaders of the wardens. They act both as warriors and priests, having the ability to influence the Voice. Each Crimson PC has a pool of Access points. They can get information or favors from the Voice by rolling 1d20 vs. 20, and they can add their total Access points to the roll, or they can spend a number of Access points equal to the level of the effect they want to achieve. The pregens all had 5 Access, meaning that all effects needed a 15 or better, but every time the PC got a guaranteed success, all subsequent attempts in the episode got harder.

The Gold Circle specialize in more powerful effects, but they take much longer. For example, the Remote Viewing power takes 3 turns to invoke — that's a half hour! Gold Circle PCs get a number of Bright Ideas per episode, though, and these can drastically reduce the time required. The first few Bright Ideas spent on an effect change the units. After they get down to turns, they go to rounds, and after that, to instant. A power can only be used once, unless you have another Bright Idea. Powers usually have a minimum required time, typically one round. I provided two Gold Circle builds. One was a support character, with Remote Viewing, Shields Up!, Dispel Illusion, and Cornucopia. The other was a stealth attacker with access to the teleporter and a personal cloaking device.

The Azure are experts in understanding (rather than just using) the technology of past generations. They don't have any powers on their own, but they're the only characters who can use tech more complicated than ray guns and comm badges. Azure PCs are trained in using specific devices. Each device has a number of charges, and once they're used up the device can't be used again until it's recharged, which takes … well, too long to do in the game's three hour window.

Empath PCs have psychic powers! They're pretty close to AD&D 2nd Edition psionicist characters. They have a pool of psi power points, and powers cost points. Their prime attributes are Command and Empathy, depending on their build. They'd probably all have some of the same powers, then some different ones. One power was Weaken Will, which lets the empath attack the target's effective Command stat, making them more susceptible to later powers.

The Mogh (aka Warface (get it, Worf-face?)) are Klingons. Or, I guess, part-Klingons. I like to imagine they're all descended from Alexander and Worf, but who knows? They can regenerate 2 hp per round (but their shields stink; see below). In combat, they can switch between different stances that give them different advantages or disadvantages.

Twinlings were the final PC race, and I was very happy to include them. Why don't we see more of those guys on the show? If I ever wrote an episode… Well, anyway, they were very much a support class. A single twinling PC is two entities, and gets two actions per round, but if either one dies, that's it. I'd probably give the survivor a round or two to be heroic, but that's it. All the twinlings' powers were there to enhance the other characters' powers. They could restore charges to devices for the Azure, repair device (or shield) damage, temporarily boost shield strength, extend the range or duration of other powers, or (and I was sad nobody used this) duplicate any device or power they'd seen used in the last day. Using any twinling power consumed a set of Spare Parts.

The Green were going to be descendants of Orions, for no good reason other than I had a bunch of other colors represented. I didn't come up with any good ideas, though.

Combat

Everybody got two sets of attack and defense stats, one for melee and one for beam. Beam weapons have an effectively infinite range. (I guess if there'd been extravehicular combat, I'd think about that harder.) Melee weapons… well, you know.

Everybody got a shield, except for the PC with the personal cloak. Shields have a strength stat, which is the amount of damage they can absorb, in hp, before failing. They always absorb whole attacks, so when a 10 hp attack hits your 1 hp shield, you take no damage. Phew! Shields also have their own hit points, and every time they fail, they lose a hit point. As long as they're not reduced to zero hit points, it will recharge after a few rounds in which the character takes no damage. So, if your shield is strength 5, charge 2, hp 5, then it can absorb two 4 hp attacks before failing. Then its wearer needs to stay out of the line of fire for two rounds, and the shield will be up again. This can happen five times.

The Mogh PCs have specialized shields. They have strength 1, but infinite hit points. They're really just there to give them one attack worth of immunity while they charge into melee combat. Shooting at your enemy from full cover is dishonorable! Or, at least, not what those guys did best. The hilarious side effect of this was that the Mogh PCs always ran right into combat, thus becoming targets for friendly fire. At least one died that way. I believe the rule for friendly fire was that if you missed by more than your beam bonus, and the target was engaged in melee with your ally, you hit your ally.

Again?

It was fun! I needed more prep and a more streamlined scenario. I would definitely run it again; it could clearly be run as a mini-campaign and remain fun the whole time, and there's a huge universe of stuff to steal from.

keeping track of the (dumb) things I do (body)

by rjbs, created 2013-11-25 22:39

Last week, I was thinking about how sometimes I do something I have to do and then feel great, and sometimes I do something I have to do and then feel lousy. I decided I should keep track of what I do and how it makes me feel. (I have some dark predictions, but am trying to hold off until I have more recorded.) To do this, I needed a way to record the facts, and it needed to be really, really easy to use. I'd never take the time to say "I did something" if it was a hassle.

I decided I wanted to run commands something like this:

  rjbs:~$ did some code review on DZ patches :)

So, I did some code review and it made me happy. Then I thought of some embellishments:

  rjbs:~$ did some code review on DZ patches :) +code ~45m ++

I spent about 45 minutes doing it, it was code, and this was an improvement to my mood from when I started. There's a problem, though: :) isn't valid shell. I solved this by making did with no arguments read from standard input. I also renamed did to D. I also think I might make it accept emoji, so I could run:

  rjbs:~$ D haggled with mortgage provider 😠 +money

Later, I'll write something to build a blog post template from a week's work, maybe. I'm still not sure whether I'll keep using this. I need to get into the habit, and I'm not sure how, although connecting it to Ywar might help.

Anyway, the code is a real mess right now, and it kind of stinks, but D is on GitHub in case it's of interest to anyone.

in search of excellent conference presentations (body)

by rjbs, created 2013-11-18 18:24
tagged with: @markup:md journal

At OSCON this past year, I was just a little surprised by the still-shrinking Perl track. What really surprised me, though, was the entirely absent Ruby track. I tried to figure out what it meant, and whether it meant anything, but I didn't come to any conclusions. Even if I'd more carefully collected actual data, I'm not sure I could've drawn any really useful conclusions.

Instead, I came to a flimsier, wobblier conclusion: the Perl track could have more, better talks that would appeal to more people, including people from outside of Perl. I spoke to some OSCON regulars about this and nobody told me that I was deluded. When I got home, I asked a few people whether they'd ever considered coming to give a talk at OSCON. I got a few replies something like this:

I hadn't really, but maybe I should. What would I talk about, though? Talking about stuff I do in Perl wouldn't make sense, because OSCON isn't a Perl conference.

OSCON is an interesting conference. It's ecumenical — or it could and should be. In practice, though, it can be a bit cliquish. I was disappointed when I first saw lunch tables marked as the "Python table" or "JavaScript table." I was told (and believe) that people asked for this sort of thing as a way to find people with the same interests, but I think that one of the most interesting things about OSCON is the ability to talk shop with people whose shop is quite unlike your own. It leads to interesting discoveries.

This only works, though, if you really talk about what you really do. If I said, "Well, I filter and route email with a lot of Perl and a little C," nobody's going to learn anything interesting from me. On the other hand, I could tack on a few more sentences about the specific problems we encounter and how we get past them. "High performance, highly configurable email filtering is stymied by the specific 'commit' phases of SMTP, so we've had to spend a little time figuring out how to do as much rejecting as early as possible, but everything else as late as possible." Once you're talking about specific problems, people can relate, even if they don't know much about the domain.

Hearing about interesting solutions to problems can often help me think about new possible solutions to my own problems, so what I like is to hear people talk about their specific solutions to specific problems. I seek these talks out. I've basically given up on talks like "The Ten Best Things About Go" or "A Quick Intro to Clojure." They can be interesting, but generally I find them wishy-washy. They're not compelling enough to get me to commit to doing serious work in a new language, and they don't discuss any single problem in enough detail to inspire me to rethink things.

So I think that, in general, talks about really specific pieces of software are the best, and that means talks about software in Perl (or Python or Bash or Go...) because that shows the actual solution that was made. Most of these talks, I think, would be interesting to all sorts of people who don't use the underlying language or system. If you work on an ORM in Python, would a talk on DBIx::Class be interesting? Yes, I think it could be. Could a talk on q.py be useful for just about anybody who debugs code? Yes. And so on.

I'm really hoping to see some interesting real-problem-related talks show up this year, and plan to go to whichever ones look the most concrete. I also hope to give some talks like that. Talks like that are my favorite to give, and I look forward to spending more time talking about solving real problems than talking about abstractions.

OSCON's call for participation tends to come out in January. That should be plenty of time to think about our most interesting solved problems!

moving my homedir into the 21st century (body)

by rjbs, created 2013-11-14 23:26
tagged with: @markup:md journal

Over the last few weeks, I've done a bit of pair programming across the Internet, which I haven't done in years. It was great! Most of this was with Ingy döt Net and Frew Schmidt.

As is often the case, the value wasn't only in the work we did, but in the exchange of ideas while doing it. I got to see both Ingy and Frew using their tools, and it made me want to steal from them. It also helped me get a handle on what things I didn't want to change in my own setup, and why. It's definitely something I'd like to do more often.

Both Ingy and Frew were using tmux, the terminal multiplexer. tmux is a lot like GNU screen, which I've been using for at least fifteen years. If you're not using either one, and you use a unix, you really ought to start! They help me get a lot of my work parallelized and simplified. I first learned of tmux a few years ago when I learned that several members of the Moose core dev team had started using it instead of screen. I tried to switch at the time, but it didn't work out. It crashed too much, its Solaris support seemed spotty, and basically it got in my way. Now, inspired by looking at what Ingy and Frew were doing, I felt like trying again. I sat down and read most of the tmux book and was convinced in theory. Although I don't like every difference between screen and tmux, there were clear benefits.

Then I got to work actually switching, which meant producing a tolerable .tmux.conf. I started with the one I'd made years before and slowly added to it as I read more about tmux's features. It's clear that I've got more improvements to make, but they're going to require a few months of using my current config to figure things out.

When I paired with Ingy, we used PairUp, his instant pairing environment. Basically, you provision a Debian-like VM using whatever system you want (we were using RackSpace, but I tried it with EC2, also) and, with one command, create a useful environment for pairing in a shared tmux session. We didn't actually work on anything. Instead, he showed me PairUp and we encountered enough foibles along the way that we got to pair on fixing up the pairing environment. It was fun.

I saw a lot of the tools he was using, as we went, and one of them was his dotfile manager. I've seen a lot of dotfile managers, although I've never really switched to using one. Instead, I was using a fairly gross hack of my own, using GNU make to install my dotfiles. The tool that Ingy was using, ..., was interesting enough to get me to switch. I've converted almost all of my config repositories to using it, and I feel good about this.

... isn't a huge magic change in how to look at config files, and that's why I like it. It's also not just "your dotfiles in a repo." It's got two bits that make it very cool.

First, it is configured with a list of repositories containing your configuration:

dots:
- repo: git@github.com:sharpsaw/loop-dots.git
- repo: git@github.com:rjbs/rjbs-dots.git
- repo: git@github.com:rjbs/rjbs-osx-dots.git
- repo: git@github.com:rjbs/vim-dots.git
- repo: rjbs@....:git/rjbs-private-dots.git

Each one of these repositories is kept in sync in ~/.../src, and the files in them are symlinked into your home directory. Any file in the first repo takes precedence over files in later repositories, so you can establish canonical behaviors early and add specialized ones later.

The second interesting bit is provided by the loop-dots repository above. It sets up a number of config files (like .zshrc and .vimrc to name just two) that loop over the rest of the dots repositories, sourcing subsidiary files. So there's a global .zshrc, but almost the only thing it does is load the .zshrc files of other repositories. This makes it very simple to divide up your config files into roles. I can have a rjbs-private-dots that just adds on my "secret data" to my normal dot files. At work, I'll have an rjbs-work-dots that sets up variables needed there.

Finally, there's another key benefit: each repository is basically just a bunch of dot files in a repo, even though ... is more than that. If I ever decide that ... is nuts, bailing out of using it is very simple. I don't need to convert lots of things out of it, I just need to replace the ... program with, say, cp.

I'm only about a week into this big set of updates, but so far I think it's going well. Of course, time will tell. I haven't yet updated my Linode box, where I do quite a lot of my work, to use my ... config. Tomorrow…

Office Mode DEFCON (body)

by rjbs, created 2013-11-07 22:29
tagged with: @markup:md games journal

I picked up DEFCON a few months ago on Steam. It's a game inspired by the "big nuclear war boards" we saw in movies like Dr. Strangelove or, closer to the mark, WarGames. Each player controls a section of the world. The game starts with a few very short bits of placing units and quickly turns into a shooting war. Players launch fighters, deploy fleets, and eventually send out bombers, subs, and ICBMs. The game looks gorgeous.

I was intrigued by "office mode." In this mode, the game is time limited, runs in a window, and stays mostly out of your way. My understanding was that it would be a good fit to run while working my day job. I'd just check in on it once in a while to issue new orders, but mostly I could ignore it. After all, a lot of time was sure to be missiles flying through the air. I got in touch with some friends to organize a game, and Florian Ragwitz and I gave it a shot today.

Unfortunately, it wasn't quite what I expected. The first few phases of the game were quite rapid-fire and required a good bit of my brain for about twenty minutes. After that, things calmed down, but not enough. It was not a great background activity. I couldn't, for example, check in for five minutes at a time between pomodori.

On the other hand, it was fun. I guess I should spend a bit more time fighting AIs, though, because Florian utterly destroyed me. I think I had one weapon hit a target, doing a fair bit of damage to Naples. Meanwhile, he destroyed about half the population of the USSR. (Florian was playing as Europe, and I was the USSR. Strangely, Europe, not the USSR, controls Kiev, Warsaw, and Dnipropetrovsk.)

Mazes & Minotaurs (body)

by rjbs, created 2013-10-30 23:06
tagged with: @markup:md journal mnm rpg

Over a decade ago, Paul Elliott wrote a tiny piece of counterfactual history called The Gygax/Arneson Tapes. It recounts the history of the world's most famous role-playing game, Mazes & Minotaurs, in which the players take on larger-than-life Greek-style heroes in Sword and Sandal adventures.

A while later, the amazing Olivier Legrand "dug up and published" the original 1972 rules for Mazes & Minotaurs. Of course, in reality he wrote it. All of it. It's a complete, good, playable RPG written based on a little half page of inspiration, also inspired by the little brown books of D&D.

Then, later, he produced the 1987 "revised" edition. This gives us the three core books you'd expect: the player's manual, the Maze Master's guide, and the creature compendium. Later came the M&M Companion, Viking & Valkyries (an alternate setting), and perhaps most amazingly of all, Minotaur Quarterly, an excellent magazine of add-on material for RM&M. Of course, sometimes it included "republished" articles from the days of OM&M.

The whole set of books is well done. They're all written as if the false history is true, and with a bit of tongue in cheek, but they're still good, playable games.

For about a year and a half, give or take, I ran a modified M&M game and it went well. I might run it again some day, either in that same setting or in the canonical Mythika, if I get around to watching a bunch more peplum films. I advise all fellow old school RPG fans to give M&M a look.

modules seeking homes (body)

by rjbs, created 2013-10-24 22:26
last modified 2013-10-24 22:26

I don't use Module::Install or Module::Starter anymore. For the most part, I don't think anyone should. I think there are better tools to use instead.

That said, if you really like using them, I have some plugins that I am no longer interested in maintaining:

  • Module::Install::AuthorTests
  • Module::Install::ExtraTests
  • Module::Starter::Plugin::SimpleStore
  • Module::Starter::Plugin::TT2

That's all I have to say.

Dist::Zilla v5 will break and/or fix your code (body)

by rjbs, created 2013-10-20 10:51
last modified 2013-10-20 21:24

Preface

When I wrote Dist::Zilla, there were a few times that I knew I was introducing encoding bugs, mostly around Pod handling and configuration reading. (There were other bugs, too, that I didn't recognize at the time.) My feeling at the time was, "These bugs won't affect me, and if they do I can work around them." My feeling was right, and everything was okay for a long time.

In fact, the first bug report I remember getting was from Olivier Mengué in 2011. He complained that dzil setup was not doing the right thing with encoding, which basically meant that he would be known by his mojibake name, Olivier MenguÃ©. Oops.

I put off fixing this for a long time, because I knew how deeply the bugs ran into the foundation. I'd laid them there myself! There were a number of RT tickets or GitHub pull requests about this, but they all tended to address the surface issues. This is really not the way to deal with encoding problems. The right thing to do is to write all internal code expecting text where possible, and then to enforce encode/decode at the I/O borders. If you've spent a bunch of time writing fixes to specific problems inside the code, then when you fix the border security you need to go find and undo all your internal fixes.

My stubborn refusal to fix symptoms instead of the root cause left a lot of tickets mouldering, which was probably very frustrating for anybody affected. I sincerely apologize for the delay, but I'm pretty sure that we'll be much better off having the right fix in place.

The work ended up getting done because David Golden and I had been planning for months to get together for a weekend of hacking. We decided that we'd try to do the work to fix the Dist::Zilla encoding problems, and hashed out a plan. This weekend, we carried it out.

The Plan

As things were, Dist::Zilla got its input from a bunch of different sources, and didn't make any real demand of what got read in. Files were read raw, but strings in memory were … well, it wasn't clear what they were. Then we'd jam in-memory strings and file content together, and then either encode or not encode it at the end. Ugh.

What we needed was strict I/O discipline, which we added by fixing libraries like Mixin::Linewise and Data::Section. These now assume that you want text and that bytes read from handles should be UTF-8 decoded. (Their documentation goes into greater detail.) Now we'd know that we had a bunch of text coming in from those sources, great! What about files in your working directory?

Dist::Zilla's GatherDir plugin creates OnDisk file objects, which get their content by reading the file in. It had been read in raw, and would then be mucked about with in memory and then written back out raw. This meant that things tended to work, except when they didn't. What we wanted was for the files' content to be decoded when it was going to be treated as a string, but encoded when written to disk. We agreed on the solution right away:

Files now have both content and encoded_content and have an encoding.

When a file is read from disk, we only set the encoded content. If you try reading its content (which is always text) then it is decoded according to its encoding. The default encoding is UTF-8.

When a file is written out to disk, we write out the encoded content.

There's a good hunk of code making sure that, in general, you can update either the encoded or decoded content and they will both be kept up to date as needed. If you gather a file and never read its decoded content before writing it to disk, it is never decoded. In fact, its encoding attribute is never initialized… but you might be surprised by how often your files' decoded content is read. For example, do you have a script that selects files by checking the shebang line? You just decoded the content.
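
From a plugin's point of view, the discipline looks something like this sketch (the method names are the ones described above; the munging itself is invented):

# A sketch of the new discipline; $file is a Dist::Zilla file object.
$file->encoding('Latin-1');            # declare how the bytes are encoded

my $text = $file->content;             # text, decoded on demand
$text =~ s/AUTHOR/rjbs/g;              # munge it as a character string
$file->content($text);                 # store text back

my $bytes = $file->encoded_content;    # the bytes that will hit the disk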

All this lazy decoding led to some pretty good bugs in later tests, hitting a file like t/lib/Latin1.pm. This was a file intentionally written in Latin-1. When a test tried to read it, it threw an exception: it couldn't decode the file! Fortunately, we'd already planned a solution for this, and it was just fifteen minutes' work to implement.

There is a way to declare the encoding of files.

We've added a new plugin role, EncodingProvider, and a new plugin, Encoding, to deal with this. EncodingProvider plugins have their set_file_encodings method called between file gathering and file munging, and they can set the encoding attribute of a file before its contents are likely to be read. For example, to fix my Latin-1 test file, I added this to my dist.ini:

[Encoding]
filename = t/lib/Latin1.pm
encoding = Latin-1

The Encoding plugin takes the same file-specifying arguments as PruneFiles. It would be easy for someone to write a plugin that will check magic numbers, or file extensions, or whatever else. I think the above example is all that the core will be providing for now.

You can set a file's encoding to bytes to say that it can't be decoded and nothing should try. If something does try to get the decoded content, an exception is raised. That's useful for, say, shipped tarballs or images.
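For example, a shipped image could be marked as undecodable in dist.ini like this (the filename here is hypothetical):

[Encoding]
filename = share/logo.png
encoding = bytes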

Pod::Weaver now tries to force an =encoding on you via @Default

The @Default pluginbundle for Pod::Weaver now includes a new Pod::Weaver plugin, SingleEncoding. If your input has any =encoding directives, they're consolidated into a single directive at the top of the document… unless they disagree, in which case an exception is raised. If no directives are found, a declaration of UTF-8 is added.

For sanity's sake, UTF-8 and utf8 are treated as equivalent… but you'll end up with UTF-8 in the output.

You can probably stop using Keedi Kim's Encoding Pod::Weaver plugin now. If you don't, the worst case is that you might end up with two mismatched encoding directives.

Your dist (or plugin) might be fixed!

If you had been experiencing double-encoded or wrongly-encoded content, things might just be fixed. We (almost entirely David) did a survey of dists on the CPAN and we think that most things will be fixed, rather than broken by this change. You should test with the trial release!

Your dist (or plugin) might be broken!

...then again, maybe your code was relying, in some way, on weird text/byte interactions or raw file slurping to set content. Now that we think we've fixed these in the general case, we may have broken your code specifically. You should test with the trial release!

The important things to consider when trying to fix any problems are:

  • files read from disk are assumed to be encoded UTF-8
  • the value given as content in InMemory file constructors is expected to be text
  • FromCode files are, by default, expected to have code that returns text; you can set (code_return_type => 'bytes') to change that (see the sketch after this list)
  • your dist.ini and config.ini files must be UTF-8 encoded
  • DATA content used by InlineFiles must be UTF-8 encoded
  • if you want to munge a file's content like a string, you need to use content
  • if you want to munge a file's content as bytes, you need to use encoded_content
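Here's a small sketch of that FromCode case, with a made-up file name and made-up contents: a generated file whose code returns raw bytes rather than text.

use Dist::Zilla::File::FromCode;

# hypothetical: a generated file whose code returns raw bytes, not text
my $blob = Dist::Zilla::File::FromCode->new({
  name             => 'share/magic.bin',
  code_return_type => 'bytes',
  code             => sub { pack 'C*', 0xDE, 0xAD, 0xBE, 0xEF },
});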

If you stick to those rules, you should have no problems… I think! You should also report your experiences to me or, better yet, to the Dist::Zilla mailing list.

Most importantly, though, you should test with the trial release!

The Trial Release

You'll need to install these:

…and if you use Pod::Weaver:

Thanks!

I'd like to thank everyone who kept using Dist::Zilla without constantly telling me how awful the encoding situation was. It was awful, and I never got more than a few little nudges. Everyone was patient well beyond reason. Thanks!

Also, thanks to David Golden for helping me block out the time to get the work done, and for doing so much work on this. When he arrived on Friday, I was caught up in a hardware failure at the office and was mostly limited to offering suggestions and criticisms while he actually wrote code. Thanks, David!

writing OAuthy code

by rjbs, created 2013-10-14 10:56

I've written a bunch of code that deals with APIs behind OAuth before. I wrote code for the Twitter API and for GitHub and for others. I knew roughly what happened when using OAuth, but in general everything was taken care of behind the scenes. Now, as I work on giving my programmatic day planner more control, I need to deal with web services that don't have pre-built Perl libraries, and that means dealing with OAuth myself. So far, it's been a big pain, but I think it's been a pain that's helped me understand what I'm doing, so I won't have to flail around as much next time.

I wanted to tackle Instapaper first. I knew just what my goal automation would look like, and I'd spent enough time bugging their support to get my API keys. It seemed like the right place to start. Unfortunately, I think it wasn't the best service to start with. It felt a bit like this:

Hi! Welcome to the Instapaper API! For authentication and authorization, we use OAuth. OAuth can be daunting, but don't worry! There are a lot of libraries to help, because OAuth is a popular standard!

By the way, we've made our own changes to OAuth so that it isn't quite standard anymore!

For one thing, they require xAuth. Why? I don't know, but they do. I futzed around trying to figure out how to use Net::OAuth. It didn't work. Part of the problem seemed to be that no matter what I did, the xAuth parameters ended up in the HTTP headers instead of the POST body, and it wasn't easy to change the request body because of the various layers in play. I searched and searched and found what seemed like it would be a big help: LWP::Authen::OAuth.

It looked like just what I wanted. It would let me work with normal web requests using an API that I knew, but it would sign things transparently. I bodged together this program:

use JSON;
use LWP::Authen::OAuth;

my $c_key     = 'my-consumer-key';
my $c_secret  = 'my-consumer-secret';

my $ua = LWP::Authen::OAuth->new(oauth_consumer_secret => $c_secret);

my $r = $ua->post(
  'https://www.instapaper.com/api/1/oauth/access_token', [
    x_auth_username => 'my-instapaper-username',
    x_auth_password => 'my-instapaper-password',
    x_auth_mode     => 'client_auth',

    oauth_consumer_key    => $c_key,
    oauth_consumer_secret => $c_secret,
  ],
);

print $r->as_string;

This program spits out a query string with my token and token secret! Great, from there I can get to work writing requests that actually talk to the API! For example, I can list my bookmarks:

use v5.10; # for say()
use JSON;
use LWP::Authen::OAuth;

my $c_key     = 'my-consumer-key';
my $c_secret  = 'my-consumer-secret';

my $ua = LWP::Authen::OAuth->new(
 oauth_consumer_secret => $c_secret,
 oauth_token           => 'my-token',
 oauth_token_secret    => 'my-token-secret',
);

my $r = $ua->post(
  'https://www.instapaper.com/api/1/bookmarks/list',
  [
    limit => 200,
    oauth_consumer_key    => $c_key,
  ],
);

my $JSON = JSON->new;
my @bookmarks = sort {; $a->{time} <=> $b->{time} }
                grep {; $_->{type} eq 'bookmark' }
                @{ $JSON->decode($r->decoded_content) };

for my $bookmark (@bookmarks) {
  say "$bookmark->{time} - $bookmark->{title}";
  say "  " . $JSON->encode($bookmark);
}

Great! With this done, I can get my list of bookmarks and give myself points for reading stuff that I wanted to read, and that's a big success right there. I mentioned my happiness about this in #net-twitter, where the OAuth experts I know hang out. Marc Mims said, basically, "That looks fine, except that it's got a big glaring bug in how it handles requests." URIs and OAuth encode things differently, so once you're outside of ASCII (and maybe before then), things break down. I also think there might be other issues you run into, based on later experience. I'm not sure LWP::Authen::OAuth can be entirely salvaged for general use, but I haven't tried much, and I'd be the wrong person to figure it out, anyway.
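For the curious, here's a tiny illustration of the mismatch. This is just my own sketch using URI::Escape, not a peek at how LWP::Authen::OAuth actually works: the default URI escaping leaves characters like ! * ' ( ) alone and can't cope with wide characters at all, while OAuth's signature base string wants everything outside A-Za-z0-9-._~ escaped, after UTF-8 encoding.

use v5.10;
use utf8; # the literal below contains a non-ASCII character
use URI::Escape qw(uri_escape uri_escape_utf8);

say uri_escape("it's (great)!");
# it's%20(great)!      fine for a URI, wrong in an OAuth signature

say uri_escape_utf8("café!()", q{^A-Za-z0-9\-._~});
# caf%C3%A9%21%28%29   UTF-8 encoded, then strictly RFC 3986 escaped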

Still, I was feeling pretty good! It was time, I decided, to go for my next target. Unfortunately, my next target was Feedly, and they've been sitting on my API key request for quite a while. They seem to be doing this for just about everybody. Why do they need to scrutinize my API key anyway? I'm a paid lifetime account. Just give me the darn keys!

Well, fine. I couldn't write my Feedly automation, so I moved on to my third and, currently, final target: Withings. I wanted code to get my last few weight measurements from my Withings scale. I pulled up their API and got to work.

The first roadblock I hit was that I needed to know my numeric user id, which they really don't put anyplace easy to find. I had to dig for about half an hour before I found it embedded in a URL on one of their legacy UI pages. Yeesh!

After that, though, things went from tedious to confusing. I was getting directed to a URL that returned a bodyless 500 response. I'd get complaints about bogus signatures. I couldn't figure out how to get token data out of LWP::Authen::OAuth. I decided to bite the bullet and figure out what to do with Net::OAuth::Client.

As a side note: Net::OAuth says "you should probably use Net::OAuth::Client," and is documented in terms of it. Net::OAuth::Client says, "Net::OAuth::Client is alpha code. The rest of Net::OAuth is quite stable but this particular module is new, and is under-documented and under-tested." The other module I ended up needing to use directly, Net::OAuth::AccessToken, has the same warning. It was a little worrying.

This is how OAuth works: first, I'd need to make a client and use it to get a request token; second, I'd need to get the token approved by the user (me) and turned into an access token; finally, I'd use that token to make my actual requests. While at first, writing for Instapaper, I found Net::OAuth to feel overwhelming and weird, I ended up liking it much better when working on the Withings stuff. First, code to get the token:

use v5.10; # for say() and state()
use Data::Dumper;
use JSON 2;
use Net::OAuth::Client;

my $userid  = 'my-hard-to-find-user-id';
my $api_key = 'my-consumer-key';
my $secret  = 'my-consumer-secret';

my $session = sub {
  state %session;
  my $key = shift;
  return $session{$key} unless @_;
  $session{$key} = shift;
};

my $client = Net::OAuth::Client->new(
  $api_key,
  $secret,
  site               => 'https://oauth.withings.com/',
  request_token_path => '/account/request_token',
  authorize_path     => '/account/authorize',
  access_token_path  => '/account/access_token',
  callback           => 'oob',
  session            => $session,
);

say $client->authorize_url; # <-- will have to go visit in browser

my $token = <STDIN>;
chomp $token;

my $verifier = <STDIN>;
chomp $verifier;

my $access_token = $client->get_access_token($token, $verifier);

say "token : " . $access_token->token;
say "secret: " . $access_token->token_secret;

The thing that had me confused the longest was that coderef in $session. Why do I need it? Under the hood, it looks optional, and it can be, but it's easier to just provide it. I'll come back to that. Here's how you use the program:

When you run the program, authorize_url generates a new URL that can be visited to authorize a token to be used for future requests. The URL is printed to the screen, and the user can open the URL in a browser. From there, the user should be prompted to authorize access for the requesting application (as authenticated by the consumer id and secret). The website then redirects the user to the callback URL. I gave "oob" which is obviously junk. That's okay because the URL will sit in my browser's address bar and I can copy out two of its query parameters: the token and the verifier. I paste these into the silently waiting Perl program. (I could've printed a prompt, but I didn't.)

Now that the token is approved for access, we can get an "access token." What? Well, the get_access_token method returns a Net::OAuth::AccessToken, which we'll use like an LWP::UserAgent to perform requests against the API. I'll come back to how to use that a little later. For now, let's get back to the $session callback!

To use a token, you need to have both the token itself and the token secret. They're both generated during the call to authorize_url, but only the token's value is exposed. The secret is never shared. It is available, though, if you've set up a session callback to save and retrieve values. (The session callback is expected to behave sort of like CGI's venerable param routine.) This is one of those places where the API seems tortured to me, but I'm putting my doubts aside because (a) I don't want to rewrite this library and (b) I don't know enough about the problem space to know whether my feeling is warranted.

Anyway, at the end of this program we spit out the token and token secret and we exit. We could instead start making requests, but I always wanted to have two programs for this. It helps me ensure that I've saved the right data for future use, rather than lucking out by getting the program into the right state. After all, I'm only going to get a fresh auth token the first time. Every other time, I'll be running from my saved credentials.
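If you'd rather not copy the token and secret around by hand, the session callback is also a natural place to hang persistence. Here's a sketch of a file-backed version, assuming Path::Tiny is around and picking an arbitrary storage path; it honors the same getter/setter contract as the in-memory version above, which forgets everything when the program exits.

use JSON 2;
use Path::Tiny;

# same contract as before, but backed by a JSON file so the token and
# token secret survive into the next program
my $store   = path("$ENV{HOME}/.withings-oauth.json");
my $session = sub {
  my ($key, @value) = @_;
  my $data = $store->exists ? JSON->new->decode($store->slurp_utf8) : {};
  return $data->{$key} unless @value;
  $data->{$key} = $value[0];
  $store->spew_utf8(JSON->new->encode($data));
  return $value[0];
};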

My program to actually fetch my Withings measurements looks like this:

use v5.10; # for state()
use Data::Dumper;
use JSON 2;
use Net::OAuth::Client;

my $userid  = 'my-hard-to-find-user-id';
my $api_key = 'my-consumer-key';
my $secret  = 'my-consumer-secret';

my $session = sub {
  state %session;
  my $key = shift;
  return $session{$key} unless @_;
  $session{$key} = shift;
};

my $client = Net::OAuth::Client->new(
  $api_key,
  $secret,
  site               => 'https://oauth.withings.com/',
  request_token_path => '/account/request_token',
  authorize_path     => '/account/authorize',
  access_token_path  => '/account/access_token',
  callback           => 'oob',
  session            => $session,
);

my $token   = 'token-from-previous-program';
my $tsecret = 'token-secret-from-previous-program';

my $access_token = Net::OAuth::AccessToken->new(
  client => $client,
  token  => $token,
  token_secret => $tsecret,
);

my $month_ago = $^T - 30 * 86_400;
my $res = $access_token->get(
  "http://wbsapi.withings.net/measure"
  . "?action=getmeas&startdate=$month_ago&userid=$userid"
);

my $payload = JSON->new->decode($res->decoded_content);
my @groups =
  sort { $a->{date} <=> $b->{date} } @{ $payload->{body}{measuregrps} };

for my $group (@groups) {
  my $when   = localtime $group->{date};
  my ($meas) = grep { $_->{type} == 1 } @{ $group->{measures} };

  unless ($meas) { warn "no weight on $when!\n"; next }
  my $kg = $meas->{value} * (10 ** $meas->{unit});
  my $lb = $kg * 2.2046226;
  printf "%s : %5.2f lbs\n", $when, $lb;
}

This starts to look good, to me. I make an OAuth client (the code here is identical to that in the previous program) and then make an AccessToken. Remember, that's the thing that I use like an LWP::UserAgent. Here, once I've got the AccessToken, I get a resource, and from there it's just decoding JSON and mucking about with the result. (The data provided by the Withings measurements API is a bit weird, but not bad. It's certainly not as weird as a lot of the other data I've been given by other APIs!)

I may even go back to update my Instapaper code to use Net::OAuth, if I get a burst of energy. After all, the thing that gave me trouble was dealing with xAuth using Net::OAuth. Now that I have my token, it should just work… right? We'll see.

Microscope

by rjbs, created 2013-10-09 16:58
tagged with: @markup:md games journal rpg

A few years ago I heard about the game Microscope and it sounded way cool. In summary: it is.

It is in some ways like a role-playing game, but in other ways it's something else entirely. When you play Microscope, you're not telling the story of a few characters, and you're not trying to solve a puzzle. You're building a history on a large scale. It's meant for building stories on the scale of decades, centuries, or millennia.

The game starts with a few things being decided upon before play really begins:

  • the general theme of the history being built
  • what things are out of bounds
  • what things are explicitly allowed

From the start, play rotates. It is a game, although it's a game without victory conditions. Each round of the game, each player makes one or two moves from the short list of possible moves. The possible moves, though, are all of great importance to the final outcome. Basically, each player may:

  • declare the occurrence of a player-described epoch anywhere within the timeline
  • add a major event to an existing epoch
  • invite the rest of the table to narrate a specific scene within the timeline

As with many other story-building games, once a fact is established, it cannot be contradicted. Since there's not really any challenge to getting your facts onto the table, the game is entirely co-operative. There is no fighting over the story allowed. Instead, there's a rule for suggesting "wait, before you write that down, maybe it would be cooler if…"

I only managed to play Microscope once, but it went pretty well. I think after two or three more games, it would be great fun.

I had originally wanted to start a regular set of Microscope games. Whoever committed to each round would show up first for a game of Microscope, establishing a setting. At the end of the session, the players could pick a point (or points) within the history where they'd like to play a traditional RPG, and then we'd have three sessions of that. It struck me as likely to be a ton of fun, but I'm not sure I can really wrangle up players for it. Here's the pitch I wrote myself:

Monthly Microscopy

Microscope is a game of fractal history building. When you play Microscope, you start with a big picture and you end with a complex history spanning decades or centuries. Microscope is a world-building game.

My plan is to play Microscope over and over, building new worlds, and then running traditional tabletop games in those worlds.

Every month, we'll play a game of Microscope. The big picture will be determined before we play, so everyone who shows up will have at least some idea what to expect. (Knowing the big picture only gets you so far in Microscope, though!)

At the end of the game, we'll have our setting described by a set of genre boundaries and specific facts about the world. We'll have to figure out, now, what kind of RPG we want to play in that world. When during the timeline does it take place? Who are the characters? These are answered by bidding.

At the end of each session, each player in attendance gets five points. If it's your first game, you get twenty. At the end of every Microscope game, players can suggest scenarios for the month's RPG, and then bid on a winning suggestion using their points. Each player may bid as many of his or her points across as many of the suggestions as he or she would like. The bids are made in secret, and all bid points are used up.

There will be three post-Microscope sessions each month. They might form a mini-campaign, or they might be three unrelated groups of characters, as determined by the winner of the plot auction.

Each game will be scheduled at least a week in advance, but won't have a fixed schedule. Times and days will move around to be friendly to different time zones and schedules. Microscope games will be played with G+ Hangouts and Docs. RPG sessions will be played on Roll20 — but we might use Skype for voice chat if their voice chat remains as problematic as it's been.
