rjbs forgot what he was saying


I went to the Perl QA Hackathon in Rugby!

by rjbs, created 2016-04-26 22:45
last modified 2016-04-29 08:22

I've long said that the Perl QA Hackathon is my favorite professional event of the year. It's better than any conference, where there are some good talks and some bad talks, and some nice socializing. At the Perl QA Hackathon, stuff gets done. I usually leave feeling like a champ, and that was generally the case this time, too.

I flew over with Matt Horsfall, and the trip was fine. We got to the hotel in the early afternoon, settled in, played some chess (stalemate) and then got dinner with the folks there so far. I was delighted to get a (totally adequate) steak and ale pie. Why can't I find these in Philly? No idea.

steak and ale pie!!

The next day, we got down to business quickly. We started, as usual, with about thirty seconds of introduction from each person, and then we shut up and got to work. This year, we had most of a small hotel entirely to ourselves. This gave us a dining room, a small banquet hall, a meeting room, and a bar. I spent most of my time in the banquet hall, near the window. It seemed like the easiest place to work. Although there were many good things about the hotel, the chairs were not one of them! Still, it worked out just fine.

The MetaCPAN crew were in the dining room, a few people stayed at the bar seating most of the time, and the board room got used by various meetings, most of which I attended.

The view over my shoulder most of the time, though, was this:

getting to work!

Philippe wasn't always there with that camera, though. Just most of the time.

I think my work falls into three categories: Dist::Zilla work, meeting work, and pairing work.


Dist::Zilla work

Two big releases of Dist::Zilla came out of the QAH. One was v5.047 (and the two preceding it), which closed about 40 issues and pull requests. Some of those just needed application, but others needed tests, or rework, or review, or whatever. Amusingly enough, other people at the QAH were working on Dist::Zilla issues, so as I tried to close out the obvious branches, more easy-to-merge branches kept popping up!

Eventually I got down to things that I didn't think I could handle and moved on to my next big Dist::Zilla task for the day: version six!

My goal with Dist::Zilla has been to have a new major version every year or two, breaking backward compatibility if needed, to fix things that seemed worth fixing. I've been very clear that while I value backcompat quite a lot in most software, Dist::Zilla will remain something of a wild west, where I will consider nothing sacred, if it gets me a big win. The biggest change for v6 was replacing Dist::Zilla's use of Path::Class with Path::Tiny. This was not a huge win, except insofar as it lets me focus on knowing and using a single API. It's also a bit faster, although it's hard to notice that under Dist::Zilla's lumbering pace.
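The everyday difference between the two APIs is small. Here's a sketch of typical call sites under each (illustrative usage only, not Dist::Zilla's internals):

```perl
use strict;
use warnings;
use Path::Tiny qw(path);

# Path::Class:  dir('lib')->file('Foo.pm')->slurp
# Path::Tiny:   path('lib', 'Foo.pm')->slurp_utf8
# A tiny round-trip in a temp directory:
my $dir  = Path::Tiny->tempdir;
my $file = $dir->child('Foo.pm');
$file->spew_utf8("package Foo;\n1;\n");
my $code = $file->slurp_utf8;
```

The encoding-explicit slurp_utf8/spew_utf8 pair is part of why the swap rippled out to so many downstream plugins.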

Karen Etheridge and I puzzled over some encoding issues, specifically around PPI. The PPI plugin had changed, about two years ago, to passing octets rather than characters to PPI, and we weren't sure why. Karen was convinced that PPI did better with characters, but I had seen it hit a fatal bug that using octets avoided. Eventually, with the help of git blame and IRC logs, we determined that the problem was... a byte order mark. Worse yet, a BOM on UTF-8!

When parsing a string, PPI does in fact expect characters, but does not expect that the first one might be U+FEFF, the character used at offset 0 in files as a byte order mark, indicating the encoding. Perl's UTF-16 encoding layers will notice and use the BOM, but the UTF-8 layer will not, because a BOM on a UTF-8 file is a bad idea. Rather than try to do anything incredibly clever, I did something quite crude: I strip off leading U+FEFF when reading UTF-8 files and, sometimes, strings. Although this isn't always correct, I feel pretty confident that anybody who has put a literal ZERO WIDTH NO-BREAK SPACE in their code is going to deserve whatever they get.
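A minimal sketch of that crude fix, assuming the decode-then-strip order described above (decode_source is a made-up helper name, not Dist::Zilla's actual code):

```perl
use strict;
use warnings;
use Encode ();

# Decode octets as UTF-8, then drop a leading U+FEFF so PPI never
# sees a BOM at offset 0.
sub decode_source {
  my ($octets) = @_;
  my $chars = Encode::decode('UTF-8', $octets, Encode::FB_CROAK);
  $chars =~ s/\A\x{FEFF}//;
  return $chars;
}
```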

With that done, a bunch of encoding issues go away and you can once again use Dist::Zilla on code like:

my $π = 22 / 7;

This also led to some fixes for Unicode text in __DATA__ sections. As with the Path::Tiny change, a number of downstream plugins were affected in one way or another, and I did my best to mitigate the damage. In most cases, anything broken was only working accidentally before.

Dist::Zilla v6.003 is currently in trial on CPAN, and a few more releases with a few more changes will happen before v6 is stable.

Oh, and it requires perl v5.14.0 now. That's perl from about five years ago.


Meeting work

I was in a number of meetings, but I'm only going to mention one: the Test2 meeting. We wanted to discuss the way forward for Test2 and Test::Builder. I think this needs an entire post of its own, which I'll try to get to soon. In short, the majority view in the room was that we should merge Test2 into Test-Simple and carry on. I am looking forward to this upgrade.

Other meetings included:

  • renaming the QAH (I'm not excited either way)
  • using Test2 directly in core for speed (turns out it's not a big win)
  • getting more review of issues filed on Software-License

A bit more about that last one: I wrote Software-License, and I feel I've done as much work on it as I care to, at least in the large. Now it gets a steady trickle of issues, and I'm not excited to keep doing it all myself. I recruited some helpers, but mostly nothing has come of it. I tried to rally the troops a bit, encouraging more regular review, even if that just means each person giving a +1 or -1 on each pull request. Otherwise, Software-License is likely to languish.


Pairing work

I really enjoy being "free floating helper guy" at QAH. It's something I've done a lot ever since Oslo. What I mean is this: I look around for people who look frustrated and say, "Hey, how's it going?" Then they say what's up, and we talk about the issue. Sometimes, they just need to say things out loud, and I'm their rubber duck. Other times, we have a real discussion about the problem, do a bit of pair programming, debate the benefits of different options, or whatever. Even when I'm not really involved in the part of the toolchain being worked on, I feel like I have been able to contribute a lot this way, and I know it makes me more valuable to the group in general, because it leaves me with more of an understanding of more parts of the system.

This time, I was involved in review, pairing, or discussion on:

  • fixing Pod::Simple::Search with Neil Bowers
  • testing PAUSE web stuff with Pete Sergeant
  • CPAN::Reporter client code with Breno G. de Oliveira
  • DZ plugin issues with Karen Etheridge
  • Log::Dispatchouli logging problems with Sawyer X
  • PAUSE permissions updates with Neil Bowers
  • PAUSE indexing updates with Colin Newell (I owe him more code review!)
  • improvements to PAUSE's testing tools with Matthew Horsfall
  • PPI improvements with Matthew Horsfall

...and probably other things I've already forgotten.

more hard work

Pumpking Updates

A few weeks ago, I announced that I'm retiring as pumpking after a good four and a half years on the job. On the second night of the hackathon, the day ended with a few people saying some very nice things about me and giving me both a lovely "Silver Camel" award and also a staggering collection of bottles of booze. I had to borrow some extra luggage to bring it all home. (Also, a plush camel, a very nice hardbound notebook, and a book on cocktails!) I was asked to say something, and tried my best to sound at least slightly articulate.

Meanwhile, there was a lot of discussion going on — a bit at the QAH but more via email — about who would be taking over. In the end, Sawyer X agreed to take on the job. The reaction from the group, when this was announced, was strong and positive, except possibly from Sawyer himself, as he quickly fled the room, presumably to consider his grave mistake. He did not, however, recant.

perl v5.24.0

I didn't want to spend too much time on perl v5.24.0 at the QAH, but I did spend a bit, rolling out RC2 after discussing Configure updates with Tux and (newly-minted Configure expert) Aaron Crane. I'm hoping that we'll have v5.24.0 final in about a week.

Perl QAH 2017

I'm definitely looking forward to next year's QAH, wherever it may be. This year, I had hoped to do some significant refactoring of the internals of PAUSE, but as the QAH approached, I realized that this was a task I'd need to plan ahead for. I'm hoping that between now and QAH 2017, I can develop a plan to rework the guts to make them easier to unit test and then to re-use.

Thanks, sponsors!

The QAH is a really special event, in that most of the attendees are brought to it on the sponsors' dime. It's not a conference or a fun code jam, but a summit paid for by people and corporations who know they'll benefit from it. There's a list of all the sponsors on the event page.

Raise a glass to them!

Dist::Zilla v6 is here (in trial format)

by rjbs, created 2016-04-24 04:38

I've been meaning to release Dist::Zilla v6 for quite a while now, but I've finally done it as a trial release. I'll make a stable release in a week or two. So far, I see no breakage, which is about what I expected. Here's the scoop:

Path::Class has been dropped for Path::Tiny.

Actually, you get a subclass of Path::Tiny. This isn't really supported behavior. In fact, Path::Tiny tells you not to do this. It won't be here long, though, and it only needs to work one level deep, which it does. It's just enough to give people downstream a warning instead of an exception. A lot of the grotty work of updating the internals to use Path::Tiny methods instead of Path::Class methods was done by Kent Fredric. Thanks, Kent!

-v no longer takes an argument

It used to be that dzil test -v put things in verbose mode, dzil test -v Plugin put just that plugin in verbose mode, and dzil -v test screwed up because it decided you meant test as a plugin name, and then couldn't figure out what command to run.

Now -v is all-things-verbose and -V is one plugin. It turns out that single-plugin verbosity has been broken for some time, and still is. I'll fix it very soon.

Deprecated plugins deleted

I've removed [Prereq] and [AutoPrereq] and [BumpVersion]. These were long marked as deprecated. The first two are just old spellings of the now-canonically-plural versions. BumpVersion is awful and nobody should use it ever.

PkgVersion can generate "package NAME VERSION" lines

So, now you can avoid deciding how to assign to $VERSION and add the version number directly to the package declaration. This also avoids the need to have any room for blank lines in which to add $VERSION.
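Concretely, with a made-up Foo::Bar dist, the plugin's output amounts to a declaration like this (perl v5.12 and later understand the form natively):

```perl
use strict;
use warnings;

# One line declares both the package and its version; no $VERSION
# assignment and no blank line reserved for one. Foo::Bar is a
# made-up example, not anything PkgVersion ships.
package Foo::Bar 1.001;

sub new { bless {}, shift }
```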

Dist::Zilla now requires v5.14.0

Party like it's 2011.

the "credit the last uploader" problem

by rjbs, created 2016-02-12 09:16
last modified 2016-02-12 09:40
tagged with: @markup:md cpan journal perl

First, a refresher…

At its simplest, the CPAN is a bunch of files and an index. The index directs you from package names to the files that contain the latest authorized release of that package. Everything else builds on top of that.

If you want to publish Foo::Bar to the CPAN, you need to use PAUSE. PAUSE manages users and permissions, authenticates users, accepts uploads, and then decides how and whether to index them. To make those indexing decisions, first PAUSE analyzes an uploaded file to see what packages it contains. Then it compares those packages to the permissions of the uploading user. If the user has permission, and if the uploaded package is later-versioned than the existing indexed package, the package is indexed.

I have skipped some details, but I believe that for the purpose of everything else I'm going to write about, this is a sufficient explanation.

To get permissions on a package that isn't indexed at all, you upload it. Then you have permissions. If you want to work with a package that already exists, the person who uploaded it needs to give you permission. There are two kinds of permission:

  • first-come: you're the person who first uploaded it, or the person to whom that person has handed over the keys. There is only one first-come user per package. You can upload new versions, and you can assign and revoke co-maint permissions.
  • co-maint: you are permitted to upload new versions, but you may not alter the permissions of the package.
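Putting the indexing rules together, the decision for a single package found in an upload can be sketched like this (assumed names and data layout for illustration; not PAUSE's real code):

```perl
use strict;
use warnings;
use version ();

# Returns true if this package, at this version, uploaded by this
# user, should be indexed.
sub should_index {
  my ($pkg, $new_version, $uploader, $index) = @_;

  my $perms = $index->{perms}{$pkg};
  return 1 unless $perms;                  # unseen package: first-come wins
  return 0 unless $perms->{$uploader};     # no permission, no indexing

  my $current = $index->{version}{$pkg};   # must be later than what's indexed
  return version->parse($new_version) > version->parse($current // '0');
}
```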

The Complaint

When you view code on MetaCPAN or search.cpan.org, one of the most visible details is the name (and avatar) of the last user to have uploaded that package. This creates a strong impression that this is the contact point for the package. Sometimes, this is true, or true enough. On the other hand, sometimes it's not, and that's a problem. It may be that the last person to upload the library only did so as a one-off act, or that they were a member of the team working on a project years ago when it was last released. Now, though, they will be boldly listed as the contact person.

Here's a scenario:

  • in 2002, a library, Pie::Packer, is uploaded by Alice and is popular for a while
  • in 2008, Bob finds a bug and finds that Alice isn't really working on Perl anymore; Bob offers to do a release for just this bug fix
  • Alice gives Bob co-maint on Pie::Packer
  • Bob uploads Pie::Packer v1.234, the only release he ever plans to make
  • from 2008 through 2016, Bob is sent requests for help with Pie::Packer

Bob can't just pass on permissions to stop it. He can give up permissions, but he'll still be the last uploader.

You might object: "Alice should have given Bob first-come! Then he could pass along permissions!"

This is true. Maybe in 2010, Bob gives permissions to Charlotte... but now Charlotte is stuck in the same position. If nobody ever comes along to take it over, Charlotte can't usefully get out from under the distribution.

Half a Solution

In 2013, the QA Hackathon led to a consensus about a mechanism for permission transitions. It goes something like this:

  • give user "ADOPTME" co-maint to indicate that first-come permissions can be given to someone who wants them, and you don't need to be consulted
  • give user "HANDOFF" co-maint to indicate that you're looking to pass along first-come to someone else, but they should go through you

(The third magic user, "NEEDHELP," is not relevant to the topic at hand.)

Marking a library with ADOPTME or HANDOFF is useful in theory, but not in practice, because it's almost impossible to know that it has happened. Yesterday, I filed a bug about making ADOPTME/HANDOFF visible on MetaCPAN, and I think that visibility is critically important to making the markers worth having.

So, why is this section headed "half a solution"?

Because this solution helps you if you have first-come, but not if you have co-maint. Imagine poor Bob, above, in 2016. By this point, Alice has moved off the grid and can't be contacted. Bob can't mark the dist as ADOPTME. He can ask the PAUSE admins to do so, but that's it. It's also a bit of a burden to put onto the PAUSE admins, who may not know whether Bob has really made a good faith effort to contact Alice.

The final remaining problem is this: There is no escape hatch for someone who has co-maint permissions and wants to get out from under the shadow of an unwanted upload.

The Simplest Thing That Could Possibly Work

This problem could be solved by adding a "GitHub Organizations"-like layer to PAUSE… but I think there's a much, much simpler mechanism.

We should always treat the first-come owner as the authoritative source, including when displaying a distribution on the web. MetaCPAN Web should stop showing the name and image of the latest uploader as prominently, and should show the first-come user instead. The same goes for search.cpan.org and other such sites. MetaCPAN already has a place for listing other contributors, which should contain the last uploader. Adding a note like "last upload by BOB" seems okay, too, but the emphasis should be on connecting the distribution with the one person who can actually make decisions about its future.

Understanding sudoers(5) syntax

by rjbs, created 2015-12-24 11:53
tagged with: reference sudo

The Great Infocom Replay: Infidel

by rjbs, created 2015-10-22 23:31

These replay write-ups get shorter and shorter as I go. I think it's because I'm growing more and more confident in what I like and don't like, and what I will and won't spend my time on. Infidel has a nice setup. I liked the setting, the starting plot, and the way the game got started. Soon enough, though, I got a "you're quite thirsty" message, and I groaned.

I found water and food and decided I'd stick with it. I could suck it up and deal with hunger puzzles. Then I got to a maze. Forget it!

There really was only one puzzle on the way to the maze, at least on the route that I took to get there. Maybe the rest of the puzzles in the game are great, but I'm not going to find out. At least, not by playing the game. That's part of my problem with the replay, now. Because I'm willing to stop playing a game when I see that there's a maze, I don't see all the good stuff on the other side of it. If it was just a maze, I might pull out a map and skip the maze, but the maze and the hunger puzzle together are lethal.

I'm going to have to find some transcripts of full game plays, maybe.

The biggest killer to my enjoyment of games with these elements is that I find myself making a map with no real immersion, just planning out how to solve the puzzles faster on my next run. I complained about this last time, talking about Enchanter. I much prefer when I can make the map while I play, never worrying about figuring out a critical path. (And yet, I love Suspended.)

Next up is Sorcerer.

a big ol' Catalyst upgrade

by rjbs, created 2015-09-30 23:45
last modified 2015-10-01 17:18

At work, a fair bit of our web stuff is Catalyst. That's not just the user-facing website, but also internal HTTP services. For a long time, we were stuck on v5.7012, from late 2007. That's pre-Moose (which was 5.8000) and pre-Plack (which was 5.9000). It wasn't that we didn't want to upgrade, but it was a bunch of work and all the benefits we'd see immediately were little ones. It was going to free us up for a lot of future gain, but who has the time to invest in that?

Well, I'd been doing little bits of prep work and re-testing over time, and once in a while I'd see some plugin I could update or feature I could tweak, but for the most part, I'd done nothing but repeatedly note that upgrading was going to take work. A few weeks ago, I decided to make a big push and see if I could get us ready to upgrade. This would mean upgrading over eight years of Catalyst… but also Moose. We were running Moose v1.19, from late 2010.

The basic process involved here was simple:

  1. make a local::lib directory for the upgrade
  2. try to install Catalyst::Devel and Catalyst::Runtime into it
  3. sort out complications
  4. eventually, run tests for work code
  5. sort out complications
  6. deploy!

So, obviously the interesting question here is: what kind of stuff happened in steps 3 and 5?

Most of this, especially in step 3, was really uninteresting. Both Catalyst and Moose will tell you, when you upgrade them, that the upgrade is going to break old versions of plugins you had installed. So, you upgrade that stuff before you move on. Sometimes, tests would fail because of unstated dependencies or bugs that only show up when you try using 2015 modules on top of a 2007 version of some prereq. In all of this I found very little that required that I bug Catalyst devs. There was one bug where tests didn't skip properly because of a silly coding mistake. Other than that, it was mostly an n-step process of upgrading my libraries.

The more complicated problems showed up in step 5, when I was sorting out our own code that was broken by the update. There wasn't much:

  • plugins written really fast and loose with the awful Catalyst "plugins go into your @ISA" mechanism
  • encoding issues
  • suddenly missing actions (!)
  • deployment issues

In general, we fixed the first by just dropping plugins that we no longer needed. The only plugin that really mattered was the one that tied Catalyst's logging system to our internal subclass of Log::Dispatchouli::Global, and that was replaced by roughly one line:

Pobox::Web->log( Pobox::Web::Logger->new );

So, we killed off a grody plugin and replaced it with a tiny wrapper object. Win!

I also had to make this change to our configuration, which seemed a bit gratuitous, but the error message was so helpful that I couldn't be too bothered:

-view: 'View::Mason'
-default_view: 'View::Mason'
+view: 'Mason'
+default_view: 'Mason'

Encoding issues ended up being mostly the same. We dropped the Unicode plugin and then killed off one or two places where we were fiddling with encodings in the program. Honestly, I'm not 100% sure how Catalyst's old and new behavior are supposed to compare, but the end result was that we made our code more uniformly deal in text strings, and the encoding happened correctly at the border.

The missing actions were a bigger concern. What happened?!

Well, it turned out that we had a bunch of actions like this:

sub index : Chained('whatever') Args(0) { ... }

These were meant to handle /whatever, and worked just fine, because in our ancient Catalyst, the index subroutine was still handled specially. In the new version, it was just like anything else, so it started handling /whatever/index. The fix was simple:

sub index : Chained('whatever') Args(0) PathPart('') { ... }

Deployment issues were minor. We were using the old Catalyst::Helper scripts, which I always hated, and still do. Back in the day, and in fact before Catalyst::Helper existed, Dieter wrote what I considered a much superior system internally called Catbox… but we never really polished it up enough for general use. I regenerated the scripts, but this was a bit of a pain because we'd made internal changes, and because the helper script generator doesn't act nicely enough when your repo directory name doesn't quite match your application name. I got it worked out, but it didn't matter much, because of Plack!

I had been dying to get us moved to Plack for ages, and once everything was working to test, I replaced the service scripts with wrappers around plackup. I mentioned this to #plack and got quoted on plackperl.org:

"Today, I finished a sizable project to upgrade almost all of our web stuff to run on Plack. Having done that, everything is better!"

It was true. I replaced old Catalyst::Engine::HTTP::Prefork with Starman and watched the low availability reports become a trickle. I've moved a few things to Starlet since. (I couldn't at the time because of a Solaris-related bug, since fixed in Server::Starter.)

The Moose upgrades were similarly painless. The main change required was in dealing with enum types, which now required that anonymous enumerations had their possible values passed in in a reference, rather than the old, ambiguous list form. Since I was the one who, years ago, pushed for that change, I couldn't get upset by having to deal with it now.
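The change in question, as a sketch (the arrayref form shown here is the one that remained valid):

```perl
use strict;
use warnings;
use Moose::Util::TypeConstraints;

# Anonymous enums now take their values in an arrayref; the old bare
# list form was ambiguous with the named form, enum('Name', \@values).
my $color = enum [ qw(red green blue) ];
```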

All told, I spent about three work days doing the upgrade and then one doing the deploy and dealing with little bugs that we hadn't detected in testing. It was time well spent, and now we just have one last non-Plack service to convert… from HTML::Mason.

The Great Infocom Replay: Enchanter

by rjbs, created 2015-09-23 23:28

I think I'm officially giving up on beating Enchanter, but it has been a pretty interesting experience as far as my big replay goes. It's not because the game is great, but because it has allowed me to get a better handle on what I don't like about the early Infocom games.

I found the writing to have the same sort of charm as other Infocom games. It's very economical, but sometimes successfully both wry and whimsical. It has a lot of the same problems as other games of the period, though. The two that gutted me: hunger and mazes. "You are getting hungry" is one of my least favorite things to see in a game. It is a guarantee that I will end up dead and have to figure out how to replay the game in fewer moves, next time. Similarly awful is the feeling of leaving "The Transparent Room" only to find myself in "The Transparent Room" with a slightly different description. It means I'm going to waste half an hour figuring out a maze using the "gather a bunch of items and drop them in different rooms" technique. It's not fun.

I managed to make a pretty good map, though, by marking all the unknown exits and then continually restoring to speed-run through each one. This, I think, is when I realized that I was not going to love Enchanter. The idea of the game is fine. I like the magic in it. I just didn't feel like it was much of a story, because I kept having to go back and do it all over again. I realized that this is my problem. I want these games to be stories, not puzzles, but they are fundamentally puzzles, with a veneer of story around them.

On the other hand, I can look at my two favorite Infocom games (so far): Suspended and A Mind Forever Voyaging. Suspended is almost pure puzzle. The story is barely there. At least, that's how it seems to me, although I'm not sure why. When I have to restart in Suspended, I don't feel like I'm breaking apart a story with a rising action and climax. It's just a puzzle box. A Mind Forever Voyaging is almost pure story. There are a few puzzles, but they're pretty simple. For the most part, the game is a world that you explore, and the puzzles are there to motivate you to do so.

When I made this realization, I really accepted the new way I was playing Enchanter: I didn't try to enjoy the story anymore, I just cataloged rooms and objects, trying to piece together the critical path for the game in my head in a big flow diagram. This suddenly seemed like the right way to play the game, and I thought I might try to finish it, since I had this new handle on it. Then again, I thought, I wasn't having much fun. Given that it has taken me two and a half years to get through nine game replays, it seemed foolish to spend longer on this one than I'd enjoy.

Next up, Infidel.

I won a NAS!

by rjbs, created 2015-09-13 23:38
tagged with: @markup:md hardware journal

Last year, I bought a Synology ds214play NAS. I posted about my horrible data migration, wherein I lost a whole ton of data, entirely because I was a bonehead. Despite that pain, I absolutely loved the Synology NAS. It frequently impressed me with how much it could do, how well it did it, and how easy it was to do. Even after I moved all of my media to it, all my old backups, and started using it as a Time Machine backup destination, I had a good terabyte of space left.

Obviously, that meant I spent the whole last year thinking, "I should get a bigger NAS!" I seriously considered doing so, figuring out how much I could sell the old one for. At every turn, though, I'd remind myself that this was totally crazy. I had no need for a bigger NAS. It would cost money, take time to move to, and have no actual benefit. Maybe when I got low on space, it would make sense, but planning too far in advance for that would just mean that I'd pay too much for the drives.

On the other hand, when I saw an Engadget raffle to win a Synology ds415play, I broke my usual habit of ignoring all online contests as unwinnable scams. I figured it couldn't hurt, especially since I gave a tagged email address. I followed Synology on Twitter and visited their Facebook page to earn two extra entries. Then, I forgot about it. A few days later, though, I got an email telling me that I won. I put on my best skeptical face, but a few days later I got some legal papers from FedEx, and not too much later, I got the NAS!

It was a four-bay NAS with four 4T WD Reds. That's 12T of storage in a RAID5. What would I do with all that space? Who cares, it was free! Unfortunately, it arrived the day before I left for YAPC::Asia, so I didn't have a lot of time to get things cut over. Remembering how badly things worked out when I rushed last time, though, I decided to wait until I got back. It ended up taking me much longer than I wanted to transition, but now that I'm done, everything is great.

Problems I encountered:

The default instructions for migration involve yanking the drives from the old NAS and inserting them into the new one. If anything went wrong there, I'd be hosed, so first I wanted to make backups. I didn't have any drives large enough to store all the data, though. I couldn't just yank the drives from the new NAS, because my USB enclosures couldn't address 4T at once, and I'm sure I could've done something more complicated, but ugh. Instead, I thought, I'd just use the network. After all, the two NASes were on a gigabit ethernet link.

I did an rsync from one device to the other, using the Synology backup service. It took almost exactly twenty-four hours. This is Synology's suggestion when you're doing a network migration. Unfortunately, once I finished the rsync and tried to restore the data onto the working space of the new NAS, it told me "no backups detected." Waah? I asked Synology support, and their response was, "please email us the admin password to your NAS." I was pretty uninterested in this and asked for a second opinion. Eventually they suggested something else, sort of half-heartedly, but it took days to hear back, and by then I'd already taken action. In fact, I'd taken the action they suggested, and it worked.

With the rsync to the backup area done, I had all the files on the new NAS. I just copied things into place, did a couple chowns, migrated only the system settings, and I was done. Maybe it would have been more convenient to have had the restore work, but it couldn't have been much easier. Once I'd fixed the file owners, only one thing didn't work: Time Machine. OS X just refused to believe that the "sparse bundle" disk image was valid. I couldn't find any good explanation, so I decided that it was probably a problem with extended attributes "or something." I blew away the new copy and rsync-ed again, this time using OS X's patched rsync (and -S). It worked. Why? I'm not quite sure, but who cares, right?

While getting to this point, I had a bunch of false starts and actually fully copied my data from the old to new NAS more than once. Each time, this was my fault, and nobody else's. The best reason for recopying everything was to change my RAID type. The device arrived formatted in SHR-1, which is an enhanced RAID5. I reconfigured it as SHR-2, which is more like RAID6. Sure, it cost me 4T of space, but now I can lose two drives. This is a much more useful benefit for me. If I was going to spend a year longing for a four-bay NAS, it should've been because I could have two drive redundancy, not for a lot of space that I didn't need.

I also had fun yanking and re-inserting a drive. I knew it would work, and then it took ages to recheck the data integrity, but it's just fun, and I have no regrets.

I'm looking forward to getting even more out of my Synology. It can run quite a few useful network services, and I'm only using one or two of them. A Synology really does work at providing all the "cloud" services that a typical user might want, but privately in your own house. There are a few such services that I use that I'm looking to stop using. I will post on those successes as they occur.

Engadget did not ask me to say anything about the contest or the device, nor did anybody else. I really do like the Synology NAS line. Thanks for choosing such a great winner, Engadget!

YAPC::Asia, days 3-4 (body)

by rjbs, created 2015-09-06 22:58
tagged with: @markup:md journal travel yapc

YAPC actually only runs two days, or three if you count "RejectConf" on day zero. So, this entry is not really about YAPC::Asia, but about what I did between the end of the conference and my trip home.

I woke up at the YAPC hotel, got some breakfast (fish and okayu!) and packed my bags. The plan was to visit Nikkō with Shawn, Dustin, and Karen, and my first step was to get to Kita-Senju Station, one of the busiest stations in the Tokyo Metro system. This was easy (although I was flagged for not having paid properly on my last trip, and had to pay more). The complicated part came when I had to find Shawn and Dustin, then Karen, and then get seated. Karen and I ended up in a different car from Shawn and Dustin, each of whom was seated in his own car. I was seated directly in front of Karen, but moved back to be able to speak with her. Doing this seemed to put the conductor in a very awkward situation. He apologetically reseated us at the rear of the nearly empty train, although nobody ever seemed to take the seat I'd taken, or really to sit near where we'd been at all. I was curious, but it seemed best not to ask.

Nikkō was a pretty little town. Although its population is ninety thousand, it seemed very much like a one-street town. We dropped our bags at the hotel and headed up the hill toward the temples and shrines. Partway there we stopped to get lunch, but after the party in front of us was admitted to the restaurant, the "closed" sign came out. "Sorry," said the restaurateur. We moved on and stopped into a little two-table mom-and-pop place where I had a totally okay bowl of udon.

lunch in Nikko

After lunch, we finished our trek uphill and eventually got to the site. My first order of business was to down a bottle of Pocari Sweat. Sure, it was grey, overcast, and drizzly out, but it was also hot and muggy, and I was dehydrating even though I was walking through a mist. After that, I took some time to look at the site, and it was quite impressive. We were briefly worried when we saw that one of the main buildings had been entirely surrounded by a temporary and very ugly building during restoration. Fortunately, though, it was the only one. We continued on into the complex and found dozens of beautiful buildings and paths.

Unfortunately for me, I had no idea what we were seeing. I had skipped out on the ¥500 audio program, and all the signs were in Japanese. I had it in my head that it was all Buddhist, and specifically related to Nichiren, but I later realized that I had confused Nikkō the town with Nikkō the priest. Although this was an embarrassing confusion, I was at least pleased that my memory had held on to any of this material! In fact, the site is a combination of Buddhist and Shinto structures, which is pretty reflective of Japanese religious tradition. It was sometimes clear what was what, but at other times I had no idea just what I was seeing. If I went back, I'd get the audio guide.

Nikko shrines and buildings

One thing that I didn't capture well in any of my photos is the verticality of the site. Between sets of buildings there were many long flights of stairs. The stairs were slick, wet stone without handrails. Once or twice, someone above us would slip or drop something, and there'd be a moment of panic as I wondered whether we were all about to go down like tenpins. Fortunately, it never happened. It felt like we'd made it quite a ways up, but there was more to go. Eventually, we decided we'd gone high enough, and headed back down. That was just as harrowing, but a bit less exhausting.

On the way back down the hill toward our hotel, I decided that I wanted to get ice cream, but we never saw anything that captured our interest. I was hoping for something slightly more interesting than random soft serve. In the end, I ended up getting dango, which I'd seen on the way up, and was puzzled by. I had no idea what it was, even when I ordered it. I think that what I got was mitarashi dango. Once I tasted it, I understood. It was three balls of something made from rice, a lot like mochi, painted with something very much like the sauce on unagi. It was quite tasty, although I think I would've preferred it without the sauce, or with something sesame-based. I'd definitely order it again, though.

Also on the way back, we saw monkeys! There was a group of maybe six monkeys, one of them carrying a baby on its belly, and they crossed the road on the electrical wires and vanished onto the nearby roofs. Whoa.

Karen headed back to Tokyo, and the rest of us went to our room. We were all exhausted. There was a brief chance that we'd be social when I went looking for a beer vending machine, but it turned out the hotel vending machine only had soft drinks. I thought maybe I'd try to get just a little more work done, but bad luck struck: my laptop was busted again. Just like several weeks earlier, the keyboard and trackpad were no longer recognized at all. I couldn't log in or do anything else. I made sure I got my 3DS and iPad charged for my upcoming trip back.

I fell asleep at some absurdly early hour and woke up around 3:30, when Shawn also woke up and snuck out to do some walking around. I stayed in bed to try to get some more sleep, but had no luck. Eventually, I headed to the Tobu-Nikko station to head into Tokyo and toward Narita. I got my ticket and made it to Tokyo station without human assistance, and I felt quite proud of that! (To be fair, I got a bit muddled getting from Kita-Senju to Tokyo station, but I did okay.)

My goal was to deposit my bags at a coin locker in Tokyo station, to visit Tokyu Hands and the Pokémon store, and then to head to Narita. The problem was that while both shops gave some basic instructions on how to find them, they were both located in huge shopping centers, and those two shopping centers were connected to one another. Nearly all the signs were in Japanese. GPS was no help. All the coin lockers were taken, so I was hauling my luggage everywhere. I nearly gave up on finding anything, but eventually I accomplished my missions: gifts for Martha and Gloria… and for me. It's almost two weeks later, and Martha still carries her Ditto everywhere, though, so it was worth it.

The trip to the airport was fine. Once there, I had lunch with Marylou, who was flying out near the same time. We had tempura. I've never been a big fan of tempura, but all the tempura that I had on this trip made me think I should give it another chance. I killed a lot of time in the ANA Narita lounge, but I found it pretty underwhelming compared to the Air Canada lounge I'd visited in Toronto. Then again, I'd later find myself in a different Air Canada lounge in Toronto that was a dump compared to the Narita lounge. Each one, though, amused me with its beverage selection. In Tokyo, they had pitchers full of Pocari Sweat. In Toronto, they had cans of Fruitopia, which I thought had been lost to history fifteen years ago.

It seems like this is probably the last YAPC::Asia in Tokyo, at least at this scale, or at least for a while. That's too bad, because each one I went to was excellent, and I was always honored to be asked and delighted to accept. The upside might be that some other amazing place picks up the YAPC::Asia name. How about Manila? Hong Kong? Taipei (again)? I'll be interested to see what happens next, and whether the next YAPC::Asia is another "festival for engineers," or a more purely Perl-o-centric conference. All I hope, though, is that the attendees have a good time, learn some useful things, and leave with some good feelings for the people and maybe for the language, too. I suppose I also wouldn't mind an invitation to speak!

Visiting Japan was excellent, as it was on my previous visits. The best thing about YAPC::Asia in Tokyo being over is that the next time I go to Tokyo, it will have to be as a holiday, with my family, with nothing to do but enjoy the city. As long as I can manage to get onto Tokyo time, I think that will come quite easily.

Until then, I'll have to make do with the notebooks I brought back from Tokyu Hands, the recipe for okayu that I found on Google, and the really excellent Terakawa Ramen in Philly's Chinatown.

YAPC::Asia, day 2 (body)

by rjbs, created 2015-09-06 17:16
last modified 2015-09-06 17:18
tagged with: @markup:md journal travel yapc

I woke up early again on the 22nd, some time before dawn, and did some preparation for the conference. It was my day for speaking, and I wanted to make another pass through my slides. Eventually, it was six thirty and I decided to figure out what I could do for breakfast other than Starbucks. I went to look up "breakfast near Sunroute Ariake Hotel" and found a bunch of reviews in which people praised the hotel's breakfast. Marylou had said it was just soup and bread, and that's when I realized my error: I had trusted someone from Pittsburgh.

It turns out that the other hotel restaurant, on the second floor, had a full buffet. It was just like the one in Shinjuku, but with a better selection and worse music. I'd been sad that there was no fish on the menu in Shinjuku, but had enjoyed the all-Beatles playlist. There was an endless supply of broiled Pacific saury at Ariake, but the music was all music box versions of 1980s hits. I think I listened to a ten-minute-long version of "Take My Breath Away" while eating. I ate plenty of fish, a lot more okayu, and probably too much of other things. I am a big fan of the hotel breakfast buffet!

At the conference, I got to work adding a few sections to my slides. I especially wanted to discuss how some of the Unicode changes worked with Japanese text, but I didn't get to add everything that I'd wanted. The short version of it was this: `\b` isn't very useful for Japanese text, and `\b{wb}` isn't much better.
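
The `\b` half of that claim is easy to demonstrate even outside Perl; here's a minimal Python sketch, using Python's `re` engine as a stand-in, since its classic `\b` behaves the same way. Because Japanese is written without spaces, `\b` sees one unbroken run of word characters and finds no boundaries between the actual words. (Perl's `\b{wb}`, which follows the Unicode word-boundary algorithm, doesn't help much either: roughly speaking, that algorithm ends up splitting Han and Hiragana runs between nearly every character, which still isn't where the words are.)

```python
import re

# In spaced English, the classic \b assertion finds word edges easily.
english = "I like cats"
print(re.findall(r"\b\w+\b", english))   # ['I', 'like', 'cats']

# Japanese is written without spaces, so every character is a word
# character and \b only matches at the ends of the string: the whole
# sentence comes back as a single "word".
japanese = "私は猫が好きです"  # roughly, "I like cats"
print(re.findall(r"\b\w+\b", japanese))  # ['私は猫が好きです']
```

Real Japanese word segmentation needs a dictionary-based tokenizer, which is well outside what a regex boundary assertion can do.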

Before I finished my last-minute editing, it was time to meet with the translators. In previous years, I'd been asked to supply my slides a week or two in advance. The YAPC volunteer translators would then, a few days later, send me a text file of Japanese subtitles, which I'd add to my slides (at the airport). I viewed this as an amazing service, and a lot of work that probably deserved more thanks than I remembered to give.

This year, though, the team went well beyond that. Instead of getting slides subtitled, they had live simultaneous translation provided by professional translators. I had a scheduled meeting, an hour long, an hour before my talk. I figured this was sort of a window of time: at some point during it, I'd be called in for a little while, and that would be that. Was I ever mistaken! I had been asked, a few weeks ahead of the conference, to provide my slides. I sent them what I usually give for my slides: a build-by-build PDF. Each addition of a bullet point was a new page, so you could easily step through the talk to see how it would be presented.

When I got to my meeting, the translators had a printout. I was aghast! It was something like four hundred pages, white on black. The translators were somewhat stunned, too. "Are you really going to do this many slides?"

Fortunately, we cleared things up pretty quickly. They'd made printouts to annotate with notes to help with the more technical details. The translators were technical, but they needed information on some details that they weren't familiar with. (This made things fun. One of them asked me, "Why would you ever want hex float literals?" and I got to explain.) They also inflated my ego a little by telling me that they'd watched a video of my talk and found me to be an interesting speaker. I wasn't quite sure what to think, though, when they elaborated that it was because I "liked to trick, and lie to, the audience." I understood, though, that they were a little worried about my constant nonsensical digressions and jokes. They were also worried by my incorrigible punning. They also were intrigued by the word "backwhack" for "backslash." It was a fun meeting.

The meeting actually took almost the full hour, and left me with just enough time to clean up the slides that I'd left in progress, drink some water, and otherwise get prepared to speak. I think the talk went well. I tried to avoid too many verbal embellishments and puns. You can decide how I did for yourself, because the talk is up on YouTube. I was really curious how the translation went, because even at my slowed-down speed, I spoke fairly quickly. So far, all I've heard were good things. It must have been hard work. Later, someone told me that each translator would only do a small amount of work each day, because it was so taxing. I bet!

After that, I was here and there, chatting, recovering, and otherwise enjoying the hallways. Bento box lunch was provided again, and this time speakers were given the choice of several fancier boxes. I picked one more or less at random, and it was tasty.

conference day 2: bento

After lunch, I saw Miki and Marty give a talk on how containers work, including "how to write your own containers in Perl." It was excellent, and was among my favorite kinds of talks. It took a topic that many people know about and then explained the underpinnings that most people don't understand. I'm a big fan of these talks, because they demystify things that programmers use all the time without really understanding. Using things you don't understand can be convenient, but it makes it really hard to act rationally when things go wrong. I think I'll probably re-watch this talk on YouTube sometime when I'm more alert than I was in Tokyo.

Marylou gave her talk on posture, which was good, but I wish I could've seen things being demonstrated from way, way in the back. I was reminded that the Japanese seem to be excellent at squatting, and that maybe I should work on that myself. After that, I caught the second half of Jonathan Worthington's talk on concurrency in Perl 6, which was quite interesting. Brad Fitzpatrick gave a talk on profiling and optimization with Go, which did a good job of showing off the Go tooling, much of which I hadn't seen before. It was impressive.

I attended the lightning talks, the same as I had the day before. I was almost totally unable to follow them, save for a few with enough English on the slides. Despite that, they were great fun to witness. The energy level was way, way higher than has become the average level at YAPC::NA, and most talks drew boisterous laughter. I wish I could've followed along more, but it was great to see everyone have such a good time. I haven't done a lightning talk in years. I better change that next year, assuming we have a YAPC::NA. Still no venue! (I'm voting for Philadelphia!)

The closing remarks came and went, mostly going over my head, and everybody left. I lagged behind, though, with a bunch of other foreign attendees, chatting and planning dinner. While we waited, the conference volunteers (numbering about sixteen thousand, as I recall) had a final meeting and did some group photos. It turned out that there were one or two staff shirts left unclaimed, and one of them was given to me! I have to say, it's almost certainly the best conference shirt I've ever gotten. I don't have any photos of me in it, at the moment, so here's one of the volunteer corps:

the YAPC::Asia volunteer staff

As for dinner, I was adamant that I wanted to eat ramen. Marcel seemed a little reluctant to commit to eating in the bay, since the food scene didn't seem to be much of a scene, but we ended up sticking together, probably because I looked like I could not possibly be gotten to do anything else. I think we had a good time, though, and I'm not sure it would've happened without Marcel, who helped figure out how to get where we were going once we were at the right GPS location. (Several times in my trip, the GPS got me to a place, where I'd realize that it was more or less just getting me to the right block, and after that I'd have a million cubic meters to search through.)

Marcel, Marylou, Casey, and I made it to the food court of Aqua City Odaiba, a big shopping mall in walking distance of the conference. It was just a bit of a hike, but not bad. It was hot, though, and I drank several bottles of Pocari Sweat on the way there (and later on the way back). We all got ramen. I think Casey and Marylou got slightly tastier ramen than Marcel and I got, but I wasn't bothered, because the ramen I got was unlike any ramen I'd had previously. I also had some beer, which was a good accompaniment. It was Asahi Super Dry, which tickled me, because I'd just read about the "Dry Wars" that erupted in the Japanese beer scene in the late 80's. Super Dry is a totally acceptable pilsner, which I enjoyed but didn't find outstanding. The idea of a "war" fought over any title it might hold amused me. Of course, we had our Cola Wars, so I can't claim any sort of cultural superiority on our beverage pacifism.


(Note also my 3DS in that photo. Tokyo was a non-stop Street Pass fest. Every time I checked, I had ten or so new tags! I cleared a lot of levels in plaza games, I can tell you!)

On the way back, we stopped to see the giant Gundam statue near another shopping center in the bay. It was neat, but I was more interested in making another trip to Tokyu Hands, because there was one there. Sadly, it had closed about ten minutes earlier, so I contented myself with getting some shots of the robot before heading back to the hotel.

YAPC::Asia was over! YAPC::Asia has always seemed very short to me, as a conference. As a non-speaker of Japanese, there are a lot of events that I can't really take part in, and a lot of people in the hallway whom I can't accost and chat up. I know, from being on the other side of that, that it's hard to make someone feel comfortable in, let alone part of, a group that's quite literally foreign to them. I think that the organizers and attendees of YAPC::Asia did an excellent job of it, but in the end, there is a gulf left between me and the mass of attendees, and I think that contributes to things seeming to go by so fast. It's hard to milk every minute as I might do at YAPC::NA. I hope to keep this in mind in the future, though, and to keep trying to make would-be outsiders feel welcome when I can.

I'll make one more post about my trip, covering my post-YAPC activities, in the next few days.

YAPC::Asia, day 1 (body)

by rjbs, created 2015-09-04 23:24
last modified 2015-09-06 20:38
tagged with: @markup:md journal travel yapc

I woke up really early on the 21st and did my best to kill time. I called home, I reviewed my slides and re-packed my bag. Part of my goal was to delay until breakfast was served, so I could eat something before heading to Ariake for the conference. I wanted to see whether the okayu had been relabeled, too! Around 6:20, though, I couldn't stand any more waiting, and I headed out. This way, I figured, I'd avoid rush hour.

It almost worked. Tokyo's rush hour is well-known for being crazy, and I'd be departing from Shinjuku station, one of the busiest stations in the city. I got to the station just after one train left and decided to wander around, looking for a bite to eat. I didn't have much luck, and headed back to wait for the next train, about a half hour. While I waited for the next train to Ariake, other trains came and went, and each one was busier than the last. By the time my train came, rush hour was clearly arriving. Still, I got on and even managed to get a seat pretty quickly.

I got off at Kokusai-Tenjijō, along with a huge crowd of men in identical suits. They all headed directly to Tokyo Big Sight while I headed to the hotel. I was amazed to see the conference center so close, since I'd totally missed it on my trip down before. Of course, then it had been dark and I'd been exhausted. This time, you couldn't miss it.

Tokyo Big Sight

I checked in, ran into Marylou, and headed to the conference to register and find something to eat. Marylou reported that the hotel's breakfast was only soup and bread. We ended up getting something at Starbucks, which was not great, but was food. We ran into Liz and Wendy, who got us to the actual venue. Big Sight is huge, and there was some danger that we'd just wander around aimlessly until someone took pity on us. Fortunately, that didn't happen.

Actually, YAPC::Asia had already prepared us for this, so it wouldn't have happened anyway. They not only posted step-by-step instructions on how to get to the conference area, they posted a video. This blew my mind. It walked you through getting from the main entrance to the conference area. You could pull it up on your phone and follow along and eventually you'd be there! The conference area was pretty busy. The volunteers were getting things ready, the attendees were showing up, and I was just sitting around zoning out. I wasn't too worried about getting registered first, and eventually I got my stuff and we got started.

I saw Larry's opening talk, which I'd seen before at FOSDEM and Salt Lake City. I wondered how many of the people in the room had read Lord of the Rings and the Hobbit, which got me wondering: how great would it be to give a talk based on an elaborate Soto vs. Rinzai metaphor? Well, I think it would be great, but I think I'd be one of very few people to enjoy it. Also, I'd have to relearn a lot of the things I've forgotten over the past fifteen years.

Anyway, after Larry's talk came Kelsey Hightower talking about Kubernetes, which was quite interesting and included a lot of flawless live demonstration. This was the first of many talks on containers at YAPC::Asia. I wish I'd been more awake for it! Sadly, I spent a lot of the day half awake. There was a provided lunch, at least in some quantity, and Karen and Marylou and I managed to get in just before they ran out. We had bento boxes, and they were tasty. I also got my hands on a Pepsi Refresh Shot, which was basically a five ounce can of Pepsi's version of Jolt Cola. I had a few of these during the conference. My consumption of these was definitely way, way below my Pocari Sweat intake, though.

After lunch, it was time for Matz's talk… in Japanese! Of course it was, but for some reason I'd held out hope that there would be English. Carlos, who I'd meet later, told me I could go find the same talk in English online, but I have yet to do that. I will, I will…

Casey gave a talk on distributed teams, which reminded me that Hackpad exists! I need to figure out whether I'd find it useful the way I used to find similar things useful. My guess is that since I haven't missed those very much, I'll live without Hackpad. Still, might be fun.

After that, I tried to stay social in the common areas, and time flew by. Pretty soon, it was time for the conference dinner. There was a huge spread, never-ending beer, and a really high ceiling, so the acoustics were good for conversation. I think I ate about four pieces of ziti and drank a fair bit of beer, but I think I was too jetlagged for it to have any effect on me. (There's a thought to file away for future trips, I guess!)

At previous YAPC::Asias, I found the dinner somewhat difficult, because it wasn't easy for me to go chat up random attendees, and I was very rarely approached by anyone there. This year, for whatever reason, that didn't seem to be a problem. I spoke with quite a few attendees, both local and foreign, and had a really good time. One attendee, who told me he was mostly an Android programmer, asked me about Perl 6. "I hear it's got some backward incompatibilities."

This kind of question would be inconceivable at YAPC::NA. Everybody is there because it's a Perl conference. YAPC::Asia isn't, really. It's a technical conference with a strong Perl heritage. One way to think about it is that it's a very good general tech conference for software engineers, one which has a much better chance of having a bunch of Perl content than a conference programmed by chasing trends. Going to a conference with lots of people from outside your usual circle is a good idea. If they're experts in the kind of thing you're not, even better!

This reminds me of the stories I've heard about people who up and decide to go attend a conference for psychologists or architects. Maybe I should do that, someday, too.

After dinner, a group of us headed back to the hotel, intending to have another drink before bed, but the hotel bar was full of people dancing in a circle and beating drums. I took it as a sign, went upstairs, and collapsed.

YAPC::Asia, day 0 (body)

by rjbs, created 2015-09-01 08:54
tagged with: @markup:md journal travel yapc

(Where's day -1? Well, I left home on the 18th (day -2) and got to Tokyo on the 19th (day -1), but since I didn't sleep between the two, they formed one virtual day for me. Day -1 was lost, like tears in the rain.)

I woke up way, way too early on the 20th. At the latest, it was around four o'clock. I tried to lie in bed, very still, pretending to sleep, but eventually I got sick of it and got up. I would repeat this pattern every day for the rest of my trip. I did morning stuff and spent some time reviewing my plans for the next few days. Eventually, breakfast was available and I went to eat some.

I ate a lot of breakfast, especially okayu. Okayu is the Japanese version of congee, rice porridge. It's what you give people who are sick or, apparently, heavily jetlagged. I ate a lot of it while in Japan, usually with a big helping of kelp. It was probably the best new breakfast food I've had in a long time. The only problem I had with it was the labeling. The okayu was in a big serving vessel labeled "gruel." I sent a polite email to the hotel explaining that only prisoners and Victorian-era orphans eat gruel, and to my great delight, they replied that they would change the label immediately.

My plan for the day was to get a day out on the town, doing stuff and keeping busy in an attempt to adjust my body clock. My body clock did not get adjusted, but I had a good time, anyway. I met up with Marty and Karen for lunch at Joël Robuchon's L'Atelier, where I'd failed to get lunch on my last trip to Tokyo. It was excellent. Marty cautioned me that theirs was not the best foie gras available in Tokyo, but I ordered it anyway, mostly because I am a risotto lover. I had a great meal!

At this point, I was absolutely stuffed. I'd had a big breakfast followed by a three course lunch. Karen said, "Well, I was going to suggest ice cream, but…" and I said, "let's do it!" We went to Snow Picnic, a liquid-nitrogen-using ice cream joint in Nakano. Karen and I had tried to go to National Geographic Travel's "second best place in the world to get ice cream" in 2013, but found it closed for good, so she'd looked up where to go instead on this trip. We were both excited for it… but then we found that Snow Picnic was closed for vacation! Instead, we went to Daily Chikyo, a funny little soft-serve place in the basement of a shopping arcade. It was good, especially because it was so hot and humid.

Speaking of the heat and humidity, I should mention the vending machines. Everybody jokes about the many weird vending machines in Tokyo, but what I don't think people realize is how many there are. You can't walk more than a block without seeing one (and probably more) in most places. Some of them have soda, and some have water, but they all have tea, weird fruit drinks, and some kind of sports drink. The sports drinks (labeled "ion drinks") are a lot like Gatorade, but seem to come only in one flavor ("white") and have almost no sugar. I drank enormous amounts of Pocari Sweat, the most common sports drink around. It was so hot and gross that I could feel myself dehydrating all the time, and the vending machines of Japan were my constant ally in the fight against heat stroke.

We looked at a lot of little shops at Nakano Broadway, where Karen found some post cards and I wisely decided against buying a ¥18,000 Batman statue. We went to Book Off, a huge bookstore. Finally, we ended up at Tokyu Hands. I've heard a lot of people tell me how I should really go to Akihabara on my trips to Japan. It's a huge center for electronics and other of-interest-to-geeks stuff. I've gone, and it's cool. I think in general I find Tokyu Hands much more fun, though. It's like a giant combination of A.C. Moore and Staples, plus a bunch of other random stuff. I treated myself to some nice notebooks.

paper-oh quadro outside

Around five, Karen had to get back home. I got myself back to the hotel, did a little reading, and called home. Through an extreme effort of will I managed to stay up until eight o'clock, but that was that, and I fell asleep. I realized, that night, that the reason I'd managed to get onto Tokyo time in the past was that I was staying with Marty and Karen, and would stay up late every night chatting. If I find myself headed back to Tokyo again, sometime, I'll have to make sure I've got evening plans at some kind of venue where it would be rude to fall asleep.

It was good to get to sleep, anyway, and the next morning I'd be up early (too early) to get down to Ariake for YAPC!

YAPC::Asia, day -2 (body)

by rjbs, created 2015-08-31 23:26
last modified 2015-09-06 20:38
tagged with: @markup:md journal travel yapc
tagged with: @markup:md journal travel yapc

YAPC::Asia starts on August 20th with "day zero," with talks that didn't make the main two days. I probably won't be there for much of that, since it's mostly Japanese content. Despite two prior YAPCs in Tokyo, I still can't understand Japanese. Go figure!

It's Tuesday the 18th, for me, and I'm headed to Tokyo, where I'll arrive on the 19th. Time zones, man.

I'll write up the conference after it's over, but I thought I'd kill some time on the plane by writing up the trip so far. It has been some good travel!

Last night we watched two episodes of Scream (so far: it's okay). Got to bed around eleven, with my alarm set for 4:45. As usual, because I knew I had to get up early, I slept terribly, waking up again and again, wondering whether I had overslept. (This was silly; Gloria would have made sure I got up, but it wasn't under my control!) I got a quick shower, did my last minute packing, and we were out the door. We stopped at Wawa and I got a breakfast sandwich on cinnamon French toast. American breakfast is a fine thing.

The bus ride was uneventful. I played some Animal Crossing. I finally had to get a haircut. I had to take the 5:20 bus to get to Newark on time, but that got me there almost three hours early. Security for Terminal A was the longest I have ever seen. That killed some time. I dozed a little. I played a little Minecraft with the kid. Mostly, though, for two hours, I walked back and forth through the food court, scowling at the ridiculous airport prices. Seven dollars for a soft pretzel! I showed them, though. I didn't buy a thing! That'll teach'm.

The flight to Toronto was short and uninteresting. They offered a free snack: two lemon crackers. I'd have scowled at that, too, but they were pretty good.

At YYZ, I had to go through customs, even though I wasn't planning to leave the airport. Nothing surprising there, but after customs, I had to walk down a long corridor, passing above the food court. I began to think I'd have to wait to eat on my flight… but then I saw the Air Canada lounge. I'm gonna go ahead and say it: the Maple Leaf Lounge in YYZ terminal E is my new favorite lounge. I had some pho and two gin and tonics (and some other stuff). It was big and roomy and clean and modern and I would not mind getting stuck there for a couple hours sometime.

(Dear Fate: this is not an invitation.)

When I went to board, the gate agent took away my boarding pass and vanished for a minute. When she came back, she handed me a new crisp boarding pass. "Complimentary upgrade today, sir." I can't remember the last time I flew business class, but it's been over ten years, and it wasn't as nice as this seat. It's a recliner, it's comfortable, and it has the best adjustable foot rest I've ever seen. The seat next to me is empty.

Then! Then! I poked around the in-flight entertainment system — which, by the way, is a heck of a lot better than that crap they give you in coach — and there was a documentary about A Tribe Called Quest. I watched that and then I ate some steak.

Hopefully on the rest of the flight I'll get some work done and get some sleep. I've got this fantasy idea that I'll adjust in-flight to Tokyo time. Probably not, but trying will give me something to do. I think that ATCQ documentary was the only movie on the list that I'd want to watch without Gloria.

(At this point, I closed my laptop and I never got back to writing any more in this entry until August 31st. This should be read as a good review of YAPC and my time in Tokyo. Of course, it didn't help that my laptop died on my last night there, and I just got it back today.)

Shortly after arrival in Narita, I met up with Casey West, his girlfriend Manda, and his former co-worker Marylou. We'd originally planned to make most of the trip into the city together, but when the dust settled, Casey and Manda were gone. I promised to help Marylou find her hotel, but this required first getting to my hotel where my wifi hotspot was waiting for me. This wasn't too difficult, but my hotel turned out to be more of a walk from Shinjuku station than I'd expected, and we were both pooped. Only through the strange reserves of strength that show up after 36 hours of wakefulness (plus a plate of sushi) did I stay awake long enough to get her to Ariake and then me back to Shinjuku.

I stayed at the Shinjuku Granbell, and my room was absolutely tiny. I was surprised by how much I enjoyed that. The bed was comfortable (and very, very firm). The water pressure was high. The smallness seemed to say, "Get out of here and do something else." This photo does not do justice to its size.

my tiny hotel room

I got back to the hotel pretty late and had a nice (but short) FaceTime call home. I was pleased, too, to be getting to bed so late. Surely, I thought, I'd be able to sleep until a reasonable hour the next day. Well…

trust no one (body)

by rjbs, created 2015-08-14 18:06
tagged with: @markup:md journal security

At work, we recently moved from our own office space to a coworking space. Bryan said, "Remember to lock your laptop screen when you're not using it." I said, "I use Mobile Mouse, so I can lock it with a hot corner from across the room."

He asked, "How does Mobile Mouse connect?"

The importance of the question was obvious. I knew it was wi-fi, and the wi-fi is shared with the rest of the coworkers. Surely anything that can remote-control my computer will use a secure connection, right? Right? The docs said nothing, so I fired up a packet sniffer.

$ sudo tcpdump -i en0  -w mouse.packet port 9090
[ connect with Mobile Mouse, mouse around a little ]
$ strings mouse.packet

What did I find? Here's a sample:

[ a bunch of base64-looking stuff; I think it's the Dock icon images ]

There's my phone's device name, the password, my laptop's name, and a bunch of other identifying information. Anybody who sniffed the network for a while could find this traffic and then remote-control my laptop when I looked away. (Or, more amusingly, when I wasn't looking away.)

I asked the makers of Mobile Mouse why they didn't use a secure connection, and whether they would. They said, "Well, it's really intended for a secure local network, but we'll think about adding this feature." Still, they link to people who review this device as a presentation remote. This sounds like a recipe for at least hilarity, if not disaster. "Hey, the consultant is presenting with his phone on the guest wi-fi. Let's sniff it!"

My point here is not that Mobile Mouse is bad software. It's really good software with this one enormous flaw. My point is that nobody really cares about protecting you except for, hopefully, you. You had better pay attention!

git-apprehend-area - git man page generator

by rjbs, created 2015-06-16 22:15
tagged with: git

no wrong way to play (body)

by rjbs, created 2015-05-24 22:03
last modified 2015-05-25 19:03
tagged with: @markup:md dnd journal

I am always baffled by the neverending stream of remarks of the form, "you people are playing D&D wrong." Here's one that particularly bugged me, today:

What SlyFlourish should be saying, here, is "hurts my ability to bring in and keep new players who care about the things I care about." Some players like playing in a very fatal environment. People play all kinds of games "on hard" on purpose, even games they haven't mastered on easy. And anyway, having a lot of character death doesn't make D&D harder, it just makes it different, because you don't win in D&D. And anyway, if you want to have victory conditions in D&D, that's cool, too.

It makes me crazy to think that people are being told, "you can't bring in or maintain new players if you let beginner characters die often." There are tons of games that work this way, and succeed in growing. I know: I have run some of them. Obviously, you have to know what your players expect, and what will make them unhappy. Part of this is asking, and part of this is establishing expectations up front. I make it clear that characters in my games die a lot, and that this is not about player failure, but about the kind of game I run. We still have fun. I have also played in games where the game master has gone out of their way to prevent character death when it seemed really justified, because they felt it would make the player unhappy. I still had fun.

I'm not a big fan of D&D 3E, but I thought its Dungeon Master's Guide II was great because it talked about how to establish and maintain a game based on what the players want. That's how you make a game succeed, after all: you figure out what you all think will be fun, try it, and then iterate on that. That's why "this is bad for players" makes me crazy. It's bad for some players. Or "I don't know how to do this in a way that players will like," which is a totally okay thing to be true. There's plenty of stuff I can't do, even though players might like it, and so I avoid it, because it would be bad.

The whole thing reminds me of an episode of Parks and Recreation. Ron "Mustache Guy" Swanson and Leslie "Amy Poehler" Knope have competing scouting groups. Leslie's group focuses on singing songs, baking cookies, and pillow fights. Ron's group struggles to build shelter and find something to eat. He tells his scouts, "We have one activity planned: not getting killed." By the end of the episode, all the scouts in Ron's group have defected to Leslie's, because they don't think Ron's group is fun. Leslie wins, Ron loses.

This is where a lesser show would end, but Parks and Rec is better than that. Leslie takes out an ad in the paper, calling for the kinds of kids who would like Ron's kind of camp.

Are you tough as nails? Would you rather sleep on a bed of pine needles than a mattress? Do you find video games pointless and shopping malls stupid? Do you march to the beat of your own drummer? Did you make the drum yourself? If so, you just might have what it takes to be a Swanson. Pawnee's most hardcore outdoor club starts today. Boys and girls welcome.

Then, some kids show up and they are excited to become Swansons. There is more than one way to be a scout.

So, here is my advice: ask your new or potential players what they want, or tell them what to expect. Or do both. Don't give up on what you like just because someone told you it was a niche style or that you'd be unable to retain players.

perl has built-in temp files (body)

by rjbs, created 2015-05-22 11:36
last modified 2015-05-23 19:31

I use temporary files pretty often. There are a bunch of ways to do this, and File::Temp is probably the most popular. It's pretty good, but also pretty complicated. A big part of this complication is that it's meant to keep your filename around until you're done with it, and to let you pick its name and location. Often, though, I don't need these features. I just need a place to stream a whole bunch of data that I'll seek around in later, or maybe just stream back out. In other words, instead of holding a whole lot of data in memory, put it in a file.

See, if you're going to put data in a file, then close it, then ask some other program to operate on it, it almost certainly needs a name. You might open that program and pipe data into it, but it's often much easier to just give it a filename of a file on disk. If you don't need that, though, the filename is totally extraneous. In fact, it just gets in the way by making it possible to leak disk usage. A filename is a reference to storage in use, just like an open filehandle is. Just like you can leak RAM by leaving a reference to a variable in global scope, you can leak storage by leaving a name on the filesystem. That RAM will come back when your program dies, but the storage will wait until you erase the filesystem!

On most platforms, you can't create a truly anonymous filehandle, but you can do the next best thing: you can create a named file on disk, hang on to the filehandle, and immediately unlink the name. When your program terminates, there will no longer be any reference to the data on disk, and it can be freed.

Perl even makes this easy to do:

open my $fh, '+>', undef or die "can't create anonymous storage: $!";

This creates a file in your temporary directory (either $TMPDIR or /tmp or your current directory) with a name like "PerlIO_TQ50Oh" and then immediately unlinks it. The magic comes from the use of an undefined value as the filename. That mode, +>, is nothing special. It just means "create the file, clobbering anything that's in the way, and open it read-write." Now you can write to it, seek backward, and then read from it. This feature has been there since 5.8.0! If you can't use it because of your perl version, you have my sympathy!
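For instance, here's a tiny round trip through one of these anonymous files: write a few lines, rewind, and read them back. (This is my own minimal sketch of the feature; it uses nothing beyond core perl.)

```perl
use strict;
use warnings;

# Create an anonymous temp file: it briefly gets a name in the temp
# directory, but perl unlinks it immediately, so nothing is left on disk.
open my $fh, '+>', undef or die "can't create anonymous storage: $!";

print {$fh} "line $_\n" for 1 .. 3;

# Rewind to the beginning and read everything back.
seek $fh, 0, 0 or die "can't seek: $!";
my @lines = <$fh>;

print scalar(@lines), " lines round-tripped\n";
```

When the filehandle goes out of scope (or the program exits), the storage is freed, with no cleanup code required.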

Of course, maybe I'm weird in being able, ever, to make do with temporary files like these. I don't think so, though. When I asked on IRC recently, whether I was missing some reason that it wasn't more common, almost every single response was, "Woah, I never heard of that feature."

Now you have!

less worse command line editing in the perl6 repl (body)

by rjbs, created 2015-05-13 09:56
last modified 2015-05-13 10:38

I've been doing more puttering about with perl6 lately. One of my chief complaints has been that the repl is pretty lousy, keyboard-wise. There's no history, so I do a lot of copy and paste, and there's no way to move left non-destructively. If you notice a typo at the beginning of your line, you're stuffed.

This isn't that surprising. Rakudo doesn't ship with a readline implementation, and that's totally reasonable. You have to install something to make it go, and the common suggestion is Linenoise. It's easy to install with Panda, the perl6 package manager. Panda is installed with rakudobrew, if you're using it. If not, you can just clone panda and follow the instructions.

After that: panda install Linenoise

I hit a couple problems getting it to work subsequent to that. Some of them are fixed. (Update: It looks like all of this has now been fixed if you use rakudobrew and install a fresh moar! tadzik++ FROGGS++) For one, it tried to load liblinenoise.so, even though OS X dynamic libraries are generally .dylib files. That's fixed in the repo, and panda installs from the repo.

On the other hand, the OS X dynamic loader needs some help getting paths right. I had to fix the installed library's identity and register a path with the MoarVM binary so that it would look for installed dynamic libraries.

For the first:

cd ~/perl6/share/perl6/site/lib
install_name_tool -id $PWD/liblinenoise.dylib liblinenoise.dylib

That sets the library file's idea of its own identity to its installed location, rather than its build location.

For the second:

install_name_tool -add_rpath $PWD $(which moar)

...which adds the cwd to the moar binary's library resolution path.

Now when I run perl6, I get a repl with a somewhat-working line editor. I can go back and forward in the line with ^B and ^F, and back and forward in history with ^P and ^N. Unfortunately, what I can't do is use the arrow keys. I know, I know, I should probably be avoiding them, because I'm a Vim user. Too bad, I'm used to them, except when in Vim.

Strangely enough I found that the arrow keys work under tmux. It turns out that my iTerm2 profile was set to default to "application keypad mode." Why? No idea. To turn it off, I went to Preferences → Profiles → (my default) → Keys → Load Preset… → xterm Defaults.

The simple test to see what was going on was to hit ^V then ←. If I saw \eOD, I was in keypad mode. The right thing to see was \e[D.

Now I can edit my repl entry line easily! There's also very rudimentary tab completion, but frankly I'm not much of a tab completer. I just wanted to be able to fix my typos. (I like to pretend that my typos all come from bumpy bus rides, but sadly that's just not true.)

Although I did a little bit of my own digging to figure the above out, almost all the real answers came from geekosaur++ and hoelzro++ and others on #perl6. Thanks, friends!

use disposable ramdisks! (body)

by rjbs, created 2015-05-08 10:42

Recently I wrote about my dumb CPAN metafile analyzer, and how I'd tried to keep it fast. One of the things I tried in order to speed it up was creating a ramdisk for all of the archive extraction. The speed boost in this case turned out to be low, but it isn't always. (Also, I inexplicably used a journaling filesystem.) When you're doing a ton of file operations, the difference between physical storage and in-memory can be huge.

It was useful for another reason, though: I was running the program on an OS X system with a case-insensitive filesystem. A few tarballs on the CPAN have case conflicts, which would cause errors in analysis. Instead of running against /tmp, I set up my program to build a case-sensitive filesystem on a ramdisk and use that. It's easy, here's the code:

use 5.20.0;
use warnings;
package Ramdisk;
use Process::Status;

sub new {
  my ($class, $mb) = @_;

  state $i = 1;

  my $dev  = $class->_mk_ramdev($mb);
  my $type = q{Case-sensitive Journaled HFS+};
  my $name = sprintf "ramdisk-%s-%05u-%u", $^T, $$, $i++;

  system(qw(diskutil eraseVolume), $type, $name, $dev)
    and die "couldn't create fs on $dev: " . Process::Status->as_string;

  my $guts = {
    root => "/Volumes/$name",
    size => $mb,
    dev  => $dev,
    pid  => $$,
  };

  return bless $guts, $class;
}

sub root { $_[0]{root} }
sub size { $_[0]{size} }
sub dev  { $_[0]{dev}  }

sub DESTROY {
  return unless $$ == $_[0]{pid};
  system(qw(diskutil eject), $_[0]->dev)
    and warn "couldn't unmount $_[0]{root}: " . Process::Status->as_string;
}

sub _mk_ramdev {
  my ($class, $mb) = @_;

  my $size_arg = $mb * 2048;
  my $dev = `hdiutil attach -nomount ram://$size_arg`;

  chomp $dev;
  $dev =~ s/\s+\z//;

  return $dev;
}
So, you can call:

my $disk = Ramdisk->new(1024);

…and get an object representing a gigabyte ramdisk. Its root method tells you where it's mounted, and when the object is garbage collected, the filesystem is unmounted and the device destroyed. This means that for any code that's going to use a tempdir, you can write:

  my $ramdisk = Ramdisk->new(...);
  local $ENV{TEMPDIR} = $ramdisk->root;

There's overhead to making the ramdisk, but it's not programmer overhead, and that's the important part. All you have to do is figure out whether it's worth it.

I didn't put my ramdisk code on the CPAN, because there's already Sys-Ramdisk, which does nearly the same job. I didn't use it because I thought it would be faster to "just write mine" than to find an existing solution. It's probably a better replacement for what I wrote, because it probably wasn't written in twenty minutes at a bar.

my dumb CPAN "meta analyzer" (body)

by rjbs, created 2015-04-30 00:08

Just about exactly five years ago, I wrote a goofy little program that walked through all of the CPAN and produced a CSV file telling me what was used to produce most dists. That is: it looked at the generated_by field in the META files and categorized them. Here's what the first report, from April 11, 2010, looked like:

  generator             | dists | authors | %
  ExtUtils::MakeMaker   | 7864  | 2193    | 39.49%
                        | 5273  | 2228    | 26.48%
  Module::Install       | 3149  |  465    | 15.81%
  Module::Build         | 3104  |  618    | 15.59%
  Dist::Zilla           |  475  |   64    | 2.39%
  ExtUtils::MY_Metafile |   25  |    3    | 0.13%
  __OTHER__             |   20  |    8    | 0.10%
  software              |    5  |    1    | 0.03%

Over time, I puttered around with it, but mostly I just ran it once in a while to see how things changed. (The above data is actually a truncation. The "other" category is all the generators used by fewer than 5 dists.)

Here's what the data look like for last month, only generators with at least 100 dists:

  generator                    | dists | authors | %
  ExtUtils::MakeMaker          | 10419 | 2997    | 34.27%
  Dist::Zilla                  |  6225 |  836    | 20.48%
                               |  4807 | 2299    | 15.81%
  Module::Build                |  3931 |  918    | 12.93%
  Module::Install              |  3549 |  622    | 11.67%
  __OTHER__                    |   792 |  189    | 2.61%
  Dist::Milla                  |   225 |   54    | 0.74%
  Dist::Inkt::Profile::TOBYINK |   141 |    3    | 0.46%
  The Hand of Barbie 1.0       |   106 |    1    | 0.35%
  Minilla/v2.3.0               |   104 |   55    | 0.34%
  Minilla/v2.1.1               |   103 |   46    | 0.34%

The program that generates this data is (now) pretty fast. It generates all the data for a minicpan in about fifteen minutes. Generating the above table takes a second or two.

Back in 2010, all the data went into a CSV file, but now it goes into an SQLite database. It's faster, easier to query, and can store a bunch of data that would've been a real drag to put into CSV. For example, there's a table that tells me prerequisites. I can write a pretty simple program to print a dependency tree. This can show you all kinds of stuff. For example, installing the should-be-lightweight Sub::Exporter ends up having to bring in Module::Build because constant, part of core, is also on CPAN. If you ever have to upgrade to the CPAN version, you'll find that it has a configure-time requirement on Module::Build, despite not really needing it.
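The real dependency-tree program reads the prerequisites table out of SQLite; as a sketch of just the tree-printing part, here a plain hash stands in for that query, and the dist names and structure are illustrative, not real data:

```perl
use 5.010;
use strict;
use warnings;

# Stand-in for "SELECT prereq FROM prerequisites WHERE dist = ?";
# each dist maps to the dists it depends on (made-up sample data).
my %prereq = (
  'Sub-Exporter' => [ 'Data-OptList', 'Params-Util' ],
  'Data-OptList' => [ 'Params-Util' ],
  'Params-Util'  => [],
);

# Walk the prereq graph depth-first, indenting by depth.  The %$seen
# hash stops us from recursing forever on circular dependencies.
sub tree_lines {
  my ($dist, $depth, $seen) = @_;
  my @lines = (('  ' x $depth) . $dist);
  return @lines if $seen->{$dist}++;
  push @lines, tree_lines($_, $depth + 1, $seen)
    for @{ $prereq{$dist} // [] };
  return @lines;
}

print "$_\n" for tree_lines('Sub-Exporter', 0, {});
```

A dist that has already been shown is printed once more at its new position but not expanded again, which keeps the output finite even on a messy graph.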

Maybe we'll fix that in the next CPAN release, due this week...

The data generated isn't perfect, but it's still darn useful. It's very similar, in fact, to Adam Kennedy's old CPANDB library, which I used for similar things.

My code isn't on the CPAN yet, because it's sort of a mess. It was a gross hack for years and only now am I trying to make it sort of semi-reusable. Give it a try yourself! You can clone the CPAN-Metanalyzer GitHub repo, edit meta-gen.pl to point to your own repo, and have a look at the messy results.

I'll probably write more about some of the fun of implementing this in the next week or so. Until then, have fun!

the 2015 Perl QA Hackathon in Berlin (body)

by rjbs, created 2015-04-22 20:20
last modified 2015-04-30 15:52

I spent last week in Berlin, at the 2015 Perl QA Hackathon. This is an annual event where a group of various programmers involved with the "CPAN toolchain" and closely related projects get together, hash out future plans, write code that is hard to get written in "free time," and communicate in person the stuff that is hard to communicate over IRC and email.

I always find the QA Hackathon to be very invigorating, as I get to really dig into things that I know need work. That was the case this year, too. I was a bit slow to start, but by the last day I didn't really want to stop for food or socialization, and I spent dinner scowling at my laptop trying to run new code.

The hackathon is an invitational event, and I was fortunate to be invited and even more fortunate that my travel and lodging was covered in part by The Perl Foundation and in part by the conference sponsors, who deserve a set of big high fives: thinkproject!, amazon Development Center, STRATO AG, Booking.com, AffinityLive, Travis CI, Bluehost, GFU Cyrus AG, Evozon, infinity interactive, Neo4j, Frankfurt Perl Mongers, Perl 6 Community, Les Mongueurs de Perl, YAPC Europe Foundation, Perl Weekly, elasticsearch, LiquidWeb, DreamHost, qp procura, mongoDB, Campus Explorer

Pre-Hackathon Stuff

I arrived in Berlin ahead of time, which helped me overcome jetlag. Happily enough, I was on the same flight as David Golden, Matthew Horsfall (+1), and Tatsuhiko Miyagawa. We chatted and sat around at Newark and on the way to the hotel in Berlin. More than that, Matthew and I set up an ad hoc network on the plane and chatted using netcat. Eventually I wrote an incredibly bad file transfer tool and we swapped some code and music. It was stupid, but excellent. We vowed to do better on the way back.

Once we'd dropped our bags at the hotel, we went out for a walk around the neighborhood. We saw a still-standing hunk of the wall, considered walking to the big TV tower, and then gave up and got currywurst. It was okay. After that, we caught a train back to the hotel, checked in, and crashed for a little while.

the wall

We found a totally decent little restaurant and got some dinner. Eventually, I ended up in bed and felt pretty normal the next morning!

The next day, we did more exploring, this time seeing more of the sorts of things you're meant to see when you visit Berlin.

Brandenburg Gate

I wish we'd had more time (and, more importantly, energy) to really explore the Tiergarten. There were quite a few things listed on the map that I wanted to see, but I was barely up for the little exploration we did. In my defense, the Tiergarten is ⅔ the size of Central Park!

The hackathon got started that night with the arrival dinner. I was delighted to get goulash and spätzle. Peter Rabbitson was nonplussed. "Getting excited about spätzle is like getting excited about… grits!"

"Mmmm," I said, "grits!"

Topics of Discussion

Last year, we skipped having any long debates about how to move things forward, and I was delighted. The previous round of them, in Lancaster, had been gruelling, and I expected much worse from them this year. Instead, I ended up thinking that they were pretty much okay. I think there were a number of reasons, but chief among them: there were fewer people, and most of them had more actual stake in the problems being discussed. David Golden did an excellent job keeping them on topic and moving forward.

I won't get too deep into how these went, because I know that others will do so better than I would. Neil Bowers has already written up some of the final day's discussions on the river that is CPAN.

In brief, we talked about:

  • whether, why, and how to move Test::Builder changes forward
  • what promises we want to extract from each other as maintainers of the toolchain
  • the possibility of a CPAN::Meta::Spec v3
  • how we can promote a sense of responsibility and community among CPAN authors with many downstream dependents

I felt pretty good about how these talks went, for the most part. It remains to be seen how the planned actions pan out, but I'm hopeful on all fronts.

I'm going to throw in one negative thing, though, which came up not just during the big meetings, but all the time.

I get it. People have feelings about a written and enforceable code of conduct. They might feel uncomfortable that they'll accidentally get themselves kicked out, or they might resent it, or all kinds of other things. In reality, though, a conference's code of conduct only comes up in three situations:

  1. when you have to accept it to attend
  2. when someone actually behaves very badly
  3. when endless trivializing jokes are made about it

If everybody would shut up about it, and just take to heart that technical conferences are places that should be free of abuse and harassment, I think things like a written code of conduct would go generally unnoticed except by people in need of relief from actually being abused. Instead, there are constant jokes whenever anyone makes a comment with an unintended double meaning. "Hey, you're violating my cock!" Yes, phrased like that, because I guess it's hilarious that "code of conduct" acronymizes to sound like a slang phrase for a penis.

It's gross, without even being funny to make up for it.

Dist::Zilla Hacking

I actually try not to spend too much time on Dist::Zilla stuff at the QA Hackathon. I don't think of it as part of the toolchain, precisely, and to me it's very much a personal project that some other people happen to like. "Seriously," I say, "you can use it, and it's great, but it's not making you any real guarantees. Check its work!" So, using time at the QA hackathon seems a bit disingenuous.

This year, though, I planned to spend a day on dealing with its backlog, because there were quite a few things where I felt that toolchain and QA things could be helped. As it turned out, though, very little actually went through.

I merged David Golden's work to make release_status plugin-driven. This is in furtherance of implementing a hands-off semver-like setup. Hopefully we'll see something new on that front in the future. That was a couple minutes of work re-reading code I'd already reviewed.

Most of the dzil time that I spent was on making it possible to build your to-be-released dist with one perl but then test it with another one. This was suggested by Olivier Mengué when I half-jokingly suggested bumping the minimum perl version for Dist::Zilla to 5.20. Someone said it would interfere with Travis CI testing, and Olivier said, "what if you could test with a perl other than $^X?"

I implemented it, and also the ability to set a built environment (for things like PERL5LIB) and it worked. The only problem was that … well, I just didn't feel like it was the right way forward.

If you want to test with five perls, you'll still have to run the whole building workflow five times, because of how Dist::Zilla works. I see a way out of that, but it wasn't a good project for a single day of unplanned work. It turned out that there were a few problems like this. It ended up feeling like a good idea that just didn't pan out. I'll probably leave the branch around, though. It might be useful eventually anyway. Maybe I could use it to use a new perl to "cross-compile" for deployment on an older environment, like at work.

PAUSE hacking

Most of my time, as seems traditional at this point, was spent on PAUSE. I went into the hackathon with one big goal: I wanted to get PAUSE running on Plack. Until now, PAUSE has been running on mod_perl. All other potential objections to mod_perl aside, there's a problem: it means that testing the PAUSE website requires a running mod_perl (and, thus, Apache) instance. What a pain!

I'd really wanted to work on this, and every once in a while I would have a look at the code and think about how I'd do it, but never very hard. When I finally sat down to do the work, I started to think that it was going to be a nightmare. I didn't think I could get it done in time. I despaired. I said, "I think we should look at just starting over, but I know that's usually a terrible idea." Other attendees agreed that I might be right.

Then, this happened:

Amazing! Now, "he has it" may have been a mild overstatement, but not really. He showed me the site working right there on the second day. Charsbar just needed to keep hacking at the details. By the end of the fourth day, it was live! You can log in and upload to pause2!

Speaking of logging in, that's where I made my first actual bit of progress. PAUSE passwords have long been crypted with crypt, meaning the long-obsolete DES algorithm. I'd been meaning for quite a while to switch it over to bcrypt. The commits to switch PAUSE to bcrypt took a few goes, but it got done. To get your password rehashed, you just need to log in. If you've uploaded since the weekend, you've already been rehashed!

I fixed recursive symlinks killing reindexing.

Andreas König and I looked into the strange way that perl-5.18.4 still showed up in the CPAN index for things that perl-5.20.2 should've owned. This led to two small changes: we fixed bizarre error messages that appeared when you couldn't get your Foo module indexed because of somebody else's Bar.pm file; we allowed versions with underscores to be indexed if (and only if) the module was only available through perl, and was found in the latest perl release. This shouldn't really happen, but when it does, better to index than to not!

I did a lot of work, through all this, on improving the TestPAUSE library that's used to test the indexer. It makes it almost trivial to test the indexer. You can write code like this:
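(The original sample is missing from this entry; what follows is a sketch from memory, and the method names are my best recollection of the TestPAUSE interface, not guaranteed verbatim.)

```perl
use strict;
use warnings;
use PAUSE::TestPAUSE;   # assumption: the test library's package name

# Spin up a brand new, isolated PAUSE instance for this test.
my $pause = PAUSE::TestPAUSE->init_new;

# Upload one fake dist as a fake author...
$pause->upload_author_fake(OPRIME => 'Bug-Gold-0.001.tar.gz');

# ...run the indexer over recent uploads and get a result object back.
my $result = $pause->test_reindex;

# The result object can then answer questions about what happened:
# mail sent, index contents, database rows, logs, the git repo.
$result->assert_index_ok;
```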

It starts a brand new PAUSE, uploads a file, indexes all the recent uploads, and gives you a "result" object that can do things like tell you what mail was sent, what's in the various index files, what's in the database, what got written to the logs, and what's in the PAUSE-data git repository.

It was really worth working on, in part because it let me write the (still under development) build-fresh-cpan tool. This tool takes a list of cpanid/dist pairs and uploads them to a new fake CPAN, indexing them as often as you specify. It's great for seeing what the indexer would do on a dist you haven't uploaded, or after a potential bugfix. I've been looking at using it to do a historical replay of subsets of the CPAN. It's clearly going to be useful for testing improvements to the indexer over time.

I've still got a few tasks left over that I'd like to get to, including warnings to users who don't have license or metafile data. I'd also like to implement the "fast deletion" feature to let you delete files from your CPAN directory sooner than three days from now — and charsbar's Plackification work should make that so much easier!


A few years after I published Dist::Zilla, I wanted to see how many indexed distributions had been built by it. I wrote a really crummy script using David Golden's excellent CPAN::Visitor to crawl through all of the CPAN and report on what seemed to have been used to build the dist. Every once in a while, I'd re-run the script to get newer data, occasionally making a small improvement to the code.

The big problem, though, was its speed. It took hours to run. This was, in part, because it ran in strict series. I'd tried to parallelize it one night, but it was a half-hearted attempt, and it didn't quite work. At the QAH, though, a task came up for which it was going to be well-suited. The gang voted to begin requiring a META file of some sort for indexing, and I wanted to see how many dists already had one, and of what kind. My script would provide this! I got it running in parallel, taking it down to about 90 minutes of runtime. David Golden remarked that he'd just been writing something similar, and had it running in about 75 minutes.

It was on!

We both worked on our code off and on while solving other problems, trading bug reports and optimizations. Maybe it wasn't very cutthroat, but I felt motivated to improve the code more and more. Last I checked, we both had our indexers gathering tons more information and running (across the whole of the CPAN index) in fifteen minutes.

I'm pretty sure we can both get it faster, yet, too.

I have my CPAN-Metanalyzer, in very rough form, up on GitHub. I'll probably keep working on it, though, and use it instead of CPANDB in the ancient code I once had using that.


Early on, Miyagawa asked whether we could provide structured data from CPAN::Meta::Requirements objects. These are the things that tell you what versions of what modules your code needs to be tested, built, run, and so on. You could only get the string form out, leading to re-parsing stuff like >= 4.2, < 5, != 4.4, which is a big drag. I suggested an API and that maybe I could do it the next day. The next day...

The API was even exactly what I'd suggested. Woah!

I also found some really weird-o problems. I had my meta-analyzer looking for valid or invalid META.json files, and I found an extremely high number of bogus ones. Why? Well, there were two common cases:

What?! I still have no idea where this came from, and I couldn't produce this behavior by re-testing old releases of JSON::PP. There was no report of behavior like this in changelogs. I don't know!

Part of the problem is that you can't tell from a dist just which JSON emitter was used to make its META.json. I've got a trial release of CPAN-Meta that adds an `x_serialization_backend` key to store this data. Too bad I didn't think of this nine months ago! (Most or many of these errors, if not all, were in September 2014.) Hopefully that will go into production in about a week.

The Trip Home

After the end of the QAH, we got dinner and went back to the hotel. We tried to go to the conference space (betahaus in Berlin) but it was locked up with some of our hackers still inside. This had actually happened to me one night: without somebody with keys around, you could get stuck inside. I'm told that some people had to jump the fence. I was luckier than that.

At the hotel we sat around, talked about what we'd gotten done, and I did just a little more coding on my CPAN crawler. Eventually, though, I got to bed so that I could get up bright and early for the flight home. We left the hotel around seven.

I didn't really try to sleep on the plane. Instead, I poked at code that I could think about with my sleep-deprived brain. I ran an ngircd IRC daemon on my laptop and chatted with some fellow travelers. I watched the entire Back to the Future trilogy. It was a decent flight. (Except for lunch. Lunch was really lousy.) Only yesterday, I found out that there's been a sudden crackdown on people doing suspicious computer things on the plane. "Look out for plane hackers!" I can only imagine what would've happened if we had, as I considered doing, set up Mumble to voice chat during the flight.

Now that I'm home, I've got plenty of other stuff to catch up on, but I'm trying to keep my momentum going. I've got plenty of my usual post-QAH energy, and a fair number of remaining things to do, and I think I can actually do them.

We don't yet know where QAH 2016 will be, but I hope to make it there!

checkpoint charlie

Q&A with Larry Wall (body)

by rjbs, created 2015-04-08 21:48
tagged with: @markup:md journal perl yapc

At the Perl conference, YAPC::NA, there will be a ninety minute question and answer session with Larry Wall.

Larry Wall is the creator of Perl, rn, patch, and other revolutionary monstrosities. He's smart, funny, and full of interesting things to say. I will be acting as host, asking him questions. My goal is to have great questions that coax the most interesting answers possible out of him.

You can help!

If you've got a topic or question you'd like asked, file a GitHub issue or comment on an existing issue. If you'd rather I keep your amazing question close to my chest, or if you don't want to use GitHub, you can send me an email at rjbs@cpan.org.

notes on drawing programming for children (body)

by rjbs, created 2015-03-21 23:10
tagged with: @markup:md journal programming

Once in a while, my daughter asks me to teach her programming. We've done a number of little things together, including some Python, some Scratch, and other things. When I was trying to think of what I enjoyed doing with computers around that age, one of the things I remembered was drawing. We did turtle graphics with Logo in my school, and it was nice to get instant and visual feedback of what the program did. I thought this seemed like a fun idea.

I had looked at using Scratch for this, but it didn't seem likely to work out. Scratch is neat, in some ways, but terribly limited in others. I decided we'd stick to Logo. I downloaded ACSLogo, the only gratis OS X implementation of Logo I could find. At first, it went well. We drew some stuff. She drew a vampire girl.

vampire girl

I wanted, next, to show her how to do some of the more commonly-seen tricks, like rotated squares making a flower. This is simple: you define a function and call it in a loop. Then I realized that ACSLogo had neither of these things. Or, worse, it had functions, but they weren't defined in the program text. You have to bring up an inspector pane, edit them across several different GUI widgets... it's a mess.

Fine, I said, forget it. There are other languages specialized for drawing stuff! I decided to teach her PostScript. We'd already played with RPN calculators, so we were halfway there, right? She was a natural. I showed her this little box-drawer:

    250 225 moveto 275 225 lineto
    275 200 lineto 250 200 lineto
    250 225 lineto stroke

...and she said "moveto, and lineto, but don't use goto!" I don't even know where she picked this up. Kids these days!

Anyway, she drew a skull, and it was awesome:

skull

Those three squares were going to be a lesson. After we did one square, we could turn it into a square subroutine! Only as I began to say this out loud did I remember that like only the most backward of languages, PostScript routines did not have parameter lists. This is pretty obvious, of course, but I hadn't thought of it until I got there.

A routine to compute a²+b² would look something like:

    /sum2sq { dup mul swap dup mul add } def

If you wanted to actually get "a" and "b" to use, you'd write something like (and please forgive the fact that I will get this wrong, probably):

    /sum2sq {
      2 dict begin
      /b exch def
      /a exch def
      a a mul
      b b mul
      add
      end
    } def
So, you allocate a local dictionary and push it onto the dictionary stack, pop the arguments off the operand stack into named definitions (in reverse order, of course), and remember to pop the dictionary when you're done... this is not something you want to bother teaching a second-grader who just wants to draw a cool skull.

(It gets a lot worse, if you care about the program, actually. The above program re-allocates a new local dictionary every time you call the routine. In reality, you'd want to move that definition outside the routine, then swap the routine's definition in and out of the dictionary stack. But then it isn't re-entrant! This gets easier if you're using Level-2 PostScript, but my reference is only for the original version with no garbage collection.)


I seriously considered writing a PostScript preprocessor to allow for something like:

    /sum2sq(a b) { ... }

...but this was bordering on madness. PostScript has so many other drawbacks to begin with that I took this as a sign.

I looked for another version of Logo, found one, bought it, and found that it, too, had no procedure definition. I got a refund. Thanks, Apple Store!

Finally, I learned that Python's Tk system comes with turtle graphics. We'd already done some Python, so this was familiar! It was a lot easier to work with, we could write named functions, and the kid was pleased that TextMate had good syntax highlighting for it. She made the long-overdue flower:

flower
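For reference, the rotated-square flower is only a dozen lines of Python turtle-style code. A minimal sketch (the function names here are mine, not her actual program's):

```python
def square(t, side):
    # Draw one square; the pen ends where it started, facing the same way.
    for _ in range(4):
        t.forward(side)
        t.right(90)

def flower(t, petals=12, side=100):
    # Repeat the square, rotating a little between copies, to get a flower.
    for _ in range(petals):
        square(t, side)
        t.right(360 / petals)

# To draw it on screen:  import turtle; flower(turtle); turtle.done()
```

Passing the `turtle` module itself works because it exposes `forward` and `right` as module-level functions; a `turtle.Turtle()` instance works just as well.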

Next up, I'd like to make it easier for her to run the program without a bunch of terminal nonsense and "hit a key to continue" stuff. Still, so far so good. She tells me that when she gets older, she'll be a better programmer than I am, because her programs "won't have bugs."

Ah, innocence!

RPG Recap: Beyond the Temple of the Abyss, 2015-03-14 (body)

by rjbs, created 2015-03-14 22:28

Saturday, 4th day of the Frost Moon, 937

Knash, Brenda, Mumford, and Wim — their host — connected the party's scrying screen to the village's antenna, and Wim got in touch with his superior: an androgynous floating head. Wim gave the head a quick summary of his situation, and they agreed that the likely solution would be to enact "the contingency plan." The head said it would communicate again in twelve hours.

Wim didn't really know what the contingency plan was, when asked. He just figured it would be something very likely fatal to the "atheist" who'd been hanged in the village center. He started packing up, and told the party they might want to do some looting.

The party did loot, loading up on food, dry goods, and a few expensive knick-knacks. They finally found a use for the heavy rainbow-colored coins they'd been carrying: a building in the village housed a machine that would process the coins into anything, it seemed, that could fit in its output space. They conjured up explosives, a magical staff, and healing potions.

They raided the library, and found a catalog listing numerous hard-to-find arcane librams (Hebram's Thesis on Abdesius' Commonplace Book, for example!), but almost all the books had been destroyed by Halewin, the hanged man. All that was left were some novels: Sense and Centurions, To Love a Wizard, Finnegan's Wake, The Cyborg's Lament, and other works of no interest to the adventurers.

In the library, and indeed all through the town, they found lifelike statues in various states of horror or surprise. Wim confirmed that these were the former villagers, turned to stone by Halewin's gaze.

While looking at the village graveyard, outside its walls, the gang saw that the scrying screen was pulsing, and fetched Wim. Wim came to communicate, but the caller wasn't the floating head; it was one of the goblins who'd expected the party to show up and fight the vampire. While he laid into the party, though, the floating head did contact Wim. With Wim out of the village, the plan was ready to be enacted immediately. The presence of other humans in the village didn't sway the head.

A dome of blue light began to form over the village walls, and the party members inside them hightailed it to, and over, the fence, while the horses struggled to make it to the gates, spurred on by bowshots from across the fence. Only Burdoc the cook, already weakened by an accident the previous day, didn't make it.

The contingency plan seemed to involve deploying a lumbering metal death machine, which emerged from beneath a pile of stones and engaged Halewin with powerful blasts of energy. Halewin was injured, but survived. The horses made it to the gate, but by then the dome blocked all but a few feet of the exit. Knash, then Rago, made daring rolls back and forth under the dome to salvage the party's bags. Quite a bit was left behind, but the most important things were saved.

A massive explosion occurred within the dome, and as the group regathered to decide how to proceed, it collapsed, and movement within suggested that Halewin had survived. The party fled east.

why I'm not sold on YubiKey OTP (body)

by rjbs, created 2015-03-07 14:19
last modified 2015-03-07 16:59
tagged with: @markup:md journal

[ update: I added a bit of an update at the end, in which I find that my fundamental worries were wrong, because the system is less convenient than I hoped! So it goes. ☹ I decided to post this anyway, because the thoughts were worth thinking, so maybe somebody else will find them interesting to see. Or not. Who even reads this thing? ]

Lots of people and sites now urge (or even require) you to set up two-factor authentication. This usually means something that emulates those old LCD RSA SecurID key fobs you used to see on the university administrators' keyrings, if you attended my college when I was there. (Have we met?) The idea here is, in part, that even if someone intercepts your login once, they will no longer be able to use it later. You may have one password (probably "kittens" or "123456") that you use over and over, but the other password — the second factor — can only be used once.

There's more going on here, but I think this is a safe simplification.

In reality, most sites use TOTP, so the password is valid many times, but only within a small window, usually between 30 and 90 seconds. A man-in-the-middle attack can get the full set of credentials, but they're only valid within a small period of time, so they can't be held in escrow for a delayed attack.

TOTP is built on top of HOTP, in which a counter is incremented with each use. HOTP can be a bit tricky to use: the verifier needs to know the counter being verified, or at least the lowest counter not previously seen, to prevent replay attacks. TOTP sidesteps this by replacing the counter with the current time. Both are built on a secret shared only between the remote server and the user.
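For concreteness, TOTP really is just HOTP with a derived counter. A minimal sketch of both, following RFC 4226 and RFC 6238 (SHA-1, six digits; the function names are mine):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the 8-byte counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238: HOTP with floor(unix_time / step) as the counter."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))
```

With the RFC test secret `b"12345678901234567890"`, `hotp(secret, 1)` yields `287082`, and `totp(secret, at=59)` yields the same code, because `59 // 30 == 1`.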

The hassle of TOTP, to once again grossly oversimplify, is that we tend to use a device other than our computer to store the shared secret. We put it on our phone, or a little keyfob, or the like. When we log in, we need to look at that, slide to unlock, find the site in a list, and then type the six digits into the web page. This is why we're often quick to click "trust this computer for a month."

A YubiKey is a little USB HID device that will enter your OTP for you. When you tap it, it emits a string of characters (because it's pretending to be a keyboard) and then an Enter keypress. Those characters are your one-time password. You don't need to unlock your phone, swipe things, or any of that. You just tap your USB port. Great!

Now, there are complications. Since your YubiKey doesn't know what site you're logging into, it needs to use a single secret. It doesn't have a clock, so it uses (something analogous to) HOTP. Remember that when you're using counter-mode OTP like HOTP, you need to know the minimum unused counter to prevent replay attacks. To allow sites to synchronize their "last seen counter" data, there's a "YubiCloud" server. The way you're expected to verify an OTP string is to send it to this server, which will reply with a yea/nay. If the OTP is cryptographically valid, but for an already-validated counter, it will be rejected. Once you've verified the OTP, you can't use it again. This is secure.

The problem is that not everybody wants to use a server run by some third party as part of their system. This is a pretty understandable desire, and it can be met: there are several implementations of verification servers that one can run to do local verification without talking to the cloud. You can run your own server, and if YubiCloud is down, you can still verify. You can also feel secure that if YubiCloud's logs are leaked, no one will learn the usage pattern of your users.

See, your users could be identified, more or less, from those logs. Because your YubiKey device has a single secret, your "user id" is visible across all the sites where you use it.

So, now we've got a world where many sites are running their own local verification servers. This creates a new problem: they are not synchronizing their "last seen counter" data. Imagine that a user has accounts with two different services, A and B. They log into A every day and into B rarely.

If site A is compromised, the OTPs used to log into it can be held in escrow to be used against B later. The fact that they have been verified already won't matter, because A and B do not synchronize their last seen counter data. If, like many users, this user is using the same email address and password, then there is nearly no security present. (Remember, everybody: don't use the same password everywhere.)
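The failure mode is easy to simulate. In this hedged sketch (real YubiKey OTPs are AES-encrypted tickets in modhex encoding, not bare HOTP, but the counter logic has the same shape), two verifiers share the key material but not the last-seen counter, so each accepts the same OTP once:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226 HOTP, standing in for the YubiKey's AES ticket scheme.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

class Verifier:
    """One standalone verification server: shared secret, *private* counter."""
    def __init__(self, secret: bytes):
        self.secret = secret
        self.last = 0          # highest counter this server has verified

    def verify(self, otp: str, window: int = 5) -> bool:
        # Accept only counters beyond the last one seen, within a window.
        for c in range(self.last + 1, self.last + 1 + window):
            if hmac.compare_digest(hotp(self.secret, c), otp):
                self.last = c
                return True
        return False

secret = b"12345678901234567890"
site_a, site_b = Verifier(secret), Verifier(secret)
otp = hotp(secret, 3)    # the user taps the key; its counter is now 3
site_a.verify(otp)       # True:  A accepts and records counter 3
site_a.verify(otp)       # False: A rejects the replay
site_b.verify(otp)       # True:  B never saw counter 3, so the escrowed OTP works
```

Synchronizing the `last` state across all consumers, which is what YubiCloud provides, is exactly what closes this replay window.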

As more sites run their own YubiKey OTP verification servers, the threat of having your OTPs held in escrow increases. Note that TOTP avoids this escrow attack not because it does away with counters, but because each service holds its own shared secret.

I haven't performed a very careful reading of the specifications involved here, but from what I can tell, my concerns are warranted. I would be delighted to hear that I am mistaken. I don't think that this is a critical exploit that makes the whole system worthless, but I do think it indicates that per-server-secret TOTP is more secure than YubiKey. Of course, it's also less convenient. Isn't that always the way?

YubiKey devices now also support FIDO U2F. That's based on public-key cryptography. The server issues a random challenge which the authenticator must sign, so replay is of negligible use, given challenges drawn from a large enough space. There's more to it than this, but it seems clearly to be a more secure system. Unfortunately, it's also more complex for services to implement. Isn't that always the way, too?

[update] I think what I'm finding is that since a YubiKey validation server is acting as a key escrow, you can't actually use one key against more than one server. You need to pick which one server you'll work against. This swings things back towards secure, but away from convenient. I will continue looking for documentation on the subject.
