It’s been well over three years since I had a real blog post on this domain, but it’s time to come back, and maybe be something better. I’ve ported over all of the real posts from the old WordPress blog. Going forward this site is running Plerd, by the always-worth-watching Jason McIntosh. It’s Yet Another static-blog-from-markdown generator, but its intended workflow matches cleanly with mine and I like its default themes. You can give his blog a follow at Fogknife.

I use Resilio Sync to copy the generated site up to my VPS, where I serve the static content. (Think Dropbox, but only going to your own systems.)

Edit: It’s worth pointing out that freezing versions in libraries is not generally recommended (though this is a matter of community contention). Further, publishing anything to the public registry with a shrinkwrap causes problems for people with private registry mirrors, as the shrinkwrap encodes the public registry URL. Obviously, if your organization has decided to freeze its libraries, then publishing shrinkwraps to your own private registry (like npme) or as private modules is quite alright.

Some people also feel that shrinkwraps MUST be checked into git, so that you can recreate published archives from the git repository. Some do not. Obviously you need to fit things to your workflow and your release practices.

npm’s shrinkwrap feature provides a key benefit when it comes to supporting published code: it fixes all of the dependency versions for all of your dependencies AND THEIR dependencies. This means that you can be sure that your end users are using exactly the same software as you are.

But shrinkwrap can also be frustrating to use, because as long as the npm-shrinkwrap.json file is lying around, npm won’t behave the way it usually does.

What this does is include the npm-shrinkwrap.json only in the published artifact. This means that in development you will get newer versions of your dependencies to test, while your users will get exactly what you tested with at the time you published.

How To

```
$ npm install -g npm-script
…
$ npm install --save-dev in-publish rimraf
…
$ npm-script set prepublish "in-publish && npm shrinkwrap || in-install"
set: prepublish
$ npm-script set postpublish "rimraf npm-shrinkwrap.json"
set: postpublish
$ npm publish

> your-module@1.0.0 prepublish …
> in-publish && npm shrinkwrap || in-install

npm WARN shrinkwrap Excluding devDependency: in-publish
npm WARN shrinkwrap Excluding devDependency: rimraf
wrote npm-shrinkwrap.json
+ your-module@1.0.0

> your-module@1.0.0 postpublish .
> rimraf npm-shrinkwrap.json
```


(Originally posted to the npm blog; the source is on GitHub.)

Hi everyone! I’m the new programmer at npm working on the CLI. I’m really excited that my first major project is going to be a substantial refactor to how npm handles dependency trees. We’ve been calling this thing multi-stage install but really it covers more than just installs.

Multi-stage installs will touch and improve all of the actions npm takes relating to dependencies and mutating your node_modules directory. This affects install, uninstall, dedupe, shrinkwrap and, obviously, dependencies (including optionalDependencies, peerDependencies, bundledDependencies and devDependencies).

The idea is simple enough: Build an in-memory model of how we want the node_modules directories to look. Compare that model to what’s on disk, producing a list of steps to change the disk version into the memory model. Finally, we execute the steps in the list.
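As a toy illustration of that shape (this is my own sketch, not npm’s code, using flat name→version maps in place of the real nested node_modules tree):

```javascript
// Sketch of the plan/diff/apply shape: compute a list of actions that turn
// the on-disk state into the desired in-memory model, then apply them.
function planActions(desired, onDisk) {
  const actions = [];
  for (const [name, version] of Object.entries(desired)) {
    if (!(name in onDisk)) actions.push({ op: 'add', name, version });
    else if (onDisk[name] !== version) actions.push({ op: 'update', name, version });
  }
  for (const name of Object.keys(onDisk)) {
    if (!(name in desired)) actions.push({ op: 'remove', name });
  }
  return actions;
}

// Example: one add, one update, one removal.
const steps = planActions(
  { abbrev: '1.0.5', semver: '4.1.0' },
  { semver: '4.0.0', rimraf: '2.2.8' }
);
console.log(steps);
```

Because the whole plan exists before anything touches disk, errors can be reported up front and the apply phase becomes a plain list of mechanical steps.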

The refactor gives several needed improvements: It gives us knowledge of the dependency tree, and of what we need to do, before we touch your node_modules directory. This means we can give simple errors, earlier, greatly improving the experience of this failure case. Further, deduping and recursive dependency resolution become easy to include. And by breaking the actual act of installing new modules down into functional pieces, we eliminate the opportunity for many of the race conditions that have plagued us recently.

Breaking changes: The refactor will likely result in a new major version as we will almost certainly be tweaking lifecycle script behavior. At the very least, we’ll be running each lifecycle step as its own stage in the multi-stage install.

But wait, there’s more! The refactor will make implementing a number of oft-requested features a lot easier. Some of the issues we intend to address are:

  • Progress bars! #1257, #5340
  • Automatic/intrinsic dedupe, across all module source types #4761, #5827
  • Errors if we can’t find compatible versions MUCH earlier, before any changes to your node_modules directory have happened #5107
  • Better diagnostics when peerDependencies produce impossible to resolve scenarios.
  • Better use of bundledDependencies
  • Recursively resolving missing dependencies #1341
  • Better shrinkwrap #2649
  • Fixes for some icky edge cases #3124, #5698, #5655, #5400
  • Better shrinkwrap support, including updating of the shrinkwrap file when you use --save on your installs and uninstalls #5448, #5779
  • Closer to transactional installs #5984

So when will you get to see this? I don’t have a timeline yet; I’m still in the part of the project where everything I look at fractally expands into yet more work. You can follow along with progress on what will become its pull request.

If you’re interested in that level of detail, you may also be interested in reading @izs’s and @othiym23’s thoughts.

Abraxas is an end-to-end streaming Gearman client and worker library for Node.js. (Server implementation coming soon.)

Standout features:

  • Support for workers handling multiple jobs at the same time over a single connection. This is super useful if your jobs tend to be bound by external resources (eg databases).
  • Built streaming end-to-end from the start, due to being built on gearman-packet.
  • Almost all APIs support natural callback, stream, and promise style usage.
  • Support for the gearman admin commands to query server status.
  • Delayed background job execution built in, when used with recent versions of the C++ gearmand.

Things I learned on this project:

  • Nothing in the protocol stops clients and workers from sharing the same connection. This was imposed by arbitrary library restrictions.
  • In fact, the plain text admin protocol can be included cleanly on the same connection as the binary protocol.
  • Nothing stops workers from handling multiple jobs at the same time, except, again, arbitrary library restrictions.
  • The protocol documentation is out of date compared to the C++ gearmand implementation; notably, SUBMIT_JOB_EPOCH has been implemented. I’ve begun updating the protocol documentation here:

Because everything is a stream, you can do things like this:


Or as a promise:

client.submitJob('toUpper', 'test string').then(function (result) {
    console.log("Upper:", result);
});

Or as a callback:

client.submitJob('toUpper', 'test string', function(error, result) {
    if (error) console.error(error);
    console.log("Upper:", result);
});

Or mix and match:

process.stdin.pipe(client.submitJob('toUpper')).then(function(result) {
    console.log("Upper:", result);
});

MySQL’s ROUND has different behavior for DECIMALs than it does for FLOATs and DOUBLEs.

This is documented. The reason for this is not discussed but it’s important. ROUND operates by altering the type of the expression to have the number of decimal places that it was passed. And this matters because the type information associated with a DOUBLE will bleed… it taints the rest of the expression:

We’re going to start with some simple SQL:

mysql> SELECT 2.5 * 605e-2;
+--------------+
| 2.5 * 605e-2 |
+--------------+
|       15.125 |
+--------------+
1 row in set (0.00 sec)

Here 2.5 is a DECIMAL(2,1) and 605e-2 a DOUBLE, and the result is a DOUBLE. That’s all well and good…

But let’s try rounding 605e-2.

mysql> SELECT 2.5 * ROUND(605e-2,2);
+-----------------------+
| 2.5 * ROUND(605e-2,2) |
+-----------------------+
|                 15.12 |
+-----------------------+
1 row in set (0.00 sec)

So… what’s going on here? The ROUND part of the expression shouldn’t have changed its value. And in fact, it hasn’t: calling ROUND(605e-2,2) returns 6.05 as expected. The problem here is that the type of ROUND(605e-2,2) is DOUBLE(19,2), and when that’s multiplied by 2.5 the resulting expression is still DOUBLE(19,2). But the number of decimals on a float is for display purposes only; internally MySQL keeps full precision… we can prove that this way:

mysql> SELECT ROUND(2.5 * ROUND(605e-2,2),3);
+--------------------------------+
| ROUND(2.5 * ROUND(605e-2,2),3) |
+--------------------------------+
|                         15.125 |
+--------------------------------+
1 row in set (0.00 sec)

So yeah… MySQL lets you increase precision with ROUND; Postgres is looking mighty fine right now.

Here’s a brief survey of node.js Gearman modules. I’ll have some analysis based on this later.

| Module | Repository | Authors | Last commit | Open issues |
|--------|------------|---------|-------------|-------------|
| gearman | gofullstack/gearman-node | smith, gearmanhq | 2011-05-02 | 4 |
| gearman-stream | Clever/gearman-stream | azylman, templaedhel | 2014-03-21 | 0 |
| gearman-coffee | Clever/gearman-coffee | rgarcia, azylman, jonahkagan | 2013-03-19 | 2 |

Previously named gearman_stream, uses gearman-coffee
Uses node-gearman
Fork of gearman with no changes except name
Fork of node-gearman

The “every” command is one that I wrote, inspired by the Unix “at” command.  It adds commands to your crontab for you, using an easier-to-remember syntax.  You can find it on GitHub, here:

I was reminded of it by this article on cron for Perl programmers who are Unix novices:

Here’s how you’d write their examples using “every”:

$ every minute perl /path/to/
*/1 * * * * cd "/home/rebecca";  perl /path/to/

$ every 5 minutes perl /path/to/
*/5 * * * * cd "/home/rebecca";  perl /path/to/

$ every hour perl /path/to/
49 */1 * * * cd "/home/rebecca";  perl /path/to/

$ every 12 hours perl /path/to/
49 */12 * * * cd "/home/rebecca";  perl /path/to/

What’s more, there’s no need to specify the path to Perl, because unlike using crontab straight up, every maintains your PATH.  Even better, you can use relative paths to refer to your script, e.g.:

$ every monday perl

This works because every ensures that it executes from the place you set it up in.  Just like “at”, it uses all of the same context as your normal shell.


This is just a little hack of mine to make it trivial for me to reflect any directory on my server as a website, either with a name I specify or a hash. Handy for all sorts of things; I initially created it to give myself an easy way to view remote coverage reports that were generated as HTML. It’s also a nice way to view HTML docs bundled with a package, or any other random HTML you come across.

How it works

As part of setup, we create a file-based Apache rewrite map that rewrites slugs off of our domain based on rules from a text file. These text files are super simple: just the slug, followed by a space, and then what to rewrite to.
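For illustration, the Apache side might look something like this (the map name, file paths, and rule here are invented examples, not the actual config from the repo):

```apache
# httpd.conf sketch: rewrite /<slug>/... to whatever directory the map says.
RewriteEngine On
RewriteMap sites txt:/var/www/sites.map
RewriteRule ^/([^/]+)(/.*)?$ ${sites:$1}$2 [L]

# /var/www/sites.map -- each line is "slug target-directory", e.g.:
#   coverage /home/me/project/coverage
#   d41d8cd9 /home/me/docs/html
```

Because `txt:` maps are re-read when the file changes, appending a line is all it takes to publish a new directory.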

With the setup out of the way, we have a very simple shell script that uses Perl to figure out the absolute path from your relative one, and uses openssl to generate a hash from that. It uses the hash as the slug if you don’t specify one.  Once it’s appended these to the rewrite map file, it tells you what your new URL is.
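A minimal sketch of such a script, with invented paths and a made-up map location (this is my guess at the shape, not the repo’s actual code):

```shell
#!/bin/sh
# Usage: reflect <directory> [slug]
# MAP would really point at the Apache rewrite-map file.
MAP=${MAP:-/tmp/sites.map}
dir=${1:-.}
# Resolve the relative path to an absolute one using Perl's Cwd.
abs=$(perl -MCwd=abs_path -e 'print abs_path(shift)' "$dir")
# Default slug: a hash of the absolute path, generated with openssl.
slug=${2:-$(printf '%s' "$abs" | openssl dgst -md5 | awk '{print $NF}')}
echo "$slug $abs" >> "$MAP"
echo "http://example.com/$slug"
```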

The example in the repo obviously isn’t generic (it refers to a host I control), but that’s easily editable.  This is less software package and more stupid sysadmin hack.

Beyond the standard 80 and 443 to handle web traffic, Android also needs 5222 (Jabber) and 5228 (allegedly Google Marketplace, but needed for a phone to fully connect to the network and have functioning Google Talk).

Mail is also likely needed, of course, with SMTP 25 and 465, POP 110 and 995, and IMAP 143 and 993. For some setups you may also need LDAP, 389 and 636. Exchange needs 135 and, in some esoteric configurations, NNTP with 119 and 563.

I’ve been a busy little bee lately, and have published a handful of new CPAN modules. I’ll be posting about all of them, but to start things off, I bring you: AnyEvent::Capture

It adds a little command to make calling async APIs in a synchronous, but non-blocking manner easy. Let’s start with an example of how you might do this without my shiny new module:

 use v5.10;
 use AnyEvent;
 use AnyEvent::Socket qw( inet_aton );

 my $cv = AE::cv;
 inet_aton( 'localhost', sub { $cv->send(@_) });
 my @ips = $cv->recv;
 say join ".", unpack("C*") for @ips;

The above is not an uncommon pattern when using AnyEvent, especially in libraries, where your code should block, but you don’t want to block other event listeners. AnyEvent::Capture makes this pattern a lot cleaner:

use v5.10;
use AnyEvent::Capture;
use AnyEvent::Socket qw( inet_aton );

my @ips = capture { inet_aton( 'localhost', shift ) };
say join ".", unpack("C*") for @ips;

The AnyEvent::DBus documentation provides another excellent example of just how awkward this can be:

use AnyEvent;
use AnyEvent::DBus;
use Net::DBus::Annotation qw(:call);

my $conn = Net::DBus->find; # always blocks :/
my $bus  = $conn->get_bus_object;

my $quit = AE::cv;

$bus->ListNames (dbus_call_async)->set_notify (sub {
   for my $name (@{ $_[0]->get_result }) {
      print "  $name\n";
   }
   $quit->send;
});

$quit->recv;


With AnyEvent::Capture this would be:

use AnyEvent;
use AnyEvent::Capture;
use AnyEvent::DBus;
use Net::DBus::Annotation qw(:call);

my $conn = Net::DBus->find; # always blocks :/
my $bus  = $conn->get_bus_object;

my $reply = capture { $bus->ListNames(dbus_call_async)->set_notify(shift) };
for my $name (@{ $reply->get_result }) {
   print "  $name\n";
}

We can also find similar examples in the Coro documentation, where rouse_cb/rouse_wait replace condvars:

sub wait_for_child($) {
    my ($pid) = @_;

    my $watcher = AnyEvent->child (pid => $pid, cb => Coro::rouse_cb);

    my ($rpid, $rstatus) = Coro::rouse_wait;
}

Even still, for the common case, AnyEvent::Capture provides a much cleaner interface, especially as it will manage the guard object for you.

sub wait_for_child($) {
    my ($pid) = @_;

    my($rpid, $rstatus) = capture { AnyEvent->child (pid => $pid, cb => shift) };
}