It’s been well over three years since I had a real blog post on this domain, but it’s time to come back and maybe be something better. I’ve ported over all of the real posts from the old WordPress blog. Going forward this site is running Plerd, by the always worth watching Jason McIntosh. It’s Yet Another static-blog-from-Markdown generator, but its intended workflow matches cleanly with mine and I like its default themes. You can give his blog a follow at Fogknife.
I use Resilio Sync to copy the generated site up to my VPS, where I serve the static content. (Think Dropbox, but only going to your own systems.)
Edit: It’s worth pointing out that freezing versions in libraries is not generally recommended (though this is a matter of community contention). Further, publishing anything to the public registry with a shrinkwrap causes problems for people with private registry mirrors, as the shrinkwrap encodes the public registry URL. Obviously, if your organization has decided to freeze its libraries then publishing shrinkwraps to your own private registry (like npm Enterprise) or as private modules is quite alright.
Some people also feel that shrinkwraps MUST be checked into git, so that you can recreate published archives from the git repository. Some do not. Obviously you need to fit things to your workflow and your release practices.
npm’s shrinkwrap feature provides a key benefit when it comes to supporting published code– it fixes all of the dependency versions for all of your dependencies AND THEIR dependencies. This means that you can be sure that your end users are using exactly the same software as you are.
But shrinkwrap can also be frustrating to use, because as long as the npm-shrinkwrap.json file is lying around, npm won’t behave the way it usually does.
What the recipe below does is include the npm-shrinkwrap.json only in the published artifact. This means that your development installs will get newer versions of your dependencies to test, while your users will get exactly what you tested with at the time you published.
```
$ npm install -g npm-script
…
$ npm install --save-dev in-publish rimraf
…
$ npm-script set prepublish "in-publish && npm shrinkwrap || in-install"
set: prepublish
$ npm-script set postpublish "rimraf npm-shrinkwrap.json"
set: postpublish
$ npm publish

> your-module@1.0.0 prepublish …
> in-publish && npm shrinkwrap || in-install

npm WARN shrinkwrap Excluding devDependency: in-publish
npm WARN shrinkwrap Excluding devDependency: rimraf
wrote npm-shrinkwrap.json
+ your-module@1.0.0

> your-module@1.0.0 postpublish .
> rimraf npm-shrinkwrap.json
```
(Originally posted to the npm blog; the source is on GitHub.)
Hi everyone! I’m the new programmer at npm working on the CLI. I’m really excited that my first major project is going to be a substantial refactor to how npm handles dependency trees. We’ve been calling this thing multi-stage install but really it covers more than just installs.
Multi-stage installs will touch and improve all of the actions npm takes relating to dependencies and mutating your node_modules directory. This affects install, uninstall, dedupe, shrinkwrap and, obviously, dependencies (including optionalDependencies, peerDependencies, bundledDependencies and devDependencies).
The idea is simple enough: Build an in-memory model of how we want the node_modules directories to look. Compare that model to what’s on disk, producing a list of steps to change the disk version into the memory model. Finally, we execute the steps in the list.
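To make that concrete, here’s a minimal sketch of what the middle (diffing) step might look like. This is illustrative only; none of these names or structures are npm’s actual internals, and the trees here are just name-to-version maps:

```
// Sketch of the diff phase: compare the ideal, in-memory tree against
// what's on disk and produce the list of actions needed to reconcile them.
// (Illustrative only; not npm's real data structures.)
function diffTrees (ideal, actual) {
  var actions = []
  Object.keys(ideal).forEach(function (name) {
    if (!actual[name]) {
      actions.push({ action: 'add', name: name, version: ideal[name] })
    } else if (actual[name] !== ideal[name]) {
      actions.push({ action: 'update', name: name, version: ideal[name] })
    }
  })
  Object.keys(actual).forEach(function (name) {
    if (!ideal[name]) {
      actions.push({ action: 'remove', name: name })
    }
  })
  return actions
}

// e.g. diffTrees({ a: '1.0.1', b: '2.0.0' }, { a: '1.0.0', c: '0.1.0' })
// => update a to 1.0.1, add b@2.0.0, remove c
```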
The refactor gives several needed improvements: it gives us knowledge of the dependency tree and of what we need to do before we touch your node_modules directory. This means we can give simple errors, earlier, greatly improving the experience of this failure case. Further, deduping and recursive dependency resolution become easy to include. And by breaking the actual act of installing new modules down into functional pieces, we eliminate the opportunity for many of the race conditions that have plagued us recently.
Breaking changes: The refactor will likely result in a new major version as we will almost certainly be tweaking lifecycle script behavior. At the very least, we’ll be running each lifecycle step as its own stage in the multi-stage install.
But wait, there’s more! The refactor will make implementing a number of oft-requested features a lot easier– some of the issues we intend to address are:
So when will you get to see this? I don’t have a timeline yet– I’m still in the part of the project where everything I look at fractally expands into yet more work. You can follow along with progress on what will be its pull request.
If you’re interested in that level of detail, you may also be interested in reading @izs’s and @othiym23’s thoughts.
Abraxas is an end-to-end streaming Gearman client and worker library for Node.js. (Server implementation coming soon.)
https://www.npmjs.org/package/abraxas
Standout features:
Things I learned on this project:
Because everything is a stream, you can do things like this:
```
process.stdin.pipe(client.submitJob('toUpper')).pipe(process.stdout);
```
Or as a promise:
```
client.submitJob('toUpper', 'test string').then(function (result) {
    console.log("Upper:", result);
});
```
Or as a callback:
```
client.submitJob('toUpper', 'test string', function (error, result) {
    if (error) return console.error(error);
    console.log("Upper:", result);
});
```
Or mix and match:
```
process.stdin.pipe(client.submitJob('toUpper')).then(function (result) {
    console.log("Upper:", result);
});
```
MySQL’s ROUND has different behavior for DECIMALs than it does for FLOATs and DOUBLEs.
This is documented. The reason for it is not discussed, but it’s important: ROUND operates by altering the type of the expression to have the number of decimal places it was passed. And this matters because the type information associated with a DOUBLE will bleed… it taints the rest of the expression:
We’re going to start with some simple SQL:
```
mysql> SELECT 2.5 * 605e-2;
+--------------+
| 2.5 * 605e-2 |
+--------------+
|       15.125 |
+--------------+
1 row in set (0.00 sec)
```
Here 2.5 is a DECIMAL(2,1) and 605e-2 a DOUBLE, and the result is a DOUBLE. That’s all well and good…
But let’s try rounding 605e-2.
```
mysql> SELECT 2.5 * ROUND(605e-2,2);
+-----------------------+
| 2.5 * ROUND(605e-2,2) |
+-----------------------+
|                 15.12 |
+-----------------------+
1 row in set (0.00 sec)
```
So… what’s going on here? The ROUND part of the expression shouldn’t have changed its value, and in fact it hasn’t: calling `ROUND(605e-2,2)` returns 6.05 as expected. The problem is that the type of `ROUND(605e-2,2)` is `DOUBLE(19,2)`, and when that’s multiplied by 2.5 the resulting expression is still `DOUBLE(19,2)`. But the number of decimals on a float is for display purposes only– internally MySQL keeps full precision. We can prove that this way:
```
mysql> SELECT ROUND(2.5 * ROUND(605e-2,2),3);
+--------------------------------+
| ROUND(2.5 * ROUND(605e-2,2),3) |
+--------------------------------+
|                         15.125 |
+--------------------------------+
1 row in set (0.00 sec)
```
So yeah… MySQL lets you increase precision with ROUND– Postgres is looking mighty fine right now.
Here’s a brief survey of Node.js Gearman modules. I’ll have some analysis based on this later.
| Module | GitHub | Author | Last Commit | Open Issues | Tests | Docs | Client | Worker | Multi Server | Streams | Errors | Timeouts |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| gearman | gofullstack/gearman-node | smith, gearmanhq | 2011-05-02 | 4 | ☑ | ☑ | ☑ | ☐ | ☐ | ☐ | ☑ | ☐ |
| gearman-stream | Clever/gearman-stream | azylman, templaedhel | 2014-03-21 | 0 | ☑ | ☑ | ☑ | ☑ | ☐ | ☑ | ☐ | ☐ |
| Previously named gearman_stream, uses gearman-coffee | | | | | | | | | | | | |
| gearnode | andris9/gearnode | andris | 2013-02-25 | 1 | ☑ | ☑ | ☑ | ☑ | ☑ | ☐ | ☑ | ☐ |
| gearmanode | veny/GearmaNode | veny | 2014-03-20 | 4 | ☑ | ☑ | ☑ | ☑ | ☑ | ☐ | ☑ | ☑ |
| nodegears | enmand/nodegears | enmand | 2013-12-07 | 1 | ☐ | ☑ | ☑ | ☑ | ☐ | ☐ | ☐ | ☐ |
| que | vdemedes/que | vdemedes | 2012-07-02 | 0 | ☑ | ☑ | ☑ | ☑ | ☐ | ☐ | ☐ | ☐ |
| Uses node-gearman | | | | | | | | | | | | |
| gearman-js | mreinstein/gearman-js | mreinstein | 2013-11-03 | 4 | ☐ | ☐ | ☑ | ☑ | ☐ | ☐ | ☐ | ☐ |
| gearman2 | sazze/gearman-node | ksmithson | 2013-09-17 | 0 | ☑ | ☑ | ☑ | ☐ | ☐ | ☐ | ☑ | ☐ |
| Fork of gearman with no changes except name | | | | | | | | | | | | |
| node-gearman | andris9/node-gearman | andris | 2013-08-13 | 2 | ☑ | ☑ | ☑ | ☑ | ☐ | ☑ | ☑ | ☑ |
| node-gearman-ms | nachooya/node-gearman-ms | nachooya | 2013-11-18 | 0 | ☑ | ☑ | ☑ | ☑ | ☑ | ☑ | ☑ | ☑ |
| Fork of node-gearman | | | | | | | | | | | | |
| gearman-coffee | Clever/gearman-coffee | rgarcia, azylman, jonahkagan | 2013-03-19 | 2 | ☑ | ☑ | ☑ | ☑ | ☐ | ☐ | ☑ | ☐ |
| | magictoolbox/node-gearman | oleksiyk | 2012-12-03 | 0 | ☑ | ☐ | ☑ | ☑ | ☑ | ☐ | ☑ | ☑ |
The “every” command is a utility I wrote, inspired by the Unix “at” command. It adds commands to your crontab for you, using an easier-to-remember syntax. You can find it on GitHub, here: https://github.com/iarna/App-Every
I was reminded of it by this article on cron for Perl programmers who are Unix novices:
http://perltricks.com/article/43/2013/10/11/How-to-schedule-Perl-scripts-using-cron
Here’s how you’d write their examples using “every”:
```
$ every minute perl /path/to/Beacon.pl
SHELL=/bin/bash
PATH=/home/rebecca/bin:/opt/perl5/bin:/opt/perl5/perls/perl-5.16.2/bin:/opt/node/bin:/usr/local/bin:/usr/bin:/bin
*/1 * * * * cd "/home/rebecca"; perl /path/to/Beacon.pl

$ every 5 minutes perl /path/to/Beacon.pl
SHELL=/bin/bash
PATH=/home/rebecca/bin:/opt/perl5/bin:/opt/perl5/perls/perl-5.16.2/bin:/opt/node/bin:/usr/local/bin:/usr/bin:/bin
*/5 * * * * cd "/home/rebecca"; perl /path/to/Beacon.pl

$ every hour perl /path/to/Beacon.pl
SHELL=/bin/bash
PATH=/home/rebecca/bin:/opt/perl5/bin:/opt/perl5/perls/perl-5.16.2/bin:/opt/node/bin:/usr/local/bin:/usr/bin:/bin
49 */1 * * * cd "/home/rebecca"; perl /path/to/Beacon.pl

$ every 12 hours perl /path/to/Beacon.pl
SHELL=/bin/bash
PATH=/home/rebecca/bin:/opt/perl5/bin:/opt/perl5/perls/perl-5.16.2/bin:/opt/node/bin:/usr/local/bin:/usr/bin:/bin
49 */12 * * * cd "/home/rebecca"; perl /path/to/Beacon.pl
```
What’s more, there’s no need to specify the path to Perl, because unlike using crontab straight up, every maintains your PATH. Even better, you can use relative paths to refer to your script, eg:
```
$ every monday perl Beacon.pl
```
This works because every ensures that your command executes from the place you set it up. Just like “at”, it uses all of the same context as your normal shell.
This is just a little hack of mine to make it trivial for me to reflect any directory on my server as a website, either with a name I specify or a hash. It’s handy for all sorts of things; I initially created it to give myself an easy way to view remote coverage reports that had been generated as HTML. It’s also a nice way to view HTML docs bundled with a package, or any other random HTML you come across.
As part of setup, we create a file-based Apache rewrite map that rewrites slugs off of our domain based on rules from a text file. These text files are super simple: just the slug, followed by a space, and then what to rewrite to.
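For instance, the map file and the Apache side of things might look something like this. This is a sketch only; the map name, file path, and rule pattern here are made up for illustration:

```
# A couple of example map entries, one named slug and one hash slug:
#   /var/www/slugmap.txt:
#     coverage /home/rebecca/projects/mymodule/coverage
#     8843d7f9 /home/rebecca/src/some-package/docs

# The Apache side, using mod_rewrite's file-based RewriteMap:
RewriteEngine On
RewriteMap slugmap txt:/var/www/slugmap.txt
RewriteRule ^/(.+)$ ${slugmap:$1} [L]
```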
With the setup out of the way, we have a very simple shell script that uses Perl to figure out the absolute path from your relative one, and uses openssl to generate a hash from that. It uses the hash as the slug if you don’t specify one. Once it’s appended these to the rewrite map file, it tells you what your new URL is.
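In case it helps to see the shape of it, here’s a sketch of that script. To be clear, this is my summary of the idea, not the actual script from the repo; the script name, map file location, and domain are all stand-ins:

```
#!/bin/sh
# Hypothetical sketch of the idea, not the script from the repo.
# Usage: webify <directory> [slug]
# Absolute path via Perl's Cwd, hash via openssl if no slug was given.
dir=$(perl -MCwd -e 'print Cwd::abs_path(shift)' "${1:-.}")
slug=${2:-$(printf '%s' "$dir" | openssl dgst -sha1 | awk '{print $NF}')}
echo "$slug $dir" >> /var/www/slugmap.txt
echo "http://example.com/$slug"
```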
The example in the repo obviously isn’t generic, it refers to a host I control, but that’s easily editable. This is less software package and more stupid sysadmin hack.
Beyond the standard 80 and 443 to handle web traffic, Android also needs 5222 (Jabber) and 5228 (allegedly Google Marketplace, but needed for a phone to fully connect to the network and have functioning Google Talk).
Mail is also likely needed too of course, with SMTP on 25 and 465, POP on 110 and 995, and IMAP on 143 and 993. For some setups you may also need LDAP, 389 and 636. Exchange needs 135 and, in some esoteric configurations, NNTP with 119 and 563.
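If it’s useful to see the whole list put to work, here’s one hypothetical way it might translate into firewall rules; the chain and direction are assumptions, so adjust them to your own setup:

```
# Sketch only: allow forwarded traffic on the ports discussed above.
for port in 80 443 5222 5228 25 465 110 995 143 993 389 636 135 119 563; do
    iptables -A FORWARD -p tcp --dport "$port" -j ACCEPT
done
```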
I’ve been a busy little bee lately, and have published a handful of new CPAN modules— I’ll be posting about all of them, but to start things off, I bring you: AnyEvent::Capture
It adds a little command that makes it easy to call async APIs in a synchronous, but non-blocking, manner. Let’s start with an example of how you might do this without my shiny new module:
```
use AnyEvent::Socket qw( inet_aton );

my $cv = AE::cv;
inet_aton( 'localhost', sub { $cv->send(@_) });
my @ips = $cv->recv;
say join ".", unpack("C*") for @ips;
```
The above is not an uncommon pattern when using AnyEvent, especially in libraries, where your code should block, but you don’t want to block other event listeners. AnyEvent::Capture makes this pattern a lot cleaner:
```
use AnyEvent::Capture;
use AnyEvent::Socket qw( inet_aton );

my @ips = capture { inet_aton( 'localhost', shift ) };
say join ".", unpack("C*") for @ips;
```
The AnyEvent::DBus documentation provides another excellent example of just how awkward this can be:
```
use AnyEvent;
use AnyEvent::DBus;
use Net::DBus::Annotation qw(:call);

my $conn = Net::DBus->find; # always blocks :/
my $bus = $conn->get_bus_object;

my $quit = AE::cv;

$bus->ListNames (dbus_call_async)->set_notify (sub {
    for my $name (@{ $_[0]->get_result }) {
        print " $name\n";
    }
    $quit->send;
});

$quit->recv;
```
With AnyEvent::Capture this would be:
```
use AnyEvent;
use AnyEvent::Capture;
use AnyEvent::DBus;
use Net::DBus::Annotation qw(:call);

my $conn = Net::DBus->find; # always blocks :/
my $bus = $conn->get_bus_object;

my $reply = capture { $bus->ListNames(dbus_call_async)->set_notify(shift) };
for my $name (@{ $reply->get_result }) {
    print " $name\n";
}
```
We can also find similar examples in the Coro documentation, where rouse_cb/rouse_wait replace condvars:
```
sub wait_for_child($) {
    my ($pid) = @_;
    my $watcher = AnyEvent->child (pid => $pid, cb => Coro::rouse_cb);
    my ($rpid, $rstatus) = Coro::rouse_wait;
    $rstatus
}
```
Even so, for the common case AnyEvent::Capture provides a much cleaner interface, especially as it will manage the guard object for you:
```
sub wait_for_child($) {
    my ($pid) = @_;
    my ($rpid, $rstatus) = capture { AnyEvent->child (pid => $pid, cb => shift) };
    $rstatus
}
```