Openmoko imminent

Looks like Openmoko's first mass-market product is almost here: the Neo FreeRunner is two weeks away from online purchasability.

I've been waiting for this phone for a long time. Completely open-source software stack, completely open-source hardware (even the CAD files are CC-licensed and publicly available). The thought of developing for this thing gets me excited in a way that Android never did (and the iPhone certainly doesn't). And any chance at striking back at the godawful state of the US cellphone market is very welcome.

ditz git integration plugin, in git

I've fleshed out ditz's plugin architecture and just added a plugin that ties it more closely to git. With this plugin enabled, you can tie issues to feature branches and automatically get a list of commits on that branch (until they're merged into master, at which point that becomes impossible, thanks to the magic of git).
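The commit-listing half of this boils down to asking git for everything reachable from the feature branch but not from the upstream branch. Here's a sketch of the idea (illustrative only, not the plugin's actual code; the method name is mine):

```ruby
# Sketch: list the subjects of commits on a feature branch that haven't
# been merged into master yet. Once the branch is merged, the range
# "master..branch" is empty -- which is why the list disappears after a
# merge, as described above.
def commits_on_branch(branch, upstream = "master")
  `git log --pretty=format:%s #{upstream}..#{branch}`.split("\n")
end
```

The `upstream..branch` range notation is what makes this a one-liner: it means "reachable from `branch` but not from `upstream`".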

Here's an example: Sup's configurable colors issue.

With these changes, Ditz is now firmly in the MVC camp. The models are created from YAML objects on disk; the views are an HTML renderer (using ERB) and a screen renderer (using puts technology); and the controller is the previously-mentioned operator.rb.

If you look at the plugin code you'll see that it needs to modify all three of these components. It adds fields to the Issue and Config objects, it adds output to the HTML and screen views, and it adds commands to the controller. The fact that it can do this in a few lines of code is pretty sweet.


git-wtf

I've released a fairly preliminary version of git-wtf to my collection of Git tools. This is something I've been working on recently to help wean myself off excessive gitk usage. From the description:

If you're on a feature branch, it tells you which version branches it's merged into. If you're on a version branch, it tells you which feature branches are merged in and which aren't. For every branch, if it's a tracking branch, it tells you which commits need to be pulled and which need to be pushed.

So basically, if you find yourself with a ton of branches (which invariably happens if you use feature branches in Git), or you find that keeping track of branch state is generally hard and that gitk is confusing as often as it is useful, this is the tool for you.

By default it assumes that any branches named "master", "next" or "edge" are version branches, and all other branches are feature branches. This is configurable, of course. It also warns, for tracked branches, if both the remote branch and the local branch have new commits, i.e. git pull would create a merge commit and you should rebase instead. If you don't care about this type of thing, this might be annoying.

The main addition I foresee in the near future is a warning if merging a feature branch into a version branch would collapse two version branches. Something like: when merging a feature branch into a version branch, warn if the feature branch contains commits reachable from any version branch and not reachable from master.

Rethinking Sup

It's been clear to me for a while now that Sup has been trying to be two very different things at once, thus pleasing no one and irritating everyone. There's Sup the email client, which is kind of the standard view of things. And then there's Sup the service: a threaded, fielded, searchable, labelable view into your email.

Sup the email client is lacking in many ways, as many people have been very quick to point out to me. The most obvious of these is that it refuses to, you know, actually write back any state to your mailstore. Specifically, read/unread state is never written anywhere except its internal index. Furthermore, mailstore rescans of almost any type are incredibly slow. These two features make using it in conjunction with other clients nearly impossible, which pretty much breaks one of the primary principles of tool design: don't break other tools. (Then there's also the problem of IMAP connections being terrifically slow and prone to crashes, but I lay most of that blame on IMAP being a crappy protocol and the Ruby IMAP libraries leaving a lot to be desired.)

Sup the service, on the other hand, suffers from the rather obvious flaw of not being exposed in any manner other than through Sup itself (and irb, I suppose).

I think the reason for this bizarre situation stems from my goal of fusing two very different things together: mutt and Gmail. Mutt is a client; Gmail is a service; Sup cherry-picks functionality, and lack of functionality, from both. Examples: I refused to have Sup write back to mailstores because Gmail didn't have to export to your local Maildir or mbox file, so why should I? (Well technically, I said I would accept patches that did that, but that I wouldn't be working on that feature myself. A fine distinction!) At the same time, I pooh-poohed the notion of a Sup server because mutt didn't have a server, and so why should Sup? And so on.

For Sup to evolve into something more useful than it is, and that appeals to a broader audience than it currently does, I believe it has to go down one of these routes completely. And I believe I know which one, and I believe this can be done without compromising the basic user experience, which I would be very reluctant to do because it has been lovingly tweaked over the years to be William's Ideal Email Experience.

The first option is to make Sup more of a client. In order to be a real email client, Sup must be able to interoperate with other clients. This means it has to write back all its state to the mailstores: read/unread status in whatever manner the mailstore supports, and probably something like all labels in a special header. It must also be able to do a full rescan in a fast manner, so that changes by other clients are reflected.

Right off the bat, that seems impossible, redundant with other software, and not that interesting. As I wrote in a sup-talk thread from a few months ago:
Sup is never going to be able to compete with programs like Mutt in terms of operations like "open up a mailstore of some format X, and mark a bunch of messages as read, and move a bunch of messages to this other mailstore." That's a tremendous amount of work to get right, get safe and get fast, and Mutt's already done it well, and I sure don't want to have to reimplement it.
Competing with mutt on grounds of speed, stability, and breadth of Mailstore usage is a recipe for fail. Ruby sure as shit ain't gonna come close to C for speed (at least until Rubinius gets LLVM working), and mutt's already hammered out all the quirkinesses with Exchange, etc.

But not only would it be impossible, it wouldn't be interesting. The things that make Sup valuable are the UI, the indexing and the flags, and those simply don't translate to external mailstores. Furthermore, Sup is aimed at the mailstores of the future (my present mailstores), which are so big that mutt can't handle them anyways.

So that leaves Sup as a service. And that's where things get interesting. But I'll save that for a later post.

Why do academics write shitty code?

Anyone who's ever been an academic code monkey knows that code written by pretty much anyone in academia sucks. As a group, they're notorious for the quality of their code. Which is weird. These guys are smart, and they know how to program. They just don't program very well.

My hypothesis—and this is based mostly on my own personal experience—is that this is because anyone who enjoys programming self-selects out of academia pretty quickly, because programming is way more fun than anything else. And if you don't find programming fun, you don't spend the time necessary to be a good programmer.

When I was in grad school (as recently as 2007!) I would, on a regular basis, skip class and skimp on homework to work on things like Sup. It's not that I didn't find my classes challenging or the material interesting. It's just that academic gratification paled in comparison to the immediate gratification of coding.

Once I realized this, it was like a big blinking sign saying "you don't belong here".

The type of person who does belong in grad school is the type of person who is smart and interested in things and gets obsessive, which are all attributes of good programmers, but who doesn't find programming that fun. For whatever reason. Consequently all their attention can be focused on research and course material, with none of the siren song of the computer to tempt them away. Except for, you know, the internets.

(Of course, I'm talking about CS grad school here. And I suspect things are more blurry with systems specialties. But for things like AI and NLP, it's definitely the case.)

So then, the top N signs that you don't belong in the CS PhD program:
  1. You wrote the machine status webapp.
  2. You wrote your own build system to run your experiments.
  3. You fantasize about rewriting TeX.
  4. You spend your time writing your own email application.
  5. You use make (or better yet, your own build system) to "build" the camera-ready versions of your papers.
Am I missing anything?

Trollop 1.8.1 released

Trollop 1.8.1 is out. This is a minor bugfix release, but 1.8, released a few weeks ago but not really advertised, adds new functionality, so I'm describing that here.

The new functionality is subcommand support, as seen in things like git and svn. This feature is actually trivial to use / implement: you give Trollop a list of stopwords. When it sees one, it stops parsing. The end. That's all you need.

Here's how you use it:
  1. Call Trollop::options with your global option specs. Pass it the list of subcommands as the stopwords. It will parse ARGV and stop on the subcommand.
  2. Parse the next word in ARGV as the subcommand, however you wish. ARGV.shift is the traditional choice.
  3. Call Trollop::options again with whatever command-specific options you want.
And that's it. Simple eh?
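The stopword mechanism itself really is that simple. Here's a toy illustration of the concept (this is not Trollop's code; the function name and subcommand list are mine): consume arguments until you hit a known subcommand, then hand whatever's left to a second, command-specific pass.

```ruby
# Toy illustration of stopword-based subcommand parsing (not Trollop itself).
SUBCOMMANDS = %w(add commit push)

# Shift arguments off the front of argv until we hit a stopword (or run
# out), returning the global arguments consumed so far.
def parse_until_stopword(argv, stopwords)
  globals = []
  globals << argv.shift until argv.empty? || stopwords.include?(argv.first)
  globals
end

argv = %w(--verbose add --force)
global_args = parse_until_stopword(argv, SUBCOMMANDS)  # => ["--verbose"]
cmd = argv.shift                                       # => "add"
# argv is now ["--force"], ready for a command-specific options pass
```

Trollop's three steps above map directly onto this: the first `Trollop::options` call plays the role of `parse_until_stopword`, and the second call handles the leftover `argv`.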

It continually amazes me how hard other people make option parsing. I think it's a holdover from their days of using C or Java. Take a look at the synopsis for optparse — it's a ridiculous amount of work for something simple. Or better yet, look at the synopsis for CmdParse. Having to make a class for each command is a clunky Java-ism. I'm sorry, but it's true. Subclassing is the only option for specializing code in Java; in Ruby we can be far more sophisticated. Take a look at Ditz's operator.rb for an example of a subcommand DSL.

The Many Styles of Git

One of Git's defining characteristics is its extreme (some say "ridiculous") flexibility. Even with all the fancy porcelain on top, what you get when you use Git is basically a general DAG builder for patches, plus the ability to apply labels to points within it.

It's interesting to see how this flexibility is put to use in practice. In my many years (ok, months) of Git usage, across a variety of projects, I've noticed several distinct styles of Git usage.

The most salient differences between styles are:
  1. How much they care about keeping the development history "beautiful", i.e. free of unnecessary merges. Git gives you two tools for adding your commits to a branch: merge and rebase. A rebase always preserves linearity; a merge has the potential for introducing non-linearity. Some projects are fanatical about this. Linus has been known to reject code because there were too many "test merges" (see the git-rerere man page). Other projects don't care at all.
  2. How much they make use of topic branches. Some projects do the majority of development through them. Some do all development directly onto master, branching only for long-term divergent development.
  3. How new commits come into the system: patches to mailing lists, merges from remote branches performed by the maintainer, or commits directly into the central repo.

Each of these decisions results in a different style of development. The styles I've encountered in the wild are:

  1. The just-like-SVN approach. Example project: Rubinius. Individual contributors have a commit bit, or they don't. Everyone works from local clones. If you have a commit bit, you push directly to origin/master. Non-committers can post patches to a mailing list or to IRC. There are some published branches, but they're for long-running lines of development that are eventually merged in and discarded. There's no real pickiness about merges in development history; rebasing is encouraged but not required.
  2. The Gitorious/Github approach. Example project: everything on those systems. Only the maintainer can commit to the central repository. Anyone can create a remote clone, push commits, and send a formal merge request through the system to the maintainer. All code additions (except the maintainer's own) are made through merges.
  3. The topic-based approach. Example projects: Git itself, the Linux kernel, Sup. Patches are submitted to the mailing list. The maintainer builds topic branches for each feature/bugfix in development and merges these into different "version branches", which correspond to different versions of the project such as stable/experimental/released version distinctions. Sub-maintainers are used when the project gets large, and their repositories are merged by the maintainer upon request.
  4. The remote topic branch approach. This was an experiment I tried with Ditz, and is roughly my attempt to do topic-based Git with Gitorious. In this approach, contributors, instead of submitting patches to a mailing list, maintain feature branches themselves. When a branch is updated, a merge request is sent to the maintainer, who merges the remote branch into a version branch.
I've listed the styles in order from least to most overhead. The just-like-SVN style requires very little knowledge of Git; at the other end of the spectrum, the topic-based approaches require a fair amount of branch management. For example, care has to be taken that merging a topic branch into a version branch doesn't accidentally merge another version branch in as well. (This type of complexity spurred me to write tools like git show-merges and the soon-to-be-released git wtf.)

The advantage of the topic-based approaches, of course, is that it's possible to maintain concurrent versions of the same program at different levels of stability, and to pick and choose which features go where.

Which style is best for you depends on what you're trying to accomplish. Like all good tools, what you get out of Git depends on what you're willing to put into it, and that's a decision you'll have to make.

a ruby puzzle

Name this function:
  inject({}) { |h, o| h[yield(o)] = o; h }.values


Two hints:
  1. It's a variant of a common stdlib function.
  2. The name has 7 characters, one of which is an underscore.
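To make the puzzle concrete, here's the expression wrapped in a placeholder method. (The name `mystery` is mine and is obviously not the answer; figuring out what it does is left intact.)

```ruby
module Enumerable
  def mystery  # the real name is the puzzle
    inject({}) { |h, o| h[yield(o)] = o; h }.values
  end
end

# One element survives per distinct block value -- the last one seen:
[1, 2, 3, 4].mystery { |x| x % 2 }    # => [3, 4]
%w(a bb c dd).mystery { |s| s.size }  # => ["c", "dd"]
```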

A survey of my rubyist colleagues suggests this is a hard question. Much
harder than writing the function given the name, which took about 10

Preliminary Rubinius inliner benchmarks

I've done some very preliminary benchmarking on the inliner I've been hacking into Rubinius.

For the very simple case it can handle so far—guaranteed dispatch to self, fixed number of arguments (no splats or defaults), no blocks—here's what we get for 10m iterations of a simple function calling another simple function:

                         user     system      total        real
uninlined-no-args   22.495877   0.000000  22.495877 ( 22.495978)
inlined-no-args     21.741561   0.000000  21.741561 ( 21.741581)
uninlined-4-args    27.742596   0.000000  27.742596 ( 27.742583)
inlined-4-args      24.593837   0.000000  24.593837 ( 24.593869)

So inlining results in a 3.5% speedup on method dispatch with no arguments, and a 12.8% speedup when there are four arguments.

Of course this is the very optimal case for the inliner. Guaranteed dispatch to self means that I don't even add any guard code, which would definitely slow things down. But this actually is a fairly common case that occurs whenever you use self accessors and any helper functions that don't have blocks or varargs.
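Concretely, the pattern in question looks something like this (plain illustrative Ruby, nothing Rubinius-specific; the class is invented for the example):

```ruby
# Guaranteed dispatch to self, fixed arity, no blocks or splats: the kind
# of call the inliner can currently handle.
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x, @y = x, y
  end

  def length_squared
    x * x + y * y  # four guaranteed self-dispatches to the accessors
  end
end
```

Every call to `x` and `y` inside `length_squared` is an implicit self accessor call of exactly the form described above, so a call like `Point.new(3, 4).length_squared` is full of inlining opportunities.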

And the real boost of inlining, presumably, is going to be in conjunction with JIT, since the CPU can pipeline the heck out of everything.

email spam status

For the past few years I've done something silly with my email: I've
accepted email for every address at my domain, and then
filtered it for spam before display. This means that, as far as any
spammer is concerned, every email address they tried to send to was a direct hit. So there's been a snowball effect:
everything they tried worked, and those addresses stayed on their lists,
and every variant they tried worked, and made it to the lists, etc.

Of course I didn't see most of it, but it all made the trip from spammer
to mail server and over fetchmail to my poor home computer, which would
have spamassassin crank for 20 minutes every, oh, 25 minutes or so.

I've finally changed to a sane situation, with my mail server on a VPS
and exim4 calling spamassassin at accept time. I've also set up a bunch
of rules for which email addresses I accept. (Just any old string
doesn't cut it any more.)

The result: over the past 9 days I've rejected 209,605 emails as spam.
That's about 16.17 a minute, or a little more than one every 4 seconds.

How many have I accepted? Including false negatives, 2441, or one every
5 minutes. (I am on several high-volume mailinglists.)

That's an S/N ratio of 1.16%!
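The arithmetic behind those numbers, for the record (the only inputs are the 9-day window and the two counts quoted above):

```ruby
rejected = 209_605
accepted = 2_441
minutes  = 9 * 24 * 60             # 12,960 minutes in 9 days

rejected / minutes.to_f            # => ~16.17 rejections per minute
minutes.to_f / accepted            # => ~5.3 minutes per accepted message
100.0 * accepted / rejected        # => ~1.16, the S/N ratio in percent
```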

Hopefully as time goes by, the rejections will start trimming addresses
off spammers' lists, and that will improve somewhat. Until then... at
least it's not my home computer doing the work any more.

more internet fame

omg infoq mentioned my name.

it's a brand new blog

Managing that old Hobix blog was way more work than it should've been.
So, I've started over and outsourced the work to someone else. Let's see
how it goes.
