Whistlepig 0.2 released

Whistlepig 0.2 is out already. This time it should actually compile under OS X. I’m having a grand old time figuring out all the differences in the flags Ruby uses when compiling gems under different architectures. But not so grand that I want to learn automake/autoconf.

As part of making sure things were compiling on different platforms, I ran make test-integration and noticed that my dual-boot laptop runs it at 8000 k/s on Linux but only 6200 k/s on OS X. I was expecting a difference in Linux’s favor, but not quite that large! As another point of reference, my mid-range Linux desktop gives me 9500 k/s.

Whistlepig 0.1 released

Today I released the very first version of Whistlepig, a minimalist realtime full-text search index.

Side projects apparently take a lot longer when you have a job and a baby, because it’s taken me over 6 months to get to the point where I have something releasable. And there are so many obvious improvements to make. But all known bugs are squashed, and it’s good enough to use, so, it’s out.

The README has a good description of what Whistlepig is, so here I thought I’d talk about the why. Why write yet another inverted index?

The unfortunate fact is that you have too many choices already: Lucene, obviously, and its derivatives like SOLR, and if you’re shy of the JVM, Xapian and Sphinx. Ferret used to be a good choice in the Ruby world until Dave Balmain absconded and no one had the cojones to maintain his code. I’ve used each of these things.

But they are all very heavy-weight solutions, and they all suffer from what I call the “TREC mentality”. In early TREC competitions, you were given a big, static corpus, which you indexed at your leisure, and then you were given a bunch of queries, which were all long descriptions of what documents someone was interested in. It would be something like “I am interested in documents about Mayan architecture, but only during the pre-conquistador period, and specifically I am not interested in such and such” and so on. These competitions were great in that they spurred advances in search engineering, but the result is that almost every inverted index implementation today is optimized for precisely the case of static corpora and large queries.

In the intervening 30 years, the use case for full-text search has far exceeded the library-science-style applications of the early TREC competitions. There are many applications where you don’t need tf-idf scores and the Okapi formula, or even necessarily stemming. You just want recent things that match your query, and you value control and transparency over some kind of fuzzy natural language matching. Search in GMail (or Sup, of course!) comes to mind, or searching within the posts on this blog.

That’s one part of the reason why existing solutions are not ideal. The other part is that inverted indexes are so optimized for speed and for size that even little things like wanting documents from last to first can be drastically slower than using the standard ordering. For example, Sup wants documents in reverse chronological order; Xapian is fastest in increasing docid order; so we play crazy games to map dates to docids:

  DOCID_SCALE = 2.0**32
  TIME_SCALE = 2.0**27
  def assign_docid m, truncated_date
    # map dates onto docids with a logistic curve, so that increasing
    # docid order approximates reverse chronological order (MIDDLE_DATE
    # is the reference date, defined elsewhere in Sup)
    t = (truncated_date.to_i - MIDDLE_DATE.to_i).to_f
    docid = (DOCID_SCALE - DOCID_SCALE /
      (Math::E**(-(t/TIME_SCALE)) + 1)).to_i
    while docid > 0 and docid_exists? docid
      docid -= 1
    end
    docid > 0 ? docid : nil
  end
This snippet is courtesy of Rich Lane, who should be credited in history books as the first person to find a use for a logistic curve in an email client.
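To see the effect of that mapping concretely, here’s a toy check (MIDDLE_DATE here is an arbitrary reference date I picked for illustration; Sup defines its own): newer dates map to smaller docids, so walking docids in Xapian’s preferred increasing order visits messages newest-first.

```ruby
# Toy demonstration of the logistic date -> docid mapping.
# MIDDLE_DATE is an arbitrary reference date chosen for illustration.
DOCID_SCALE = 2.0**32
TIME_SCALE  = 2.0**27
MIDDLE_DATE = Time.utc(2011, 1, 1)

def date_to_docid date
  t = (date.to_i - MIDDLE_DATE.to_i).to_f
  (DOCID_SCALE - DOCID_SCALE / (Math::E**(-(t / TIME_SCALE)) + 1)).to_i
end

old_id = date_to_docid Time.utc(2005, 6, 1)
new_id = date_to_docid Time.utc(2011, 6, 1)

# Newer messages get smaller docids, so increasing docid order
# is (approximately) reverse chronological order.
puts new_id < old_id  # => true
```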

If you try and use something like Xapian or Sphinx for these applications, you have to play games like that for performance. And when new documents arrive, you have to play further games to get them into the index sooner rather than later. And all the while you’re turning off 90% of the features anyways.

So that leads us to the world of realtime search. It’s “realtime” in the sense that new documents arrive on the fly and must be made available to queries as soon as they arrive. And if you’re in that situation, you typically also care more about recent documents than older ones anyways. Those are the two tenets of realtime search: documents are available immediately, and recent documents are more important than older ones.

Whistlepig is my attempt to capture those two tenets in as few lines of code as possible, while still being reasonably performant. I do this by stripping away all the vestigial TREC functionality of relevance, ranking, sorting, tf-idf, etc. You get documents in LIFO order, and that’s it. Whistlepig doesn’t return anything besides the docid either: if you need something more than the id, you have to fit that into a separate store somewhere. It turns out if you throw that stuff away, you can accomplish the rest of the search problem without a tremendous amount of code. Like any C program, it’s 5% algorithm and 95% bookkeeping.
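That model is small enough to sketch in a few lines of plain Ruby. To be clear, this is not Whistlepig’s implementation or API, just an illustration of the idea: posting lists are kept newest-first, and a query returns nothing but bare docids in LIFO order.

```ruby
# A toy LIFO inverted index -- an illustration of the model,
# not Whistlepig's actual implementation or API.
class ToyIndex
  def initialize
    @postings = Hash.new { |h, k| h[k] = [] }  # term => docids, newest first
    @next_docid = 0
  end

  # Index a document (a list of tokens); returns its docid.
  # Documents are immutable once added.
  def add_document tokens
    docid = (@next_docid += 1)
    tokens.uniq.each { |t| @postings[t].unshift docid }  # prepend: LIFO
    docid
  end

  # Conjunctive query: docids containing every term, most recent first.
  # Returns only docids; anything else lives in a separate store.
  def search *terms
    lists = terms.map { |t| @postings[t] }
    return [] if lists.any?(&:empty?)
    lists.inject { |a, b| a & b }  # all lists sorted descending, so & keeps LIFO order
  end
end

index = ToyIndex.new
index.add_document %w(hello there bob)   # docid 1
index.add_document %w(hello again bob)   # docid 2
index.search "hello", "bob"              # => [2, 1]
```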

There is one wrinkle that I actually add to the model: I allow adding and removing labels from documents. Every other aspect of a document is fixed in Whistlepig—you can’t even delete it from the index once it’s been added—but labels are mutable. And of course you can intermingle labels with the other components of your query. Almost every realtime search application I can dream up would benefit from this functionality, so there you go.
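Here’s a toy sketch of how mutable labels on otherwise-immutable documents might look, again just an illustration of the idea and not Whistlepig’s API: a label behaves like a term whose posting list you’re allowed to change after indexing.

```ruby
# Toy sketch of mutable labels on immutable documents --
# an illustration of the idea, not Whistlepig's actual API.
class LabeledDocs
  def initialize
    @labels = Hash.new { |h, k| h[k] = {} }  # label => { docid => true }
  end

  # Labels are the one mutable part: add or remove at any time.
  def add_label docid, label
    @labels[label][docid] = true
  end

  def remove_label docid, label
    @labels[label].delete docid
  end

  # Docids carrying the label, most recent (highest docid) first,
  # ready to be intersected with a term query's results.
  def docids_with label
    @labels[label].keys.sort.reverse
  end
end

docs = LabeledDocs.new
docs.add_label 1, "unread"
docs.add_label 2, "unread"
docs.remove_label 1, "unread"
docs.docids_with "unread"  # => [2]
```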

My hope for Whistlepig is that it becomes the default choice for realtime search applications, especially in the Ruby world, which hasn’t had a good in-process search solution since Ferret bit the dust. And if I mysteriously disappear like Dave did, I also hope that the codebase is small enough and simple enough that taking it over doesn’t seem like a herculean effort.