Smoothing users’ votes

In a previous post I described how you can cook up a Bayesian framework that results in IMDB’s so-called “true Bayesian estimate”, a formula which, on its face, doesn’t look particularly Bayesian.

As my astute commenters pointed out, this formula has many simpler interpretations without needing to invoke the B word. For example, it’s a linear interpolation between two values:

  s = f(n) x̄ + (1 − f(n)) t

where x̄ is our mean vote, t is some smoothing target, and f(n) is the smoothing weight, a function of the number of votes n. f can be any function, as long as it increases with n, stays between 0 and 1, and is 0 when n is 0. Those constraints give you the right behavior: with no votes, your estimate is exactly t; as you add votes, it approaches x̄, and f controls how fast that happens.
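
For IMDB’s formula in particular (see “Understanding the ‘Bayesian Average’” below), the smoothing weight is f(n) = n / (n + m), where m is their vote threshold, and the target t is C, the global mean vote. That f is 0 when n is 0, increases with n, and approaches 1 as n grows, so it satisfies all three constraints.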

This formulation naturally leads to the following question: if I’m smoothing like this to deal with paucity-of-data issues, what value of t should I pick? IMDB uses C, the global movie mean. Intuitively that makes sense, but is it the right choice?

What’s nice about the expression for s above is that the behavior we’re most interested in is when n = 0, i.e. when there are no votes. In that case, s = t, because of how I’ve constrained f. So finding the best t is equivalent to finding the best s when n = 0.

Happily, we can answer the question of the best s analytically, at least if we’re happy to imagine that there is a “true” value of the movie, θ.

Given θ, we can define a loss function L(s, θ) that describes how bad we think a particular value of s is. But we don’t really know what θ is for any movie (if we did, we wouldn’t be bothering with any of this). So we can generalize that a step further and define a risk function quantifying our expected loss:

  R(s) = E[L(s, θ)]

the aggregate of the loss function across all possible values of θ, weighted by the probability of each value. This gives us the tool we really need to answer the question above: the s that minimizes our risk is the winner.

In the absence of any specific notions about errors, we’ll use the standard loss function for reals, squared-error loss: L(s, θ) = (s − θ)². Then it’s just a matter of churning the crank:

  R(s) = E[(s − θ)²] = E[θ²] − 2s E[θ] + s²

We can drop that first term, since we’re only interested in minimizing this as a function of s. To find the minimum, we set the derivative to zero:

  d/ds (s² − 2s E[θ]) = 2s − 2 E[θ] = 0,  so  s = E[θ]

Unsurprisingly, we see that the best estimate of θ under squared-error loss is E[θ], the mean of the distribution of θ. Since we’re interested in the case where n = 0, this implies that the best value to use for t is also the mean.

So IMDB’s choice of t = C makes sense: the mean vote over all your movies is a great estimate of the mean of the distribution of θ.

A couple concluding points:

  1. This answer is specific to squared-error loss; if you plug in another loss function, the optimal value for t might very well change. And you might actually have a specific model in mind for how “bad” mis-estimates are. Maybe over-estimates are worse than under-estimates, or something like that.
  2. The definition of the distribution of θ is actually left completely vague above. In fact we never even talk about it; we just use it implicitly in our E[θ] terms. So you should feel free to plug in (the mean of) whatever distribution you believe most accurately represents your product/movie/whatever. IMDB could arguably do better by plugging in per-category means, or something even fancier.
  3. IMDB is actually a particularly bad case because movie opinions are extremely subjective. If you’re serious about modeling very subjective things, you should be talking about multinomial models, Dirichlet priors, and the like.

But the take-home message is: in the absence of a specific loss function that you really believe, smoothing towards the mean isn’t just intuitive, it’s minimizing your risk.

What’s cooking in Sup next

The 0.7 release ain’t the only exciting Sup news. Here’s a list of interesting features that are currently cooking in Sup next, along with the associated branch name.

  • zsh completion for sup commandline commands, thanks to Ingmar Vanhassel. (zsh-completion)
  • Undo support for many commands, thanks to Mike Stipicevic. (undo-manager)
  • You can now remove labels from multiple tagged threads, thanks to Nicolas Pouillard, using the syntax -label. (multi-remove-labels)
  • Sup works on terminals with transparent backgrounds (and that’s fixed copy-and-paste for me too!), thanks to Mark Alexander. (default-colors)
  • Pressing ‘b’ now lets you roll buffers both forward and backward, also thanks to Nicolas Pouillard. (roll-buffers)
  • Duplicate messages (including messages you send to a mailing list, and then receive a copy of) should now have their labels merged, except for unread and inbox labels. So if you automatically label messages from mailing lists via the before-add-hook, that should work better for you now. (merge-labels)
  • Saving message state is now backgrounded, so pressing ‘$’ after reading a big thread shouldn’t interfere with your life. It still blocks when closing a buffer, though, so I have to make that work. (background-save)
  • Email address canonicalization has been removed, also thanks to Nicolas Pouillard. The mapping between email addresses and names is no longer maintained across multiple emails. (dont-canonicalize-email-addresses)

The canonicalization one is a weird one. There’s been a long-standing problem in Sup where names associated with email addresses are saved and reused. Unfortunately many automated systems like JIRA, evite, blogger, etc. will send you email on behalf of someone else, using the same email address but different names. The issue was compounded because Sup decided that longer names should always replace shorter ones, so receiving some spam claiming to be from your address but with a random name would have all sorts of crazy effects.

Addresses are still stored in the index, both for search purposes, and for thread-index-mode. (Otherwise thread-index-mode has to reread the headers from the message source, which is slow.) Once thread-view-mode is opened, the headers must be read from the source anyways, so the email address is updated to the correct version.

So, incoming new email should be fine. Sup will store whatever name is in the headers, and won’t do any canonicalization.

For older email, you can update the index manually by viewing the message in thread-view-mode and forcing Sup to re-save it, e.g. by changing the labels and then changing them back. Marking a message as unread and then reading it again is an easy way to accomplish this, at least for read messages.

You can also make judicious use of sup-sync to do this for all messages in your index.

Sup 0.7 released

Sup 0.7 has been released.

You can read the announcement here.

The big win in this release is that Ferret index corruption issues should now be fixed, thanks to an extensive round of locking and thread-safety work.

The other nice change is that text entry will now scroll to the right upon overflow, thanks to some arcane Curses magic.

Sharing Conflict Resolutions in Git

Development of Sup is done with Git. Sup follows a topic branch methodology: features and bugfixes typically start off as “topic” branches from master, and are merged into an “integration”/“version” branch next for integration testing. After n cycles of additional bugfix commits to the topic branch, and re-merges into next, the topic branches are finally merged down to master, to be included in the next release.
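
In Git terms, the lifecycle of each topic looks something like this (branch names hypothetical):

$ git checkout -b fix-frobnication master   # start the topic branch off master
[... commit, commit, commit ...]
$ git checkout next
$ git merge fix-frobnication                # integration-test it in next
[... more bugfix commits and re-merges, as needed ...]
$ git checkout master
$ git merge fix-frobnication                # graduate it to master for release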

I really like this approach because I think it evinces the real power of Git: that merges are so foolproof that I can pick and choose, on a feature-by-feature basis, which bits of code I want at each level of integration. That’s crazy cool. And users can stick to master if they want something stable, and next if they want the latest-and-greatest features.

The biggest problem I’ve had, though, is that long-lived topic branches often conflict with each other. This happens both when merging into next and when merging into master. I don’t think there’s a way around it; isolating features in this way has all the benefits above, but it also means that when they touch the same bits of code, you’ll get a conflict.

As a lazy maintainer, the biggest question I’ve had is: is there a way to push the burden of conflict resolution to the patch submitter? Is there a way for me to say: hey, your change conflicts with Bob’s. Can you resolve the conflict and send it to me?

One option I’ve considered is to have contributors publish not only their feature branches, but their next branch as well. Assuming they aren’t mucking about with their next branch otherwise, so that it contains just the merge commit, I can merge it into mine, and it should be a fast-forward that gets me the merge commit, conflict resolution and all.
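
On my end, that would look something like this (remote name and URL hypothetical):

$ git remote add bob git://example.com/bob/sup.git
$ git fetch bob
$ git checkout next
$ git merge bob/next   # fast-forwards to Bob’s merge commit, resolution and all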

But I don’t like that idea because, in every other case, I’m merging in the feature branches directly. Why should I suddenly start merging in next just because you have a conflict?

Furthermore, Sup primarily receives email contributions via git format-patch, and I do the dirty deed of sorting them into branches and merging things around. Requiring everyone to host a git repo whenever they produce a conflicting patch seems silly. (And git format-patch, unfortunately, produces nothing for merge commits, even if they have conflict resolution changes. Maybe there’s a good reason for this, or maybe not. I’m not sure.)

After some effort, and some git-talk discussion, I have a solution. And no, it doesn’t involve sharing git-rerere caches. (Which it seems that some people do!)

For the contributor: once you have resolved the conflict, do a git diff HEAD^. This will output the conflict resolution changes. Email that to the maintainer along with your patch.
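
Concretely, on the contributor’s side, that’s something like this (the output filename is just an example):

$ git checkout next
$ git merge my-feature-branch
[... resolve the conflict, git add, git commit ...]
$ git diff HEAD^ > resolution.patch

HEAD^ is the first parent of the merge commit, i.e. next as it was before the merge, so the diff captures everything the merge introduced, conflict resolution included.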

For the maintainer:

$ git checkout next
$ git merge <offending branch>
[... you have a conflict, yada yada ...]
$ git checkout next .
$ git apply --index <resolution patch filename>
$ git commit

Running git merge gets you to the point where you have a conflict. Running git checkout next . resets your working directory to the state it was in before you merged. And git apply --index applies the resolution changes and stages them, so the final git commit records the fully-resolved merge.

You lose authorship of the conflict resolution, but you can use git commit --author to set it.
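
Something like this (name and address hypothetical):

$ git commit --author="Bob Contributor <bob@example.com>"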

I think the ideal solution would be for git format-patch to produce something usable in this case. I see some traffic on the Git list that suggests this is being considered, so hopefully one day this rigmarole will not be necessary.

No MathML in WebKit

So apparently WebKit has no real MathML support. Empirically, it seems like you get some things, like Greek symbols, but sums and whatnot don’t appear. Oh well. Mac users, switch to Firefox, or ignore the math posts.

Trollop 1.13 released

I’ve released Trollop 1.13. This is a minor bugfix release. Arguments given with =’s and with spaces in the values are now parsed correctly. (E.g. --name="your mom".)
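
For example, both of these invocations now parse to the same value (program name hypothetical):

$ ./myprog --name="your mom"
$ ./myprog --name "your mom"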

Get it with a quick gem install trollop.

Whisper 0.3 released

I’ve released Whisper 0.3. This is mostly a bugfix release, with generally better email support, including support for MIME multipart email.

How to do it:

  1. sudo gem install whisper --source http://masanjin.net/
  2. whisper-init <blog directory>
  3. Follow the instructions!

git-wtf dd706855 released

I’ve released version dd706855 of git-wtf, available here: http://git-wt-commit.rubyforge.org/git-wtf

I’ve tweaked the output format so that branches that don’t exist on the remote server are displayed with ()‘s and those that do with []’s, and ~ is the new symbol for a merge that only occurs on the local side.

I think this produces a better display; lots more information per line of output.

I’ve also added a couple random options which you can discover by reading the source. :)

The big next step I’d like to take with this thing is to support multiple remote repos better. Currently it’s kinda specific to your origin repo.

Understanding the “Bayesian Average”

IMDB rates movies using a score they call the true Bayesian estimate (bottom of the page). I’m pretty sure that’s a made-up term. A couple other sites, like BoardGameGeek, use the same thing and call it a “Bayesian average”. I think that’s a made-up term, too, even though there’s a Wikipedia article on it.

Nonetheless, the formula is simple, and it has a nice interpretation. Here it is:

  W = (v / (v + m)) R + (m / (v + m)) C

where C is the mean vote across all movies, v is the number of votes, R is the mean rating for the movie, and m is the “minimum number of votes required to be listed in the top 250 (currently 1300)”.

The nice interpretation is this: pretend that, in addition to the v votes that users actually give a movie, you’re also throwing in m votes of score C each. In effect you’re pushing the scores towards the global average, by m votes.
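
For example, with m = 1300 and a hypothetical global mean of C = 6.7, a movie with v = 100 votes averaging R = 9.0 would be listed at

  W = (100 × 9.0 + 1300 × 6.7) / (100 + 1300) ≈ 6.86

i.e. with relatively few votes, the estimate stays close to the global average.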

Is this arbitrary? Actually, no. It’s the mean (which, for a Gaussian, is also the mode, i.e. the MAP estimate) of the posterior distribution you get when you have a Normal prior with mean C and precision m, and a Normal conditional with variance 1.0.

In other words, you’re starting with a belief that, in the absence of votes, a movie/boardgame should be ranked as average, and you’re assuming that user votes are normally distributed around the “true” score with variance 1.0. Then you’re looking at the posterior distribution (i.e. the probability distribution that arises as a result of those assumptions), and you’re picking the most likely value from that, which in the case of Gaussians is the mean.

Let’s see how that works.

To find the posterior distribution, we could work through the math, or we could just look at the Wikipedia article on conjugate priors. We’ll see that the posterior distribution of a Normal, when the prior is also a Normal, is a Normal with mean

  (τ₀ μ₀ + τ Σᵢ xᵢ) / (τ₀ + n τ)

where μ₀ and τ₀ are the mean and precision of the prior, respectively, τ is the precision of the vote distribution, and n is the number of votes. In the case of IMDB, we assumed above that τ = 1, so, writing x̄ for the mean of the n votes (so that Σᵢ xᵢ = n x̄), we have

  (τ₀ μ₀ + n x̄) / (τ₀ + n)

Comparing the IMDB equation to this, we can see that R above is x̄ here, v above is n here, C above is μ₀ here, and m above is the hyperparameter τ₀. So we know that even though IMDB says m is the “minimum number of votes required to be listed in the top 250 list”, that’s an arbitrary decision on their part: m can be anything and the formula still works. m is the precision of the prior distribution; as it gets bigger, the prior distribution gets “sharper”, and thus has more of an effect on the posterior distribution.
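
As a sanity check, substituting μ₀ = C, τ₀ = m, n = v and x̄ = R into the posterior mean gives

  (m C + v R) / (m + v)

which is exactly IMDB’s W, just with the terms written in the other order.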

Now the assumptions we made to get to this point are almost laughable. If nothing else, we know that Gaussians are unbounded and continuous, and user votes on IMDB are integers in the range 1–10. The interesting take-away message here is that even though we made a lot of assumptions above that were laughably wrong, the end result is a reasonable formula with a nice, intuitive meaning.

Whisper 0.2 released

I’ve released Whisper 0.2. Beyond some minor bugfixes, the big enhancement in this one is that the “post as micro mailing list” idea now works. The comments on every post form a mailing list, with everyone who commented auto-receiving everyone else’s comments, and all replies being archived on the mailing list.

Of course you can set your reply settings on a per-comment basis to disable this, or to restrict it to only send immediate replies to your comment. The only thing you can’t do so far is change your settings (e.g. from all to none) once you’ve made them. That will be coming later.

Still to go: trackbacks, I guess, and maaaaybe add textarea comments.

Get it: sudo gem install whisper --source http://masanjin.net/
