Interview with Daniel Ellsberg

There’s a very interesting interview with Daniel Ellsberg, of Pentagon Papers fame, in the Economist. Some choice quotes:

“Obama has now prosecuted three people [for whistleblowing]. Two of whom are being prosecuted for acts carried out under George Bush and for which Bush chose not to prosecute[…]. So Obama’s famous position of not looking backward seems to apply only to crimes like torture or illegal warrantless surveillance. He’s given absolute amnesty to the officials of the Bush administration. But in the case of Thomas Drake, who told a reporter about a billion-and-half-dollar waste at the NSA, and in the case of Shamai Leibowitz, who says he exposed acts to a blogger that he regarded as illegal, Obama was willing to look backward and prosecute.”

“I think we can assume that those who don’t use Wikileaks’s technology to get the information out can be assured of prosecution. I have to assume that if I had now put out the Pentagon Papers as I did, using that now outmoded technology of Xerox, Obama would prosecute me to the full extent of the law.”

“A number of the acts undertaken against me, which were illegal at the time, forcing Nixon to obstruct justice by concealing them, have been made part of our explicit policy. Not just the warrantless wire-tapping, but the raid of my psychoanalyst’s office is now regarded as legal under the Patriot Act as a sneak-and-peek operation.”

“Nixon brought a dozen CIA assets […] with orders to incapacitate me totally. That was done covertly and was one of the factors that led to Nixon’s resignation. Obama has now announced, through his then-head of intelligence, Dennis Blair, that we have a list of those who can be assassinated by special-forces operators. And this president has even approved names of American citizens on that list. Now that’s an astonishing change, not in our covert policy—presidents have been involved in covert assassination plots repeatedly—but to announce that publicly as a supposedly legitimate policy. That negates the Magna Carta. It’s a kind of power that no king of England has asserted since John I.”

“[W]hen I said that Julian Assange is in some danger, others said that’s ridiculous, he’s too prominent, no president would do such a thing. Well, I’m not saying that it’s very likely, but I am saying that the chance of Julian Assange coming to harm from the US president should be zero, and it isn’t. To say that it’s ridiculous is simply unfounded. My own experience proves that. Because after all, I was as well-known at the time, when that assault was made, as Julian Assange is today.”

Pinker on Critical Thinking

[D]on’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

— Steven Pinker

(or Twitter!)

Calculating string display width in Ruby

Most programmers are by now familiar with the difference between the number of bytes in a string and the number of characters. Depending on the string’s encoding, the relationship between these two measures can be either trivially computable or complicated and compute-heavy.

With the advent of Ruby 1.9, the Ruby world at last has this distinction formally encoded at the language level: String#bytesize is the number of bytes in the string, and String#length and String#size the number of characters.
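For example, with a UTF-8 source encoding:

```ruby
# coding: utf-8

s = "héllo"        # "é" is a single character, but two bytes in UTF-8
puts s.length      # => 5
puts s.bytesize    # => 6
```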

But when you’re writing console applications, there’s a third measure you have to worry about: the width of the string on the display. ASCII characters take up one column when displayed on screen, but super-ASCII characters, such as Chinese, Japanese and Korean characters, can take up multiple columns. This display width is not trivially computable from the byte size of the character.
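A proper implementation consults the Unicode East Asian Width tables, but as a rough sketch of the idea only, here's a naive estimator that treats a few common CJK ranges as double-width. The ranges below are illustrative and incomplete:

```ruby
## Naive display-width estimate: characters in a few common CJK
## blocks count as two columns, everything else as one. Real code
## should use the full Unicode East Asian Width property.
WIDE_RANGES = [
  0x1100..0x115F, # Hangul Jamo
  0x2E80..0xA4CF, # CJK radicals, Kana, CJK ideographs
  0xAC00..0xD7A3, # Hangul syllables
  0xF900..0xFAFF, # CJK compatibility ideographs
  0xFF00..0xFF60, # fullwidth forms
]

def naive_display_width str
  str.each_char.inject(0) do |w, c|
    w + (WIDE_RANGES.any? { |r| r.include? c.ord } ? 2 : 1)
  end
end
```

So `naive_display_width("日本語")` is 6, even though the string is only three characters long.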

Finding the display width of a string is critical to any kind of console application that cares about the width of the screen, i.e. is not simply printing stuff and letting the terminal wrap. Personally, I’ve been needing it forever:

  1. Trollop needs it because it tries to format the help screen nicely.

  2. Sup needs it in a million places because it is a full-fledged console application and people use it for reading mail in all sorts of funny languages.

The actual mechanics of how to compute string width make for an interesting lesson in UNIX archaeology, but suffice it to say that I’ve travelled the path for you, with help from Tanaka Akira of pp fame, and I am happy to announce the release of the Ruby console gem.

The console gem currently provides these two methods:

  • Console.display_width: calculates the display width of a string
  • Console.display_slice: returns a substring according to display offset and display width parameters.

There is one horrible caveat outstanding, which is that I haven’t managed to get it to work on Ruby 1.8. Patches to this effect are most welcome, as are, of course, comments and suggestions.

Try it out!

Trollop up to 8k downloads

UPDATE 2010-05-19: 12k downloads. Whoohoo!

I just noticed that Trollop 1.16.2 has over 8,000 downloads. That’s roughly an order of magnitude more than Sup. So, yay. Of course I suspect it’s largely thanks to the fact that it’s now a dependency of Cucumber. But I’ll take what I can get.

After all, it’s only been the best option parser for Ruby for three whole years.

Some people get it:

In my experience, OptionParser has been frustrating to use for several reasons, one of them being the poor documentation — hence your question. William Morgan, the author of Trollop, shows no mercy in his criticism (for example, see http://stackoverflow.com/questions/897630/ and http://trollop.rubyforge.org). I can’t dispute what he says.

And some people are merrily producing horrible alternatives.

Trollop 1.16.2 released

Trollop 1.16.2 has been out for a while now, but I realized I (heavens!) haven’t yet blogged about it.

Exciting features include:

  1. Scientific notation is now supported for floating-point arguments, thanks to Will Fitzgerald.
  2. Hoe dependency dropped. Finally.
  3. Some refactoring of the standard exception-handling logic, making it easier to customize Trollop’s behavior. For example, check this out:

## p is a Trollop::Parser, created elsewhere
opts = Trollop::with_standard_exception_handling p do
  p.parse ARGV
  raise Trollop::HelpNeeded if ARGV.empty? # show help screen
end

This example shows the help screen if there are no arguments. Previous to 1.16, this was difficult to do, since the standard exception-handling was baked into Trollop::options. The help message would automatically be displayed if -h was given, but programmatically invoking it on demand was difficult.

So I’ve refactored the standard exception handling into with_standard_exception_handling, and if you want fine-grained control, instead of calling Trollop::options, you now have the option to call Trollop#parse from within with_standard_exception_handling.

You don’t really need any of this stuff, of course, unless you’re really picky about how your exception-handling works. But hey, that’s why I wrote Trollop in the first place….

Simple breakpoints in Ruby

Sometimes it’s nice to have a simple breakpointing function that will dump you into an interactive session with all your local variables in place.

There are more sophisticated solutions for the world of multiple servers and daemonized code, but after some fighting with IRB, I find myself using this little snippet of code in many projects:

require 'irb'

module IRB
  def IRB.start_with_binding binding
    IRB.setup __FILE__
    w = WorkSpace.new binding
    irb = Irb.new w
    @CONF[:MAIN_CONTEXT] = irb.context
    irb.eval_input
  end
end

## call me like this: breakpoint binding
def breakpoint binding; IRB.start_with_binding binding end

As the comment states, you can invoke the breakpoint at any point by inserting a breakpoint binding statement anywhere in your code. Once that line is reached, you’ll be dumped into an IRB session with local variables intact. Quitting the session resumes execution.

Obviously with this method I’m having you pass in your binding explicitly. There are fancier tricks for capturing the binding of the caller (involving kernel trace functions and continuations), but I’m opting for the simpler solution here.

Works with Ruby 1.9, of course.

How to rank products based on user input

This is a post I’ve been meaning to write for over a year, since I first came across an article entitled How Not to Sort by Average Rating (made popular on, amongst other places, Hacker News). Recent mentions of that article have finally spurred me to finish this off. Well, better late than never.

The Context

Let’s say you have a lot of things. Let’s say your users can vote on how much they like those things. And let’s say that you want to rank those things, so that you can list them by how good they are. How should you do it?

You can find examples of this pattern all over the web. Amazon ranks products, Hacker News ranks links, and Yelp ranks businesses, all based on reviews or votes from their users.

These sites all rank the same way: they compile user votes into a score, and items are ranked by that score. So how the score is calculated determines entirely how the items are ranked.1

So how might we generate such a score? The most obvious approach is to simply use the average of the users’ votes. And, in fact, that is what these sites do. (Hacker News also mixes in a time-based measure of freshness.)

Unfortunately, there’s a big problem with using the average: when you only have a few votes, the average can take on extreme values. For example, if you have only one vote, the average is that vote. (And if you have no votes… well, what do you do then?) The result is that low-vote items can easily occupy the extreme points on the list—at the very top or very bottom.

You see this problem every time you go to Amazon and sort by average customer review. The top-most items are a morass of single-vote products, and you have to wade through them before you get to the good stuff.

So what would be a better ranking? Well, we’d probably like to see an item with 50 5-star votes before we see an item with a single 5-star vote. Intuitively speaking, the more votes, the more certain we are of an item’s usefulness, so we should be focusing our attention on high-score, high-vote-count items.

And, although we’re less likely to be concerned with what’s at the bottom of the list, the symmetric case also seems right: an item with 50 1-star votes should be ranked lower than one with a single 1-star vote, since the 50-vote item is more certain to be bad.

Taken together, these two observations suggest that the best ordering is one where the low-vote items are placed somewhere in the middle, and where high-confidence items occupy the extremes—good at the top, and bad at the bottom.

(We could develop this argument more formally, by talking about things like expected utility / risk, but I’m going to leave it just intuitive for this post.)

The Problem with Confidence Intervals

The “How Not To” article linked above suggests that the right way to rank products to avoid the problem with the average is to construct a confidence interval around the average vote, and rank by the lower bound of that interval. This confidence interval has the following behavior: when the amount of data is large, it is a tight bound around the average vote; when the amount of data is small, it is a very loose bound around the average.

Taking the lower bound of the confidence interval is fine when the number of votes for an item is large: the lower bound will be close to the average itself. But, just as with the average, the problem with this approach occurs when the number of votes is small. In this case, the lower bound of the confidence interval will be very low.

The result is that new items with few votes will always be ranked alongside the bad items with many votes, at the bottom of the list. In other words, if you rank by the lower bound of a confidence interval, you will rank an item with no votes alongside an item with 1,000 bad votes.
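To make this concrete, the interval the article advocates is the Wilson score interval for binary (up/down) votes; a sketch of its lower bound:

```ruby
## Wilson score interval lower bound for binary votes.
## pos: number of positive votes, n: total votes.
## z = 1.96 corresponds to 95% confidence.
def wilson_lower_bound pos, n, z=1.96
  return 0.0 if n.zero?
  phat = pos.to_f / n
  (phat + z*z/(2*n) -
    z * Math.sqrt((phat * (1 - phat) + z*z/(4*n)) / n)) / (1 + z*z/n)
end
```

With 90 positive votes out of 100 the bound is a little over 0.8, close to the raw average; but a single positive vote scores only about 0.21, and no votes at all scores zero: exactly the "new items start at the bottom" behavior described above.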

If you use this approach, every new item will start out at the bottom of the list.

Enter Bayesian Statistics

Is there a better alternative? Fortunately, this kind of paucity-of-data problem is tailor-made for Bayesian statistics.

A Bayesian approach gives us a framework to model not only the votes themselves, but also a prior belief of what we think an item looks like before seeing any votes. We can use this prior to do smoothing: take the average vote, which can be jumpy, and “smooth” it towards the prior, which is steady. And the smoothing is done in such a way that, with a small number of votes, the score mostly reflects the prior, and as more votes arrive, the score moves towards the average vote. In other words, the votes are eventually able to “override” the prior when there are enough of them.

Being able to incorporate a prior has two advantages. First, it gives us a principled way of modeling what should happen with zero votes. We are no longer at the mercy of the confidence interval; we can decide explicitly how low-vote products should be treated. Second, it provides a convenient mechanism for us to plug in any extra information that we have on hand. This extra information becomes the prior belief.

But what kind of extra information do we have? Equivalently, how do we determine the prior? Consider: when you decide to watch a Coen brothers movie, you make that decision based on past Coen brothers movies. When you buy a Sony product, you make a decision based on what you know of the brand.

In general, when you know nothing about an item, you can generalize from information about related items, items from similar sources, etc. We will do the same thing to create a prior.

Let’s see how we can use Bayesian statistics to do smoothing towards a meaningful prior.

Solution #1: the “True Bayesian Average”

The first solution, made popular by IMDB, is the so-called “true Bayesian average” (although so far as I know that terminology does not actually come from statistics). Using TBA, to compute a score \(s\), we do:

\[ s = \frac{C m + n \bar{v}}{C + n} \]

where \(\bar{v}\) is the average vote for the item, \(n\) is the number of votes, \(m\) is the smoothing target, and \(C\) is a tuning parameter that controls how quickly the score moves away from \(m\) as the number of votes increases. (You can read more about the Bayesian interpretation of the ‘true Bayesian average’.)

This formula has a nice interpretation: \(C\) is a pseudo-count, a number of “pseudo” votes, each for exactly the value \(m\). These votes are automatically added to the votes for every item, and then we take the average of the pseudo-votes and “real” votes combined.

In this formula, the prior is \(m\), the smoothing target. What value should we choose for it? It turns out that if we set \(m\) to the average vote over all items (or over some representative class of items), we get the behavior we wanted above: low-vote items start life near the middle of the herd, not near the bottom, and make their way up or down the list as the votes come in. (You can read a more rigorous argument about why using the global average is a good target for smoothing.)

The TBA is easy to implement, and it’s trivial to adapt an existing system that already uses average votes to use it.
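Indeed, a sketch takes only a few lines. Writing avg for the item's average vote, n for its vote count, m for the smoothing target and c for the tuning parameter (the variable names are mine):

```ruby
## "True Bayesian average": smooth the item's average vote towards
## the target m. c controls how many real votes it takes to pull
## the score away from m.
def true_bayesian_average avg, n, m, c
  (c * m + n * avg) / (c + n).to_f
end
```

With zero votes the score is exactly m; as n grows, the score approaches the raw average.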

But we can do better.

The Problem with the “True Bayesian Average”

The problem with the TBA is that it assumes a Normal distribution over user votes.

Taken at face value, we know this assumption is bad for two reasons: one, we have discrete, not continuous, votes, and two, we have no actual expectation that votes will be normally distributed, except in the limit. But in reality, neither of these are significant problems. We will always have some modeling assumptions of dubious correctness for the sake of tractability, and these are within the realm of plausibility.

The bigger problem with the assumption of Normality is that it forces us to model items as if they had some “true” score, sitting at the mean of a Normal distribution. But we know some items are simply divisive. Some Amazon products accrue large numbers of both 5-star and 1-star reviews. Some movies are loved and hated in equal measure. Being able to model those kinds of distributions accurately would be to our advantage, and a Normal distribution won’t let us do that.

Ultimately, of course, we need to produce an ordering, so we need to condense everything we know about an item into a single value.2 But it would be to our advantage to do this in an explicit, controllable manner.

This suggests we would like a solution which decomposes scoring into three parts:

  1. Our prior beliefs about an item;
  2. The users’ votes on an item; and
  3. The mapping between the vote histogram and a score.

Such a system could both account for paucity-of-data problems, and provide us with explicit control on how the items are ranked.

Let’s see how we can do this.

Solution #2: Dirichlet Priors and an Explicit Value Function

To accomplish these goals, we have to forsake the normal distribution for the multinomial. A multinomial model will let us represent the complete histogram of the votes received for each item. For example, for Amazon products, a multinomial model will capture how many one-star votes, how many two-star votes, and so on, a product had. If there are \(k\) types of votes that users can assign to an item, we can parameterize (i.e. fully specify) the corresponding multinomial with a \(k\)-dimensional vector.

(I’m glossing over one technicality, which is that a multinomial really measures only the relative proportion, not the actual counts, of the histogram. But this detail won’t matter in our case.)

To fit the multinomial into a Bayesian framework, we will model our prior belief as a Dirichlet distribution. Just as a multinomial distribution represents a single \(k\)-dimensional vector, a Dirichlet distribution is a probability distribution over all such \(k\)-dimensional vectors. In effect, it is a distribution over all possible vote histograms for an item.

We use the Dirichlet because it is a conjugate prior of the multinomial. This means that when we use Bayes’s rule to combine the Dirichlet with the multinomial (we’ll see how to do this below), the resulting distribution is also a Dirichlet. This property is very convenient because it allows us to keep the representation to a single form, and use the same technique iteratively—we start with a Dirichlet, and every new set of votes we incorporate leaves us with a Dirichlet.

The Dirichlet is a complicated distribution. Luckily, the properties we’re interested in make it very simple to use.

For one, it is parameterized by a histogram just as well as a multinomial is. That is, if we write \(\mathrm{Dir}(\alpha_1, \ldots, \alpha_k)\) for a Dirichlet distribution and \(\mathrm{Mult}(\theta_1, \ldots, \theta_k)\) for a multinomial, then \(\mathrm{Mult}(7, 3, \ldots)\) describes a multinomial distribution corresponding to a vote histogram for a particular item with 7 one-star votes, 3 two-star votes, etc., and \(\mathrm{Dir}(7, 3, \ldots)\) describes a Dirichlet. Of course, the meaning of the Dirichlet is quite different from the meaning of the multinomial, and for the sake of brevity we won’t go into how to interpret it here. But for our purposes, this is a nice property because it means that specifying a Dirichlet given a vote histogram is trivial.

The other very handy property of Dirichlets is that when they’re combined with multinomials using Bayes’s Rule, not only is the result a Dirichlet, it’s a Dirichlet that’s easily specifiable in terms of the two input distributions. Recall that Bayes’s Rule states:

\[ P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)} \]

where \(D\) is the set of observed votes and \(\theta\) is a possible model of the item. In our case, that means that \(P(\theta)\) is our prior belief, \(P(D \mid \theta)\) is what our actual votes look like given the model, \(P(\theta \mid D)\) is our updated model, and \(P(D)\) is some normalizing constant, which we can ignore for now.

If we call our prior belief the Dirichlet \(\mathrm{Dir}(\alpha_1, \ldots, \alpha_k)\) and our conditional distribution the multinomial \(\mathrm{Mult}(v_1, \ldots, v_k)\), and skip over all the difficult math, then we can turn Bayes’s rule into a rule for updating our model:

\[ P(\theta \mid D) = \mathrm{Dir}(\alpha_1 + v_1, \ldots, \alpha_k + v_k) \]

In other words, to create the posterior (that is, resulting) distribution, all we need to do is add the two input histograms. (Note that \(P(D)\) falls away.)

So now we have a way of taking our prior information, incorporating the user votes, and finding the resulting distribution. The resulting distribution is also a vote histogram for an item, smoothed, just as in the TBA case, towards the prior belief histogram. (And, in fact, the same pseudo-count analogy applies: the \(\alpha_i\) are the pseudo-votes, the \(v_i\) the real votes, and we’re simply adding them together. Neat, huh?)

But what do we do with that?

The final step of the puzzle is to transform this distribution into a single score for ranking. The best way to do this is to take a function that describes the score of a particular vote histogram, and compute the expected value of that function under our distribution. The expected value represents the function as evaluated over every possible value in the distribution, weighted by the probability of seeing that value. In effect, it captures the function as applied to the entire distribution, and packages it up nicely into a single number for us, ready to be used as a score.

To compute the expected value, in general, you are required to solve a nasty integral. Happily, we can take advantage of one final property of the Dirichlet, which is that, if your Dirichlet is parameterized by \(\alpha_1, \ldots, \alpha_k\), the expected value of the proportion of votes in category \(i\) is simply:

\[ E[\theta_i] = \frac{\alpha_i}{\sum_{j=1}^{k} \alpha_j} \]

In other words, the expected value of the proportion of votes in the \(i\)th bucket is simply the proportion of that parameter over the total sum of the parameters.
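That property is simple enough to state in code (a sketch, with the Dirichlet parameters kept as a plain array):

```ruby
## Expected proportion of votes in each category under a Dirichlet:
## each parameter divided by the sum of all parameters.
def expected_proportions alpha
  total = alpha.inject(:+).to_f
  alpha.map { |a| a / total }
end
```

For example, `expected_proportions([1, 1, 2])` is `[0.25, 0.25, 0.5]`.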

If we stick to linear scoring functions, we can take advantage of the linearity of expectation and use this result, avoiding anything more complicated than multiplication.

For example, sticking with our Amazon case, let’s use the “obvious” function:

\[ f(\theta) = \sum_{i=1}^{5} i\, \theta_i \]

where we give each one-star vote a 1, each 2-star vote a 2, and so on, and just sum these up to produce a score. Because this is linear, the expected value of this function under our Dirichlet is simply:

\[ E[f] = \frac{\sum_{i=1}^{5} i\, \alpha_i}{\sum_{j=1}^{5} \alpha_j} \]

Of course, this simple function has many issues. For example, are these weights really what you want in practice? They imply that a five-star vote is worth exactly five times a one-star vote, which may not be the case. And it does not do anything special with “divisive” items, which you might want to up- or down-rank.

But, this framework will allow you to plug in any function you want at that point, with the caveat that non-linear functions may involve some nasty integration.3

So there you have it! We’ve achieved all three of our desired goals. We can take a prior belief, any number of user votes (including none at all), and a custom scoring function, and combine them all to produce a ranking.

The final question is what prior belief we should use. Intuitively (and reasoning by analogy from the TBA case above), if we use the mean vote histogram over all items, scaled down by some tuning parameter, we should have the desired behavior introduced in the first section, where low-vote items start near the middle of the list, high-vote high-score items are at the top, and high-vote low-score items at the bottom. Proof of the correctness of this statement is left as an exercise to the reader. (Hint: see the related blog post.)

At Long Last, Some Code

Let’s wrap it up with some Ruby code. If you’ve skipped to this point, congratulations—all of the above was a very long, very discursive way of arriving at something that is, in fact, very simple:

## assumes 5 possible vote categories, but easily adaptable

DEFAULT_PRIOR = [2, 2, 2, 2, 2]

## input is a five-element array of integers
## output is a score between 1.0 and 5.0
def score votes, prior=DEFAULT_PRIOR
  posterior = votes.zip(prior).map { |a, b| a + b }
  sum = posterior.inject { |a, b| a + b }
  posterior.
    map.with_index { |v, i| (i + 1) * v }.
    inject { |a, b| a + b }.
    to_f / sum
end

If you play around with this method, you can see:

  • With no votes, you get a score in the middle of the range, 3.0.
  • As you add high scores, the value increases, and as you add low scores, it decreases.
  • If the score is, say, 4.0, adding more 4-star votes doesn’t change it.
  • If you make the default prior bigger, you need more votes to move away from 3.0.
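Restating the score method above as a self-contained snippet, those behaviors can be checked directly:

```ruby
DEFAULT_PRIOR = [2, 2, 2, 2, 2]

## same as above: add the prior pseudo-votes, then take the
## expected score under the resulting Dirichlet
def score votes, prior=DEFAULT_PRIOR
  posterior = votes.zip(prior).map { |a, b| a + b }
  sum = posterior.inject { |a, b| a + b }
  posterior.
    map.with_index { |v, i| (i + 1) * v }.
    inject { |a, b| a + b }.
    to_f / sum
end

score [0, 0, 0, 0, 0]   # => 3.0: no votes lands mid-range
score [0, 0, 0, 0, 10]  # => 4.0: ten 5-star votes pull it up
score [0, 0, 0, 6, 10]  # => 4.0: more 4-star votes leave a 4.0 unchanged
score [10, 0, 0, 0, 0]  # => 2.0: ten 1-star votes pull it down
```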

Enjoy!

1 Strictly speaking, there’s no reason you need to generate an intermediate score if all you’re interested in is a ranking. You could generate an ordering of the items directly and skip the middle step. But on these sites, the intermediate score value has useful semantics and is exposed to the user anyways.

2 Basically. Don’t get nitpicky with me.

3 You can probably get away with just running the function on the posterior parameters \(\alpha\) directly, i.e. using a “point estimate” instead of an expected value. Be sure to understand the implications.

On the hell that is MathML

It’s been 12 years since the first MathML spec was released, and math on the web is still largely unsupported and incredibly complicated to get right. If that isn’t a spec failure, I don’t know what is.

Personally, after a year of doing my best to do MathML the “right” way, I’ve given up trying to be correct. I’m now using MathJax to render math, a solution that is, while absolutely horrible, far less horrible than before. In particular, people can actually see math.

But William! you might say, all you need to do for math on the web is to generate MathML. Firefox has supported MathML for years and years! And that’s true, BUT:

  1. Random browsers simply don’t support MathML (e.g. Chrome, Safari).
  2. The browsers that do support it (e.g. Firefox) support it only in strict compliance mode.

And strict compliance mode is an absolute hell for everyone involved. You must produce valid XHTML and send your content as text/xml instead of text/html. Any kind of non-conforming XML produces a horrible cryptic red error message instead of displaying the page. You will begin to live in fear of screwing things up whenever you make a change to your layout, and god help you if you have any kind of UGC or templating or any kind of non-trivial content generation.

MathJax smooths that away. You can embed MathML or even LaTeX math markup directly into a text/html document, and it will do the magic to turn it into math in the browser. If the browser has native MathML support, then great, it will use that. If the browser has web font support, then great, you get pretty fonts. And if not, the degradation is graceful. And you get the nice error-robust rendering that makes HTML nice.

I’m still using Ritex to translate LaTeX math into MathML, because I like the syntax and because I didn’t feel like going back and translating all the math. I’ve changed Whisper to emit text/html as the content type. So now I should have the best of all possible worlds.

Let’s try it:

If you see math above, I have succeeded. If not, I have failed.

Trollop 1.15 released

I’ve just released Trollop 1.15, which fixes an irritating misfeature pointed out by Rafael Sevilla: when Trollop runs out of characters when it’s generating short option names, e.g. when you have a lot of options, it shouldn’t throw an exception and die. It should just continue peacefully.

Trollop’s reign of domination continues!

Ruby, Ncurses and blocked threads

If you’re writing a multithreaded Ruby program that uses ncurses, you might be curious why your program stops running when you call Ncurses.getch. Sup has been plagued by this issue since 2005. Thankfully, I think I finally understand it.

The problem is that there is a bug in the Ruby ncurses library such that using blocking input will block all Ruby threads when it waits for user input, instead of just the calling thread. So Ncurses.getch will cause everything to grind to a halt. This is probably due to the library not releasing the GVL when blocking on stdin.

This bug is present in the latest rubygems version of ncurses, 0.9.1. It has been fixed in the latest libncurses-ruby Debian packages (1.1-3).

To see if you have a buggy, blocking version of the ruby ncurses library, run this program:

require 'rubygems'
require 'ncurses'
require 'thread'

Ncurses.initscr
Ncurses.noecho
Ncurses.cbreak
Ncurses.curs_set 0

Thread.new do
  sleep 0.1
  Ncurses.stdscr.mvaddstr 0, 0, "library is GOOD."
end

begin
  Ncurses.stdscr.mvaddstr 0, 0, "library is BAD."
  Ncurses.getch
ensure
  Ncurses.curs_set 1
  Ncurses.endwin
  puts "bye"
end

(I purposely require rubygems in there to load the rubygems ncurses library if it’s present; you can drop this if you don’t use rubygems.)

There are two workarounds to this problem. First, you can simply tell ncurses to use nonblocking input:

Ncurses.nodelay Ncurses.stdscr, true

But if you’re writing a multithreaded app, you probably aren’t interested in nonblocking input, unless you want a nasty polling loop.

The better choice is to add a call to IO.select before getch, which will block the calling thread until there’s an actual keypress, and then allow getch to pick it up:

if IO.select [$stdin], nil, nil, 1
  Ncurses.getch
end

IO.select requires a delay, so you’ll have to handle the periodic nils that it generates. But the background threads should no longer block.

There is one further complication, which is that you won’t be able to receive the pseudo-keypresses Ncurses emits when the terminal size changes, since they don’t show up on $stdin and thus the select won’t pass. The solution is to install your own signal handler:

trap("WINCH") { ... handle sigwinch  ... }

You will still see the resize events coming from getch, but only once the user presses a key. You can drop them at this point.

That should be enough to make any multithreaded Ruby ncurses app able to function. Of course, once everyone’s using a fixed version of the ncurses libraries, you can do away with the select and set nodelay to false.

(One last hint for the future: I’ve found it necessary to set it to false before every call to getch; otherwise a ctrl-c will magically change it back to nonblocking mode. Not sure why.)
