Thomas Bayes Gets Ink

October 8, 2014

I read “Belief, Bias, and Bayes.” The write-up appeared in the open source friendly Guardian newspaper. Bayes and his methods are more popular than ever. Instead of meeting the good churchman in a statistics class, there he is near the editorial page, on the Web, and in the blogosphere.

This particular write-up is surprisingly gentle toward the bane of many university students. Here’s the explanation of the method in the article:

I find it easier to be concrete. So imagine I have a bag containing three stones; two blue and one red. Without looking, and in random order, you and I pick, and keep, one stone each. What are the chances I have blue and you have red? I could work them out two ways. If you have the only red stone (which you have a one-in-three chance of having got, without knowing anything about my choice) then I must have a blue (one-in-one). The probability is ⅓ × 1 = ⅓, a third. On the other hand, if we know I have a blue stone (probability two-in-three) then there is a 50:50 chance you have a red stone. The probability is ⅔ × ½ = ⅓ again. The answers had to come out the same, since both ways of working it out describe the same result. The “probability of me having blue if you have red, multiplied by the probability of you having red”, has to be the same as “the probability of you having red if I have blue multiplied by the probability of me having blue”. Abstracted, that’s Bayes’ Theorem.
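The stone arithmetic quoted above is easy to check by brute force. The sketch below (my own, not from the article) enumerates every equally likely way the two draws can come out and confirms that both factorizations give the same joint probability of one third:

```python
from fractions import Fraction
from itertools import permutations

# Bag of three stones: two blue, one red. You and I each draw one,
# in random order. Each ordered pair (my stone, your stone) drawn
# from distinct positions in the bag is equally likely.
stones = ["blue", "blue", "red"]
outcomes = list(permutations(stones, 2))  # 6 equally likely outcomes

def prob(event):
    """Exact probability of an event over the equally likely outcomes."""
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

# Joint probability: I have blue AND you have red.
p_joint = prob(lambda o: o[0] == "blue" and o[1] == "red")

# Way 1: P(you red) * P(me blue | you red)
p_you_red = prob(lambda o: o[1] == "red")                # 1/3
p_me_blue_given_you_red = p_joint / p_you_red            # 1 (forced)

# Way 2: P(me blue) * P(you red | me blue)
p_me_blue = prob(lambda o: o[0] == "blue")               # 2/3
p_you_red_given_me_blue = p_joint / p_me_blue            # 1/2

# Both factorizations agree, which is Bayes' theorem in miniature.
assert p_joint == p_you_red * p_me_blue_given_you_red
assert p_joint == p_me_blue * p_you_red_given_me_blue
print(p_joint)  # 1/3
```

Using `Fraction` keeps the arithmetic exact, so the ⅓ = ⅓ identity holds without floating point fuzz.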

There you go. There was a particularly useful quotation in the article; to wit:

One of the things that gets people fired up is that Bayesian statistics can introduce a level of subjectivity into the scientific process that some scientists see as unacceptable.

Spot on. I recall one failed webmaster, a publisher of “expert opinions,” who fulminated against this “flaw” in the method. I made a brief effort to explain the benefits of the method, but he would have none of it. The biases baked into his “expert” brain were more correct than any mathematical reasoning. That’s what makes this person a “real” expert and, of course, a failed webmaster.

The Guardian article comes at being resistant to a procedure this way:

I guess Bayesian statistics provides a mathematical definition of a closed mind. Anyone with a prior of zero about something can never learn from any amount of evidence, because anything multiplied by zero is still zero.

I think this means one is stupid. Perhaps this resistance to a method is behind much of the fulminating about Autonomy’s digital reasoning engine and its integrated data operating layer?
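The closed-mind point in the quotation can be checked numerically. Here is a small sketch (my own illustration, assuming a binary hypothesis and a 9:1 likelihood ratio for each piece of evidence) showing that a small but nonzero prior converges toward certainty, while a prior of exactly zero never moves:

```python
from fractions import Fraction

def bayes_update(prior, likelihood, likelihood_alt):
    """One Bayesian update for a binary hypothesis H vs. not-H."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Open mind: a 1% prior, updated on five pieces of favorable evidence.
p = Fraction(1, 100)
for _ in range(5):
    p = bayes_update(p, Fraction(9, 10), Fraction(1, 10))
print(p)  # climbs well above 9/10

# Closed mind: a prior of exactly zero. Anything times zero is zero,
# so no amount of evidence can ever budge it.
q = Fraction(0)
for _ in range(5):
    q = bayes_update(q, Fraction(9, 10), Fraction(1, 10))
print(q)  # 0
```

The zero-prior case never divides by zero here because the alternative hypothesis soaks up all the probability mass; the posterior simply stays pinned at zero, which is the “mathematical definition of a closed mind” the article describes.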

Stephen E Arnold, October 8, 2014

