Tuesday, April 22, 2008

Foundations of Probability: No Decided Opinion

Over at Good Math, Bad Math, Mark Chu-Carroll has brought up the disagreement between frequentists and Bayesians. Here's a highly technical and theoretical argument, conducted with a great deal of vitriol, and perhaps with practical consequences. You might think this is the sort of thing I would have a decided opinion on. Yet I don't, perhaps because I don't know enough about Bayesianism in practice.

I have occasionally applied Neyman-Pearson hypothesis testing, sometimes in the context of the design of experiments. I think it useful to have a decision rule before looking at the data, but I try not to get hung up on ontological commitments. I am not sure that my practice is compatible only with some one position in these arguments. I generally explain the math with frequentist arguments.

I did have some thoughts on the comments. I wondered whether more schools should be distinguished than Mark or his commentators have done. The first comment supports what might be called a formalist or axiomatic approach: probability is a mathematical theory of certain measures on sigma algebras. Somewhere in the comments, E. T. Jaynes is mentioned. Is Jaynes' entropy-maximization approach co-extensive with Bayesianism? Somewhere else, personalism is mentioned. Is this also a synonym of Bayesianism? I think some approaches are about objective properties of propositions. I gather this is Keynes' position in his book on probability. Can the non-frequentist school be further decomposed into objective and subjective branches? Somewhere I seem to vaguely recall a fiducial approach, which I understood even less. Where does this fit? I suppose I also ought to ask where Savage fits in.

I was not aware that those who worry about these sorts of things have tended to swing from frequentism to Bayesianism with the growth in computing power. I was taught a frequentist approach, with an acknowledgement of the existence of debate. I'm aware that the growth of computing power has led to greater popularity of bootstrap/jackknife/resampling methods since I received my undergraduate degree. Perhaps that's another topic.
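Since the bootstrap comes up here, a minimal sketch may help show why computing power matters to it: the method replaces analytic distribution theory with brute-force resampling. This percentile-bootstrap example is my own illustration, not anything from Mark's post; the function name and defaults are arbitrary.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.median, reps=2000, alpha=0.10):
    """Percentile bootstrap interval for a statistic.

    Resamples the data with replacement `reps` times, computes the
    statistic on each resample, and returns the empirical
    alpha/2 and 1 - alpha/2 quantiles of those values.
    """
    boots = sorted(
        stat(random.choices(data, k=len(data))) for _ in range(reps)
    )
    lo = boots[int(reps * alpha / 2)]
    hi = boots[int(reps * (1 - alpha / 2))]
    return lo, hi

# Example: an interval for the median of a made-up sample.
sample = [random.gauss(10.0, 2.0) for _ in range(100)]
low, high = bootstrap_ci(sample)
```

The point is that nothing here requires a formula for the sampling distribution of the median; it just requires a few thousand resamples, which was expensive in the 1970s and is trivial now.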

Some comments at Mark's go down what I think is a blind alley - they suggest the Monty Hall Problem is an illustration of the strength of Bayesianism. While Bayes' theorem is useful in calculating the correct solution, I don't see the problem as connected strongly to foundational principles. Perhaps use of Bayes' theorem emphasizes that Monty's decision rules must be precisely specified beforehand.
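Indeed, a frequentist can settle the Monty Hall Problem by straightforward simulation, with no appeal to Bayesian foundations at all - provided, as noted above, that Monty's decision rule is specified precisely. A minimal sketch of my own (the rule assumed here: Monty always opens a non-chosen door hiding a goat):

```python
import random

def monty_hall_trial(switch):
    """One round: prize behind a random door; Monty opens a goat door
    other than the contestant's pick; contestant optionally switches."""
    prize = random.randrange(3)
    choice = random.randrange(3)
    # Monty opens a door that is neither the contestant's choice nor the prize
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        # Switch to the one remaining closed door
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

def win_rate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Running `win_rate(True)` comes out near 2/3 and `win_rate(False)` near 1/3, the standard answer - but change Monty's rule (say, he sometimes opens the prize door) and the frequencies change, which is the real moral.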

There is a discussion of confidence intervals and the correct way of thinking about them from the frequentist perspective. One of my colleagues once suggested to me that constructing a confidence interval is like trying to throw a hat over, say, an apple. The apple is at some fixed position, and you don't know whether it is or is not under the hat after the throw. If your hat is a sombrero - like a 99% confidence interval - you are more likely to catch the apple. If your hat is an English derby - like a 90% confidence interval - you are less likely to catch the apple. But when you have caught it, you've more precisely specified where the apple is.
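The hat metaphor is easy to check by simulation: the frequentist coverage claim is about the long-run fraction of intervals (hats) that land over the fixed parameter (apple). A sketch of my own, using a normal-theory interval for a known mean (with the caveat that using z rather than t critical values makes the realized coverage fall slightly short of nominal at small n):

```python
import math
import random
import statistics

def normal_ci(sample, z):
    """Mean +/- z standard errors, the textbook large-sample interval."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return mean - z * se, mean + z * se

def coverage(z, mu=0.0, sigma=1.0, n=30, reps=5000):
    """Fraction of simulated intervals that cover the true mean mu."""
    hits = 0
    for _ in range(reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        lo, hi = normal_ci(sample, z)
        hits += lo <= mu <= hi
    return hits / reps
```

With `z = 2.576` (the sombrero) the coverage comes out near 0.99; with `z = 1.645` (the derby) it comes out near 0.90 - the apple never moves, only the hats differ from throw to throw.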

2 comments:

Alex said...

You might like this:

http://www.aeaweb.org/articles.php?doi=10.1257/jep.22.4.199

forich said...

I think its "Monty-Hall problem", not "Monte".