Frequentism is Good


10/10/17

Here is a response I wrote to an article that appeared in the October 2017 issue of Significance magazine, a magazine I really enjoy. Thank you to the Significance editors for also making several minor, but needed, edits to my writing.


After reading "What are the odds!? The 'airport fallacy' and statistical inference" by Bert Gunter and Christopher Tong (August 2017), I am curious to know precisely what they mean by "Bayesian", because there are many different types of Bayesians out there.

In their article they argue that statistical inference of the frequentist variety leads science astray. I agree that frequentism is an embarrassment, but it is actually an embarrassment of riches.

Frequentism, in the simplest case, can in my opinion be understood through a coin flip experiment in which we keep track of the frequency of heads. Over time, this percentage (a statistic) settles down to what could be considered the probability of heads (a parameter) - and note that it doesn't have to be 50% either. A strong law of large numbers result says that we can get arbitrarily close to this true probability, given a large enough number of flips. You tell me how close you want to get and I'll tell you the expected number of flips needed. This is rigorous, like an epsilon-delta proof from calculus, and it happens regardless of your brand of philosophy of statistics. Science over time is like this too.
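To make the "you tell me how close, I'll tell you how many flips" claim concrete, here is a minimal sketch in Python. The use of Hoeffding's inequality to pick the number of flips, and the 0.6-probability coin, are my own illustrative choices, not something from the original letter.

```python
# A minimal sketch of the coin flip story above. Hoeffding's inequality
# and the p = 0.6 coin are illustrative assumptions, not from the letter.
import math
import random

def flips_needed(epsilon, delta):
    """Hoeffding's inequality: n >= ln(2/delta) / (2 * epsilon^2) flips
    guarantee P(|p_hat - p| >= epsilon) <= delta, for any true p."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def observed_frequency(p, n, seed=0):
    """Flip a p-coin n times and return the observed frequency of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

epsilon, delta = 0.01, 0.05        # "how close" and "how sure"
n = flips_needed(epsilon, delta)   # the promised number of flips
print(f"{n} flips suffice to be within {epsilon} with prob >= {1 - delta}")
print("observed frequency:", observed_frequency(p=0.6, n=n))
```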

An example I remember is from the book The Statistical Sleuth, by Ramsey and Schafer (statisticalsleuth.com). They present a graph of estimates and confidence intervals for y, the deflection of light around the Sun, from 20 experiments carried out from 1919 to 1985. Each experiment in itself may have had flaws, may not have been conclusive on its own, and may not have used ideal (whatever that means) statistical methods, but over time (again, long-run frequencies) the conclusion is very clear: the estimates concentrate around y = 1 (the value predicted by general relativity) and the confidence intervals become smaller on average.
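To show the shape of that picture, here is a purely synthetic simulation in Python. These are simulated numbers, not the actual 1919-1985 data from the book; the shrinking standard errors are my own assumption, made to mimic experiments improving over time.

```python
# A synthetic illustration of the Statistical Sleuth graph described
# above. Simulated numbers only - NOT the actual 1919-1985 data.
import random

rng = random.Random(1)
true_y = 1.0                          # value predicted by general relativity
for i in range(20):
    se = 0.30 / (1 + 0.15 * i)        # assume standard errors shrink over "time"
    est = rng.gauss(true_y, se)       # each experiment's estimate
    lo, hi = est - 1.96 * se, est + 1.96 * se   # ~95% confidence interval
    print(f"experiment {i + 1:2d}: estimate {est:5.2f}, CI ({lo:5.2f}, {hi:5.2f})")
```

Run it and the later intervals hug y = 1 far more tightly than the early ones, which is the whole point of the example.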


(I inserted the following paragraph now; it was not in the original article. - JS)

Gunter and Tong should also read some work by Deborah Mayo if they have not already (errorstatistics.com). I admit I do not understand it all, but she argues for "severe tests" to move the discipline forward - quite the opposite of abandoning tests.

I also observe that in many cases where there is a lot of data, frequentist and Bayesian approaches tend to agree ("the likelihood swamps the prior"), if not in the test statistics then at least in the decisions made from the analysis. For all these reasons, I simply fail to see how frequentism is a fallacy.
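Here is a small sketch of "the likelihood swamps the prior" in Python, using binomial data with a conjugate Beta prior. The Beta(20, 2) prior and the true probability of 0.3 are deliberately mismatched illustrative assumptions of mine, not from the letter.

```python
# "Likelihood swamps the prior": binomial data, Beta(a, b) prior.
# The prior and true p are illustrative assumptions, not from the letter.
import random

rng = random.Random(0)
p_true = 0.3
a, b = 20.0, 2.0                      # a deliberately wrong-headed prior

for n in (10, 100, 10_000):
    heads = sum(rng.random() < p_true for _ in range(n))
    mle = heads / n                           # frequentist estimate
    post_mean = (a + heads) / (a + b + n)     # Beta-Binomial posterior mean
    print(f"n={n:6d}: MLE {mle:.3f}, posterior mean {post_mean:.3f}")
```

At n = 10 the posterior mean is dragged toward the prior, but by n = 10,000 the two estimates are essentially indistinguishable, and any reasonable decision made from them would be the same.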

JS

Washington, DC


We should strive to be unapologetic frequentists. The world, and science itself, is very frequentist in nature, and much of the good of Bayesian statistics relies on frequentist concepts (see Frequentism, or How the World Works).

Thank you for reading.

