3/10/16

The American Statistical Association recently put out a statement on p-values titled "The ASA's Statement on p-Values: Context, Process, and Purpose". I think it is good that there is discussion of fundamental statistics, teaching, philosophy, logic, and other important related issues. Ultimately, however, I think the statement is a case of much ado about nothing.

I say this because in graduate school I was taught to always, always, always do more than just say "p<.05", so I see their examples as something of a strawman argument. Things like effect size, confidence intervals, alternative analyses, graphs, valid experimental design, replication, practical significance, meta-analysis, multiple comparisons, power, and so on, as well as the correct use and interpretation of alpha and p-values, __are__ taught in school. The misuse or misunderstanding of p-values (or of any other statistical method) is not grounds for claiming there are problems with the method itself.
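To make the point concrete, here is a minimal sketch (my own illustration, not from the ASA statement) of reporting more than just "p<.05": the estimated effect size and a 95% confidence interval alongside the p-value, for a two-sample z-test with known sigma = 1. The data, function name, and parameter choices are all assumptions for the example.

```python
import math
import random

random.seed(2)

def summarize(a, b, sigma=1.0):
    """Effect size, 95% CI, and p-value for a two-sample z-test (known sigma)."""
    na, nb = len(a), len(b)
    diff = sum(a) / na - sum(b) / nb           # effect size: difference in means
    se = sigma * math.sqrt(1 / na + 1 / nb)    # standard error of the difference
    z = diff / se
    # two-sided p-value via the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    ci = (diff - 1.96 * se, diff + 1.96 * se)  # 95% confidence interval
    return diff, ci, p

a = [random.gauss(0.5, 1) for _ in range(50)]  # hypothetical treatment group
b = [random.gauss(0.0, 1) for _ in range(50)]  # hypothetical control group
effect, ci, p = summarize(a, b)
print(f"effect={effect:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), p={p:.4f}")
```

Nothing here replaces the p-value; it simply sits next to the interval and the effect estimate, which is exactly how it is taught.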

Let's look at the history of the p-value. It was essentially created in the 1920s by Ronald Fisher, who was doing scientific experiments on crop variation. His publications and tables were used for decades. In them he talked about the p-value being a random variable and noted that there is nothing magical about setting alpha = .05. If Neyman and Pearson viewed p = .05 as a magical cutoff, Fisher certainly did not. If journals, which each have their own rules for publishing, choose to have a cutoff, so what? What is forcing you to publish there?
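Fisher's point that the p-value is a random variable can be checked by simulation. The following sketch (my own, with an assumed one-sample z-test and sample size) draws many samples under a true null hypothesis; the resulting p-values behave as Uniform(0, 1), so roughly 5% land below alpha = .05 purely by chance, whatever cutoff a journal happens to pick.

```python
import math
import random

random.seed(1)

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean = mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Draw many samples under a TRUE null (the mean really is 0) and
# collect the p-values -- each draw gives a different p-value.
p_values = [z_test_p_value([random.gauss(0, 1) for _ in range(30)])
            for _ in range(10_000)]

frac_below_alpha = sum(p < 0.05 for p in p_values) / len(p_values)
print(round(frac_below_alpha, 3))  # close to 0.05, as the uniform null distribution predicts
```

That the fraction hovers near any alpha you choose is the sense in which .05 is a convention, not a law of nature.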

In my opinion, several of the authors of what are essentially "anti p-value" writings, while highly skilled as statisticians and scientists, merely tend to be the most vocal, the most attention-seeking, and the most into Bayesian statistics (coincidence, I'm sure), but probably not the most representative of practitioners or users of p-values in the real world. From what I've read, they tend to broadcast failures of the use of p-values while downplaying, or not even mentioning, the far more numerous occasions where p-values were used in scientific discoveries.

As regards the related issue of much published work having some probability of being incorrect: again, so what? This isn't a condemnation of p-values, of statistics, nor of science. It was true before probability and statistics were even in use. It is merely the nature of the world that probably nothing is strictly deterministic and that our evidence and knowledge change over time. Shall we outlaw a specific type of random variable? Huh? It seems the maxim "all models are wrong but some are useful" goes out the window when p-values are discussed.

The death of p-values has been *greatly* exaggerated.

If you enjoyed *any* of my content, please consider supporting it in a variety of ways:

- Check out a random article at http://statisticool.com/random.htm
- Buy what you need on Amazon from my affiliate link
- Share my Shutterstock photo gallery
- Sign up to be a Shutterstock contributor