P Values

Last updated: May 19, 2020

Consensus in the literature that p values are bad

Examples of misuse of p values

Response to journals requiring p values

Sander Greenland:

That brings me back to Dan Scharfstein’s query on what to do about journals and coauthors obsessed with significance testing. What I’ve been doing and teaching for 40 years now is reporting the CI and precise P-value and never using the word “significant” in scientific reports. When I get a paper to edit I delete all occurrences of “significant” and replace all occurrences of inequalities like “(P<0.05)” with “(P=p)” where p is whatever the P-value is (e.g., 0.03), unless p is so small that it’s beyond the numeric precision of the approximation used to get it (which means we may end up with “P<0.0001”). And of course I include or request interval estimates for the measures under study.

Only once in 40 years and about 200 reports have I had to remove my name from a paper because the authors or editors would not go along with this type of editing. And in all those battles I did not even have the 2016 ASA Statement and its Supplement 1 to back me up! Although I did supply recalcitrant coauthors and editors copies of articles advising display and focus on precise P-values. One strategy I’ve since come up with to deal with those hooked on “the crack pipe of significance testing” (as Poole once put it) is to add alongside every p value for the null a p value for a relevant alternative, so that for example their “estimated OR=1.7 (p=0.06, indicating no significant effect)” would become “estimated OR=1.7 (p=0.06 for OR=1, p=0.20 for OR=2, indicating inconclusive results).” So far every time they cave to showing just the CI in parens instead, with no “significance” comment.
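Greenland's trick of pairing the null p-value with a p-value for a relevant alternative is easy to sketch under the usual normal approximation on the log odds ratio scale. A minimal sketch follows; the standard error of 0.28 (and hence the resulting p-values) is an illustrative assumption, not a number from Greenland's example.

```python
import math

def two_sided_p(log_or_hat, se, or_tested):
    """Two-sided p-value for the hypothesis OR == or_tested, given an
    estimated log odds ratio and its standard error (normal approximation)."""
    z = (log_or_hat - math.log(or_tested)) / se
    # Phi(-|z|) computed via the complementary error function
    tail = 0.5 * math.erfc(abs(z) / math.sqrt(2))
    return 2 * tail

# Illustrative numbers (assumed, not Greenland's actual data):
log_or_hat = math.log(1.7)   # estimated OR = 1.7
se = 0.28                    # assumed standard error of log OR

p_null = two_sided_p(log_or_hat, se, 1.0)  # test against the null OR = 1
p_alt  = two_sided_p(log_or_hat, se, 2.0)  # test against the alternative OR = 2

print(f"estimated OR=1.7: p={p_null:.2f} for OR=1, p={p_alt:.2f} for OR=2")
```

The point of the paired report is visible in the output: when both p-values are unremarkable, the data are compatible with both no effect and a doubling of the odds, i.e. the result is inconclusive rather than "not significant."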

Miscellaneous