
What is a p value and what does it mean?
Dorothy Anne Forbes
Correspondence to Dorothy Anne Forbes, Faculty of Nursing, University of Alberta, Level 3, Edmonton Clinic Health Academy, Edmonton, Alberta, T6G 1C9, Canada; dorothy.forbes{at}ualberta.ca


Researchers aim to draw the strongest possible conclusions from limited amounts of data. To do this, they need to overcome two problems. First, important differences in the findings can be obscured by natural variability and experimental imprecision, making it difficult to distinguish real differences from random variability. Second, researchers' natural inclination is to conclude that differences are real and to downplay the contribution of random variability. Statistical probability helps to guard against this.1

Statistical probability, or p values, reveal whether the findings in a research study are statistically significant, meaning that the findings are unlikely to have occurred by chance. To understand the p value concept, it is important to understand its relationship with the α level. Before conducting a study, researchers specify the α level, which is most often set at 0.05 (5%). This conventional level is based on the writings of Sir Ronald Fisher, an influential statistician, who in 1926 reported that he preferred the 0.05 cut-off for separating the probable from the improbable.2 Researchers who set α at 0.05 are willing to accept that there is a 5% chance that their findings are wrong. However, researchers may adopt probability cut-offs that are more generous (eg, an α set at 0.10 means there is a 10% chance that the conclusions are wrong) or more stringent (eg, an α set at 0.01 means there is a 1% chance that the conclusions are wrong). The design of the study, its purpose, or the researcher's intuition may influence the choice of α level.2
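
The following sketch (not from the article) shows how this comparison works in practice: a p value is computed from hypothetical data with SciPy's independent-samples t test and then compared with a pre-specified α level. The data, sample sizes and seed are all illustrative assumptions.

```python
# Minimal sketch: obtain a p value from hypothetical data and compare it with
# a pre-specified alpha level. All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=70, scale=15, size=30)   # hypothetical sample 1
group_b = rng.normal(loc=60, scale=15, size=30)   # hypothetical sample 2

alpha = 0.05                                      # chosen before the analysis
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.3f} < alpha = {alpha}: statistically significant")
else:
    print(f"p = {p_value:.3f} >= alpha = {alpha}: not statistically significant")
```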

To illustrate how setting the α level may affect the conclusions of a study, let us examine a research study that compared the annual incomes of hospital-based nurses and community-based nurses. The mean annual income was reported to be $70 000 for hospital-based nurses and $60 000 for community-based nurses, and the p value for the comparison was 0.08. If the researchers set the α level at 0.05, they would conclude that there was no significant difference between the annual incomes of hospital-based and community-based nurses, since the p value of 0.08 exceeded the α level of 0.05. However, if the α level had been set at 0.10, the p value of 0.08 would be less than the α level and the researchers would conclude that there was a significant difference between the annual incomes. These are two very different conclusions from the same data.3
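
A brief sketch of the decision rule described above, applied to the example's p value of 0.08; only the p value and the two α levels come from the article, the code itself is illustrative.

```python
# The same p value leads to opposite conclusions at different alpha levels.
p_value = 0.08   # from the income example above

for alpha in (0.05, 0.10):
    if p_value < alpha:
        print(f"alpha = {alpha:.2f}: significant difference")
    else:
        print(f"alpha = {alpha:.2f}: no significant difference")

# Output:
# alpha = 0.05: no significant difference
# alpha = 0.10: significant difference
```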

It is easy to read far too much into the word significant, because the statistical use of the word has a meaning entirely distinct from its everyday meaning. Just because a difference is statistically significant does not mean that it is important or interesting. In the example above, the findings are statistically significant at the 0.10 α level, but at that level results due to chance alone occur 1 time in 10. The chance of an erroneous conclusion is therefore higher than when the α level is set at 0.05, where results due to chance occur 5 times in 100, or 1 in 20. In the end, the reader must decide whether the researchers selected an appropriate α level and whether the conclusions are meaningful or not.
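
One way to see why a more generous α level produces more findings that are due to chance is to simulate many studies in which no real difference exists; under that assumption, roughly 1 in 20 simulated studies crosses the 0.05 threshold and roughly 1 in 10 crosses the 0.10 threshold. The sketch below is purely illustrative and does not use data from the article.

```python
# Simulate studies where the null hypothesis is true (both groups drawn from
# the same population) and count how often the result is "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 5_000

p_values = np.array([
    stats.ttest_ind(rng.normal(60, 15, 30), rng.normal(60, 15, 30)).pvalue
    for _ in range(n_studies)
])

for alpha in (0.05, 0.10):
    rate = np.mean(p_values < alpha)
    print(f"alpha = {alpha:.2f}: {rate:.3f} of studies 'significant' by chance alone")
```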

References

Footnotes

  • Competing interests None.