Evid Based Nurs 12:67-70 doi:10.1136/ebn.12.3.67
  • EBN notebook

A beginner’s guide to probability

  1. Carl Thompson
  1. Department of Health Studies; University of York; York, UK


      Uncertainty is both “irreducible”1 and inescapable in health care: no intervention ever leads with complete certainty to a given clinical outcome, no diagnosis is ever completely established, and no prognosis is ever completely accurate. A nurse will never have all of the reliable and valid clinical information needed to make choices with 100% certainty. Because of this uncertainty, nurses know that when we make judgement calls and decisions, we can only think in terms of the balance of probabilities.

      Despite recognising the probabilistic nature of judgement and decision making in health care, most nurses (as well as doctors and patients) prefer to discuss chance events using words rather than numbers. For example, if you listen to surgical nurses talking with patients about their chances of passing flatus in the 24 hours after abdominal surgery, you will hear expressions such as, “It is rare that patients will start to pass wind in the first 24 hours after major abdominal surgery; it is more likely you will begin passing wind again 24–48 hours after surgery.” Likewise, most community nurses are unlikely to be heard telling their patients that “There is a 45–50% chance of your venous leg ulcer healing in 6 months with this compression bandage.” These qualitative expressions of uncertainty, although relatively easy to use, are prone to misunderstanding. When asked to quantify their uncertainty in response to hearing words such as “rare” or “likely,” patients and professionals—even when given access to the same information—often give widely variable estimates of what the phrase means to them.2

      Fortunately, for patients and professionals alike, there is an alternative: the language of probability. As well as being the language of uncertainty, it is also the language used in evidence-based decision making. Determining the statistical significance of a research result means knowing what probability means; and individualising the results of research studies to your patients requires knowledge of probability and how it works.

      Like any language, there are rules and structures to be learnt. This Notebook tackles some of these rules and illustrates them in a clinical context. It is intended as a starting point for a broader and more in-depth engagement with probabilistic reasoning and as a primer for the uninitiated.

      Innumeracy

      Everyone hates numbers. In fact, so great is our hatred of numbers that some commentators suggest that whole societies suffer from a form of collective innumeracy.3 Innumeracy with regard to risk and probability is manifest in 3 main ways in health care: ignorance, miscommunication, and difficulty inferring even when we have knowledge of risks.

      Ignorance of risks

      When individuals have no idea of the size of a relevant risk, they can be considered ignorant of those risks. An example is the common misconception among some nurses that an x-ray is a definitive test for a fracture; even x-rays yield false-positive and false-negative results. In fact, all diagnostic tests yield false-positive and false-negative results.

      Miscommunication of risks

      How we present risk information to others matters. When we talk about risk in ways that are more likely to be understood by colleagues and patients, we can improve the communication of those risks. However, sometimes we communicate risks to others in ways that lead to confusion rather than informed decision making. It may help to consider the different ways that risk can be presented. As an experiment, each year I ask groups of student nurses to identify which number is largest: 0.05 or 0.01? Usually, about 10% of each group get it wrong (the answer is 0.05). When I present the same number as a percentage (5% or 1%), the proportion of wrong answers decreases. I could improve the communication of risk even more by using integers or whole numbers (eg, 5 out of 100 people).3

      Difficulty inferring from knowledge of risks

      Sometimes we know about the risks associated with a situation but find it difficult to draw useful conclusions about them. One example sometimes encountered in students is when they recognise that the “base rate” (or prevalence, in the language of clinical epidemiology) for a disease changes depending on the clinical setting (eg, there are usually more patients with diabetes on an endocrinology ward at any one time than in a primary care clinic). Despite having this knowledge, students are often unaware of how to adjust the results of a diagnostic test in response to different clinical settings (ie, the higher the prevalence of a disease or condition, the higher the positive predictive value of the test). Aside from learning the necessary clinical epidemiological decision “rules,” using natural frequencies can help, and tools such as decision trees (which make choices, outcomes, probabilities, and preferences clearer) are also useful. The use of decision analysis will be discussed in an upcoming EBN notebook.


      Probability in health care is most commonly viewed in one of two ways: as an expression of a subjective degree of belief, or as the frequency of a phenomenon in a sample of observations.

      Probability as strength of belief

      In the 18th century, the Reverend Thomas Bayes, mathematician, Presbyterian, and intellectual all-rounder, developed the idea of probability as a subjective belief that an event or phenomenon will happen. We can see the basic idea in action when a patient asks for your clinical judgement or opinion and your answer recognises the uncertainty surrounding your knowledge. For example, how would you respond if you were a surgical nurse and were asked, “How likely is this dressing to heal my scar in the next 4 weeks?” If you answered, “I’m pretty confident—I would say 80% likely,” then you would be using probability as a measure of how strongly you believe that the dressing will heal the scar. Bayesian reasoning is a separate branch of statistics but is part of evidence-based reasoning and, in particular, diagnostic thinking.4

      Frequentist probability

      Far more common in health care is the idea of probability as a reflection of the number of occurrences of a phenomenon within a sample of observations—or expressed alternatively, the relative frequency of an event in a specified reference class. For example, the statement “4% of men (or 1 in 25) are likely to die from prostate cancer” suggests a relative frequency of 4 men dying of prostate cancer out of every 100 men in the reference class (all men).

      Although both approaches can be used to describe probabilities, the frequentist approach is the one mostly used in health care because it is seen as more objective than the “strength of belief” approach. Table 1 outlines different ways of measuring probability.

      Table 1 Different ways of measuring probability


      Probabilities range from 0 to 1 (or 0% to 100%). A probability of 0 means there is no chance at all that an event will occur, and a probability of 1 (or 100%) means that an event is certain to occur.

      The easiest way to appreciate probability is to use the example of a die. Although the relevance of a die to health care is not immediately obvious, it is the simplest way to grasp the key concepts, rules, and calculations of probability. A die has 6 sides, and so we expect that there is a 1 in 6 (ie, 1 side of 6 possible sides) or 16.7% chance that any one of the numbered sides will be face up.

      It is important to remember that probability is a measure of the relative frequency of an event within a larger number of events. The upshot of this is that if you threw the die 6 times (a small number of events, or sample), then you should not be surprised to see the same number come up more than once (for the sake of argument, let’s say 4 times). However, if you threw the die 200 times (a much larger sample of events) and the same number came up 160 times, you might be suspicious. If you could be bothered to throw the die 2000 times and the same number came up 1600 times, then you should definitely suspect that the die is unfair. You may have noticed that the proportion of events (sides of the die observed face up) to the sample stays the same in these examples (80%). What changes is the size of the sample (from 6 to 200 to 2000). As the number of observations in a sample gets bigger, your assumptions change (you get more confident in the results you see)—or at least they should. Once you appreciate this relation between uncertainty and the number of observations, the importance of large sample sizes in research studies that make claims about the “significance” of their results becomes clearer.
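The effect of sample size can be illustrated with a short simulation. This Python sketch (not part of the original article) throws a fair die repeatedly and tracks the relative frequency of sixes, which should settle near 1/6 as the number of throws grows:

```python
import random

def proportion_of_sixes(throws, seed=1):
    """Throw a fair die `throws` times and return the proportion of sixes."""
    rng = random.Random(seed)
    sixes = sum(1 for _ in range(throws) if rng.randint(1, 6) == 6)
    return sixes / throws

# With 6 throws the proportion can be far from 1/6 (about 0.167);
# with 200 000 throws it will be very close to it.
for n in (6, 200, 2000, 200_000):
    print(n, round(proportion_of_sixes(n), 3))
```

A die that showed the same face 80% of the time over 2000 throws would sit many standard deviations away from this long-run frequency, which is why the larger sample justifies suspicion.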

      Probability of an event: notation

      When the probability of an event is being described, the notation used is “p(event).” So, the probability of obtaining a 4 on our die is expressed as “p(4).”

      Probability of something not happening: notation

      Sometimes we are interested in the probability of something not happening, as in the probability that a patient does not have a disease (useful when making differential diagnoses as an advanced practitioner). Using our die example, there is a 50% chance (3 of the 6 faces) that the die will show a number other than 1, 2, or 5. We can write this as p(not 1, 2, or 5); in the printed notation, a bar over the numbers stands for “not.” One other important thing to remember about probabilities is that the chances of something happening and not happening must add up to 1. This makes sense. Consider a coin being flipped: if there is a 50% (0.5) chance that it will land heads up and it doesn’t land heads up, then there has to be a 50% (0.5) chance that it will land tails up instead. 100% of all possibilities are covered. In clinical practice, this is important because if, for example, there is a 5% chance that a person has diabetes, then there must be a 95% chance that they do not have diabetes. Such “rules” are vital for rational and logical reasoning about problems. When we don’t follow such rules, we are prone to systematic errors and biases.5

      The summation rule: covering all possibilities

      The example of a coin toss illustrates the summation rule for probabilities. This rule says that the probabilities of all mutually exclusive outcomes must add up to 1. The rule applies whether there are just 2 possibilities or more than 2. Consider the example of establishing the probability that a patient who has had a myocardial infarction will adhere to a cardiac rehabilitation programme consisting of dietary changes and psychological self-help. We need to consider the probabilities of the patient complying with diet alone, self-help alone, both diet and self-help, and neither diet nor self-help. Thus, the probabilities can be expressed as p(diet) + p(self-help) + p(both) + p(neither) = 1.
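As a sketch of the summation rule (the adherence figures below are invented for illustration and do not come from the article), the four mutually exclusive outcomes must sum to 1, and the complement rule is simply the two-outcome special case:

```python
# Hypothetical adherence probabilities for the rehabilitation example.
p_diet_only = 0.25
p_self_help_only = 0.15
p_both = 0.40
p_neither = 0.20

# Summation rule: mutually exclusive, exhaustive outcomes cover 100%.
total = p_diet_only + p_self_help_only + p_both + p_neither
assert abs(total - 1.0) < 1e-9

# Complement rule: p(event) + p(not event) = 1.
p_diabetes = 0.05
p_no_diabetes = 1 - p_diabetes  # 0.95
```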

      Conditional probabilities

      Sometimes we are only interested in some outcomes in a specific group of patients or a specific patient care setting or situation. For example, a community nurse specialising in diabetes may only want to know the prevalence of venous leg ulcers in patients with type 2 diabetes (rather than all patients); or as a mental health nurse, I may be interested in parasuicide attempts by patients in forensic mental health settings (rather than all care settings). These types of probabilities are called conditional because a condition has been placed on the possible chances. This probability can be written using the symbol “|” for “given”: p(venous leg ulcers|type 2 diabetes). This is read as, “What is the probability of a venous leg ulcer in a patient given that the patient has type 2 diabetes?” Diagnostic test results are often interpreted with reference to conditional probabilities. For example, the sensitivity of a diagnostic test is the probability of a positive test result given that the person has the disease we are interested in.

      If the condition makes no difference to the probability, then the factors are called independent. For example, if being older makes no difference to the prevalence of venous leg ulcers, then age and venous leg ulcer prevalence are independent. If this were the case, you would not need to be more aware of the likelihood of an ulcer in an older patient than a younger patient. Actually, they are not independent: p(ulcer|>70 years of age) > p(ulcer|⩽70 years of age).

      The prevalence of venous leg ulcers in the UK is 3 ulcers per 1000 in younger people and increases to 20 ulcers per 1000 in people >70 years of age.6 Clearly, 0.3% (3/1000) is not equal to 2% (20/1000), and so the 2 are not independent.
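Using the prevalences quoted above, a short Python sketch (an illustration, not part of the article) makes the independence check explicit:

```python
# UK venous leg ulcer prevalences from the text, per 1000 people.
p_ulcer_given_older = 20 / 1000    # 0.02, people >70 years of age
p_ulcer_given_younger = 3 / 1000   # 0.003, younger people

# If age made no difference, the two conditional probabilities
# would be equal; they are not, so the factors are dependent.
independent = p_ulcer_given_older == p_ulcer_given_younger
print(independent)  # False
```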

      Combining probabilities

      Consider 2 outcomes of interest (A and B). There are 3 questions we can ask:

      (i) What is the probability of seeing A OR B?

      (ii) What is the probability of seeing A AND B?

      (iii) What is the probability of seeing A GIVEN B?

      (i) Probability of seeing A OR B

      If the possibilities are mutually exclusive (ie, they cannot possibly occur together), then the probability of either event occurring is found by adding the individual probabilities of the 2 events together. For example, taking our die again, we can say that the probability of landing a 2 or a 4 is 1/6 + 1/6 = 1/3, or roughly 0.33 (33%).

      What about those (common) situations in health care that are not mutually exclusive? For example, suppose you are a practice nurse interested in whether a patient has diabetes OR has hypertension. In the figure, we see that the 2 categories (patients with diabetes and patients with hypertension) overlap.

      Figure Possible relation between diabetes and hypertension

      In this case, we need to add up (sum) the probability of having hypertension and the probability of having diabetes and then subtract the probability of having both. We would write this as p(diabetes) + p(hypertension) − p(diabetes and hypertension).

      This is really just an extension of the summation rule because if the events (diabetes and hypertension) were mutually exclusive, the “overlap” p(diabetes and hypertension) would be 0.
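The addition rule for overlapping events can be sketched in a few lines. The practice-population figures below are invented for illustration; none of them appear in the article:

```python
# Hypothetical probabilities for a practice population.
p_diabetes = 0.08
p_hypertension = 0.25
p_both = 0.04  # the overlap: patients with both conditions

# Addition rule for events that are NOT mutually exclusive:
# subtract the overlap so it is not counted twice.
p_either = p_diabetes + p_hypertension - p_both
print(round(p_either, 2))  # 0.29
```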

      (ii) Probability of seeing A AND B

      So far, we have been concerned with uncertainties about a single event, but health care often involves multiple uncertainties. When one outcome does not affect the probability of the other occurring (ie, they are independent events), then you simply multiply the 2 (or more) probabilities. For example, if you want to know the probability that a young mother-to-be who is planning a family will give birth to 2 boys (separately, not as twins), then the probability is simply 0.5 × 0.5  =  0.25 or 25% (assuming there is a 50% chance of having a boy).
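A minimal sketch of the multiplication rule for independent events, using the assumed 50% chance of each birth being a boy:

```python
p_boy = 0.5  # assumed probability that any one birth is a boy

# Independent events: multiply the individual probabilities.
p_two_boys = p_boy * p_boy
print(p_two_boys)  # 0.25

# The rule extends to any number of independent events:
p_three_boys = p_boy ** 3
print(p_three_boys)  # 0.125
```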

      (iii) Probability of seeing A GIVEN B

      Far more common in health care is the scenario in which multiple uncertainties are present and are related to each other. If you think about the number of problems in health care that are related, the list soon becomes extensive (eg, smoking and heart disease; alcohol abuse and memory loss; social class, educational level, and healthy eating). Let’s continue with the example of the nurse concerned about the probability that a patient has BOTH diabetes AND hypertension. The probability we are interested in is the overlap shown in the figure. There are 2 ways of determining this probability: using probability notation and simple math, or using a table.

      Using probability notation and simple math, we can apply the multiplication rule, breaking the joint event into a simple probability and a conditional one. Thus, our question becomes p(diabetes and hypertension) = p(hypertension) × p(diabetes|hypertension).

      To illustrate the second way of getting the correct probability (ie, using the information in a “contingency table”), let’s use the relation between body mass index and depression in people with diabetes. The question can be expressed as, “what is the probability of being depressed GIVEN that you are overweight, among people with diabetes?” (being overweight and being depressed are related and thus not independent). Looking at data from a study of 3010 patients in the USA,7 we can work out the appropriate probabilities (table 2).

      Table 2 Body mass index (BMI) and depression in patients with diabetes*

      Organising the numbers in this way makes it easier to make sense of the information. By looking at the “depressed” column, we see that 320 of the 2969 people were depressed, and so the chance of being depressed is 11% or 0.11 (320/2969). How does the table help us to learn more about the chances of depression if you are overweight? If we examine the row representing people who are overweight (BMI >30), we see that there are 1501 such individuals. If we examine the “depressed” column, we see that 216 people had both depression and a BMI >30. Thus, the probability that a person is depressed given that they are overweight is 14% or 0.14 (216/1501). This is clearly much higher than the probability of being depressed if you are not overweight, which is 7% or 0.07 (104/1468). So, the probability of being depressed if you are not overweight is half that of people who are overweight. Aside from concluding that being overweight increases your chances of being depressed, we also know that depression and weight are dependent on each other. If they had been statistically independent, the 3 probabilities (0.11, 0.14, and 0.07) would have been identical.
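The same conditional probabilities can be recovered from the counts in table 2. This Python sketch is an illustration added here, not part of the original article:

```python
# Counts from table 2 (BMI and depression in patients with diabetes).
depressed_overweight = 216
depressed_not_overweight = 104
total_overweight = 1501
total_not_overweight = 1468

total = total_overweight + total_not_overweight                      # 2969
total_depressed = depressed_overweight + depressed_not_overweight    # 320

# Overall and conditional probabilities of depression.
p_depressed = total_depressed / total
p_depressed_given_overweight = depressed_overweight / total_overweight
p_depressed_given_not_overweight = depressed_not_overweight / total_not_overweight

print(round(p_depressed, 2))                      # 0.11
print(round(p_depressed_given_overweight, 2))     # 0.14
print(round(p_depressed_given_not_overweight, 2)) # 0.07
```

If weight and depression were independent, all three printed values would match; the fact that they differ is the dependence the text describes.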

      If we return to the mathematical route of multiplying probabilities (now that we have them), we can write down the multiplication rule: for any 2 events A and B, p(A and B) = p(A|B) × p(B) = p(B|A) × p(A).

      You will recall that this is exactly what we did earlier when we looked at the relation between diabetes and hypertension. It may appear that it does not matter which way we express the probabilities (eg, that p(B|A) is the same as p(A|B)), but this is not true. Have a look at table 3, which presents the rates of hay fever and eczema in UK children at 11 years of age in the National Child Development Study.8

      Table 3 Hay fever and eczema in children 11 years of age*

      To understand that p(A|B) and p(B|A) are not the same, work out the probability that a child will have hay fever given that she has eczema (p(hay fever|eczema)), and then work out the probability that a child will have eczema given that she has hay fever (p(eczema|hay fever)).

      The probability that a child with hay fever will also have eczema is p(eczema|hay fever) = 141/1069 = 0.13. This is much less than the probability that a child with eczema will have hay fever: p(hay fever|eczema) = 141/561 = 0.25.
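The asymmetry is easy to demonstrate from the counts in table 3. This short sketch (added for illustration, not from the article) conditions in both directions:

```python
# Counts from table 3 (hay fever and eczema in 11-year-olds).
both = 141             # children with both hay fever and eczema
total_hay_fever = 1069 # all children with hay fever
total_eczema = 561     # all children with eczema

# Conditioning in each direction gives different denominators,
# and so different probabilities.
p_eczema_given_hay_fever = both / total_hay_fever
p_hay_fever_given_eczema = both / total_eczema

print(round(p_eczema_given_hay_fever, 2))  # 0.13
print(round(p_hay_fever_given_eczema, 2))  # 0.25
```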

      Another example of ignoring the order of conditional probabilities in reasoning can be found in the public health scare of the late 1990s regarding measles, mumps, and rubella (MMR) vaccination. During this crisis, the public were led to believe that MMR triple vaccination was linked to an increased risk of autism in children. At the heart of the scare was a fundamental confusion in the minds of many parents (and healthcare professionals): the belief that the probability of developing autism given that a child has had the MMR vaccine was the same as the probability of having had the MMR vaccine given that the child is autistic. Using probability notation, p(autism|MMR) = p(MMR|autism). At the time of the scare, most children with autism would have had the MMR vaccine, simply because almost all children had the vaccine. However, if we took a random sample of all children, gave half the MMR vaccine and half no immunisation, and followed them over time to see if they developed autism, it is likely that the numbers of children developing autism in the 2 groups would be the same.9 Confusion over conditional probabilities matters in everyday health care practice.

      Odds and probabilities

      At some point in clinical decision making and research, you will encounter the concept of “odds.” Odds are just another way of expressing uncertainty but with slightly different properties than probabilities. Probabilities, as we have seen, range from 0 (impossible) to 1 (certainty). Odds, however, range from 0 to infinity. Why? They have such a broad spread because they represent the ratio of the chance of something happening to the chance of something not happening. So, if we have an event with a 0.75 (75%) chance of happening and so a 0.25 (or 25%) chance of not happening, then the odds of the event happening are 0.75/0.25 or 3 (often written as 3:1 or 3 to 1).
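Converting between probabilities and odds follows directly from this ratio. The sketch below is an illustration added here, not part of the original article:

```python
def probability_to_odds(p):
    """Odds = chance of happening / chance of not happening."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Invert the ratio to recover the probability."""
    return odds / (1 + odds)

print(probability_to_odds(0.75))  # 3.0 (often written 3:1 or 3 to 1)
print(odds_to_probability(3.0))   # 0.75
```

Note that as the probability approaches 1, the odds grow without limit, which is why odds range from 0 to infinity while probabilities stay between 0 and 1.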


      Mastering the language of probability is the key to thinking creatively about uncertainty in decision making. Without learning the rules, nurses are artificially limiting the size of their problem solving toolkit for clinical practice. Without competence in this basic building block for giving “due weight” to research evidence, clinicians will always fall back on the intuitive, the familiar, and the experiential—all of which have some severe limitations. This Notebook has provided the first step in introducing some of the key ingredients necessary for reasoning probabilistically; only by acknowledging uncertainty, making it transparent, and factoring it into our decisions can we move toward truly shared decision making with our patients.


      • A complete version of this Notebook appears as a chapter in the forthcoming text: Thompson CA, Dowding D. Essential clinical decision making for nurses. London: Elsevier Science. To be published summer 2009.

