The numbers game: What they didn’t teach me in medical school! (or maybe I just bunked)

“Statistics can be notoriously slippery. Easily misused by the unscrupulous. Misinterpreted by the unwary. Nowhere is this more true than in the field of human health”

Numbers game

Why do we have an obsession with numbers? Come to think of it, it’s actually inconceivable to imagine a world without numbers. Indeed, numbers help us make sense of the world around us. But there’s a fundamental human tendency at work here. We all feel uncomfortable in the face of uncertainty, and numbers provide us with a sense of control. Numbers have that halo of certainty, the allusion to science. As long as a number can be assigned to a problem, we know how big or small it is. Yet that sense of certainty is often an illusion. And perhaps in no other field does this illusion manifest itself as starkly as it does in the business of medicine. It all comes down to who is doing the counting: numbers can be played up or played down.

“There are 3 kinds of lies: lies, damned lies and statistics” – Mark Twain

Here’s a question: which disease would you prefer? Disease A, which kills 25 out of 100 people, or Disease B, which kills 250 out of 1000 people? Sounds like a stupid question? It is. Both mean the same thing: a quarter of the people will be killed. Yet although both diseases imply the same risk, the framing tends to affect our perception. It’s the science of cognition at play here; the findings of cognitive psychology are often brazenly misused to manipulate minds. Here’s another one: which of the two diseases is more serious? Disease A, which kills 1286 people out of 10000, or Disease B, which kills 24.1 people out of 100? In a study, people rated Disease A as more serious, even though Disease B carries almost double the risk. Gerd Gigerenzer calls this innumeracy. It comes in various forms: the ‘illusion of certainty’, ‘ignorance of risk’, ‘miscommunication of risk’ and ‘clouded thinking’.
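The framing effect above is easy to verify with basic arithmetic. Here is a quick sanity check (the function and variable names are mine, purely for illustration):

```python
# Quick arithmetic check of the two framings above.
def death_rate(deaths, population):
    """Deaths expressed as a percentage of the population."""
    return 100 * deaths / population

# First pair: identical risks, just framed differently.
assert death_rate(25, 100) == death_rate(250, 1000)  # both 25%

# Second pair: Disease B carries nearly double the risk of Disease A.
print(death_rate(1286, 10000))  # Disease A: ~12.9%
print(death_rate(24.1, 100))    # Disease B: ~24.1%
```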

Playing with data is not a new vocation in the medical industry. Small tweaks fetch big dollars. Any talk on this subject would be incomplete without reference to the commercially driven manipulation of drug efficacy data; from Prozac to pravastatin, there is no dearth of examples. My interest in this particular issue stems from the time I worked on my ‘medical ethics’ dissertation. Here I quote just one example from my thesis – the case of the celebrated ‘statin’ group of drugs, the bad-cholesterol busters. Back in 1995, a press release by the American Heart Association (AHA) read: “Wonder drug cuts risk of death by 22% in people with high cholesterol”. The blockbuster headline was referring to the clinical trial results of pravastatin. But what is 22%, one might ask? An average educated person would probably take it as “22 deaths prevented out of every 100 people treated with pravastatin.” But take a quick look at the data and the mischief is hard to miss. The official report was a mish-mash of statistical jargon and hyperbole mixed with some mind-boggling calculations, but the AHA headline could only have been derived from the following basic figures, reported at the completion of 5 years:

Deaths per 1000 people treated with pravastatin for 5 years: 32

Deaths per 1000 people treated with a placebo for 5 years: 41

It’s fascinating how simple tweaks allow one to abuse the data, so to speak. It is common for the media to report clinical trial results in terms of relative risk reduction, which is what the AHA did. But there are two other ways of interpreting the same data.

Wonder drug

The AHA version first. Relative risk reduction: (41 − 32) ÷ 41 × 100 ≈ 22%. The calculation for absolute risk reduction is a bit different. Deaths were reduced from 41 to 32 for every 1000 people taking the drug, which means 9 deaths averted for every 1000 people treated; in other words, an absolute risk reduction of 0.9%. Now let’s talk about the least industry-friendly interpretation: the ‘Number Needed to Treat’ (NNT). NNT refers to the number of people who must take the drug in order to save one life. In the case of pravastatin, this turns out to be 111; that is, 111 individuals need to be treated in order to save one life. How? Because 9 in 1000 deaths were prevented by the drug, which comes to about 1 in 111. The most impressive, and the most misused, statistic is the first one: relative risk reduction suggests a far higher benefit than really exists. Simply put, the NNT means that out of every 111 people who swallowed the tablet daily for five years, only 1 person benefited while the other 110 did not. The phenomenon of innumeracy is not exclusive to non-medical people; even physicians unwittingly suffer from this form of clouded thinking. Take the case of mammography, a much-recommended screening tool for breast cancer.
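For readers who like to check the arithmetic, here is a minimal sketch of the three readings of the same trial data (variable names are my own, not from the trial report):

```python
# Deaths per 1000 people over 5 years, as reported above.
drug_deaths, placebo_deaths, n = 32, 41, 1000
deaths_averted = placebo_deaths - drug_deaths  # 9 per 1000

# Relative risk reduction: the AHA headline figure.
rrr = 100 * deaths_averted / placebo_deaths    # ~22%

# Absolute risk reduction: percentage-point drop in death rate.
arr = 100 * deaths_averted / n                 # 0.9%

# Number needed to treat: people treated per life saved.
nnt = n / deaths_averted                       # ~111

print(f"RRR ~ {rrr:.0f}%, ARR = {arr:.1f}%, NNT ~ {nnt:.0f}")
```

The same 9 deaths averted per 1000 yields a headline of “22%” or “0.9%” depending solely on the denominator chosen.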

Women above a particular age are encouraged to periodically undergo mammography even in the absence of any symptoms. Let’s simulate a real-world scenario and see how this works (or doesn’t work). Taking a simple approach, the risk of a woman between 40 and 50 years of age having breast cancer is about 0.8 percent; that’s the rough prevalence rate. Studies have shown that only 90% of women who actually have breast cancer will be diagnosed as such by mammography. Among women who do not have breast cancer, the probability of being labelled as a case of breast cancer comes to about 7%. Now it gets a bit complicated from here on. What is the probability that a woman whose mammogram is positive for breast cancer actually has breast cancer? It’s surprising how simple these equations become if we think in terms of numbers rather than percentages. Let’s do this again with numbers:

8 out of every 1000 women have breast cancer. Out of these 8 women, 7 will turn out positive on mammography. Of the remaining 992 women who don’t have breast cancer, some 70 will still have a positive mammogram. Now imagine a group of women who have a positive mammogram on screening: how many will actually have breast cancer? It’s the same information as given earlier, but thinking in terms of numbers makes it easy to calculate: only 7 out of the 77 women who tested positive on mammography (70 + 7) actually have breast cancer. This works out to about 1 in 11 women or, in other words, about 9% – much lower than the 90% figure highlighted by the media. One incident widely reported in the media showed how clouded thinking may affect even physicians. A consultant in the erstwhile Clinton administration asked a group of American physicians the probability that a woman who tested positive on mammography actually has breast cancer. 95% of the physicians estimated the probability at around 75% – almost 10 times more than the reality.
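The natural-frequency reasoning above can be sketched in a few lines, using the round figures quoted in the text (the numbers are illustrative, as in the original example):

```python
# Natural-frequency version of the mammography example.
women = 1000
with_cancer = 8        # prevalence ~0.8%
true_positives = 7     # ~90% of the 8 women with cancer test positive
false_positives = 70   # ~7% of the 992 healthy women test positive

positives = true_positives + false_positives   # 77 positive mammograms
p_cancer_given_positive = true_positives / positives

print(f"P(cancer | positive mammogram) ~ {p_cancer_given_positive:.0%}")
```

This is just Bayes’ theorem in disguise: counting whole women instead of multiplying percentages makes the answer (roughly 1 in 11) almost impossible to get wrong.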

That there were extraneous considerations motivating the aggressive industry-sponsored campaign promoting mammography is a tale for another day. The point here is not to debate the merits and demerits of mammography; the relevance of the foregoing is to acknowledge our deficiency when it comes to the interpretation of research statistics. Acknowledging our innumeracy is the first step towards dealing with it. Modern medicine is gradually witnessing a transition towards patient-centred medicine, where patient and physician are poised to participate equally in deciding the course of treatment, guided by each individual’s unique circumstances. This paradigm change is already occurring, and the best start would be to grow out of our innumeracy. Referring to the risks and benefits of a particular medical intervention in terms of natural frequencies (numbers), rather than percentages, is a step towards warding off the illusion of certainty that envelops scientific medicine. Let this aspect of science be diligently taught, so that we learn to take numbers for what they are: just numbers!

