“The absence of evidence of effect does not imply evidence of absence of effect”. Please explain how this statement applies to evaluation of the medical literature.


College Answer

Candidates were expected to think more broadly than just the “power” of a study. Consider:

  • No evidence - the question has never been asked
  • Low level evidence: physiological data only, or animal data only
  • Ethical barriers to conducting the definitive study
  • Unanswerable for logistic reasons
  • Retrospective data or case series only
  • Poorly designed existing studies (problems with blinding, allocation concealment, loss to follow-up, intention-to-treat analysis, uniform management apart from the intervention, appropriate statistical methods, etc.)
  • Meta-analysis pitfalls - significant disagreements with subsequent RCTs
  • Type 2 error - false acceptance of the null hypothesis due to inadequate power, e.g. small single-centre studies


This question recalls a more uncivilised time, when bewildered CICM fellowship candidates were assailed by vaguely worded essay questions in an attempt to wring some sort of creative lateral thinking from their algorithmic reptile brains. The resulting confusion can be observed even in the college answer, which, rather than defending any particular argument, instead exhorts us to think "broadly", and then presents us with a word salad of key phrases to consider. The modern papers are thankfully free from this sort of thing.

If one were to take this question seriously, one would structure one's response in the following manner:


“The absence of evidence of effect does not imply evidence of absence of effect” is a rebuttal to the Argument from Ignorance, which (put simply) states that if something has not been proven true, then it must be false. The rebuttal addresses the third possibility: that the currently available evidence has failed to detect a phenomenon. In the interpretation of medical literature, this means that a study which has failed to demonstrate evidence of a risk has not succeeded in demonstrating the absence of risk. Similarly, a study which has failed to demonstrate a significant difference between two treatments has not demonstrated the absence of difference, only the absence of evidence of a difference.


The idea that the absence of evidence for a phenomenon should imply that there is no such phenomenon is known as the Kehoe principle, named after Robert Kehoe, who argued that the use of leaded petrol was safe because at that stage there was no evidence to the contrary. The opposite view is known as the Precautionary Principle. It holds that in the absence of evidence, one must take a conservative stance and manage uncertain risks in the manner which most effectively serves human safety.


In the absence of evidence, the precautionary principle recommends that the clinician take reasonable measures to avoid threats that are serious and plausible. In this, it may be a more humanistic principle than the alternatives (such as Expected Utility Theory).

In brief:

  • Safest and most humanistic approach
  • Risk-averse
  • The burden of proof of safety is on the investigator
  • The burden of risk and benefit analysis is on the clinician


In its strongest formulation, the Precautionary Principle calls for absolute proof of safety before new treatments or techniques are adopted. Such stringent standards may result in excessive regulation of potentially useful treatment strategies. One may envision a reductio ad absurdum where table salt is outlawed because there is insufficient evidence for its safety. Some authors have suggested that the precautionary principle "replaces the balancing of risks and benefits with what might best be described as pure pessimism". Furthermore, not all experimental questions can be answered with high-level evidence (e.g. in the case of rare diseases with insufficient sample size for RCTs, or in cases where it is unethical to randomise the intervention).

Published data may not offer sufficient evidence. The power of a study influences its ability to discern an effect of a given size, and it is possible that small studies are inadequately powered to detect a small treatment effect. Type 2 errors can be committed in this way.

In brief:

  • Potentially useful treatments may be discarded for lack of evidence
  • Not all treatments can be the subject of RCTs, particularly
    • where sample size is by necessity small
    • where randomisation is unethical
    • where blinding is impossible
  • Not all studies of effective treatments are appropriately powered to detect an effect of appropriate size
  • Not all meta-analyses are able to find all the available evidence, owing to publication bias
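The relationship between sample size, power and Type 2 error described above can be sketched numerically. The following is a rough illustration only (it is not from the college answer): it approximates the power of a two-sided, two-sample comparison using a normal (z-test) approximation, and the function name and chosen numbers are purely illustrative.

```python
import math

def power_two_sample(delta, sigma, n, alpha_z=1.96):
    """Approximate power of a two-sided two-sample z-test,
    with n patients per arm, true difference delta, and SD sigma.
    Uses the normal approximation: power ~ P(Z > z_crit - delta/SE)."""
    se = sigma * math.sqrt(2.0 / n)          # standard error of the difference
    z = abs(delta) / se                      # standardised true effect
    # Standard normal upper-tail probability via the error function
    return 0.5 * (1.0 - math.erf((alpha_z - z) / math.sqrt(2.0)))

# A "small" treatment effect of 0.2 standard deviations:
small_trial = power_two_sample(delta=0.2, sigma=1.0, n=30)   # ~0.12
large_trial = power_two_sample(delta=0.2, sigma=1.0, n=500)  # ~0.89
```

With 30 patients per arm, a real 0.2-SD effect would be detected only about 12% of the time; a "negative" result from such a trial is therefore almost meaningless, whereas the same comparison with 500 patients per arm approaches conventional (80-90%) power.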

In summary:

There is a danger of misinterpreting "negative studies", because studies which have not found statistically significant differences in effect may have been inadequate to detect such an effect. In careful interpretation of medical literature one must be alert to the idea that not all negative studies are truly "negative". Decision-making under uncertainty should be guided by humanistic principles and careful risk-vs-benefit analysis.


Foster, Kenneth R., Paolo Vecchia, and Michael H. Repacholi. "Science and the precautionary principle." Science 288.5468 (2000): 979-981.


Alban, S. "The 'precautionary principle' as a guide for future drug development." European Journal of Clinical Investigation 35.s1 (2005): 33-44.


Peterson, Martin. "The precautionary principle should not be used as a basis for decision‐making." EMBO reports 8.4 (2007): 305-308.


Altman, Douglas G., and J. Martin Bland. "Statistics notes: Absence of evidence is not evidence of absence." BMJ 311.7003 (1995): 485.


Resnik, David B. "The precautionary principle and medical decision making." Journal of Medicine and Philosophy 29.3 (2004): 281-299.


Rabin, Matthew. "Risk aversion and expected‐utility theory: A calibration theorem." Econometrica 68.5 (2000): 1281-1292.


Alderson, Phil. "Absence of evidence is not evidence of absence." BMJ 328.7438 (2004): 476-477.