Last updated on Tue, 12/19/2017 - 16:39
Highest mark: 7.3
A colleague directs your attention to a recently published randomised trial on a therapeutic intervention.
Outline the features of the trial that would lead you to change your practice.
Points to consider in the answer would be:
- Does the population studied correspond with the population the candidate expects to treat?
- Were the inclusion/exclusion criteria appropriate?
- Was the trial methodology appropriate – was there adequate blinding and randomisation?
- Was the primary outcome a clinically relevant or a surrogate endpoint?
- Was the length of follow up adequate?
- Was the trial sufficiently powered to detect a clinically relevant effect?
- Were the groups studied equivalent at baseline?
- Is the statistical analysis appropriate – was there an intention-to-treat analysis, and have differences between groups at baseline been adjusted for? Are there multiple subgroup analyses, and if so, were they specified a priori?
- Is this a single-centre or multi-centre study?
- Were the results clinically significant rather than just statistically significant?
- Is the primary hypothesis biologically plausible with pre-existing supporting evidence?
- Are the findings supported by other evidence – have these results been replicated?
- Would there be logistical and/or financial implications in practice change?
- Are there important adverse effects of the treatment?
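The question of whether a trial was "sufficiently powered" can be made concrete with a back-of-the-envelope sample size calculation. The sketch below uses the standard normal-approximation formula for comparing two proportions; the baseline mortality (30%) and target absolute risk reduction (5%) are purely illustrative assumptions, not figures from any specific trial.

```python
# Rough per-group sample size for a two-arm trial comparing proportions
# (e.g. mortality), using the normal approximation. Illustrative only.
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p_control, p_treatment, alpha=0.05, power=0.80):
    """Patients needed per group to detect p_control vs p_treatment (two-sided)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    delta = p_control - p_treatment
    return ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# To detect a 5% absolute mortality reduction (30% -> 25%)
# with 80% power at a two-sided alpha of 0.05:
n = sample_size_two_proportions(0.30, 0.25)
print(n)  # roughly 1250 patients per group
```

A useful corollary for the "clinically vs statistically significant" bullet: halving the detectable effect roughly quadruples the required sample size, which is why very large trials can produce statistically significant results for differences too small to matter at the bedside.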
This is slightly different to asking "what makes a valid trial" or "how do you judge high-quality evidence", even though these clearly play a role (and in fact the college answer consists of a boring list of such criteria). There are situations where practice is changed by methodologically inferior but otherwise compelling studies; or where expertly designed trials make minimal impact in the daily practice of individuals. A good read on this specific subject is a wonderfully titled 2016 article by John Ioannidis, "Why most clinical research is not useful."
In short, a trial should possess the following features in order to affect practice:
Answers to a real problem. The clinical trial needs to address something which is genuinely a problem, and which needs to be fixed in some way. If there is no problem, then the trial was pointless because existing practice is already good enough (i.e. no matter how good the methodological quality, the trial can be safely ignored because your practice does not need to change). Similarly, if the problem is not sufficiently serious, the cost and consequences of changing practice outweigh the benefit.
Information Gain. The clinical trial should have offered an answer which we don't already know.
Pragmatism. The trial should be related to a real-life population and realistic settings, rather than some idealised scenario.
Patient-centered outcome. Some might argue that research should be aligned with the priorities of patients rather than those of investigators or sponsors.
Transparency. The trial authors should be transparent enough for the results to inspire the confidence needed to change practice on their basis.
Validity. The trial should be constructed with sufficient methodological quality for its results to be taken seriously.
Oh's Intensive Care manual: Chapter 10 (p83), Clinical trials in critical care by Simon Finfer and Anthony Delaney.
JAMA: Users' Guides to the Medical Literature; see if you can get institutional access to these articles.
The CONSORT statement has its own website and is available for all to peruse.
CASP (Critical Appraisal Skills Programme) has checklists for the appraisal of many different sorts of studies; these actually come with tickboxes. One imagines reviewers wandering around a trial headquarters, ticking these boxes on their little clipboards.
Ioannidis, John PA. "Why most clinical research is not useful." PLoS medicine 13.6 (2016): e1002049.