From the inclusion of this topic in the primary curriculum, one might think that the college expects its fellows to be competent in designing and running a clinical trial. Pity that the majority submerge into the rich thick ooze of private practice. Either way, this topic has come up in the primary exams, so it's worth knowing a bit about it. A corresponding chapter (Steps in designing and conducting a clinical trial) exists in the Required Reading section for the Fellowship Exam, but is little more than a "stub" by the Wikipedia definition, there only as an aide-mémoire for the fatigued Fellowship candidate. Here, however, the stages of trial design are dissected in some detail.
How much detail is required? Hard to say. One might be able to draw one's own conclusions from Question 10 from the second paper of 2016 ("Discuss the stages in designing a clinical trial"). The college answer was literally four lines, of which two were wasted on complaining about poorly prepared candidates. According to those comments from the examiners, "An outline of the background literature review, defining the hypothesis, study design, ethics, funding, consent, conduct and follow-up was expected." With nothing better than this for guidance, the following chapter attempts to answer part A.a from the 2014 Primary Exam curriculum, "Describe the stages in the design of a clinical trial."
First, in briefest summary, as seen in LITFL:
Stages of clinical trial design
Now, in luxurious detail.
The majority of the material below has been drained out of Statistical Methods for Anaesthesia and Intensive Care (Myles & Gin, 1st ed., Oxford: Butterworth-Heinemann, 2001). This is the recommended text in the 2014 revision of the primary curriculum, and it has a nice 10-page section (pp. 135-145) titled "How to design a clinical trial".
Develop a study protocol. The aim is to minimise bias and to maximise precision. That development has numerous facets to it:
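One of those facets, and the classic bias-minimising device, is the randomisation scheme. Purely as an illustration (the fixed block size, group labels, and random seed below are this author's assumptions, not anything prescribed by Myles and Gin), permuted-block randomisation can be sketched like so:

```python
import random

def blocked_randomisation(n_patients, block_size=4,
                          groups=("treatment", "control"), seed=42):
    """Permuted-block randomisation: within every block of block_size
    patients, each group appears an equal number of times, so the arms
    stay balanced throughout recruitment."""
    assert block_size % len(groups) == 0, "block must divide evenly among groups"
    rng = random.Random(seed)          # fixed seed only for a reproducible example
    allocations = []
    while len(allocations) < n_patients:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)             # the order is shuffled within each block
        allocations.extend(block)
    return allocations[:n_patients]

print(blocked_randomisation(8))
# After every complete block of 4, exactly 2 patients sit in each arm.
```

Efird (cited in the references below) describes a refinement in which the block size itself is randomly varied, so that staff recruiting at the end of a block cannot predict the next allocation.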
Perform a pilot study. This, in the Myles and Gin book, is described as "an important and often neglected process". It tests the feasibility of the full-scale trial, checking the assumptions made in the course of developing the trial protocol. Results may require that the final trial protocol be modified, or that the required sample size be recalculated (as the sketch below illustrates).
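For example, suppose the pilot study reveals that the outcome variable is more variable than the protocol assumed. Below is a minimal sketch of the conventional normal-approximation formula for the per-group sample size when comparing two means; the effect size, standard deviations, and function name are illustrative assumptions, not figures from any cited source:

```python
import math
from scipy.stats import norm

def per_group_sample_size(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided comparison of two means:
    n = 2 * (z_(1-alpha/2) + z_(power))^2 * sd^2 / delta^2
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# Hypothetical scenario: the pilot SD (12.0) turned out larger than the
# SD assumed in the original protocol (8.0).
print(per_group_sample_size(delta=5.0, sd=8.0))    # original plan: 41 per group
print(per_group_sample_size(delta=5.0, sd=12.0))   # revised plan: 91 per group
```

The point to notice is how sensitive n is to the standard deviation estimate: a pilot SD 50% larger than assumed more than doubles the required sample size, which is exactly the sort of finding that forces a protocol revision.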
Ogden and Goldberg (2002) are an excellent resource for this topic, offering well-reasoned answers to the question "why was my research not funded?". In summary:
Basic rules as to how to do it properly can be found in The Australian Clinical Trial Handbook from the TGA. In short, as the trial runs there needs to be:
The final outcome of a trial is some sort of paper. That paper should be formatted according to The Consolidated Standards of Reporting Trials (CONSORT) statement. In their media release there is a table (Table 1) expanding upon the thirty-odd numbered items which must be satisfied for successful compliance. This table is not reproduced here, even by this details-hungry author. The primary exam candidate is left to decide for themselves how much of their time it is worth.
Long-term reassessment of the study population is sometimes warranted and brings about new information; this becomes more valid if it is planned well in advance, and if the population and outcome measures are agreed upon before the original study is concluded.
Ethics in human research is governed both by Hippocratic principles (i.e. don't kill any patients in the name of science) and by utilitarian principles (to benefit many, a few may be exposed to some risk). But how do we decide how much risk is acceptable, and how many must benefit in order to tolerate it? If some of the experimental subjects might die as a result, how many people need to be saved in order to make this an acceptable loss? Who is the ultimate arbiter of right and wrong?
As in most things in life, the general principles guiding medical research can be summarised in the recommendation "don't do what the Nazis would have done". This suggestion was put into recognisable modern form with the Nuremberg Code, which was drafted at the end of the Doctors' Trial in Nuremberg (1947). This itself was based (very closely, to the point of plagiarism) on the German Guidelines for Human Experimentation of 1931, to the extent that Ghooi (2011) wondered how the authors were able to pass it off as original work. The code consisted of six (later, ten) statements which define legitimate medical research, as contrasted with the grotesque experiments performed by the 23 Nazi doctors and administrators standing trial. These were:
This is not quoted very often, as it was largely superseded by the Declaration of Helsinki in 1964. That thing is now in its seventh revision (2013), and the original ten commandments from Nuremberg have bloated and mutated in line with medical advances into 37 points, covering topics which range from the ethical use of placebo therapies to issues of sponsorship and privacy.
Now, that all sounds well and good, but neither document was ever made into law. In Australia, the Declaration of Helsinki has informed and guided the NHMRC standards as laid out in the National Statement on Ethical Conduct in Human Research (2007, updated 2015). These guidelines direct the approval of research by local Human Research Ethics Committees (HRECs).
To some extent, randomisation in a trial violates the principle of informed consent (directly, because by definition nobody is "informed" as to which treatment the patient will be getting). This issue is dissected in detail by Benjamin Freedman (NEJM, 1987). The principle of equipoise is what underlies the ongoing practice of randomising patients into trials. It dictates that it is appropriate to randomise to alternative treatments in a situation where neither the clinician nor the patient has any particular preference or reason to favour one treatment over another. In other words, because neither treatment is known to carry more risk than the other, it is reasonable to offer either treatment with the confidence that no harm will be done. "The requirement is satisfied if there is genuine uncertainty within the expert medical community — not necessarily on the part of the individual investigator".
This is of course absolute bullshit, because the clinician definitely has some inclination that one treatment is superior to the other (in fact, that's the hypothesis of the study). And as the trial progresses and data are collected, whatever equipoise was present will be disturbed if the trial data overwhelmingly favour one treatment over another. How to deal with this? Freedman (1987) offers several viewpoints from contemporary authors, which are essentially philosophical backdoors into unethical practice and "frank counsels of desperation" by his description, relying on such bizarre suggestions as the proposition that many people are altruistic enough to forgo some personal gain (or even survival) in the interest of progress.
Thus, the concept of clinical equipoise is needed. Freedman's widely quoted paper suggests the following:
"We may state the formal conditions under which such a trial would be ethical as follows: at the start of the trial, there must be a state of clinical equipoise regarding the merits of the regimens to be tested, and the trial must be designed in such a way as to make it reasonable to expect that, if it is successfully concluded, clinical equipoise will be disturbed."
That means the trial must be expected to resolve a dispute among doctors. The investigators may have their equipoise disturbed as much as they like, provided they recognise that their less-favoured treatment is preferred by colleagues who are also responsible and competent people. As trial results roll in and interim analysis data become available, an independent body (e.g. an independent data and safety monitoring committee established to monitor the trial) can independently arrive at the conclusion that the available evidence favours one treatment so strongly that equipoise is disturbed in the entire clinician community. In this case, to protect the public, the trial can be brought to an end (see the chapter on trials which end prematurely).
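To make the monitoring committee's job concrete: one long-established way of operationalising "equipoise is disturbed" is the Haybittle-Peto stopping rule, under which the trial is stopped early only if an interim analysis crosses a very conservative threshold (conventionally p < 0.001), leaving the final analysis interpretable at close to the usual α of 0.05. Freedman does not prescribe this particular rule; the sketch below, and its interim p-values, are purely illustrative:

```python
def haybittle_peto(interim_p, early_threshold=0.001):
    """Haybittle-Peto rule: recommend early termination only when an
    interim result is overwhelmingly convincing (p < 0.001), so that
    the final analysis can still be read at close to the nominal 0.05."""
    return "stop early" if interim_p < early_threshold else "continue"

# Hypothetical interim looks by an independent monitoring committee:
for look, p in enumerate([0.04, 0.008, 0.0004], start=1):
    print(f"interim analysis {look}: p = {p} -> {haybittle_peto(p)}")
# Only the third look (p = 0.0004) justifies stopping, even though the
# earlier looks were already "significant" at the conventional 0.05.
```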
Myles, Paul S., and Tony Gin. Statistical Methods for Anaesthesia and Intensive Care. 1st ed. Oxford: Butterworth-Heinemann, 2001.
Ospina-Tascón, Gustavo A., Gustavo Luiz Büchele, and Jean-Louis Vincent. "Multicenter, randomized, controlled trials evaluating mortality in intensive care: doomed to fail?." Critical care medicine 36.4 (2008): 1311-1322.
Smith, Gordon CS, and Jill P. Pell. "Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials." BMJ: British Medical Journal 327.7429 (2003): 1459.
Vincent, Jean-Louis. "We should abandon randomized controlled trials in the intensive care unit." Critical care medicine 38.10 (2010): S534-S538.
Hébert, Paul C., et al. "The design of randomized clinical trials in critically ill patients." CHEST Journal 121.4 (2002): 1290-1300.
Jadad, Alejandro R., and Murray Enkin. Randomized controlled trials: questions, answers, and musings. Blackwell Pub., 2007.
Walker, Wendy. "The strengths and weaknesses of research designs involving quantitative measures." Journal of research in nursing 10.5 (2005): 571-582.
Sanson-Fisher, Robert William, et al. "Limitations of the randomized controlled trial in evaluating population-based health interventions." American journal of preventive medicine 33.2 (2007): 155-161.
Levin, Kate Ann. "Study design VII. Randomised controlled trials." Evidence-based dentistry 8.1 (2007): 22-23.
Efird, Jimmy. "Blocked randomization with randomly selected block sizes." International journal of environmental research and public health 8.1 (2011): 15-20.
Stang, Andreas. "Randomized controlled trials—an indispensible part of clinical research." Deutsches Ärzteblatt International 108.39 (2011): 661.
Singal, Amit G., Peter DR Higgins, and Akbar K. Waljee. "A primer on effectiveness and efficacy trials." Clinical and translational gastroenterology 5.1 (2014): e45.