Question 23

In the context of a randomised control trial comparing a trial drug with placebo:

a)  briefly explain the following terms:

  • Type 1 error
  • Type 2 error
  • Study power
  • Effect size

b)  List the factors that influence sample size.


College Answer

Type 1 error
The null hypothesis is incorrectly rejected (a false positive result). Type 1 errors may result in the implementation of a therapy that is in fact ineffective.

Type 2 error
The null hypothesis is incorrectly accepted (a false negative result). Type 2 errors may result in the rejection of effective treatment strategies.

Study power
Power is equal to 1-β. Thus if β = 0.2, the power is 0.8 and the study has an 80% probability of detecting a difference if one exists.

Effect size
Effect size (∆) is the clinically significant difference the investigator wants to detect between the study groups. This is arbitrary but needs to be reasonable and accepted by peers. It is harder to detect a small difference than a large difference. The effect size helps us to know whether the difference observed is a difference that matters.

Factors influencing sample size
•    Selected values for the significance level α, the Type 2 error rate β and the effect size ∆ (smaller values of each mean a larger sample size)
•    Variance/SD in the underlying population (a larger variance means a larger sample size)

Discussion

The college presents a concise and effective answer to this question, which should serve as a model. Below is a non-model answer overgrown with the unnecessary fat of references and digressions.

a)

Type 1 error: The incorrect rejection of a null hypothesis.

  • A false positive study.
  • Finding a treatment effect where there actually is none.
  • Results in the implementation of an ineffective treatment.

Type 2 error: The incorrect acceptance of the null hypothesis (i.e. failure to reject the null hypothesis when it is in fact false).

  • A false negative study.
  • Finding no treatment effect, when there actually is one.
  • Results in an effective treatment being wrongly discarded.
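
To make both error types concrete, here is a minimal simulation sketch (my own illustration, not part of any college answer): it repeatedly re-runs a drug-versus-placebo comparison with a two-sample t-test, first with no true difference (so every "significant" result is a Type 1 error) and then with a genuine difference (so every "non-significant" result is a Type 2 error). The group size, effect and α = 0.05 are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def proportion_significant(true_difference, n_per_group=50, alpha=0.05, n_trials=2000):
    """Simulate repeated drug-vs-placebo trials and count how often p < alpha."""
    significant = 0
    for _ in range(n_trials):
        placebo = rng.normal(0.0, 1.0, n_per_group)
        drug = rng.normal(true_difference, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(drug, placebo)
        significant += p_value < alpha
    return significant / n_trials

# No true effect: every "positive" trial is a Type 1 error (rate should be close to alpha).
print("Type 1 error rate:", proportion_significant(true_difference=0.0))

# True effect of 0.5 SD: every "negative" trial is a Type 2 error; 1 - beta is the power.
power = proportion_significant(true_difference=0.5)
print("Type 2 error rate (beta):", 1 - power, " Power:", power)
```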

Study power: The probability that the study correctly rejects the null hypothesis, when the null hypothesis is false.

  • Expressed as (1-β), where β is the probability of Type 2 error (i.e. the probability of incorrectly accepting the null hypothesis).
  • Generally, the power of a study is agreed to be 80% (i.e. β = 0.2), because anything less would incur too great a risk of Type 2 error, and anything more would be prohibitively expensive in terms of sample size.
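
As a rough illustration of how power relates to the other design choices (a sketch assuming the standard normal-approximation formula for a two-sided comparison of two means, with made-up values for the effect size, SD and group size):

```python
from scipy.stats import norm

def approximate_power(effect_size, sd, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided, two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)            # critical value for the chosen alpha
    se = sd * (2 / n_per_group) ** 0.5           # standard error of the difference in means
    return norm.cdf(effect_size / se - z_alpha)  # probability of crossing the critical value

# e.g. 64 patients per arm to detect a 0.5 SD difference gives roughly 80% power
print(approximate_power(effect_size=0.5, sd=1.0, n_per_group=64))
```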

Effect size: a quantitative reflection of the magnitude of a phenomenon; in this case, the magnitude of the positive effects of a drug on the study population.

  • In this case, it is the difference in the incidence of an arbitrarily defined outcome between the treatment group and the placebo group.
  • Effect size suggests the clinical relevance of an outcome.
  • The effect size is agreed upon a priori so that a sample size can be calculated (as the study needs to be powered appropriately to detect a given effect size).
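
As a toy illustration of the "difference in incidence" idea, with invented event counts rather than real trial data, the effect size of a drug-versus-placebo trial with a binary outcome can be expressed as an absolute risk reduction (which also yields the number needed to treat):

```python
def absolute_risk_reduction(events_treatment, n_treatment, events_placebo, n_placebo):
    """Effect size as the difference in outcome incidence between the two arms."""
    return events_placebo / n_placebo - events_treatment / n_treatment

# Invented example: 30/200 deaths on placebo vs 20/200 on the trial drug
arr = absolute_risk_reduction(20, 200, 30, 200)
print(f"Absolute risk reduction: {arr:.1%}")      # 5.0%
print(f"Number needed to treat: {1 / arr:.0f}")   # 20
```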

Factors which influence sample size:

There is a good article on this by Eng in Radiology (2003); a worked sample size sketch also follows the list below.

  • Alpha value: the level of significance (normally 0.05)
  • Beta-value: the probability of incorrectly accepting the null hypothesis (normally 0.2)
  • The statistical test one plans to use
  • The variance of the population (the greater the variance, the larger the sample size)
  • Estimated measurement variability (similar to population variance)
  • The effect size (the smaller the effect size, the larger the required sample)
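
To show how these factors pull on the sample size, here is a minimal sketch using the usual normal-approximation formula for comparing two means, n per group ≈ 2 × (z_(1−α/2) + z_(1−β))² × σ² / Δ². This is my own addition rather than anything prescribed by the Radiology article, and the numbers below are illustrative.

```python
import math
from scipy.stats import norm

def n_per_group(alpha, power, sd, effect_size):
    """Normal-approximation sample size per arm, two-sided comparison of two means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # smaller alpha -> larger z -> larger sample
    z_beta = norm.ppf(power)            # higher power  -> larger z -> larger sample
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect_size) ** 2)

# Conventional alpha = 0.05 and power = 0.8; halving the effect size roughly quadruples n.
print(n_per_group(0.05, 0.80, sd=1.0, effect_size=0.5))    # about 63 per arm
print(n_per_group(0.05, 0.80, sd=1.0, effect_size=0.25))   # about 252 per arm
```

Note that the conventional α = 0.05 and power of 0.8 enter only through their z-values; tightening either one inflates the required sample.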

References

There is an online Handbook of Biological Statistics which has an excellent overview of power analysis.

Kelley, Ken, and Kristopher J. Preacher. "On effect size." Psychological methods 17.2 (2012): 137.

Moher, David, Corinne S. Dulberg, and George A. Wells. "Statistical power, sample size, and their reporting in randomized controlled trials." JAMA 272.2 (1994): 122-124.

Cohen, Jacob. "A power primer." Psychological bulletin 112.1 (1992): 155.

Dupont, William D., and Walton D. Plummer Jr. "Power and sample size calculations: a review and computer program." Controlled clinical trials 11.2 (1990): 116-128.

Eng, John. "Sample Size Estimation: How Many Individuals Should Be Studied?" Radiology 227.2 (2003): 309-313.