# Question 10

Inspect the data representation shown below.

10.1. What form of data representation is depicted here?

10.2. With respect to the study plots what is represented by:

• The horizontal lines?
• The position of the square?
• The size of the square?

10.3. From the data depicted what could be inferred with regard to the effectiveness of the treatment under investigation?

10.4. What further information relating to the performance of this analysis would you require in order to gauge the accuracy of the conclusions?

10.1. What form of data representation is depicted here?

Forest Plot or Meta Analysis Graph

10.2. With respect to the study plots what is represented by: The horizontal lines?
The position of the square? The size of the square?

The position of the square and the horizontal line indicate the point estimate and the 95%
confidence intervals of the odds ratio respectively. The size of the square indicates the weight of the study.

10.3. From the data depicted what could be inferred with regard to the effectiveness of the treatment under investigation?

The depicted data suggest the treatment is not more effective than control as the 95% confidence limits of the combined odds ratio cross the vertical line.

10.4. What further information relating to the performance of this analysis would you require in order to gauge the accuracy of the conclusions?

• Definition of inclusion criteria for studies
• Assessment of methodological quality
• Measurement of heterogeneity
• Assessment of publication bias

## Discussion

This topic is explored in LITFL, where they call it a "forrest plot", perhaps out of respect for Pat Forrest. This is substantially better than Wikipedia, where this form of data representation is referred to as a blobbogram. The example LITFL use for their explanation is derived from the college question.

Anyway. The college answer is correct but very brief, and probably represents something like the "passing grade" for this 10-mark question. With that in mind, and free from the need to be concise, one can launch into an exhaustingly verbose dissection of this question.

10.1 - This is a forest plot. It represents the results of a meta-analysis of studies.

10.2 - The standards for labelling and graphical representation are well summarised by this Cochrane document (however, it appears that careful adherence to standards is no defence against the absence of useful content).

• The horizontal lines: the confidence interval of the individual study
• The position of the square: a point estimate of the odds ratio (OR)
• The size of the square: the weight of the study according to the weighting rules of the meta-analysis, likely representing the sample size and statistical power. This is a powerful tool of psychological manipulation. A paper by a couple of psychiatrists dissected this practice, and suggested that a failure to use square size to identify study weight "may result in unnecessary attention being attracted to those smaller studies with wider confidence intervals that put more ink on the page (or more pixels on the screen)".
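The relationship between these three graphical elements can be made concrete with a little arithmetic. The following is a minimal sketch using a made-up 2×2 table for a single hypothetical study (none of these counts come from the depicted plot); it computes the odds ratio point estimate (the square's position), the 95% confidence interval (the horizontal line), and the inverse-variance weight (the square's size):

```python
import math

# Hypothetical 2x2 table for one study (all counts invented for illustration):
#              event   no event
# treatment      15       85
# control        25       75
a, b, c, d = 15, 85, 25, 75

# Point estimate of the odds ratio: the position of the square
or_hat = (a * d) / (b * c)

# 95% CI, calculated on the log-odds scale: the span of the horizontal line
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(or_hat)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

# Inverse-variance weight (before normalisation): the size of the square
weight = 1 / se_log_or**2

print(f"OR = {or_hat:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}, weight = {weight:.1f}")
```

Note how the weight is simply the reciprocal of the variance of the log odds ratio: a study with narrower confidence limits automatically earns a bigger square.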

10.3 - From the forest plot, one can infer that although there is a trend towards a positive treatment effect, it does not achieve statistical significance, because the 95% confidence interval for the combined odds ratio crosses the vertical line (an OR of 1.0, which means "no association"). Thus, on the basis of this meta-analysis one would be forced to conclude that the treatment has no demonstrated effect.
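The "crossing the line" logic can be sketched numerically. The four per-study log odds ratios and standard errors below are invented (the second study is deliberately an outlier, loosely in the spirit of the plot described); fixed-effect inverse-variance pooling then shows whether the combined 95% CI excludes an OR of 1.0:

```python
import math

# Hypothetical per-study log odds ratios and standard errors (invented numbers)
log_ors = [-0.4, 0.9, -0.2, -0.1]
ses = [0.30, 0.25, 0.40, 0.20]

# Fixed-effect inverse-variance pooling: each study's weight is 1/SE^2
weights = [1 / se**2 for se in ses]
pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low = math.exp(pooled_log_or - 1.96 * pooled_se)
high = math.exp(pooled_log_or + 1.96 * pooled_se)

# The treatment effect is "significant" only if the 95% CI excludes OR = 1.0
crosses_null = low < 1.0 < high
print(f"pooled OR {math.exp(pooled_log_or):.2f} "
      f"(95% CI {low:.2f}-{high:.2f}); crosses 1.0: {crosses_null}")
```

With these particular numbers the pooled confidence interval straddles 1.0, which is exactly the situation the question describes: a point estimate on one side of the line, but no statistically significant effect.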

10.4 - "What further information relating to the performance of this analysis would you require in order to gauge the accuracy of the conclusions?" This is a thinly veiled question about the assessment of the validity of a meta-analysis, as the points in the college answer demonstrate. In that context, one would theoretically be interested in every aspect of the analysis.

Generic points in the assessment of validity of a meta-analysis include the following:

• Research questions are clearly defined.
• Definition of inclusion criteria for studies is clear.
• Methodological quality of the included studies is rigorously assessed, and the assessment method is transparent.
• A pooled estimate is calculated, and the calculation is transparent.
• A graphical representation of the results is available (Forest plot).
• A measurement of heterogeneity is carried out, with appropriate corrections for heterogeneity (e.g. use of a fixed-effects or random-effects analysis).
• An assessment of publication bias is attempted (funnel plot).

If one were to consider only the presented graph, one might instead respond with questions specifically relevant to this meta-analysis and its authors:

• Inclusion and exclusion criteria. Study 2 is a massive outlier; it would be interesting to learn why it was included, and whether other excluded studies had similar characteristics. Potentially, the exclusion of this study would shift the overall OR off the vertical line.
• Assessment of methodological quality. Again - if the methodology of Study 2 was called into question and it were excluded, this meta-analysis would reach substantially different conclusions. It would be important to learn how the authors of the meta-analysis evaluated its methodology, and whether they were correct to include this study.
• Search strategy and attempts to detect publication bias. There are only 4 studies in the meta-analysis. The addition of another 1 or 2 studies may have a significant impact on the overall OR. If the search strategy was somehow inadequate, studies which might meet the inclusion criteria may have been missed.
• Dealing with heterogeneity. This is important, because there is substantial heterogeneity (again I point to Study 2). Excluding studies simply because they do not agree with the majority defeats the purpose of the meta-analysis, but it is important to correct for heterogeneity-inducing differences between trials. This can be done with the use of a random-effects model, which uses a "heterogeneity parameter" as a coefficient to downgrade the precision and weighting of each individual study's effect estimate. This model assumes that in each study the intervention had a different effect, and views each study as a random sample from a hypothetical population of similar studies. The effect of this on the forest plot may not be magical; it merely redistributes the weighting (usually giving more weight to smaller studies and less to large ones; Cochrane's handbook suggests that this is because "small studies are more informative for learning about the distribution of effects across studies than for learning about an assumed common intervention effect"). Having used such a heterogeneity correction technique, one can be more confident that the resulting summed OR is not damaged by the inclusion of a garbage study. However, the use of a random-effects model can exacerbate publication bias if the results of smaller studies are systematically different from the results of larger ones (e.g. small studies are independent and find no treatment effect, but large studies are funded by Big Pharma and find a treatment effect where there is none). Cochrane recommends that meta-analysis authors compare the results of a fixed-effects and a random-effects analysis to see whether the smaller studies have a significant effect on the effect size.
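The random-effects mechanics described in the last point can be sketched with the classic DerSimonian-Laird estimator. The per-study log odds ratios and standard errors below are invented for illustration (one study is a deliberate outlier to generate heterogeneity); the sketch estimates the between-study variance tau² from Cochran's Q and shows how it flattens the weight distribution:

```python
import math

# Hypothetical per-study log odds ratios and standard errors (invented;
# the second study is a deliberate outlier to induce heterogeneity)
log_ors = [-0.4, 0.9, -0.2, -0.1]
ses = [0.30, 0.25, 0.40, 0.20]

# Fixed-effect weights and pooled estimate
w = [1 / se**2 for se in ses]
fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)

# Cochran's Q: a weighted sum of squared deviations from the pooled estimate
q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_ors))
df = len(log_ors) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights: tau^2 inflates every study's variance, so the
# weight distribution flattens (large studies lose relative weight,
# small studies gain)
w_re = [1 / (se**2 + tau2) for se in ses]
pooled_re = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"tau^2 = {tau2:.3f}; random-effects pooled OR = {math.exp(pooled_re):.2f}")
```

The key design point is the single added term `tau2` in each denominator: when heterogeneity is absent (tau² = 0) the model collapses back to the fixed-effect analysis, which is exactly why Cochrane's suggested fixed-versus-random comparison is informative.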

## References

Schriger, David L., et al. "Forest plots in reports of systematic reviews: a cross-sectional study reviewing current practice." International Journal of Epidemiology 39.2 (2010): 421-429.

Lewis, Steff, and Mike Clarke. "Forest plots: trying to see the wood and the trees." BMJ 322.7300 (2001): 1479-1480.

Anzures‐Cabrera, Judith, and Julian Higgins. "Graphical displays for meta‐analysis: An overview with suggestions for practice." Research Synthesis Methods 1.1 (2010): 66-80.

Cochrane: "Considerations and recommendations for figures in Cochrane reviews: graphs of statistical data." 4 December 2003 (updated 27 February 2008).

Reade, Michael C., et al. "Bench-to-bedside review: Avoiding pitfalls in critical care meta-analysis–funnel plots, risk estimates, types of heterogeneity, baseline risk and the ecologic fallacy." Critical Care 12.4 (2008): 220.

DerSimonian, Rebecca, and Nan Laird. "Meta-analysis in clinical trials." Controlled Clinical Trials 7.3 (1986): 177-188.

Biggerstaff, B. J., and R. L. Tweedie. "Incorporating variability in estimates of heterogeneity in the random effects model in meta-analysis." Statistics in medicine 16.7 (1997): 753-768.

The Cochrane Handbook: 9.5.4 "Incorporating heterogeneity into random-effects model"