Basic definitions in statistics

Occasionally, the college likes to ask about basic definitions in statistics. Most often, these questions appear in the Fellowship Exam, where the number of questions of the "define this or that concept" variety has required a dedicated revision chapter full of quick definitions. In the Primary, so far the only question of this sort has been Question 15 from the first paper of 2011, which asked for definitions of sensitivity, specificity, NPV and PPV. This vaguely recalls Section A(g) from the 2014 Primary Exam syllabus, which urges the candidates to *"understand the terms sensitivity, specificity, positive and negative predictive value and how these are affected by the prevalence of the disease in question".*

At risk of breaking SEO, the list of definitions from the Fellowship Exam revision chapter is reproduced here to simplify revision.

**Confidence interval (CI):**

- The range of values within which the "true" value is expected to lie.
- A 95% CI means that if the trial were repeated an infinite number of times, 95% of the results would fall within this range of values.

- The CI gives an indication of the precision of the sample mean as an estimate of the "true" population mean
- A wide CI can be caused by small samples or by a large variance within a sample.
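As a brief illustrative sketch (not part of any college answer; the sample values are invented), the normal-approximation 95% CI for a sample mean can be calculated like so:

```python
import statistics
from math import sqrt

def ci_95(sample):
    """Normal-approximation 95% CI for the sample mean (z = 1.96)."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / sqrt(len(sample))  # standard error of the mean
    return mean - 1.96 * sem, mean + 1.96 * sem

# A small sample or a large variance widens the interval:
low, high = ci_95([4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3])
```

Note that the interval scales with the standard error, which is why small samples and large variances both produce wide intervals.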

**p-value:**

- The probability of the observed result arising by chance
- The *p*-value is the chance of getting the reported study result (or one even more extreme) when the null hypothesis is actually true.

**Type 1 error:**

- This is a "false positive".
- The null hypothesis is incorrectly rejected (i.e. there really is no treatment effect, but the study finds one)
- The alpha value determines the risk of this happening. With an alpha value of 0.05 (the conventional significance threshold), there is a 5% chance of making a Type 1 error.

**Type 2 error:**

- This is a "false negative"
- The null hypothesis is incorrectly accepted (i.e. there really is a treatment effect, but the study fails to find it)
- The beta value determines the risk of this happening. Where beta is 0.2 (a common setting, corresponding to a power of 0.8, as power = 1 - beta), there is a 20% chance of making a Type 2 error.

**Power:**

- The power of a statistical test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false.
- The chance of a study detecting a "true" effect where one exists
- Power = (1 - false negative rate)
- Power = (1 - beta)
- Conventionally, power is set at 80% (i.e. a 20% chance of a false negative result)
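The interplay of power, alpha, effect size and sample size can be made concrete with the standard two-sample z-test power approximation (the formula is conventional; the function name and numbers below are purely illustrative):

```python
from statistics import NormalDist

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sample z-test, for a standardised
    effect size (difference in means / SD) and equal group sizes."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)        # two-tailed critical value
    z_effect = effect_size * (n_per_group / 2) ** 0.5    # expected test statistic
    return 1 - NormalDist().cdf(z_alpha - z_effect)      # P(reject H0 | H0 is false)

# A "medium" effect (0.5 SD) with 64 patients per group gives roughly 80% power:
power = power_two_sample(0.5, 64)
```

Doubling the group size increases the power; halving the effect size demands many more patients to maintain it.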

**Factors which determine sample size:**

- **Alpha value:** the level of significance (normally 0.05)
- **Beta value:** the accepted risk of a Type 2 error (normally 0.2, corresponding to a power of 0.8)
- The statistical test you plan to use
- The variance of the population (the greater the variance, the larger the sample size)
- The effect size (the smaller the effect size, the larger the required sample)

**Effect size:**

- Effect size is a quantitative reflection of the magnitude of a phenomenon, e.g. the difference in the incidence of an arbitrarily defined outcome between the treatment group and the placebo group.
- Effect size suggests the clinical relevance of an outcome.
- The effect size is agreed upon *a priori* so that a sample size can be calculated (as the study needs to be powered appropriately to detect a given effect size)
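The dependence of sample size on alpha, beta and effect size can be sketched with the standard two-group formula (a simplified normal-approximation version; the function name and worked numbers are illustrative only):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, beta=0.2):
    """n = 2 * (z_(1-alpha/2) + z_(1-beta))^2 / effect_size^2, rounded up.
    effect_size is the standardised difference (delta / SD)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(1 - beta)        # 0.84 for beta = 0.2
    return ceil(2 * (z_a + z_b) ** 2 / effect_size ** 2)

# The smaller the effect size, the larger the required sample:
n_medium = n_per_group(0.5)    # about 63 per group
n_small = n_per_group(0.25)    # about 252 per group
```

Halving the effect size quadruples the required sample, because the effect size enters the denominator squared.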

**Event rate:**

- The actual event rate in a group (treatment or placebo). Essentially, it is the incidence rate.

**Relative risk (RR):**

- The rate of events in the treatment group, divided by the rate of events in the control group.

The college describes relative risk reduction as "the difference in event rates between 2 groups expressed as proportion of the event rate in the untreated group".

**Odds ratio (OR):**

- The odds ratio represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure.
- An OR = 1 suggests there is no association.
- If the CI for an OR includes 1, then the OR is not statistically significant (i.e. there might not be an association)

**Relative risk reduction (RRR):**

- RRR = absolute risk reduction divided by the control group event rate.
- Alternatively, one can calculate it by subtracting the relative risk (RR) from 1.
- Thus, RRR = (1 - RR)

**Absolute risk reduction (ARR):**

- This is the difference between the baseline (control group) risk and the treatment group risk.
- It is an effective way of demonstrating a treatment effect.
- ARR = incidence in exposed - incidence in unexposed
- It is a measure of the absolute effect on risk in those exposed, compared to the unexposed.

**Number needed to treat (NNT):**

- The inverse of the absolute difference in outcome rates between the treatment group and the control group.

- NNT = 1 / (control event rate - experimental event rate)
- One must use the absolute, rather than the relative, values here. NNT is the inverse of absolute risk reduction.
- Let's say the absolute risk reduction is 10%. Thus, NNT = 1/0.1, or 10.
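A minimal sketch tying these risk measures together (the event rates here are made up for illustration):

```python
def treatment_effect_measures(control_rate, treatment_rate):
    """Risk measures derived from two event rates (proportions, 0 to 1)."""
    rr = treatment_rate / control_rate     # relative risk
    rrr = 1 - rr                           # relative risk reduction
    arr = control_rate - treatment_rate    # absolute risk reduction
    nnt = 1 / arr                          # number needed to treat
    return rr, rrr, arr, nnt

# e.g. a 20% control event rate reduced to 10% by treatment:
rr, rrr, arr, nnt = treatment_effect_measures(0.20, 0.10)
# rr = 0.5, rrr = 0.5, arr = 0.1, nnt = 10
```

Note that the NNT comes from the absolute (not relative) risk reduction, which is why it is such an honest measure: a dramatic-sounding 50% RRR can hide a tiny ARR and a huge NNT.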

**Sensitivity** = true positives / (true positives + false negatives)

- This is the proportion of diseased patients who were correctly identified

**Specificity** = true negatives / (true negatives + false positives)

- This is the proportion of healthy patients in whom disease was correctly excluded

**Positive predictive value (PPV):**

- The proportion of positive test results which are truly positive is the positive predictive value
- PPV describes the likelihood of the disease or outcome of interest, given a positive test result.
- PPV = true positives / total positives (true and false)

**Negative predictive value (NPV):**

- The proportion of negative test results which are truly negative is the negative predictive value
- NPV describes the likelihood of having no disease (or avoiding an outcome), given a negative test result
- NPV = true negatives / total negatives (true and false)

Question 15 from the first paper of 2011 asked about the influence of prevalence on NPV and PPV. Myles and Gin is offered as a reference, but no specific chapter is referred to. The textbook contains Chapter 8, *Predicting outcome: diagnostic tests or predictive equations* (p.94). In summary, when there is a high prevalence of a disease in a population, a test will have a high PPV in spite of being a complete failure as a test. Similarly, a poor test will have a high NPV when the prevalence of a disease is low.

A discussion of Bayes' theorem in relation to NPV and PPV was *"noted in more comprehensive answers"* to Question 15 from the first paper of 2011, according to the examiners' remarks. To what extent such a discussion was *expected* is not mentioned, and it is unclear whether the additional marks earned thereby would be worth the additional expenditure of time and effort, particularly if time is short. For the revising exam candidate who does not wish to sacrifice important cardiovascular and respiratory physiology topics to an in-depth exploration of Bayes' theorem, the following brief points would probably be enough.

**Bayes' theorem:**

- This is a formula used to calculate the probability of having the disease, given a positive test result.
- It combines the prior probability (prevalence) with test sensitivity to calculate the PPV.
- In short, PPV = (sensitivity × prevalence) / (probability of a positive test result)
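The effect of prevalence on PPV can be demonstrated numerically with this formula (the test characteristics below are invented, but the arithmetic is standard Bayes' theorem):

```python
def ppv_from_prevalence(sensitivity, specificity, prevalence):
    """Bayes' theorem: PPV = (sens * prev) / P(positive test), where
    P(positive) = sens * prev + (1 - spec) * (1 - prev)."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# The same test (90% sensitive, 90% specific) at two different prevalences:
ppv_high = ppv_from_prevalence(0.9, 0.9, 0.50)   # 90% PPV when disease is common
ppv_low = ppv_from_prevalence(0.9, 0.9, 0.01)    # about 8% PPV when disease is rare
```

In other words, a perfectly respectable test becomes nearly useless for "ruling in" a rare disease, because the false positives swamp the true positives.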

**Receiver operating characteristic (ROC) curve:**

- This is a plot of sensitivity vs. false positive rate (1 - specificity) for a range of test cutoff values
- Sensitivity is on the y-axis, from 0% to 100%
- The ROC curve graphically represents the compromise between sensitivity and specificity in tests which produce results on a numerical scale, rather than binary (positive vs. negative) results
- The ROC curve is used to determine the cutoff point at which sensitivity and specificity are optimal.

**Area under the ROC curve (AUC):**

- AUC is the area under the ROC curve.
- The higher the AUC, the more accurate the test
- An AUC of 1.0 means the test is 100% accurate
- An AUC of 0.5 (50%) means the ROC curve is a straight diagonal line, which represents the "ideal bad test", one which is only ever accurate by pure chance.
- When comparing two tests, the more accurate test is the one with an ROC curve further to the top left corner of the graph, with a higher AUC.
- The best cutoff point for a test (which separates positive from negative values) is the point on the ROC curve which is closest to the top left corner of the graph.
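For the curious, the AUC has a useful probabilistic interpretation: it is the probability that a randomly chosen diseased patient scores higher on the test than a randomly chosen healthy one. This makes it easy to compute directly from raw scores (the scores below are invented):

```python
def auc_from_scores(diseased_scores, healthy_scores):
    """AUC = probability that a random diseased patient scores higher than
    a random healthy patient; ties count as half."""
    pairs = [(d, h) for d in diseased_scores for h in healthy_scores]
    wins = sum(1.0 if d > h else 0.5 if d == h else 0.0 for d, h in pairs)
    return wins / len(pairs)

# A test whose scores separate the two groups fairly well:
auc = auc_from_scores([0.8, 0.7, 0.9, 0.6], [0.2, 0.4, 0.5, 0.7])
```

If the two score distributions overlapped completely, every pairwise comparison would be a coin toss and the AUC would collapse to 0.5, i.e. the diagonal line of the "ideal bad test".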

- A tangent at a point on the ROC curve represents the likelihood ratio for a single test value

**Positive likelihood ratio** = sensitivity / (1 - specificity)

- The factor by which the odds of having the disease increase when the test is positive

**Negative likelihood ratio** = (1 - sensitivity) / specificity

- The factor by which the odds of having the disease change when the test is negative (for a useful test, they decrease)

**Pre-test probability** = (true positives + false negatives) / total sample

**Pre-test odds** = pre-test probability / (1 - pre-test probability)

**Post-test odds** = likelihood ratio × pre-test odds
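These three definitions chain together neatly; a short sketch with invented test characteristics:

```python
def post_test_probability(sensitivity, specificity, pretest_probability):
    """Post-test probability after a positive result:
    convert probability to odds, multiply by LR+, convert back."""
    lr_positive = sensitivity / (1 - specificity)
    pretest_odds = pretest_probability / (1 - pretest_probability)
    posttest_odds = lr_positive * pretest_odds
    return posttest_odds / (1 + posttest_odds)

# A 90% sensitive, 90% specific test (LR+ = 9) with a 25% pre-test probability:
p = post_test_probability(0.9, 0.9, 0.25)
# pre-test odds = 1:3, post-test odds = 3:1, so p = 0.75
```

This is the practical payoff of likelihood ratios: they let one update a clinical suspicion (the pre-test probability) into a post-test probability without rebuilding the whole 2×2 table.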