At various stages, the CICM fellowship exam has expected its candidates to either define or calculate sensitivity, specificity, a predictive value or a likelihood ratio. Question 24 from the second paper of 2009 even asked about ROC curves. More mundane questions have included the following selection:
- Question 20 from the second paper of 2016 (sensitivity, specificity, PPV, NPV and accuracy)
- Question 19.2 from the first paper of 2010 (Calculate sensitivity, specificity, PPV and NPV)
- Question 24 from the second paper of 2009 (ROC curves)
- Question 29.2 from the first paper of 2008 (Calculate sensitivity, specificity, PPV and NPV)
- Question 15 from the first paper of 2007 (Calculate sensitivity, specificity, PPV, NPV and PLR)
- Question 13 from the first paper of 2005 (Define sensitivity, specificity, PPV and NPV)
- Question 14 from the second paper of 2002 (Define sensitivity, specificity, PPV and NPV)
Additionally, in the primary exam the college asked about the definitions of sensitivity, specificity, NPV and PPV in Question 15 from the first paper of 2011.
Probably the best, most comprehensive reference for this topic would have to be the 2008 article by Ana-Maria Šimundić. It is the main source for the information in the summary below.
Sensitivity and specificity
- Sensitivity = true positives / (true positives + false negatives)
- This is the proportion of diseased patients who are correctly identified by the test
- Specificity = true negatives / (true negatives + false positives)
- This is the proportion of healthy patients in whom disease was correctly excluded
- Both are unaffected by the prevalence of the disease
- Increasing a test's sensitivity generally makes it less specific, and vice versa
- SNOUT and SPIN: a SeNsitive test rules OUT disease, a SPecific test rules IN disease
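As a concrete illustration, here is a minimal Python sketch computing both values from a hypothetical 2×2 table; the counts are invented purely for demonstration, not drawn from any real study.

```python
# Hypothetical 2x2 table counts, invented purely for illustration
tp, fn = 90, 10    # 100 diseased patients: 90 test positive, 10 test negative
tn, fp = 160, 40   # 200 healthy patients: 160 test negative, 40 test positive

sensitivity = tp / (tp + fn)   # proportion of diseased patients correctly identified
specificity = tn / (tn + fp)   # proportion of healthy patients correctly excluded

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.80
```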
Positive and negative predictive value
- The Positive Predictive Value (PPV) is the proportion of positive test results which are true positives, i.e. the probability that a patient with a positive result actually has the disease
- PPV = true positives / total positives (true and false)
- The Negative Predictive Value (NPV) is the proportion of negative test results which are true negatives, i.e. the probability that a patient with a negative result is actually free of the disease
- NPV = true negatives / total negatives (true and false)
Unlike sensitivity and specificity, PPV and NPV take into account the community prevalence of the disease.
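To see that prevalence dependence concretely, the following sketch recalculates PPV and NPV from Bayes' theorem for the same hypothetical test as above (90% sensitivity, 80% specificity) at two invented prevalence values:

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV for a test applied at a given disease prevalence."""
    tp = sens * prevalence               # P(test positive and diseased)
    fp = (1 - spec) * (1 - prevalence)   # P(test positive and healthy)
    tn = spec * (1 - prevalence)         # P(test negative and healthy)
    fn = (1 - sens) * prevalence         # P(test negative and diseased)
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.30):   # a rare disease vs. a common one
    ppv, npv = predictive_values(0.90, 0.80, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

With sensitivity and specificity held fixed, the PPV of this invented test climbs from about 4% at 1% prevalence to about 66% at 30% prevalence, which is the whole point of the distinction.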
Accuracy
All the measures described here are, in some sense or another, measures of accuracy. It is disturbing that most people, when confronted with the task of defining accuracy, are usually unable to give more than the common household definition (i.e. "the quality of being correct or true to some objective standard"). In statistics, the definition of accuracy is governed by the ISO, which defines it as follows:
- Accuracy is the proximity of measurement results to the true value
- Precision is the repeatability, or reproducibility of the measurement
Accuracy is occasionally referred to as "diagnostic accuracy" or "diagnostic effectiveness" and is expressed as the proportion of correctly classified subjects among all subjects:
- Accuracy = (true positives + true negatives) / (total)
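Continuing with the same invented 2×2 table from the sketch above, accuracy falls straight out of the counts:

```python
tp, fn, tn, fp = 90, 10, 160, 40   # the same invented counts as above
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy = {accuracy:.2f}")   # (90 + 160) / 300 = 0.83
```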
Youden's index
- This is a measure of a test's overall discriminative power, used to compare its performance with that of other tests.
- YI = (sensitivity + specificity) - 1
- For a test with no diagnostic value (no better than chance), Youden's index equals 0; for a perfect test, it equals 1.
- Unfortunately, it does not differentiate between sensitivity and specificity: a test with very poor sensitivity and very good specificity ends up with the same index as a test with excellent sensitivity and virtually no specificity, as the sketch below demonstrates.
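A short sketch makes that last point obvious; the two test profiles are invented for illustration.

```python
def youden(sensitivity, specificity):
    """Youden's index: (sensitivity + specificity) - 1."""
    return sensitivity + specificity - 1

# Two hypothetical tests with opposite strengths share the same index
print(f"{youden(0.95, 0.40):.2f}")   # excellent sensitivity, poor specificity: 0.35
print(f"{youden(0.40, 0.95):.2f}")   # poor sensitivity, excellent specificity: 0.35
```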
Receiver operating characteristic curve (ROC curve)
Primary exam Question 14 from the second paper of 2012 also asked about the ROC curve. A more in-depth discussion is carried out in the required reading section for the primary exam. Here, all the trimmings have been stripped away, leaving brutalist concrete point-form.
- The ROC curve is a plot of sensitivity vs. false positive rate, for a range of diagnostic test results.
- Sensitivity is on the y-axis and the false positive rate (1 - specificity) is on the x-axis, each running from 0% to 100%
- The ROC curve graphically represents the compromise between sensitivity and specificity in tests which produce results on a numerical scale, rather than as a binary result (positive vs. negative)
- ROC analysis can be used for diagnostic tests with outcomes measured on ordinal, interval or ratio scales.
- The ROC curve can be used to determine the cutoff point at which sensitivity and specificity are optimal.
- All possible combinations of sensitivity and specificity that can be achieved by changing the test's cutoff value can be summarised using a single parameter, the area under the ROC curve (AUC).
- The higher the AUC, the more accurate the test
- An AUC of 1.0 means the test is 100% accurate
- An AUC of 0.5 (50%) means the ROC curve is a straight diagonal line, which represents the "ideal bad test", one which is only ever accurate by pure chance.
- When comparing two tests, the more accurate test is the one with an ROC curve further to the top left corner of the graph, with a higher AUC.
- The best cutoff point for a test (which separates positive from negative values) is the point on the ROC curve which is closest to the top left corner of the graph.
- The cutoff values can be selected according to whether one wants more sensitivity or more specificity.
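As a concrete sketch of how the curve is built, the following snippet sweeps a cutoff across invented test scores for diseased and healthy patients, collects the (false positive rate, sensitivity) pairs, and integrates the AUC by the trapezoidal rule. All numbers are illustrative.

```python
# Invented numeric test results; higher scores suggest disease
diseased = [4.1, 5.3, 6.0, 6.8, 7.5, 8.2, 9.0]
healthy  = [2.0, 2.9, 3.5, 4.4, 5.1, 5.8, 6.5]

# Sweep every observed value as a candidate cutoff ("positive" if score >= cutoff),
# recording one (false positive rate, sensitivity) point per cutoff
cutoffs = sorted(set(diseased + healthy), reverse=True)
points = [(0.0, 0.0)]
for c in cutoffs:
    sens = sum(x >= c for x in diseased) / len(diseased)
    fpr = sum(x >= c for x in healthy) / len(healthy)
    points.append((fpr, sens))
points.append((1.0, 1.0))

# Area under the curve by the trapezoidal rule
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.2f}")   # about 0.86 for these invented data
```

This is the simplest empirical construction of a ROC curve; statistical packages draw essentially the same thing, usually with tie-handling and confidence intervals layered on top.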
Advantages of ROC curves:
- A simple graphical representation of the diagnostic accuracy of a test: the closer the apex of the curve toward the upper left corner, the greater the discriminatory ability of the test.
- Allows a simple graphical comparison between diagnostic tests
- Allows a simple method of determining the optimal cutoff values, based on what the practitioner thinks is a clinically appropriate (and diagnostically valuable) trade-off between sensitivity and false positive rate.
- Also allows a more complex (and more exact) measure of a test's accuracy: the AUC
- The AUC in turn can be used as a simple numeric rating of diagnostic test accuracy, which simplifies comparison between diagnostic tests.
Likelihood ratio
A tangent at a point on the ROC curve represents the likelihood ratio for a single test value
- Positive likelihood ratio = sensitivity / (1-specificity)
- How much more likely a positive result is in a patient with the disease than in a patient without it
- Negative likelihood ratio = (1-sensitivity) / specificity
- How much more likely a negative result is in a patient with the disease than in a patient without it
- Pre-test probability = (true positive + false negative) / total sample
- Pre-test odds = pre-test probability / (1 - pre-test probability)
- Post-test odds = likelihood ratio × pre-test odds
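Putting these definitions together, here is a minimal sketch walking from pre-test probability to post-test probability; it assumes the same hypothetical 90%-sensitive, 80%-specific test as above and an invented 20% pre-test probability.

```python
sens, spec = 0.90, 0.80    # the same hypothetical test as above
pretest_prob = 0.20        # invented pre-test probability (e.g. clinical suspicion)

plr = sens / (1 - spec)                               # positive likelihood ratio = 4.5
pretest_odds = pretest_prob / (1 - pretest_prob)      # 0.25
posttest_odds = plr * pretest_odds                    # 1.125
posttest_prob = posttest_odds / (1 + posttest_odds)   # odds back to probability: ~0.53

print(f"PLR = {plr:.1f}")
print(f"post-test probability after a positive result = {posttest_prob:.2f}")
```

In other words, a positive result from this invented test raises the probability of disease from 20% to about 53%.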