Parametric and nonparametric are two broad classifications of statistical procedures. According to Hoskin (2012), "A precise and universally acceptable definition of the term 'nonparametric' is not presently available". It is generally easier to give examples of parametric and nonparametric statistical procedures than to define the terms. A reasonable working definition is that parametric statistical procedures assume the population follows a normal distribution; the "parameters" in question are values such as the mean and standard deviation. Nonparametric statistical procedures make no assumptions about the shape or parameters of the population distribution.
This chapter answers parts from Section A(e) of the Primary Syllabus, "Describe the appropriate selection of non-parametric and parametric tests and tests that examine relationships (e.g. correlation, regression)". This topic was examined in Question 2(p.2) from the first paper of 2009. Prior to this, it was examined in Question 4 from the second Fellowship Exam paper of 2004.
Description of parametric tests
Parametric tests are more powerful (more likely to detect a true effect), but they require assumptions to be made about the data, e.g. that the data are normally distributed (following a bell curve). If the data deviate strongly from these assumptions, a parametric test can lead to incorrect conclusions.
If the sample size is small, parametric tests may also lead to incorrect conclusions, because with few observations the normality of the sample distribution cannot be relied upon (the central limit theorem no longer guarantees an approximately normal sampling distribution).
Examples of parametric tests: Student's t-test (paired and unpaired), analysis of variance (ANOVA), Pearson's correlation coefficient, and linear regression.
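To make the idea concrete, here is a minimal sketch (not from the source text) of the pooled two-sample t statistic, the quantity behind the unpaired Student's t-test. The sample data are purely illustrative. Note how the calculation is built entirely from the distribution's parameters (means and standard deviations), which is exactly why the test depends on the normality and equal-variance assumptions.

```python
import math
from statistics import mean, stdev


def two_sample_t(a, b):
    """Pooled two-sample t statistic (equal-variance form).

    Parametric: assumes both samples come from normal
    distributions with equal variance, so the comparison can be
    reduced to means and a pooled standard deviation.
    """
    na, nb = len(a), len(b)
    # Pooled variance: a weighted average of the two sample variances
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    # Difference in means, scaled by its estimated standard error
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))


# Illustrative (made-up) samples: two small groups of measurements
group_a = [5, 6, 7, 8, 9]
group_b = [1, 2, 3, 4, 5]
t = two_sample_t(group_a, group_b)  # ≈ 4.0 for these samples
```

In practice one would compare t against the t distribution with na + nb − 2 degrees of freedom to obtain a p-value; a library such as SciPy (`scipy.stats.ttest_ind`) handles this in full.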
Description of non-parametric tests
Non-parametric tests make no assumptions about the distribution of the data. If the assumptions for a parametric test are not met (e.g. the distribution is strongly skewed), one may be able to use an analogous non-parametric test.
Non-parametric tests are particularly suitable for small sample sizes (<30). However, when the parametric assumptions do hold, non-parametric tests have less power than their parametric counterparts, so a larger effect or sample may be needed to reach significance.
Examples of non-parametric tests: the Mann-Whitney U test, the Wilcoxon signed-rank test, the Kruskal-Wallis test, Spearman's rank correlation, and the chi-square test.
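As a contrast with the parametric approach, here is a minimal sketch (not from the source text) of the Mann-Whitney U statistic, the non-parametric analogue of the unpaired t-test. The sample data are again purely illustrative. Notice that the statistic uses only the ordering of the values, never their means or standard deviations, which is why no distributional assumptions are required.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b.

    Nonparametric: counts, over all pairs, how often a value
    from a exceeds a value from b (ties count as half). Only
    the rank ordering matters, not the distribution's shape.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u


# Illustrative (made-up) samples, skew-free here for simplicity
group_a = [5, 6, 7, 8, 9]
group_b = [1, 2, 3, 4, 5]
u = mann_whitney_u(group_a, group_b)  # counts favourable pairs out of 5 × 5 = 25
```

A useful sanity check on the definition is that U(a, b) + U(b, a) always equals len(a) × len(b), since every pair is counted once from each direction. For a p-value, the statistic is compared against tabulated critical values or a normal approximation; `scipy.stats.mannwhitneyu` provides this in full.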