One typically relies on some sort of dispassionate machine to decide which patients end up in the treatment group, and which in the control group. This eliminates selection bias and confounding.

Essentially, the aim is to ensure that both groups have an equal chance of developing the treatment outcome before the treatment is administered. Then, you need to ensure that nobody can predict which group any given patient is going to be allocated to - this is called allocation concealment. This way, both groups maintain an equal chance of developing that outcome - they remain identical, with the exception of the administered treatment. Randomisation must be truly random - there cannot be any sort of predictable sequence to it, otherwise allocation concealment becomes impossible.

Blinding is the next step - ensuring that none of the trial participants know who is receiving which treatment. This is not always possible.

According to Oh's Manual, poor allocation concealment can exaggerate the treatment effect by about 40%, and poor blinding by another 17%.

## Randomisation in clinical trials

• Assignment of clinical trial participants so that each participant has an equal chance of being assigned to any of the groups.
• Successful randomisation requires that group assignment cannot be predicted in advance.
• Minimises selection bias
• Allows probability theory to be used to express the likelihood that chance is responsible for the differences in outcome among groups.

## Simple randomisation

• Each participant is allocated by an independent random draw, e.g. a computer-generated random sequence.
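As a minimal sketch of what that computer is doing (the function name and parameters here are illustrative, not from any particular trial software):

```python
import random

def simple_randomise(n_subjects, arms=("A", "B"), seed=None):
    """Assign each subject independently, with equal probability of either arm."""
    rng = random.Random(seed)
    return [rng.choice(arms) for _ in range(n_subjects)]

allocation = simple_randomise(10, seed=1)
# Nothing guarantees equal group sizes here - in a small trial the imbalance
# can be substantial, which is the motivation for block randomisation below.
```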

## Block randomisation

• Arrangement of experimental subjects in blocks, designed to keep the group numbers the same.
• Usually, the block size is a multiple of the number of treatments (i.e. if it is a binary Drug A vs Drug B trial, the blocks would be in multiples of two).
• Small blocks keep the groups more closely balanced at all times, but they make upcoming allocations easier to predict, which threatens allocation concealment.
• The example of block sizes of 4 in a trial of drug A versus drug B conveniently answers Question 19 from the first paper of 2016. Coincidentally, this is the same example used by Altman and Bland in their classic 1999 article, "How to randomise".
• That example now, verbatim:

"...sometimes we want to keep the numbers in each group very close at all times. Block randomisation (also called restricted randomisation) is used for this purpose. For example, if we consider subjects in blocks of four at a time there are only six ways in which two get A and two get B: 1:AABB 2:ABAB 3:ABBA 4:BBAA 5:BABA 6:BAAB.  We choose blocks at random to create the allocation sequence. Using the single digits of the previous random sequence and omitting numbers outside the range 1 to 6 we get 5623665611. From these we can construct the block allocation sequence BABA/BAAB/ABAB/ABBA/BAAB, and so on. The numbers in the two groups at any time can never differ by more than half the block length. Block size is normally a multiple of the number of treatments."

According to Question 19 from the first paper of 2016, the official Delaney definition of block randomisation is as follows:

"Simple randomisation may result in unequal treatment group sizes; block randomisation is a method that may protect against this problem and is particularly useful in small trials.

In the context of a trial evaluating drug A or drug B and with block sizes of 4, there are 6 possible blocks of randomisation: AABB, ABAB, ABBA, BAAB, BABA, BBAA.

One of the 6 possible blocks is selected randomly and the next 4 study participants are assigned according to the order of the block.  The process is then repeated as needed to achieve the necessary sample size."
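The procedure described above can be sketched in a few lines of Python (the function name is illustrative; the six blocks are exactly those listed in the quoted example):

```python
import random
from itertools import permutations

def block_randomise(n_subjects, block="AABB", seed=None):
    """Allocate subjects by choosing whole blocks at random."""
    rng = random.Random(seed)
    # The six distinct orderings of AABB: AABB, ABAB, ABBA, BAAB, BABA, BBAA
    blocks = sorted(set(permutations(block)))
    sequence = []
    while len(sequence) < n_subjects:
        sequence.extend(rng.choice(blocks))
    return sequence[:n_subjects]

allocation = block_randomise(20, seed=1)
# At any point, the two groups can differ by at most half the block length
# (here, by at most 2), and at every block boundary they are exactly equal.
```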

## Cluster randomisation

Features of a cluster-randomised trial:

• Groups of patients rather than individuals are randomised
• A group may be as large as a hospital or an ICU
• This is done because sometimes, it would be totally impractical to randomise an intervention to each individual patient; for example where the intervention is a large scale organisational change
• The number of patients in each cluster does not matter as much as the total number of clusters, and power design involves deciding how many clusters one requires (patients within a cluster are more likely to have similar outcomes).
• The outcome for each patient can no longer be assumed to be independent of that for any other patient.

Advantages of cluster randomisation:

• Able to test interventions applied to whole services or communities
• Increased logistical convenience (less difficulty than individual randomisation)
• Greater acceptability by participants (when something viewed as a worthwhile intervention is delivered to a large group rather than to individuals)
• Both the direct and indirect effects of an intervention can be captured in a population, i.e. the study is more pragmatic (a good example is a study of infectious disease: not only do the randomised participants benefit from a decontaminating treatment, but so does the population exposed to them)
• This increases the external validity

Disadvantages of cluster randomisation:

• The statistical power of a cluster-randomised trial is greatly reduced in comparison with a similarly sized individually randomised trial (Campbell & Grimshaw, 1998)
• The number of patients required may be twice or thrice that of a comparable individually randomised trial
• Power calculations require a specialised approach which takes the intracluster correlation coefficient into account; standard calculations will yield an underpowered trial once the analysis properly accounts for clustering.
• The analysis itself needs to account for the cluster design: "If the clustering effect is ignored p values will be artificially extreme, and confidence intervals will be over-narrow, increasing the chances of spuriously significant findings and misleading conclusions". Apparently, this adjustment does not routinely happen.
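The sample-size inflation can be made concrete with the standard design effect formula, 1 + (m − 1) × ICC, where m is the cluster size (this is the relationship discussed by Kerry & Bland, 1998; the function names below are illustrative):

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for clustering: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual, cluster_size, icc):
    """Sample size an individually randomised design would need, inflated for clustering."""
    return n_individual * design_effect(cluster_size, icc)

# With 200 patients per arm, clusters of 20 and a modest ICC of 0.05,
# the design effect is 1.95 - i.e. the trial needs roughly twice the patients,
# consistent with the "twice or thrice" figure above.
deff = design_effect(20, 0.05)
n_needed = clustered_sample_size(200, 20, 0.05)
```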

## Allocation concealment

According to Question 19 from the first paper of 2016, the official Delaney definition of allocation concealment is:

"Procedure for protecting the randomization process and ensuring that the clinical investigators and those involved in the conduct of the trial are not aware of the group to which the subject has been allocated"

In human language:

• This is a technique of preventing selection bias.
• The selection of patients is randomised, and nobody knows what treatment the next enrolled patient will receive.
• A truly random sequence of allocations prevents the investigators from being able to predict the allocated treatment on the basis of previously allocated treatments.

## Difference between blinding and allocation concealment

• Allocation concealment prevents the investigators from predicting who is getting what treatment before the patient is enrolled.
• Blinding prevents the investigators from knowing who is getting what treatment after the patient is enrolled.

## Stratification

• Stratification is the partitioning of subjects and results by a factor other than the treatment given.
• Stratification ensures that pre-identified confounding factors are equally distributed, to achieve balance. The objective is to remove "nuisance variables", e.g. the presence of neutropenic bone marrow transplant recipients in a trial performed on septic patients. One would want to ensure that the treatment group and the placebo group had equal numbers of these haematology disasters.
• According to Question 19 from the first paper of 2016, the official Delaney definition of stratification is as follows:

"Stratification is a process that protects against imbalance in prognostic factors that are present at the time of randomisation.

A separate randomisation list is generated for each prognostic subgroup. Usually limited to 2-3 variables because of increasing complexity with more variables"
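The "separate randomisation list per subgroup" idea can be sketched by reusing block randomisation within each stratum (the function name and the strata labels here are illustrative):

```python
import random
from itertools import permutations

def stratified_lists(strata, n_per_stratum, seed=None):
    """A separate block-randomised allocation list for each prognostic stratum."""
    rng = random.Random(seed)
    blocks = sorted(set(permutations("AABB")))  # block size 4, two treatments
    lists = {}
    for stratum in strata:
        seq = []
        while len(seq) < n_per_stratum:
            seq.extend(rng.choice(blocks))
        lists[stratum] = seq[:n_per_stratum]
    return lists

lists = stratified_lists(["neutropenic", "non-neutropenic"], 8, seed=1)
# Within each stratum the treatment groups stay balanced, so the
# haematology disasters end up evenly split between drug and placebo.
```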

## Minimisation algorithm

• Minimisation is a method of adaptive stratified sampling.
• The objective is to minimise the imbalance between groups of patients in a clinical trial by ensuring that the treatment group and placebo group each get an equal number of patients with some sort of predetermined characteristics which might act as confounding factors.
• The minimisation algorithm carefully places patients in groups according to the pre-identified confounding factors. Only the first patient is randomly allocated.
• Minimisation is methodologically equivalent to true randomisation, but it does not correct for unknown confounders (only the known, pre-determined ones)
• According to Question 19 from the first paper of 2016, the official Delaney definition of minimisation algorithm is:

"an alternative to stratification for maintaining balance in several prognostic variables.  The minimisation algorithm maintains a running total of the prognostic variables in patients that have already been randomised and then subsequent patients are assigned using a weighting system that minimizes imbalance in those prognostic variables. "

### References

Altman, Douglas G., and J. Martin Bland. "How to randomise." BMJ 319.7211 (1999): 703-704.

Divine, George W., J. Trig Brown, and Linda M. Frazier. "The unit of analysis error in studies about physicians’ patient care behavior." Journal of General Internal Medicine 7.6 (1992): 623-629.

Kerry, Sally M., and J. Martin Bland. "The intracluster correlation coefficient in cluster randomisation." BMJ 316.7142 (1998): 1455-1460.