Resources for CRTs

Cluster randomised trials (CRTs) differ from individually randomised trials in the presence of correlation between observations within the same cluster. This has implications for both design and analysis, since the clustering must be taken into account. Moreover, CRTs are often more prone to bias than individually randomised trials, so bias needs to be assessed and reported carefully.

This page provides resources to help trialists and researchers conduct a CRT following existing guidelines and using the most recent methodological developments.

See What is a CRT? for several textbooks that give an excellent overview of the different stages of the design, conduct, analysis and reporting of CRTs, along with practical examples.

Resources below are separated into sections on Design, Analysis and Reporting.

Design: Introduction

This page provides resources to help with designing a CRT. It covers the following aspects: general information, pragmatic trials, pilot and feasibility studies, ethics, methods to identify and prevent bias at the design stage, randomisation methods, and sample size.

Design: general information

Turner et al. reviewed recent developments in the design of CRTs:

Recommendations have been proposed to design efficient CRTs:

Design: Pragmatic trials

Many CRTs are pragmatic in that they aim to test the effectiveness of interventions as delivered in real-world rather than ideal conditions. Although not developed solely for CRTs, the PRECIS-2 tool can help in designing more pragmatic trials:

Design: Pilot and feasibility studies

Pilot and feasibility studies are often useful before conducting a main CRT. Because CRTs are more complex than individually randomised trials, it is important to establish that conducting a CRT is feasible before embarking on the main trial:

This website gives more information on pilot and feasibility studies.

Design: Ethics

In individually randomised trials, consent to participate usually covers the willingness to be randomised, to be subject to the intervention being tested, and to be contacted for follow-up and for data collection. In a CRT these aspects may be separated and opting out of the intervention at an individual level may not be possible. CRTs therefore have unique features that complicate the application of standard ethical guidelines for research.


The Ottawa Statement was developed to provide detailed guidance to researchers, research ethics committees and regulators:

The following paper provides a practical and useful framework to guide researchers and research ethics committees through consent issues in CRTs:

Design: Methods to identify and prevent bias

CRTs can be prone to bias arising from the procedures surrounding patient recruitment. A review of these biases, together with solutions to prevent them, has been proposed:

Caille et al. developed a graphical tool to help identify bias at the design (and reporting) stage:

The tool can easily be created online.

The Cochrane Risk of Bias (RoB) tool is being extended to cluster randomised trials. A draft version of the tool and supporting information can be found on the RoB website.

Design: Randomisation methods

The design chapter in Eldridge and Kerry’s book provides a good introduction to randomisation methods in CRTs:

For parallel CRTs, standard cluster randomisation does not always ensure baseline covariate balance between arms. Ivers et al. have described alternative randomisation techniques, including stratification and minimisation:

Li et al. have investigated the performance of constrained randomisation:

Pseudo cluster-randomisation, as well as the “best allocation” method, is another interesting strategy to deal with selection bias and contamination:

Various practical resources have been produced. See Design – Tools and software.
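
Covariate-constrained randomisation, as investigated by Li et al., restricts the random allocation to the subset of allocations that satisfy a pre-specified balance criterion. The following is a minimal sketch for a two-arm parallel CRT; the cluster identifiers, covariate values and balance tolerance are illustrative assumptions, not taken from any of the referenced papers:

```python
import itertools
import random

def constrained_randomisation(covariate, tolerance, seed=2024):
    """Covariate-constrained randomisation for a two-arm parallel CRT.

    covariate: dict mapping cluster id -> baseline covariate value
               (e.g. practice list size); values below are made up.
    tolerance: maximum allowed difference in arm means of the covariate.
    Returns one allocation drawn at random from the constrained set.
    """
    clusters = sorted(covariate)
    k = len(clusters) // 2
    balanced = []
    # Enumerate every way to place half of the clusters in one arm.
    for arm_a in itertools.combinations(clusters, k):
        arm_b = [c for c in clusters if c not in arm_a]
        mean_a = sum(covariate[c] for c in arm_a) / k
        mean_b = sum(covariate[c] for c in arm_b) / k
        if abs(mean_a - mean_b) <= tolerance:  # balance criterion
            balanced.append((list(arm_a), arm_b))
    # Randomly select one allocation from the balanced subset.
    return random.Random(seed).choice(balanced)

# Hypothetical baseline list sizes for 8 clusters.
sizes = {"c1": 120, "c2": 80, "c3": 200, "c4": 150,
         "c5": 90, "c6": 160, "c7": 110, "c8": 140}
intervention, control = constrained_randomisation(sizes, tolerance=10)
```

With many clusters, full enumeration becomes infeasible and a large random sample of candidate allocations is typically scored instead.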

Design: Sample size

Clustering must be accounted for when estimating the sample size. For an overview of sample size formulas and practical recommendations:

Specific methods have been proposed to estimate the sample size in the following situations:

Fixed number of clusters

Different sized clusters

Three-level CRTs

Time-to-event outcomes

Expected attrition/missing data

Various practical resources have been produced for calculating sample size in CRTs. See Design – Tools and software.
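
The simplest of these formulas inflates the sample size required under individual randomisation by the design effect 1 + (m − 1)ρ, where m is the average cluster size and ρ the intracluster correlation coefficient. A minimal sketch (the input values are illustrative):

```python
import math

def crt_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size for clustering.

    n_individual: total sample size required ignoring clustering.
    cluster_size: average number of participants per cluster (m).
    icc: intracluster correlation coefficient (rho).
    Returns (total participants, number of clusters) after applying
    the design effect 1 + (m - 1) * rho.
    """
    design_effect = 1 + (cluster_size - 1) * icc
    # round(..., 6) guards against floating-point noise before rounding up.
    n_total = math.ceil(round(n_individual * design_effect, 6))
    n_clusters = math.ceil(round(n_total / cluster_size, 6))
    return n_total, n_clusters

# Illustrative inputs: 400 participants needed under individual
# randomisation, 20 per cluster, ICC of 0.05 (design effect 1.95).
n_total, n_clusters = crt_sample_size(400, 20, 0.05)
```

More refined formulas (unequal cluster sizes, three levels, attrition) build on this basic inflation, as covered by the references above.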

Design: Tools and software

Randomisation resources

Practical resource in R for undertaking allocation concealed blocked randomisation:

SAS macro for constrained covariate randomisation of cluster randomised and unstratified stepped-wedge designs:

Stata command to perform covariate-constrained randomisation for achieving baseline balance in CRTs:

Sample size resources

Sample size/power calculations for CRTs in Stata:

Sample size/power calculations for CRTs in R:

Sample size/power calculations for CRTs in SAS:

Online sample size calculator for CRTs:

Stand-alone program to perform power analysis for multilevel data (Moerbeek M, Teerenstra S. Power Analysis of Trials with Multilevel Data. Boca Raton: Chapman and Hall/CRC; 2015)

Design: Intracluster correlation coefficient

For an overview of the intracluster correlation coefficient, see chapter 8 of Eldridge and Kerry’s book:

Sample size calculations for CRTs require postulating a value for the intracluster correlation coefficient (ICC) (see Correlations in CRTs). One may therefore consider conducting a pilot study to estimate the ICC.

During analysis, statistical software packages can estimate the ICC; suitable estimation methods depend on the type of outcome:

Giraudeau cautions that clusters from different groups should not be mixed when estimating the ICC:

Methods to estimate confidence intervals for the ICC are described in:

However, Eldridge et al. suggest that pilot studies alone will usually be too small to estimate parameters required for estimating a sample size for a main CRT:

Instead, reviews of ICC values observed in published trials, and especially work on patterns in ICCs, may help in choosing an appropriate value:
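
For a continuous outcome, one common estimator of the ICC comes from a one-way analysis-of-variance decomposition: ρ = (MSB − MSW) / (MSB + (m − 1)MSW) for clusters of equal size m, where MSB and MSW are the between- and within-cluster mean squares. A stdlib-only sketch with made-up data:

```python
def anova_icc(clusters):
    """ANOVA estimator of the ICC for equal-sized clusters.

    clusters: list of lists, one inner list of outcomes per cluster.
    """
    k = len(clusters)            # number of clusters
    m = len(clusters[0])         # common cluster size
    grand = sum(sum(c) for c in clusters) / (k * m)
    means = [sum(c) / m for c in clusters]
    # Between- and within-cluster mean squares.
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((y - mu) ** 2
              for c, mu in zip(clusters, means)
              for y in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

# Hypothetical outcome data: 3 clusters of 4 participants each
# (deliberately strong clustering so the estimate is visibly large).
data = [[10, 12, 11, 13], [20, 19, 21, 22], [15, 14, 16, 15]]
icc = anova_icc(data)
```

Real trial data would of course need the unequal-cluster-size generalisations described in the references above.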

Analysis: Introduction

The main consideration in the analysis of CRTs, beyond those of a parallel individually randomised trial, is to account for the clustered structure of the data. Observations in the same cluster tend to be more similar to one another than observations in different clusters, and this must be accounted for in the analysis.

We present some general information, followed by specific methods including cluster summary analyses, random effect methods/generalised linear mixed effect models, and generalised estimating equations (GEE). Methods may vary depending on the type of outcome. 

Trialists may need to consider whether methods are appropriate if they have a small number of clusters. There are also methods for dealing with missing data and there are causal inference methods in the situation of non-compliance or indirect effects of the intervention.

If performing a meta-analysis, there are specific methods to include CRT results in meta-analyses.

Finally Bayesian methods have been developed as an alternative framework to the frequentist methods of analysing CRTs.

Analysis: General information

The analysis chapter in Eldridge and Kerry’s book provides a good introduction to analysis in CRTs with lots of examples:

Turner et al. reviewed recent developments in the analysis of CRTs:

The following provide a comparison of the various analysis methods:

Analysis: Cluster summary

In cluster summary analyses, regression models are fitted on cluster-level summaries of the outcome (e.g. the cluster mean). However, if cluster sizes are variable, clusters must be weighted accordingly.
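
A minimal illustration of the weighting issue: weighting each cluster mean by its size recovers the overall individual-level means when cluster sizes vary. The data below are made up, and a full analysis would add a significance test on the cluster summaries:

```python
def cluster_summary_effect(arm0, arm1):
    """Size-weighted difference in cluster-mean outcomes between arms.

    arm0, arm1: lists of (cluster_size, cluster_mean) pairs,
    one pair per cluster.
    """
    def weighted_mean(summaries):
        total = sum(n for n, _ in summaries)
        return sum(n * mean for n, mean in summaries) / total

    return weighted_mean(arm1) - weighted_mean(arm0)

# Hypothetical cluster summaries: (cluster size, mean outcome).
control = [(10, 5.0), (40, 6.0)]
intervention = [(20, 7.0), (30, 8.0)]
# Weighted means: (140 + 240) / 50 = 7.6 vs (50 + 240) / 50 = 5.8.
effect = cluster_summary_effect(control, intervention)
```

An unweighted analysis of the same summaries would give a different answer, which is why weighting matters with variable cluster sizes.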

Analysis: Random effect methods/generalised linear mixed effect models

Adjusting for baseline covariates using cluster summary methods is not straightforward, so individual-level regression models are often preferred. Generalised linear mixed-effects models, also called random-effects models, account for clustering through a cluster-specific random effect.
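
For a continuous outcome, the random-intercept model underlying this approach can be written as:

```latex
y_{ij} = \mu + \beta x_j + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0, \sigma_b^2),
\qquad \varepsilon_{ij} \sim N(0, \sigma_w^2),
```

where y_ij is the outcome of participant i in cluster j, x_j indicates the arm of cluster j, beta is the intervention effect, u_j is the cluster-specific random effect, and the ICC is rho = sigma_b^2 / (sigma_b^2 + sigma_w^2).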

Mixed-effects models are described in sections 6.3.3.4 and 6.3.3.9 of Eldridge and Kerry’s book, for continuous and binary outcomes, respectively:

The following article also discusses generalised linear mixed models:

Analysis: Generalised estimating equations

Generalised estimating equations (GEEs) estimate a population-average effect while accounting for the intracluster correlation.
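
For example, with a link function g and an exchangeable working correlation, the marginal model can be written as:

```latex
g\big(E[y_{ij}]\big) = \mu + \beta x_j,
\qquad \operatorname{Corr}(y_{ij}, y_{i'j}) = \alpha \quad (i \neq i'),
```

where alpha is the common within-cluster correlation; robust (sandwich) standard errors are typically used so that inference remains valid even if the working correlation is misspecified.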

Generalised estimating equations are described in sections 6.3.3.3 and 6.3.3.8 of Eldridge and Kerry’s book, for continuous and binary outcomes, respectively:

The following article also discusses generalised estimating equations:

Analysis: Type of outcome

The following papers discuss the analysis of CRTs when considering a specific outcome:

Continuous outcomes

Binary outcomes

Time-to-event outcomes

Analysis: Small number of clusters

The implementation of the approaches above can be challenging when a small number of clusters are randomised:

Analysis: Missing data

Missing outcomes should be assessed and reported in the analysis of CRTs, and a sensitivity analysis may be warranted. Multiple imputation performs well when data are missing at random (that is, missingness is unrelated to the missing values themselves but may be related to observed data), for both continuous and binary outcomes:

In theory these methods can also account for missing covariate data, but the focus is usually on missing outcomes, since baseline data are generally well recorded in trials.

The following practical resources are for performing multiple imputation accounting for the clustered nature of the data in R:

Using joint modelling in R

Using chained equations in R

Analysis: Causal inference

Because CRTs are often pragmatic, they lend themselves to estimating effectiveness, that is, the effect of the intervention in real-life conditions, including non-compliance and indirect effects.

However, it is also possible to estimate a complier average causal effect (CACE) in CRTs, that is, the effect of the intervention among compliers. Compliers are participants in the intervention group who receive the intervention, and participants in the control group who do not receive it but would have done had they been randomised to the intervention arm.
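
Under standard instrumental-variable assumptions (randomisation as the instrument, no defiers), the complier average causal effect can be estimated by dividing the intention-to-treat effect by the estimated proportion of compliers. A sketch with illustrative numbers:

```python
def cace(itt_effect, p_receive_intervention_arm, p_receive_control_arm):
    """Instrumental-variable estimate of the complier average causal effect.

    itt_effect: intention-to-treat difference in outcomes between arms.
    p_receive_*: proportion receiving the intervention in each arm;
    their difference estimates the proportion of compliers.
    """
    p_compliers = p_receive_intervention_arm - p_receive_control_arm
    return itt_effect / p_compliers

# Illustrative: ITT effect of 2 points, 80% uptake in the intervention
# arm, 10% contamination in the control arm, so 70% compliers.
effect = cace(2.0, 0.8, 0.1)
```

In a CRT, the standard error of this estimate must additionally account for clustering, as discussed in the references above.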

It is also possible to measure the indirect effect through mediation analysis:

Analysis: CRTs in meta-analyses

Cluster and individually randomised trials are often meta-analysed together in systematic reviews. Specific methods to include CRT results in meta-analyses can be found in:
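
One widely used approach, described in the Cochrane Handbook, is to shrink the CRT to an "effective sample size" by dividing by the design effect before combining it with individually randomised trials. A sketch with illustrative numbers:

```python
def effective_sample_size(n, cluster_size, icc):
    """Reduce a CRT sample size by the design effect so the trial can
    be combined with individually randomised trials in a meta-analysis.

    n: number of participants in the CRT (or in one of its arms).
    cluster_size: average cluster size (m).
    icc: intracluster correlation coefficient (rho).
    """
    design_effect = 1 + (cluster_size - 1) * icc
    return n / design_effect

# Illustrative: 600 participants in clusters of 20, ICC 0.02
# (design effect 1.38).
n_eff = effective_sample_size(600, 20, 0.02)
```

Event counts for binary outcomes are reduced by the same factor, so that effect estimates are preserved while their precision is appropriately deflated.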

The Cochrane Risk of Bias (RoB) tool is being extended to cluster randomised trials. This tool can be used to assess the risk of bias in studies in a systematic review. A draft version of the tool and supporting information can be found on the RoB website.

Analysis: Bayesian methods

Bayesian methods have also been developed as an alternative framework to analyse CRTs:

Reporting: Introduction

Adequate reporting of the trial results is essential to ensure transparency and reproducibility of findings. This page provides resources to help with the reporting of CRTs. It covers three aspects: guidelines for reporting, tools and evaluation of the risk of bias, and considerations for complex interventions.

Reporting: Guidelines

The Consolidated Standards of Reporting Trials (CONSORT) statement, first published in 1996 and subsequently updated, includes a checklist of items that should be included in a trial report. Although initially developed for individually randomised trials, an extension for CRTs is available:

Many high-impact journals now require that authors follow the CONSORT checklist. It can be used alongside other relevant extensions, such as those for pilot trials or non-inferiority trials. All supporting documents are freely available on the CONSORT website.

Campbell et al. also provide recommendations for the reporting of the intracluster correlation coefficient in CRTs:

Reporting: Current standard

Several systematic reviews have shown that reporting of CRT results is suboptimal in various contexts. Here we list a selection of these:

Reporting: Tools and evaluation of the risk of bias

The timeline cluster tool, which can also be used at the design stage, allows a graphical representation of the trial chronology, by intervention arm. It is flexible enough to describe a wide variety of CRT designs, such as cross-sectional, cross-over or longitudinal CRTs. It can also help with identifying the risk of bias surrounding the trial chronology:

A website providing the template and examples can be found here.

In the context of systematic reviews, risk of bias can also be assessed using the revised Cochrane risk of bias tool (RoB 2), for which an extension for CRTs is being developed.

Reporting: Complex interventions

Complex interventions are common in CRTs. The following guidelines cover their development and reporting:

The following graphical representations can be used to describe complex interventions, which are often evaluated using CRTs: