ISSN: 2378-315X BBIJ

Biometrics & Biostatistics International Journal
Editorial
Volume 2 Issue 3 - 2014
A Tale of Two Randomization Procedures
Vance W Berger*
National Cancer Institute, University of Maryland Baltimore County, USA
Received: April 08, 2015 | Published: April 16, 2015
*Corresponding author: Vance W Berger, University of Maryland Baltimore County, Biometry Research Group, National Cancer Institute, 9609 Medical Center Drive, Rockville, MD 20850, USA, Tel: (240) 276-7142, Email:
Citation: Berger VW (2014) A Tale of Two Randomization Procedures. Biom Biostat Int J 2(3): 00031. DOI: 10.15406/bbij.2015.02.00031

Abstract

Perhaps the most fundamental dimension along which trial quality is, or should be, judged is the one that made it into the name, RANDOMIZATION. How well do we, as a society, do in ensuring that only the best randomization procedures are used in the pinnacle of evidence-based medicine, the randomized trial? Sadly, the answer is not very well, and this is uniform across all disease areas, all journals and all research groups. The emperor’s new clothes have yet to be exposed, and so the charade continues unabated, with the near ubiquitous choice of blocked randomization, despite its offering only weak encryption, over the vastly superior maximal procedure, offering strong encryption. Nor is this choice even justified. Nowhere in the literature is there an argument to suggest that blocked randomization is superior to, or even equivalent to, the maximal procedure. But by avoiding the issue altogether, researchers are able to implicitly justify the use of an unjustifiable procedure, one that could never be justified explicitly. The best we can do is point out the folly.

Keywords: False Controversy; Maximal procedure; Permuted blocks; Randomized trials; Weak encryption.

Editorial

Perhaps the most fundamental dimension along which trial quality is, or should be, judged is the one that made it into the very name of the randomized clinical trial, RANDOMIZATION. So it is not only fair, but also imperative, to ask how well we, as a society, do in ensuring that only the best randomization procedures are used in the pinnacle of evidence-based medicine, the randomized trial. Sadly, the answer is not very well, and this is uniform across all disease areas, all journals and all research groups.

The usual methods for evaluating trial quality, such as the Jadad score, represent nothing more than a coarse Eddington fishing net, perhaps suitable for catching outright fraud and those major biases and flaws that have made it into prime time, but not the equally important ones whose 15 minutes of fame are yet to come. This latter class of smaller fish, the ones that slip right through the net, includes flawed randomization procedures that offer only weak encryption, despite the fact that strong encryption is both necessary and readily available. This particular version of the emperor’s new clothes has yet to be exposed as such, and so the charade continues unabated, with the near ubiquitous choice of blocked randomization, despite its offering only weak encryption, over the vastly superior maximal procedure [1,2], offering strong encryption.
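
To make the weak encryption concrete, consider a hypothetical two-arm trial randomized in permuted blocks of size four. Within each block the final allocation is always determined by the earlier ones, and whenever the first two allocations fall on the same arm the last two are determined as well, so an investigator who knows (or correctly guesses) the block size can predict roughly a third of all assignments with certainty. The following minimal Python sketch, offered purely for illustration (the function names and parameters are invented here, not drawn from any standard package), counts these forced allocations:

import random

def permuted_block_sequence(n_blocks, block_size=4):
    # Two-arm allocation list built from balanced, shuffled blocks.
    sequence = []
    for _ in range(n_blocks):
        block = ['A'] * (block_size // 2) + ['B'] * (block_size // 2)
        random.shuffle(block)
        sequence.extend(block)
    return sequence

def count_forced(sequence, block_size=4):
    # Allocations an observer who knows the block size can predict
    # with certainty from the earlier assignments in the same block.
    forced = 0
    for start in range(0, len(sequence), block_size):
        a = b = 0
        for arm in sequence[start:start + block_size]:
            if a == block_size // 2 or b == block_size // 2:
                forced += 1  # one arm's quota is full, so this allocation is already known
            if arm == 'A':
                a += 1
            else:
                b += 1
    return forced

random.seed(1)
seq = permuted_block_sequence(n_blocks=250, block_size=4)
print(count_forced(seq), 'of', len(seq), 'allocations were forced')

Across many blocks of size four the proportion of forced allocations settles near one third, and every forced allocation in an unmasked trial is an opportunity for selection bias; this is the sense in which permuted blocks offer only weak encryption.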

It is worth noting, even emphasizing, that this choice is never justified in practice, nor could it be. Nowhere in the literature is there an argument to suggest that blocked randomization is superior to, or even equivalent to, the maximal procedure. This is a false controversy. But by avoiding the issue altogether, researchers can implicitly justify the use of an unjustifiable procedure that could never be justified explicitly. Each time they do, they not only invalidate their own trial and preclude the possibility of allocation concealment; they also set off a ripple effect that empowers other researchers to do the same. Each time a trial is conducted with permuted block randomization, it lends perverse credibility to the method, which then becomes a standard, and is that much more likely to be used in future trials, and that much harder to dislodge. And so the practice perpetuates itself in a vicious cycle that is a major contributor to the reproducibility crisis discussed so frequently in recent times.
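
For contrast, a sketch of the maximal procedure itself may help. As described in [1,2], it places a uniform distribution over all allocation sequences whose running imbalance never exceeds a prespecified maximum tolerated imbalance, rather than forcing balance at the end of every small block. The brute-force enumeration below is for exposition only (practical implementations work with conditional allocation probabilities rather than listing sequences), and the trial size and imbalance bound are arbitrary illustrative choices:

from itertools import product
import random

def maximal_reference_set(n, mti):
    # All two-arm sequences of length n whose running imbalance
    # (count of A minus count of B) never exceeds mti in absolute
    # value and that end balanced.
    admissible = []
    for candidate in product('AB', repeat=n):
        imbalance, ok = 0, True
        for arm in candidate:
            imbalance += 1 if arm == 'A' else -1
            if abs(imbalance) > mti:
                ok = False
                break
        if ok and imbalance == 0:
            admissible.append(''.join(candidate))
    return admissible

random.seed(1)
reference_set = maximal_reference_set(n=12, mti=2)
chosen = random.choice(reference_set)  # uniform over all admissible sequences
print(len(reference_set), 'admissible sequences; chosen:', chosen)

Because this reference set is far larger than the set of permuted-block sequences with the same maximum imbalance, far fewer allocations are ever forced, and upcoming assignments are correspondingly harder to predict; that is what is meant above by strong encryption.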

The solution is rather obvious. We need to make statistics boring again [3], so that true quality is more influential than either novelty or, in the case of blocked randomization, frequency of use. If researchers were diligent in weighing their options, and serious about the public trust invested in them, then they would arrive at the only conclusion possible: use the maximal procedure instead of permuted blocks.

References

  1. Berger VW (2005) Selection Bias and Covariate Imbalances in Randomized Clinical Trials. John Wiley & Sons, Chichester, UK.
  2. Berger VW, Ivanova A, Knoll MD (2003) Minimizing predictability while retaining balance through the use of less restrictive randomization procedures. Stat Med 22(19): 3017-3028.
  3. Berger VW (2010) Making statistics boring again. Stat Med 29(13): 1458.