MOJ ISSN: 2471-139X MOJAP

Anatomy & Physiology
Review Article
Volume 2 Issue 4 - 2016
Research Methods in Medicine and Health Sciences
William James Cobb*
Department of Anatomy, University of Medicine, St Kitts, West Indies
Received: May 19, 2016| Published: May 25, 2016
*Corresponding author: William James Cobb, Department of Anatomy, University of Medicine, St Kitts, West Indies, Email:
Citation: Cobb WJ (2016) Research Methods in Medicine and Health Sciences. MOJ Anat & Physiol 2(4): 00056. DOI: 10.15406/mojap.2016.02.00056

Abstract

Randomised controlled trials (RCTs) have been described by several authors as the gold standard, or benchmark, from which significant scientific and numerical data can be retrieved. Block [1] suggested that although the RCT represents the benchmark for scientific research methodology overall, it can only be the gold standard for the type of project to which it has been assigned. This has been echoed by Polit and Beck [2] and by Rodeck and Whittle [3]. By contrast, Grossman and Mackenzie [4] and Seidman [5] suggested that assigning gold standards and benchmarks to a specific research methodology could introduce the potential for bias into the choice of study design.

Introduction

This work will consider the types of bias that may arise in an RCT, and how bias can be analysed and minimized. Bias has been described as prejudice or inclination for or against a person or group in a manner that is considered unfair [6]. This will be explored by considering bias at different points within the RCT, namely subject bias and sampling bias. Within subject bias, blind and double-blind trials will be examined. Bias will also be considered with reference to sample selection, taking into account how certain samples may have advantages or disadvantages for RCT design methodology.

An RCT is a study in which participants are randomly assigned, in equal numbers, to subgroups. The purpose of this subdivision is that one group (or cluster) can be tested with a specific treatment whilst another receives a different treatment, a placebo, or no treatment at all. The experimental group receives the actual intervention, whilst the comparison or control group(s) receive a placebo or no treatment. The differences between the groups involved in the trial can then be analysed statistically.
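The allocation step described above can be sketched in a few lines of Python. This is a minimal illustration of simple randomisation into equal-sized arms; the function and variable names are mine, not drawn from the article:

```python
import random

def randomise(participants, arms=("treatment", "control"), seed=None):
    """Assign participants to arms in equal numbers by shuffling.

    Illustrative sketch only: assumes the number of participants
    divides evenly by the number of arms, matching the equal-sized
    groups described in the text.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    size = len(shuffled) // len(arms)
    return {arm: shuffled[i * size:(i + 1) * size]
            for i, arm in enumerate(arms)}

# e.g. 200 patients split into two arms of 100
groups = randomise(range(200), seed=42)
```

Fixing the seed makes the allocation reproducible, which is useful when the randomisation list must be audited after the trial.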

Bias can potentially manifest at any point within an RCT. It may be introduced at the beginning of a trial or during its early stages, and can persist throughout the trial if not identified. It is therefore important that bias is controlled and minimized through careful construction of the study design. In the later stages of the study, once data are collected, they can be analysed statistically to reduce the influence of bias. However, this depends on subjective bias having been considered at the beginning of the research [7].
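As a concrete example of the kind of between-group statistical analysis mentioned above, a permutation test compares the observed difference between arms with differences obtained under random reshuffling of the labels. This is a generic sketch, not a method taken from the article; all names are illustrative:

```python
import random

def permutation_test(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference in group means.

    Under the null hypothesis that treatment labels are exchangeable,
    the observed difference should not be unusual among differences
    computed after shuffling the pooled outcomes.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return count / n_iter  # approximate p-value
```

Because it relies only on the randomisation itself rather than distributional assumptions, this style of analysis fits naturally with a properly randomised design.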

Carter et al. [8] conducted an RCT examining the acceptability and feasibility of a self-monitoring weight management intervention delivered by a smartphone app, compared with a website and a paper diary. The study randomized a sample of 128 overweight volunteers to one of these three delivery modes. Trial retention was 93% in the smartphone group, 55% in the website group, and 53% in the diary group. In this trial, subject bias could have been present, as the researchers may have wanted to prove that the app was effective so that it could become commercially viable and hence profitable. Secondly, the sample of overweight volunteers may have been self-selecting: they could have been members of a slimming club, and therefore the expectation was that weight loss would be charted favorably when using the app.

In order to illustrate the potential bias that could occur within an RCT, a brief fictional scenario will be discussed and analysed to highlight how bias can occur, when it may occur, and how it could be avoided. Suppose a vascular surgical team wanted to examine the mortality rates of 200 patients aged 30-60 over the course of 2 years following abdominal aortic repair (AAR), with the objective of collecting numerical data that could identify longevity of life post procedure. What potential bias is instantly recognizable in the title? Examining the first part of this fictional study, we have a group that is potentially very specific and specialized in nature: the vascular surgical team. The project may have been conceived by a surgeon or surgeons to quantify the effectiveness of their own surgical procedure. Alternatively, the study could originate from an external source investigating the effectiveness of such a procedure, commissioned perhaps through the NHS or under NICE guidelines.

This initial process of thought within the surgical team corresponds to the first point that this work would like to highlight, namely subject bias. Gad [9] suggests that subject bias is the beginning point of the stages of bias within an RCT: the point at which ideas and research needs are being formed (the "boiling point" of ideas).

Isenberg [10] discusses how an RCT develops from an idea, or set of ideas, into a lead question that can form a potential project. It is during this initial stage of thought that bias on the part of the group(s) conducting the research may begin. Other authors, such as Rawlings [11] and Balakrishnan [12], mention the professional biases which may be introduced into a project when a beneficial outcome is needed by the initiating group of researchers.

A potential way of combating this initial stage of bias, relating it to the fictional example, may be to blind or double-blind the trial. Blinding has been described as a process in which crucial information on treatment allocation is hidden from the participants, or from the observer or evaluator in the study. Blinding in an RCT is used to ensure that there are no differences in the way each group is assessed or managed, and therefore minimizes bias [13]. In the case of the fictional scenario, blinding or double blinding may not be possible, as the intervention is surgical and therefore this may not be technically or ethically viable.
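The mechanics of blinding rest on concealed allocation: participants and assessors see only an arbitrary code, while the key linking codes to treatments is held separately until unblinding. The sketch below is a hypothetical illustration of that idea; none of the names or labels come from the article:

```python
import random

def concealed_allocation(n_per_arm, arms=("A", "B"), seed=None):
    """Produce a shuffled allocation sequence of coded labels.

    Participants and assessors see only the codes ('A'/'B'); the
    key mapping codes to actual treatments would be held by a
    third party until the trial is unblinded. Illustrative only.
    """
    rng = random.Random(seed)
    sequence = [code for code in arms for _ in range(n_per_arm)]
    rng.shuffle(sequence)
    key = dict(zip(arms, ("intervention", "placebo")))  # kept off-site
    return sequence, key

seq, key = concealed_allocation(50, seed=7)
```

Separating the sequence from the key is what keeps both subjects (single blind) and evaluators (double blind) unaware of group membership during the trial.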

To validate the need for this research to be undertaken, and to validate the research question itself, a researcher or reader looking at a project such as the fictional one outlined may consider certain questions regarding bias in the initial project proposal or research title. By asking themselves questions relating to the project proposal or title, broader outcomes and potential bias could be identified. An example of some questions that could be used to evaluate the initial project proposal or project title could include:

  1. Who has chosen this specific project?
  2. Why have they chosen this specific project?
  3. Who are they?
  4. What personal or professional involvement do they have in this project?
  5. Is there a personal incentive for this project to go ahead?
  6. Is the project based on previous research?
  7. Have the ethical principles been considered against the possible outcomes?

The list of questions is not exhaustive, and whether they are read as looking for bias or not will depend on the reader. Variation and validity are very much dependent on the observer; this phenomenon is known as the Hawthorne effect and is a considerable factor when considering bias [14].

Subject bias could be assessed in a number of ways when considering the validity of a project, using an array of research techniques applied to the topic area. Research could be carried out to see if the chosen area had been based on similar projects (a follow-up study) and, if so, further questions could be asked in relation to the project. These could include: why was the earlier project carried out? How was it carried out? Was it needed? Who benefited from its creation?

Subject bias can have very wide-ranging effects on the research project itself, and indeed implications for patients and participants in general: if the research is biased, then the results themselves are likewise biased.

The fictional example in this work could be dissected into its core elements to examine the fundamentals of the research question. This could be accomplished by examining and scrutinizing areas such as the surgical team(s) and their personal and professional interests in such a project, through interviews, questionnaires, and validity checks of research responses. The project could further be examined and scrutinized by several different bodies or researchers to consider different options and to generate a set of differing opinions, broadening the spectrum of the research question.

If the fictional project could be validated at source, or the first element of questionable bias is acceptable, then the next area of bias could be considered: the sample, or group of subjects, in the body of the project. Sample selection within the RCT has been described by several authors, such as Nezu [15], Friedman et al. [16] and Monsen and Horn [17], as a process that involves the selection of a specific group or groups of patients or subjects in order to observe certain aspects relating to a research question. The fictional RCT in this work would be looking for a specific patient group in which to consider mortality rates post AAR. The research question directly shapes the sample selection process, so bias could potentially be incorporated at this stage: patients who had positive outcomes post surgery could be handpicked, or vice versa, depending on the aim of the research question.

Several authors, including Berger [18] and Domanski and McKinlay [19], have suggested that selection bias is a large threat to the internal validity of an experiment or research project. Selection bias can occur when patients or participants are selected for an intervention on the basis of a variable associated with an outcome. RCTs use randomisation and similar methods to attempt to combat selection bias, but it must be remembered that there is a broad spectrum of sample selection biases, including: subversion bias, technical bias, attrition bias, consent bias, ascertainment bias, dilution bias, recruitment bias, resentful demoralisation, delay bias, chance bias, the Hawthorne effect and analytical bias [20]. With this in mind, it can be seen that in sample selection alone a huge number of considerations have to be taken into account when conducting a systematic review of bias within an RCT.
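The distorting effect of selecting participants on a variable associated with the outcome can be made concrete with a small simulation. The sketch below is purely illustrative (the variables, numbers and "fitness" covariate are my own assumptions, not the article's): the true treatment effect is zero, yet steering healthier patients into the intervention arm manufactures an apparent benefit, while proper randomisation does not:

```python
import random

def simulate(selection_biased, n=1000, seed=1):
    """Compare randomised allocation with a biased allocation in which
    patients with higher baseline 'fitness' are steered into the
    intervention arm. Outcome improves with fitness regardless of
    treatment, so the biased design exaggerates the treatment effect.
    Returns the observed mean difference (treatment minus control).
    """
    rng = random.Random(seed)
    treat, control = [], []
    for _ in range(n):
        fitness = rng.gauss(0, 1)
        if selection_biased:
            treated = fitness > 0          # selecting on a prognostic variable
        else:
            treated = rng.random() < 0.5   # proper randomisation
        # true treatment effect is zero: outcome depends only on fitness + noise
        outcome = fitness + rng.gauss(0, 0.1)
        (treat if treated else control).append(outcome)
    return sum(treat) / len(treat) - sum(control) / len(control)
```

With randomisation the estimated difference hovers near zero; with the biased rule it is large and positive, despite an identical (null) treatment. This is the internal-validity failure the text describes.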

Conclusion

In conclusion, the potential for bias in an RCT research project is very large. The focus of this work has been to consider two of the many areas in which bias can manifest. The subject nature of a project alone, be it medical or non-medical, leaves many doors open for the biases discussed and indeed many more, and the authors' very choice to formulate such ideas raises points of bias. Can bias be seen in all RCTs, and if so, what level of bias is acceptable in research? The variables in the outcomes are very broad and can be distorted, depending on the use to which the results will be put. Subject bias appears to be somewhat subjective itself, as the reader or observer may be unwittingly engaging in the process known as the Hawthorne effect. Sample selection bias is as complicated as the initial project outline bias, as it too has many variables, which prompt questions as to what are and are not acceptable levels of bias in a piece of work. The process of considering bias has raised questions about levels of acceptability of bias within RCT research and how these can be quantified in order to ensure that results are valid and ethical.

References

  1. Block DJ (2006) Healthcare Outcomes Management: Strategies for Planning and Evaluation. Jones and Bartlett, London.
  2. Polit DF, Beck CT (2008) Nursing Research: Generating and Assessing Evidence for Nursing Practice. Lippincott Williams and Wilkins, UK.
  3. Rodeck CH, Whittle MJ (2009) Fetal Medicine: Basic Science and Clinical Practice. Churchill Livingstone, UK.
  4. Grossman J, Mackenzie FJ (2005) The randomized controlled trial: gold standard, or merely standard? Perspect Biol Med 48(4): 516-534.
  5. Seidman AD (2001) Dose Intensity. IOS Press, Amsterdam, Netherlands.
  6. Howlett E, Rogo J, Gabiola Shelton T (2014) Evidence-based Practice for Health Professionals: An Interprofessional Approach. Jones and Bartlett, USA.
  7. Hamer S, Collinson G (2005) Achieving Evidence-based Practice: A Handbook for Practitioners. British Library Cataloguing, London.
  8. Carter MC, Burley VJ, Nykjaer C, Cade JE (2013) Adherence to a smartphone application for weight loss compared to website and paper diary: pilot randomized controlled trial. J Med Internet Res 15(4): e32.
  9. Gad SC (2009) Clinical Trials Handbook. Wiley and Sons, USA.
  10. Isenberg SF (2000) Managed Care, Outcomes, and Quality: A Practical Guide. Thieme, USA.
  11. Rawlings MD (2011) Therapeutics, Evidence and Decision-Making. Taylor and Francis, USA.
  12. Balakrishnan N (2010) Methods and Applications of Statistics in the Life and Health Sciences. Wiley and Sons, New Jersey, USA.
  13. Hart A (2001) Making Sense of Statistics in Healthcare. Radcliffe Publishing, Oxford.
  14. Wilson W, Craig L, Stevenson L (2013) FRCS General Surgery: 500 SBAs and EMIs. JP Medical, London.
  15. Nezu AM (2008) Evidence-Based Outcome Research: A Practical Guide to Conducting Randomized Controlled Trials for Psychosocial Interventions. Oxford University Press, UK.
  16. Friedman LM, Furberg CD, DeMets DL (2010) Fundamentals of Clinical Trials. Springer, USA.
  17. Monsen ER, Horn LV (2008) Research: Successful Approaches. Library of Congress Publishing, USA.
  18. Berger VW (2005) Selection Bias and Covariate Imbalances in Randomized Clinical Trials. Wiley and Sons, USA.
  19. Domanski MJ, McKinlay S (2009) Successful Randomized Trials: A Handbook for the 21st Century. Lippincott Williams and Wilkins, USA.
  20. Cook TD, DeMets DL (2008) Introduction to Statistical Methods for Clinical Trials. Taylor and Francis, USA.
© 2014-2016 MedCrave Group, All rights reserved. No part of this content may be reproduced or transmitted in any form or by any means as per the standard guidelines of fair use.
Creative Commons License Open Access by MedCrave Group is licensed under a Creative Commons Attribution 4.0 International License.
Based on a work at http://medcraveonline.com