Randomized controlled trial


"A clinical trial is defined as a prospective scientific experiment that involves human subjects in whom treatment is initiated for the evaluation of a therapeutic intervention. In a randomized controlled clinical trial, each patient is assigned to receive a specific treatment intervention by a chance mechanism."[1] The theory behind these trials is that the value of a treatment will be shown in an objective way, and, though usually unstated, there is an assumption that the results of the trial will be applicable to the care of patients who have the condition that was treated.

Variations in design

Cluster-randomized trials

In some settings, health care providers or healthcare institutions, rather than individual research subjects, should be randomized.[2] This is appropriate when the intervention targets the provider or institution, so that results from individual subjects are not truly independent but cluster within the health care provider or institution. Guidelines exist for conducting cluster randomised trials.[3] Designing an adequately sized cluster-randomized trial depends on several factors, one of which is the intraclass (intracluster) correlation coefficient (ICC).[4][5] The ICC plays a role analogous to the variance between subjects in an individually randomized controlled trial: just as greater between-subject variance in a Student's t-test means a larger study is needed, greater correlation within clusters means more clusters are needed, as the sketch below illustrates.
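
To make the sample-size consequence concrete, this minimal sketch applies the standard design effect, 1 + (m − 1) × ICC, where m is the average cluster size, to inflate the sample size an individually randomized trial would need. The function and the example numbers are illustrative assumptions, not taken from the cited guidelines:

```python
import math

def cluster_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size for clustering.

    design_effect = 1 + (cluster_size - 1) * ICC; the higher the
    intracluster correlation, the more subjects (and clusters) needed.
    """
    design_effect = 1 + (cluster_size - 1) * icc
    n_total = math.ceil(n_individual * design_effect)
    n_clusters_per_arm = math.ceil(n_total / (2 * cluster_size))
    return n_total, n_clusters_per_arm

# e.g. 200 subjects needed under individual randomization,
# clusters of 20 patients per practice, ICC = 0.05:
print(cluster_sample_size(200, 20, 0.05))  # (390, 10)
```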

Before-after studies

Uncontrolled and controlled before-after studies probably should not be considered variations of a randomized controlled trial, yet if carefully done they offer advantages over observational studies.[6] As in a true cluster-randomized trial, the intervention group can be randomly assigned; unlike a cluster-randomized trial, however, a before-after study does not have enough clusters or groups. An interrupted time series analysis can strengthen the plausibility of causation; however, interrupted time series are commonly performed incorrectly.[7]
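
The usual model for an interrupted time series is a segmented regression with a change in level and a change in trend at the point of intervention. A minimal sketch follows, using simulated monthly data (all numbers hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(48.0)
post = (t >= 24).astype(float)  # intervention begins at month 24
# Simulated outcome with a level drop and a slope change at month 24.
y = 10 + 0.1 * t - 2.0 * post - 0.05 * (t - 24) * post + rng.normal(0, 0.5, t.size)

# Segmented regression: baseline level and trend, plus a change in
# level (step) and a change in trend (slope) when the intervention starts.
X = sm.add_constant(np.column_stack([t, post, (t - 24) * post]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [baseline level, baseline trend, level change, trend change]
```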

Crossover trial

In crossover trials, patients begin in either the intervention or the control group and later switch to the other group, so that each patient serves as his or her own control.[8]
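
A minimal sketch of assigning patients to the two sequences of a two-period crossover trial (patient identifiers and sequence labels are hypothetical):

```python
import random

def crossover_sequences(patient_ids, seed=None):
    """Randomly assign each patient to sequence AB or BA.

    In period 1 the patient receives the first treatment in the
    sequence; after a washout period, they cross over to the other.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(["AB", "BA"]) for pid in patient_ids}

print(crossover_sequences(["p01", "p02", "p03", "p04"], seed=1))
```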

Factorial design

A factorial design allows two interventions to be studied in a single trial, with the ability to measure the treatment effect of each intervention in isolation and in combination. In a 2×2 factorial design, patients are randomized to one of four groups:

                             Intervention A given   Intervention A not given
  Intervention B given             Group 1                Group 2
  Intervention B not given         Group 3                Group 4
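
Because each patient is randomized on both factors independently, the analysis can estimate each main effect and their interaction. The sketch below does this with ordinary least squares on simulated data (effect sizes are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
a = rng.integers(0, 2, n)  # intervention A given (1) or not (0)
b = rng.integers(0, 2, n)  # intervention B given (1) or not (0)
# Simulated outcome: each intervention helps on its own; no interaction.
y = 1.0 * a + 0.5 * b + rng.normal(0, 1, n)

# Main effects plus interaction; groups 1-4 correspond to the
# four (a, b) combinations in the table above.
X = sm.add_constant(np.column_stack([a, b, a * b]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [intercept, effect of A, effect of B, interaction]
```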

n of 1 trial

In a "n of 1" trial, a single patient randomly proceeds through multiple blinded crossover comparisons. This address the concerns that traditional randomized controlled trials may not generalize to a specific patient.[9]

Noninferiority and equivalence randomized trials

Noninferiority and equivalence randomized trials are difficult to execute well.[10] Guidelines exist for reporting noninferiority and equivalence randomized trials.[11]
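
The core noninferiority comparison asks whether the confidence interval for the treatment difference stays on the acceptable side of a prespecified margin. A minimal sketch, assuming a normal approximation for two success proportions and hypothetical numbers:

```python
import math

def noninferior(events_new, n_new, events_std, n_std, margin, z=1.96):
    """Check noninferiority of a new treatment on a success proportion.

    Noninferiority is declared if the lower bound of the confidence
    interval for (p_new - p_std) lies above -margin.
    """
    p_new, p_std = events_new / n_new, events_std / n_std
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    lower = (p_new - p_std) - z * se
    return lower > -margin, lower

# e.g. 85% vs 87% success, 10-percentage-point margin:
print(noninferior(170, 200, 174, 200, margin=0.10))
```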

Ethical issues

Ethics in selection of the intervention for the control group

Comparing a new intervention to a placebo control may not be ethical when an accepted, effective treatment exists. In that case, the new intervention should be compared with the active control to establish whether the standard of care should change.[12] The observation that industry-sponsored research may be more likely to produce trials with positive results suggests that industry is not picking the most appropriate comparison groups.[13] However, it is possible that industry is simply better at predicting which new innovations are likely to succeed and at discontinuing research on less promising interventions before the trial stage.

There are ethical concerns in comparing a surgical intervention to sham surgery; however, this has been done.[14][15] Guidelines by the American Medical Association address the use of placebo surgery.[16]

Ethics in randomization

Is it ethical to treat patients according to a randomization schedule? The answer is: sometimes, depending on the choice of treatments, the medical condition of the patient, and whether the patient has a choice in the matter. Take a university professor who has just received the devastating diagnosis of a malignant brain tumor. Suppose that this particular tumor is resistant to radiation treatment and has infiltrated too much of the brain to be surgically removed; the professor has a fatal disease. One drug (Drug A) has shown limited benefit in clinical practice in retarding the growth of this tumor, but not only is there no known cure for the professor's condition, there is not even a truly effective treatment to slow the progression of the disease. There is a theoretical reason to believe that Drug B may be curative, or at least helpful, and Drug B has been tested in animal studies that indicate it should be reasonably safe in humans. In this situation, asking the professor to participate in a trial of Drug A versus Drug B, in which the choice is made according to a code generated by a computer program, is not unethical, assuming that the professor understands and agrees. However, let us change the scenario: if there is a treatment that has some benefit, is it ethical then to ask for the professor's participation in this study? Going further, what if there is a treatment that has been reported to cure 10% of patients?

In most randomized trials, there is genuine uncertainty about which treatment is superior; it is this uncertainty, often called clinical equipoise, that makes random assignment ethical.

Assessing the quality of a trial

The Jadad score may be used to assess quality and contains three items:[17]

  1. Was the study described as randomized (this includes the use of words such as randomly, random, and randomization)?
  2. Was the study described as double blind?
  3. Was there a description of withdrawals and dropouts?

Each question is scored one point for a yes answer. In addition, for questions 1 and 2, a point is added if the method was appropriate and a point is deducted if the method was not appropriate (e.g., not effectively randomized or not effectively double-blinded). A scoring sketch follows.
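
A minimal sketch of that scoring logic (the helper function and its argument names are hypothetical, mirroring the three items above):

```python
def jadad_score(randomized, double_blind, withdrawals_described,
                randomization_appropriate=None, blinding_appropriate=None):
    """Compute a Jadad score (0-5) from the three items.

    randomization_appropriate / blinding_appropriate: True adds a point,
    False deducts one, None (method not described) changes nothing.
    """
    score = int(randomized) + int(double_blind) + int(withdrawals_described)
    for described, appropriate in ((randomized, randomization_appropriate),
                                   (double_blind, blinding_appropriate)):
        if described and appropriate is True:
            score += 1
        elif described and appropriate is False:
            score -= 1
    return max(score, 0)

# e.g. randomized with an appropriate method, double blind with the
# blinding method not described, dropouts reported:
print(jadad_score(True, True, True, randomization_appropriate=True))  # 4
```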

External validation

References

  1. Stanley K (2007). "Design of randomized controlled trials". Circulation 115 (9): 1164–9. DOI:10.1161/CIRCULATIONAHA.105.594945. PMID 17339574.
  2. Wears RL (2002). "Advanced statistics: statistical methods for analyzing cluster and cluster-randomized data". Acad Emerg Med 9 (4): 330–41. PMID 11927463.
  3. Campbell MK, Elbourne DR, Altman DG (2004). "CONSORT statement: extension to cluster randomised trials". BMJ 328 (7441): 702–8. DOI:10.1136/bmj.328.7441.702. PMID 15031246.
  4. Campbell MK, Fayers PM, Grimshaw JM (2005). "Determinants of the intracluster correlation coefficient in cluster randomized trials: the case of implementation research". Clin Trials 2 (2): 99–107. PMID 16279131.
  5. Campbell M, Grimshaw J, Steen N (2000). "Sample size calculations for cluster randomised trials. Changing Professional Practice in Europe Group (EU BIOMED II Concerted Action)". J Health Serv Res Policy 5 (1): 12–6. PMID 10787581.
  6. Wyatt JC, Wyatt SM (2003). "When and how to evaluate health information systems?". Int J Med Inform 69 (2-3): 251–9. DOI:10.1016/S1386-5056(02)00108-9. PMID 12810128.
  7. Ramsay CR, Matowe L, Grilli R, Grimshaw JM, Thomas RE (2003). "Interrupted time series designs in health technology assessment: lessons from two systematic reviews of behavior change strategies". Int J Technol Assess Health Care 19 (4): 613–23. PMID 15095767.
  8. Sibbald B, Roberts C (1998). "Understanding controlled trials. Crossover trials". BMJ 316 (7146): 1719. PMID 9614025.
  9. Mahon J, Laupacis A, Donner A, Wood T (1996). "Randomised study of n of 1 trials versus standard practice". BMJ 312 (7038): 1069–74. PMID 8616414.
  10. Kaul S, Diamond GA (2006). "Good enough: a primer on the analysis and interpretation of noninferiority trials". Ann. Intern. Med. 145 (1): 62–9. PMID 16818930.
  11. Piaggio G, Elbourne DR, Altman DG, Pocock SJ, Evans SJ (2006). "Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement". JAMA 295 (10): 1152–60. DOI:10.1001/jama.295.10.1152. PMID 16522836.
  12. Rothman KJ, Michels KB (1994). "The continuing unethical use of placebo controls". N. Engl. J. Med. 331 (6): 394–8. PMID 8028622.
  13. Djulbegovic B, Lacevic M, Cantor A, et al (2000). "The uncertainty principle and industry-sponsored research". Lancet 356 (9230): 635–8. PMID 10968436.
  14. Cobb LA, Thomas GI, Dillard DH, Merendino KA, Bruce RA (1959). "An evaluation of internal-mammary-artery ligation by a double-blind technique". N. Engl. J. Med. 260: 1115–8.
  15. Moseley JB, O'Malley K, Petersen NJ, et al (2002). "A controlled trial of arthroscopic surgery for osteoarthritis of the knee". N. Engl. J. Med. 347 (2): 81–8. DOI:10.1056/NEJMoa013259. PMID 12110735.
  16. Tenery R, Rakatansky H, Riddick FA, et al (2002). "Surgical "placebo" controls". Ann. Surg. 235 (2): 303–7. PMID 11807373.
  17. Jadad AR, Moore RA, Carroll D, et al (1996). "Assessing the quality of reports of randomized clinical trials: is blinding necessary?". Control Clin Trials 17 (1): 1–12. DOI:10.1016/0197-2456(95)00134-4. PMID 8721797.