Evidence-based medicine
Evidence-based medicine is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients."[1] Alternative definitions are "the process of systematically finding, appraising, and using contemporaneous research findings as the basis for clinical decisions"[2] or "evidence-based medicine (EBM) requires the integration of the best research evidence with our clinical expertise and our patient's unique values and circumstances."[3] Better known as EBM, evidence-based medicine emerged in the early 1990s to help healthcare providers and policy makers evaluate the efficacy of different treatments.
Evidence-based practice is not restricted to medicine; dentistry, nursing and other allied health sciences are adopting "evidence-based medicine", as are alternative medical approaches such as acupuncture.[4][5] Evidence-based health care, or evidence-based practice, extends the concept of EBM to all health professions, including management[6][7] and policy.[8][9][10]
Two types of evidence-based medicine have been proposed.[11] Evidence-based guidelines are EBM at the organizational or institutional level, involving the production of guidelines, policy, and regulations. Evidence-based individual decision making is EBM as practiced by an individual health care provider treating an individual patient.
Why do we need evidence-based medicine?
It is easy to assume that physicians always use scientific evidence conscientiously and judiciously in treating patients. In fact, most of the specific practices of physicians and surgeons are based on traditional techniques learned from their mentors while caring for patients during training. Additional modifications come from personal clinical experience, from the medical literature, and from continuing education courses. Although these practices almost always have a rational basis in biology, the actual efficacy of treatments is rarely tested by experimental trials in people. Further, even when the results of experimental trials or other evidence have been reported, there is a lag between the acceptance of changes to medical practice and their establishment as routine in clinical care. EBM seeks to address these issues by promoting practices whose validity has been demonstrated using the scientific method.
Steps in evidence-based medicine
Ask
"Ask" - Formulate a well-structured clinical question.
Acquisition of evidence
The ability to "acquire" evidence in a timely manner may improve healthcare.[12] Unfortunately, doctors may be led astray when acquiring information.[13]
One proposed structure for the evidence search is the 5S search strategy,[14] which starts with a search of "summaries" (textbooks).[15]
Appraisal of evidence
The U.S. Preventive Services Task Force grades its recommendations for treatments according to the strength of the evidence and the expected overall benefit (benefits minus harms). Its five grades are:
- A - There is good evidence that the treatment improves important health outcomes, and benefits substantially outweigh harms.
- B - There is fair evidence that the treatment improves important health outcomes and that benefits outweigh harms.
- C - There is fair evidence that the treatment can improve health outcomes, but the balance of benefits and harms is too close to justify a general recommendation.
- D - Recommendation against routinely providing the treatment to asymptomatic patients. There is fair evidence that [the service] is ineffective or that harms outweigh benefits.
- I - The evidence is insufficient to recommend for or against routinely providing [the service]. Evidence that the treatment is effective is lacking, of poor quality, or conflicting, and the balance of benefits and harms cannot be determined.
The USPSTF also grades the quality of the overall evidence for a service as good, fair, or poor:
- Good: Evidence includes consistent results from well-designed, well-conducted studies in representative populations that directly assess effects on health outcomes.
- Fair: Evidence is sufficient to determine effects on health outcomes, but its strength is limited by the number, quality, or consistency of the individual studies, generalizability to routine practice, or the indirect nature of the evidence on health outcomes.
- Poor: Evidence is insufficient to assess the effects on health outcomes because of the limited number or power of studies, important flaws in their design or conduct, gaps in the chain of evidence, or lack of information on important health outcomes.
"Appraising" the quality of the evidence found is very important, as one third of the results of even the most visible medical research are eventually either attenuated or refuted.[16] There are many reasons for this;[17] two important ones are publication bias[18] and conflict of interest.[19] These two problems interact, as conflict of interest often leads to publication bias.[20][18]
Another obvious and important reason is that many (if not all) studies contain potential flaws in their design, and even when there are no clear methodological flaws, any outcome evaluated by a statistical test carries a margin of error: some positive outcomes will be "false positives".
Publication bias
Whether a treatment or medical intervention is effective may be judged either on the basis of the experience of the practicing physician, or on the basis of what has been published by others. The publications with greatest authority are generally those that appear in peer-reviewed scientific journals, and particularly in those journals generally thought to have the highest standards of editorial scrutiny. However, it is no simple matter to get a study published in any peer-reviewed journal, least of all in the best journals. Accordingly, many studies go unreported. It is often thought to be particularly difficult to publish small studies whose outcome conflicts with the reported outcomes of larger, previously published studies, or to publish studies where the outcome is equivocal - where no clear conclusion can be drawn. In part this reflects the wish of the best journals to publish influential papers, and in part it simply reflects authors choosing not to put their energies into publishing studies thought to be uninteresting. Such publication bias can be difficult to recognize, but its effects generally tend to encourage publication of studies that support an already formed conclusion, while discouraging publication of contradictory or equivocal findings.[18][21] Publication bias may be more prevalent in industry-sponsored research.[22]
In performing a meta-analysis, a file drawer analysis[23] or a funnel plot analysis[24][25] may help detect underlying publication bias among the studies in the meta-analysis.
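The funnel-plot idea can be sketched numerically. The snippet below is a simplified, unweighted version of Egger's regression test (the standard formulation uses weighted regression); all study numbers are hypothetical, chosen so that the smallest studies report the largest effects:

```python
def egger_test(effects, std_errors):
    """Simplified (unweighted) Egger regression test for funnel-plot
    asymmetry: regress the standardized effect (effect / SE) on
    precision (1 / SE). An intercept far from zero suggests
    small-study effects such as publication bias."""
    z = [e / s for e, s in zip(effects, std_errors)]  # standardized effects
    prec = [1.0 / s for s in std_errors]              # precisions
    n = len(z)
    mean_x = sum(prec) / n
    mean_y = sum(z) / n
    sxx = sum((x - mean_x) ** 2 for x in prec)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(prec, z))
    slope = sxy / sxx
    return mean_y - slope * mean_x  # the regression intercept

# Hypothetical studies in which the small studies (large SE) report the
# largest effects - the classic footprint of publication bias:
effects = [0.8, 0.7, 0.5, 0.4, 0.35]
ses = [0.40, 0.30, 0.20, 0.15, 0.10]
print(round(egger_test(effects, ses), 2))  # intercept well above zero
```

With symmetric (unbiased) data the intercept would sit near zero; here it is clearly positive, flagging asymmetry.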
Conflict of interest
In any publication there is always some issue of conflict of interest. All work by scientists is funded by groups such as charities, public bodies or private industry; accordingly, there could be pressure to overstate outcomes or to bias a trial in favor of a particular outcome. Unfortunately, the presence of authors with a conflict of interest is not reliably indicated in journal articles.[26] Worse, it has been reported that some published articles use 'ghost writers'.[27] Ghost writers may have a conflict of interest, but this is not apparent since they are not credited as authors in the byline. Finally, academic scientists gain their professional reputations by publishing in quality journals, and purely factual summaries do not necessarily impress journal editors any more than they inspire casual readers.
In the design of randomized controlled trials, industry-sponsored studies may be more likely to select an inappropriate comparator group that favors finding benefit in the experimental group. This may manifest itself in comparing the effectiveness of a new drug with that of an established older treatment, rather than with a competitor's current treatment.[22] When reporting data from randomized controlled trials, industry-sponsored studies may be more likely to omit intention-to-treat analyses.[20] Regarding the conclusions reached in randomized controlled trials, industry-sponsored studies may be more likely to conclude that drugs are safe, even when they have increased adverse effects.[28] Alternatively, the usefulness of drugs may be overstated, although this is contentious, since one study did not find evidence of overstatement.[29] In contrast, a later study found that industry-sponsored studies are more likely to recommend the experimental drug as treatment of choice, even after adjusting for the treatment effect.[30]
Obviously, a pharmaceutical company wants to report that its drug is better than a competitor's drug, or better than no treatment; however, due to the threat of litigation, it is not in its interest to suppress or minimize evidence of harm. For the scientists conducting the trials, however, the perspective might be different: if it becomes clear that a drug is useless or harmful, then the company will cease to work on the drug, and a scientist's livelihood could be threatened. Consequently, the responsibility for the integrity of the design and analysis of studies lies squarely with the authors. If the scientists involved in any trial are lacking in competence or integrity, then this will prejudice the value of a trial both for the public and indeed for their industrial sponsors.
Statistical analysis
Statistical analysis of the outcomes of a clinical trial is a complex and highly technical process, often requiring the involvement of a professional medical statistician whose advice is needed in the design of the trial as well as in the analysis of its outcome. Flaws in the design of a trial can lead subsequently to weaknesses in statistical analysis. Ideally, a trial protocol should be carefully designed with statistical issues in mind and with the hypothesis under test clearly formulated, and the agreed protocol should then be strictly adhered to. Often, however, problems arise during the trial; for example, there may be unanticipated outcomes, or problems in patient recruitment or in compliance with the trial protocol, and these problems can weaken the power and authority of the trial. Common problems include small sample sizes in some of the groups,[31] problems of "multiple comparisons" when several different outcomes are being assessed, and biasing of study populations by selection criteria.
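The "multiple comparisons" problem mentioned above is easy to quantify. Assuming the outcomes are independent tests (a simplifying assumption for illustration), the chance of at least one false positive across k tests at level alpha is 1 - (1 - alpha)^k; the Bonferroni correction divides alpha by k to keep the family-wide error in check:

```python
def family_wise_error(alpha, k):
    """Chance of at least one false positive across k independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

def bonferroni(alpha, k):
    """Per-test threshold that keeps the family-wise error near alpha."""
    return alpha / k

# A trial that assesses 10 independent outcomes at p < 0.05 has roughly
# a 40% chance of at least one spurious "significant" finding.
print(round(family_wise_error(0.05, 10), 2))  # → 0.4
print(bonferroni(0.05, 10))                   # → 0.005
```

This is why a trial that reports a "significant" result on one of many secondary endpoints deserves more skepticism than one confirming its pre-specified primary endpoint.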
The outcome of a trial or study is often summarized by calculation of a "P-value" that expresses the likelihood that an observed difference between treatment groups reflects a true difference in treatment effectiveness; the P value is a statistical calculation of the chance that the observed apparent difference is merely the outcome of random sampling. Some have argued that focusing on P values neglects other important sources of knowledge and information that should properly be used to assess the likely efficacy of a treatment.[32] In particular, some argue that the P-value should be interpreted in light of how plausible the hypothesis is, based on the totality of prior research and physiologic knowledge.[33][32][34]
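The role of prior plausibility can be made concrete with a back-of-envelope Bayes calculation (all inputs here are hypothetical illustrative values, not drawn from any cited study): the probability that a "significant" result reflects a true effect depends on the prior probability of the hypothesis, not only on the P-value threshold.

```python
def post_study_probability(prior, power=0.8, alpha=0.05):
    """Probability that a 'significant' result reflects a true effect,
    given the prior probability that the hypothesis is true
    (Bayes' rule with hypothetical power and alpha)."""
    true_pos = power * prior          # true effects detected
    false_pos = alpha * (1 - prior)   # null effects crossing the threshold
    return true_pos / (true_pos + false_pos)

# A well-motivated hypothesis (prior 50%) vs. a long shot (prior 5%),
# both reported with the same p < 0.05:
print(round(post_study_probability(0.50), 2))  # → 0.94
print(round(post_study_probability(0.05), 2))  # → 0.46
```

The same P-value threshold yields very different credibility: a significant result for an implausible hypothesis is nearly a coin flip, which is the point made by the authors cited above.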
Application
It is important to "apply" the best practices found to the correct situation. One common problem in applying evidence is difficulty with numeracy: both patients and healthcare professionals have difficulty with health numeracy and probabilistic reasoning.[35] A second problem is recognizing the patient population that will benefit from the new practices. Extrapolating study results to the wrong patient populations (over-generalization)[36][37][38] and failing to apply study results to the correct population (under-utilization)[39][40] can both increase adverse outcomes.
The problem of over-generalization of study results may be more common among specialist physicians.[41] Two studies found that specialists were more likely to adopt cyclooxygenase-2 inhibitor drugs before the drug rofecoxib was withdrawn by its manufacturer after it emerged that its use had unanticipated adverse effects.[42][43] One of the studies went on to state:
- "using COX-2s as a model for physician adoption of new therapeutic agents, specialists were more likely to use these new medications for patients likely to benefit but were also significantly more likely to use them for patients without a clear indication".[43]
Similarly, orthopedists may provide more intensive care for back pain, but without benefit from the increased care.[44]
The problem of under-utilizing study results may be more common when physicians are practicing outside of their expertise. For example, specialist physicians are less likely to under-utilize specialty care[45][46], while primary care physicians are less likely to under-utilize preventive care[47][48].
Metrics used in evidence-based medicine
Diagnosis
- Sensitivity and specificity
- Likelihood ratios (Odds ratios)
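The diagnostic metrics listed above all follow from a 2x2 table of test results against true disease status. The sketch below uses hypothetical counts to show how each is derived:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table
    of true/false positives and negatives (hypothetical counts)."""
    sens = tp / (tp + fn)        # P(test positive | disease present)
    spec = tn / (tn + fp)        # P(test negative | disease absent)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Hypothetical test: of 100 diseased patients, 90 test positive;
# of 100 healthy patients, 95 test negative.
sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=90, fp=5, fn=10, tn=95)
print(sens, spec)      # sensitivity 0.9, specificity 0.95
print(lr_pos, lr_neg)  # LR+ ≈ 18, LR- ≈ 0.105
```

A positive likelihood ratio near 18 means a positive result multiplies the pre-test odds of disease roughly 18-fold, which is how likelihood ratios are applied at the bedside.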
Interventions
Relative measures
- Relative risk ratio
- Relative risk reduction
Absolute measures
- Absolute risk reduction
- Number needed to treat
- Number needed to screen
- Number needed to harm
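The relative and absolute intervention measures above are all derived from two event rates: the rate of bad outcomes in the control group and in the treated group. A short sketch with hypothetical rates shows why both kinds of measure matter:

```python
def intervention_metrics(control_event_rate, treated_event_rate):
    """Relative and absolute effect measures from two event rates
    (hypothetical rates; 'events' are harms such as deaths)."""
    cer, eer = control_event_rate, treated_event_rate
    rr = eer / cer     # relative risk
    rrr = 1 - rr       # relative risk reduction
    arr = cer - eer    # absolute risk reduction
    nnt = 1 / arr      # number needed to treat to prevent one event
    return rr, rrr, arr, nnt

# Hypothetical trial: events fall from 4% (control) to 3% (treated).
rr, rrr, arr, nnt = intervention_metrics(0.04, 0.03)
print(rrr, arr, nnt)  # an impressive-sounding 25% relative reduction is
                      # a 1-point absolute reduction: treat ~100 patients
                      # to prevent one event
```

Reporting only the relative risk reduction can make a marginal treatment look dramatic; the absolute measures and number needed to treat put the same result in clinical perspective.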
Health policy
- Cost per year of life saved[49]
- Years (or months or days) of life saved. "A gain in life expectancy of a month from a preventive intervention targeted at populations at average risk and a gain of a year from a preventive intervention targeted at populations at elevated risk can both be considered large."[50]
Experimental trials: producing the evidence
"In the treatment of the sick person, the physician must be free to use a new diagnostic and therapeutic measure, if in his or her judgement it offers hope of saving life, re-establishing health or alleviating suffering. The potential benefits, hazards and discomfort of a new method should be weighed against the advantages of the best current diagnostic and therapeutic methods. In any medical study, every patient - including those of a control group, if any - should be assured of the best proven diagnostic and therapeutic method. The refusal of the patient to participate in a study must never interfere with the physician-patient relationship. If the physician considers it essential not to obtain informed consent, the specific reasons for this proposal should be stated in the experimental protocol for transmission to the independent committee. The physician can combine medical research with professional care, the objective being the acquisition of new medical knowledge, only to the extent that medical research is justified by its potential diagnostic or therapeutic value for the patient." (From The Declaration of Helsinki)
"A clinical trial is defined as a prospective scientific experiment that involves human subjects in whom treatment is initiated for the evaluation of a therapeutic intervention. In a randomized controlled clinical trial, each patient is assigned to receive a specific treatment intervention by a chance mechanism."[51] The theory behind these trials is that the value of a treatment will be shown in an objective way, and, though usually unstated, there is an assumption that the results of the trial will be applicable to the care of patients who have the condition that was treated. The best evidence is thought to come from large multicentre clinical trials that are randomized and placebo-controlled, and which are conducted double-blind according to a predetermined schedule that is strictly adhered to.
Trials should be large, so that serious adverse events can be detected even when they occur rarely. Multi-centre trials minimize problems that can arise when a single geographical locus has a population that is not fully representative of the global population, and they can minimize the effect of geographical variations in environment and health care delivery. Randomization (if the study population is large enough) should mean that the study groups are unbiased. A double-blind trial is one in which neither the patient nor the deliverer of the treatment is aware of the nature of the treatment offered to any particular individual; this avoids bias caused by the expectations of either the doctor or the patient. Placebo controls are important, because the placebo effect can often be very strong. However, such trials are very expensive, difficult to co-ordinate properly, and often impractical to design optimally. For example, for many types of medical intervention, no satisfactory placebo treatment is possible. For several medical interventions, the use of a placebo, although feasible, is considered unethical (see section on unethical use of placebos). Sackett, one of the founders of evidence-based medicine, recognized that large-scale trials were not conducted for many conditions (see section below), and that it might not be possible to conduct them. Underlining the inherent difficulty of extrapolating from large-scale trials, Sackett proposed the use of N of 1 randomized controlled trials (also called single-subject randomized trials). In these trials, the patient is both the treatment group and the placebo group, but at different time periods. Blinding must be done with the collaboration of the pharmacist, and treatment effects must appear and disappear quickly following introduction and cessation of the therapy.
This type of RCT can be performed for many chronic, stable conditions.[52] The individualized nature of the single-subject randomized trial, and the fact that it often requires the active participation of the patient (questionnaires, diaries), appeals to the patient and promotes better insight and self-management[53][54] as well as patient safety,[55] in a cost-effective manner.
Evidence synthesis: summarizing the evidence
Systematic review
A systematic review is a summary of healthcare research that involves a thorough literature search and critical appraisal of individual studies to identify the valid and applicable evidence. It often, but not always, uses appropriate techniques (meta-analysis) to combine these valid studies, and may grade the quality of the particular pieces of evidence according to the methodology used, and according to strengths or weaknesses of the study design. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews which nonetheless adhere to the standards for gathering, analyzing and reporting evidence.
Clinical practice guidelines
Clinical practice guidelines are defined as "Directions or principles presenting current or future rules of policy for assisting health care practitioners in patient care decisions regarding diagnosis, therapy, or related clinical circumstances. The guidelines may be developed by government agencies at any level, institutions, professional societies, governing boards, or by the convening of expert panels. The guidelines form a basis for the evaluation of all aspects of health care and delivery."[56]
Medical informatics: incorporating evidence into clinical care
Practicing clinicians usually cite a lack of time for reading newer textbooks or journals. However, the emergence of new types of evidence can change the way doctors treat patients.
Unfortunately, recent scientific evidence gathered through well-controlled clinical trials usually does not reach busy clinicians in real time. Another potential problem is that there may be numerous trials on similar interventions and outcomes that are not systematically reviewed or meta-analyzed. Medical informatics is an essential adjunct to EBM, and focuses on creating tools to access and apply the best evidence for making decisions about patient care.[3] Before practicing EBM, informaticians (or informationists) must be familiar with medical journals, literature databases, medical textbooks, practice guidelines, and the growing number of other dedicated evidence-based resources, such as the Cochrane Database of Systematic Reviews and Clinical Evidence.[57] Similarly, to practice medical informatics properly, it is essential to have an understanding of EBM, including the ability to phrase an answerable question, locate and retrieve the best evidence, and critically appraise and apply it.[58][59]
Criticisms of evidence-based medicine
There are a number of criticisms of EBM.[60][61] Most generally, EBM has been criticized as an attempt to define knowledge in medicine in the same way that was done unsuccessfully by the logical positivists in epistemology, "trying to establish a secure foundation for scientific knowledge based only on observed facts".[62] A general problem with EBM is that it seeks to make recommendations for treatment that (on balance) are likely to provide the best treatment for most patients. However, the best treatment for most patients is not necessarily the best treatment for a particular individual patient. The causes of disease and the patient responses to treatment all vary considerably, and are affected, for example, by the individual's genetic make-up, particular history, and factors of individual lifestyle. Taking these properly into account requires the clinical experience of the treating physician, and over-reliance upon recommendations based upon statistical outcomes of treatments given in a standardized way to large populations may not always lead to the best care for a particular individual.
Unethical use of placebos
Ideally, the true effectiveness of any medical treatment should be compared with the effectiveness of a placebo treatment in a double-blind trial, where neither the patient nor the doctor is aware of whether the treatment administered is an active treatment or an inert placebo. Placebo controls are thought to be very important because of the very considerable "power of suggestion". However, as stated in The Declaration of Helsinki by the World Medical Association, it is unethical to give any patient a placebo treatment if an existing treatment option is known to be beneficial.[63][64] Many scientists and ethicists consider that the U.S.
Food and Drug Administration, by demanding placebo-controlled trials, encourages the systematic violation of the Declaration of Helsinki.[65] The use of placebo controls remains a convenient way to avoid direct comparisons with a competing drug. As EBM evolves, appropriate use of placebo is being revised.[66][67] When guidelines suggest a placebo is an unethical control, then an "active-control noninferiority trial" may be used.[68] To establish non-inferiority, the following three conditions should be - but frequently are not - established:[68]
Lack of randomized controlled trials for clinical decisions
Randomized controlled trials are available to support 21%[69] to 53%[70] of principal therapeutic decisions.[71] Because of this, evidence-based medicine has evolved to accept lesser levels of evidence when randomized controlled trials are not available.[72]
Ulterior motives
An early criticism of evidence-based medicine is that it can be a guise for rationing resources or other goals that are not in the interest of the patient.[73][74] In 1994, the American Medical Association helped introduce the "Patient Protection Act" in Congress to reduce the power of insurers to use guidelines to deny payment for medical services.[75] As a possible example, Milliman Care Guidelines state that they have produced "evidence-based clinical guidelines since 1990".[76] In 2000, an academic pediatrician sued Milliman for using his name as an author on practice guidelines that he stated were "dangerous".[77][78][79] A similar suit disputing the origin of care decisions at Kaiser has been filed.[80] The outcomes of both suits are not known. Conversely, clinical practice guidelines by the Infectious Disease Society of America are being investigated by Connecticut's attorney general on grounds that the guidelines, which do not recognize a chronic form of Lyme disease, are anticompetitive.[81][82]
EBM not recognizing the limits of clinical epidemiology
A common criticism addressed to epidemiology is that it can show association, but not causation. Evidence-based medicine is a set of techniques derived from clinical epidemiology.
While clinical epidemiology has a role in informing clinical decisions when complemented with testable hypotheses on disease,[83] many critics consider that evidence-based medicine is a form of clinical epidemiology which became so prevalent in health care systems, and imposed such an empiricist bias on medical research, that it has contributed to undermining the very notion of causal inference in clinical practice.[84] It is argued that it has even become condemnable to use common sense,[85] as was cleverly illustrated in a systematic review of randomized controlled trials studying the effects of parachutes against gravitational challenges (free falls).[86]
Fallibility of knowledge
Evidence-based medicine has been criticized on epistemologic grounds as "trying to establish a secure foundation for scientific knowledge based only on observed facts"[62] and as not recognizing the fallible nature[87] of knowledge in general. The inevitable failure of reliance on empiric evidence as a foundation for knowledge was recognized over 100 years ago and is known as the "Problem of Induction" or "Hume's Problem".[88]
Complexity theory
Complexity theory and chaos theory have been proposed as further explaining the nature of medical knowledge.[89][90] Regarding health services research, although complexity theory has not advanced to the state of being able to mathematically model healthcare delivery, it has been used as a framework for case studies[91][92][93][94] and traditional bivariate analysis[95] of healthcare delivery. For example, a systematic review of organizational interventions to improve the quality of care of diabetes mellitus type 2 suggests that interventions based on complexity theory will be more successful.[95] If the goal of modeling healthcare is to comply with specific quality indicators, interventions based on systems theory may be more effective than those based on complexity theory.[96]
References