Talk:Evidence-based medicine/Draft

From Citizendium
Revision as of 14:20, 18 November 2007 by Chris Day (→Apply section)
This article has a Citable Version.
Definition: The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.
Checklist and Archives
Workgroup category: Health Sciences
Talk Archive: 1
English language variant: American English


I will be glad to help author here, and would like to go over a plan for the article. I think that, as this article covers a special sort of medical field, we should discuss "audience". Please, fellow editors, argue with any of these points if they differ from your understanding. Evidence-based medicine is certainly all about clinical care of patients - but, unlike an article on dermatology, say, it really is about a way of thinking about medicine, an approach. Reading what is written so far - it is really meaty and presents that approach, but, in my mind, suffers from 2 faults: one is that there is too much technical language without explanation, and (2) the history of medicine (in a way) has to be presented so that the naive reader understands that actually, "regular medicine" is not evidence-based. I think also that including some real examples of changes in clinical practice that are based on evidence-based medicine may be helpful. I am going to add some of this and am open to discussion, especially from Supten. Nancy Sculerati 09:35, 15 May 2007 (CDT)

References-with notes

O'Malley P. Order no harm: evidence-based methods to reduce prescribing errors for the clinical nurse specialist. [Review] [17 refs] [Journal Article. Review] Clinical Nurse Specialist. 21(2):68-70, 2007 Mar-Apr. UI: 17308440. Classed under evidence-based medicine by Ovid (Medline); this article reviews actual sources of medication errors.

Doumit G. Gattellari M. Grimshaw J. O'Brien MA. Local opinion leaders: effects on professional practice and health care outcomes.[update of Cochrane Database Syst Rev. 2000;(2):CD000125; PMID: 10796491]. [Review] [54 refs] [Journal Article. Review] Cochrane Database of Systematic Reviews. (1):CD000125, 2007. UI: 17253445

Lorenz LB. Wild RA. Polycystic ovarian syndrome: an evidence-based approach to evaluation and management of diabetes and cardiovascular risks for today's clinician. [Review] [60 refs] [Journal Article. Review] Clinical Obstetrics & Gynecology. 50(1):226-43, 2007 Mar. UI: 17304038

Jordan A. McDonagh JE. Transition: getting it right for young people. [Review] [29 refs] [Journal Article. Review] Clinical Medicine. 6(5):497-500, 2006 Sep-Oct. UI: 17080900

Thanigaraj S. Wollmuth JR. Zajarias A. Chemmalakuzhy J. Lasala JM. From randomized trials to routine clinical practice: an evidence-based approach for the use of drug-eluting stents. [Review] [48 refs] [Journal Article. Review] Coronary Artery Disease. 17(8):673-9, 2006 Dec. UI: 17119375

Stanley K. Design of randomized controlled trials. [Review] [9 refs] [Journal Article. Review] Circulation. 115(9):1164-9, 2007 Mar 6. UI: 17339574

Sectioning

Are there perhaps more sections than are useful here? CZ:Article Mechanics recommends against many relatively short sections in favor of relatively few, longer sections. But I don't think we have any very hard-and-fast rules about this.

Glad to see you here, Dr. Badgett! --Larry Sanger 22:01, 23 October 2007 (CDT)

Thanks - Robert Badgett 22:37, 31 October 2007 (CDT)

'Main' template not working

I added a new call to the main template, and now all three calls are not displaying correctly. - Robert Badgett 22:37, 31 October 2007 (CDT)

Misuses of EBM

The article ignores the misuses of EBM in the real world. Very few of the methods actually used in medicine have ever been validated by independent prospective randomized double-blind studies, or are likely to be. The main use of EBM is by HMOs and other prepaid managed care organizations, as an excuse to refuse to pay for expensive studies or treatments, while happily paying for inexpensive, untested, unproven treatments, such as herbal and other "alternative" medicines. I do not think this misuse of EBM should be ignored in this otherwise wholly laudatory article. Harvey Frey 17:20, 12 November 2007 (CST)

Hi!
The use of the phrase "there is no evidence that" is becoming a little too frequent in clinical medicine. I suggest these two articles for inclusion; unfortunately, I cannot access them (full text) right now.
J Med Ethics 2004;30:141-145 Evidence based medicine and justice: a framework for looking at the impact of EBM upon vulnerable or disadvantaged groups. W A Rogers
S I Saarni and H A Gylling Evidence based medicine guidelines: a solution to rationing or politics disguised as science?
J. Med. Ethics, Apr 2004; 30: 171 - 175.
May I summarize the two abstracts in the Criticisms section?
Pierre-Alain Gouanvic 23:34, 12 November 2007 (CST)

Problem with the references

Somewhere around the 50th reference, there is a bug. Can someone fix this? Pierre-Alain Gouanvic 23:47, 12 November 2007 (CST)

Great! Pierre-Alain Gouanvic 13:50, 13 November 2007 (CST)

Criticisms that may be incorporated into the Section

I think more needs to be added about the sources of much so-called EBM, from sources interested in minimizing expenses of government health plans, like the Cochrane group, or through medical auditors interested primarily in maximizing profits of private HMOs, like Milliman & Robertson. There also needs to be a fair admission of how little of accepted medical practice has actually been validated by 'gold-standard' studies. When should a procedure be denied based on lack of EBM support? And to what extent are surrogate measures acceptable when, say, survival data is unavailable? For instance, in Radiation Oncology (my own specialty), if you know that higher radiation doses kill more cancer cells, and high doses are usually limited by doses to surrounding tissues, and if you can show that some new technique gives less dose to surrounding tissues, thus allowing higher doses to cancers, is it irrational to take that as evidence that the new technique is superior? Must an HMO insist on a prospective randomized double-blind study using 20-year survival as an endpoint before allowing use of the new technique? The other issue is the extent to which 'cost' should be involved in EBM studies, and if it IS allowed, what should be the conversion factor between dollars and years of life, or dollars and years of pain-free life? Should we EVER do a coronary bypass operation, given that the same number of dollars could save thousands of lives if spent on malaria prevention instead? But WOULD the dollars saved be spent on malaria prevention, or would they go to executive perks and stockholder dividends? One doctor in California recently received almost a billion dollars selling his share of an HMO. Those were dollars not spent on medical care, often justified by calling some procedure "not medically necessary" or "investigational"! And what weight should be given to the EBM "guidelines"? Should they be used to overrule the decision of the primary doctor on the case?
If so, who takes responsibility for adverse results? The clerk who countermanded a doctor's order based on an M&R cookbook? Harvey Frey

I think these are all legitimate issues. What we have so far is a pretty mainstream article, your stuff would help. Much of this could be added to the 'criticisms' section, which is currently sparse. Some of what you suggest might be better on the clinical guidelines page. Robert Badgett
Here's another example: http://www.careguidelines.com/ An entirely PROPRIETARY set of "EBM Guidelines" from Milliman, originally a hospital accounting firm, based on no known public peer review, widely sold to managed care organizations in the US for the express purpose of controlling cost. And, of course, they come with disclaimers, to avoid liability if anyone is injured by one of their clients using them. I do remember a case in California a few years ago in which they figured prominently, when a hospital prematurely discharged a woman post-delivery based on these guidelines. Unfortunately, it wasn't a reported appellate case, so I'm having trouble finding it now. Harvey Frey
Interesting. I cannot find their guidelines to assess their methods, but from your description, it sounds like they hijacked the label evidence-based. Robert Badgett
If I understood you well, the example you provide from oncology:
For instance in Radiation Oncology (my own specialty) if you know that higher radiation doses kill more cancer cells, and high doses are usually limited by doses to surrounding tissues, and if you can show that some new technique gives less dose to surrounding tissues this allowing higher doses to cancers, is it irrational to take that as evidence that the new technique is superior?
is an illustration of the difficulty of using causal inferences and, for that matter, common sense, in the framework of EBM. I unearthed something like a little gem, which could be useful in defining EBM from the practitioner's and patient's point of view (I'm not saying that this article is "one of its kind", though): Critique of (im)pure reason: evidence-based medicine and common sense [1]
While the goal of evidence-based medicine (EBM) is certainly laudable, it is completely based on the proposition that 'truth' can be gleaned exclusively from statistical studies. In many instances, the complexity of human physiology and pathophysiology makes this a reasonable, if not necessary, assumption. However, there are two additional large classes of medical 'events' that are not well served by this paradigm: those that are based on physically required causality, and those that are so obvious (to the casual observer) that no self-respecting study will ever be undertaken (let alone published). Frequently, cause-and-effect relationships are so evident that they fall into both categories, and are best dealt with by the judicious use of common sense. Unfortunately, the use of common sense is not encouraged in the EBM literature, as it is felt to be diametrically opposed to the very notion of EBM. As is more fully discussed in the manuscript, this active disregard for common sense leaves us at a great disadvantage in the practical practice of medicine.
I believe that this criticism is important because it brings into bright light the relationship between EBM and fundamental research: the latter deals with complex cause-and-effect relationships, the former with specific effects, out of the black box of human physiology. Pierre-Alain Gouanvic 12:05, 14 November 2007 (CST)

Some problems

"Evidence-based medicine seeks to promote practices that has been shown, through the scientific method to have validity by empiric proof." This needs re-thinking; I think that what is meant here is "promoting practices the effectiveness of which has been supported by stringent statistical analysis of the results of carefully controlled clinical studies."

Evidence-based medicine is not science-based medicine. Science-based medicine works from a fundamental understanding of basic mechanisms to generate a rationally designed intervention strategy. Not all medical interventions are actually based in science in this sense (and some would say that relatively few are). More commonly, they are based empirically on experience of what actually works, and the scientific rationale or explanation comes later (if at all).

Most importantly here, though, the scientific method would test the explanations for the effectiveness of particular treatments by hypothesis-based experimental testing. Whether this has been done or not would not really influence the decision to use a particular intervention or not. Gareth Leng 03:55, 14 November 2007 (CST)


I haven't checked the references, only put them into what I think is a style consistent within the article and consistent with Biology workgroup style; I've shortened author lists to et al. when there are more than 2 authors and omitted issue numbers as redundant, generally to try to keep the list concise for printing. My general feeling is that it seems over-referenced - I'd be wary of this, as a large current reference list becomes outdated fast; a smaller list of elite core references has a longer shelf life. The size is also a burden for verification. However, it's a very nicely written, very helpful article. I'd just return to the use of the word "proof", which I'd strongly urge you to avoid. Scientists would rarely consider anything to be proved; the evidence might be strong enough to accept a conclusion (provisionally), but if a conclusion rests on statistics then there is always a margin for error. Gareth Leng 06:51, 14 November 2007 (CST)

required fixes, self approval?

Several things need fixing prior to approval. The article needs to be consistent; i.e., both "evidence-based" and "evidence based" occur in the article, as well as minor typos. At least two sections are completely empty somewhere near the bottom, including the "Apply" and "Assess" sections. They need to be removed or expanded. Finally, the nominating editor appears to have created and written on this page. I suggest removal of the nomination, a careful read and editing, and then re-nomination. David E. Volk 09:18, 14 November 2007 (CST)

Studies of effectiveness section

The last sentence in this paragraph is not a sentence. I can't figure out what was meant. I inserted EBM in a few sentences where it seemed to be missing. David E. Volk 10:15, 14 November 2007 (CST)

Is this ready for approval?

Several sections were blank. I added text from related articles to give an overview, but we can't have an approved article with blank sections. Also, it seems incomplete in places. In particular, there are four cases of a lone subsection in a hierarchy. This seems to imply there is another subsection that could be added; if not, then the subsection seems unnecessary. For example:

7 Incorporating evidence into clinical care
7.1 Medical informatics
7.2  ?
8.3 Clinical reasoning
8.3.1 Improving clinical care
8.3.2  ?
9.4 Apply
9.4.1 Clinical reasoning
9.4.2  ?
10.3 Epistemology
10.3.1 Complexity theory
10.3.2  ?

In all these cases it seems like there should either be another subsection or that the x.x.1 subheading is not required. I don't know enough about the topic to know what the ? might be. Chris Day (talk) 22:44, 14 November 2007 (CST)


What does this mean?

I was trying to edit the second paragraph in the first section but came to the conclusion that I did not really know what it means.

"Evidence-based medicine seeks to promote practices that have been shown, through the scientific method to have validity by empiric proof. As such, it currently encompasses only a few of the actual practices in clinical medicine and surgery. More often, recommendations are made on the basis of best evidence that are reasonable, but not proven. Evidence-based medicine is also a philosophy, however, that seeks to validate practices by finding proof."

The first sentence I would change to:

"Evidence-based medicine seeks to promote practices that have been shown to have validity using the scientific method."

Reading the second sentence, I'm unsure what the point is. Is the implication that the actual practices in clinical medicine and surgery do not follow the scientific method? If so, does this even need to be said? It is redundant with the first paragraph.

The third sentence seems redundant with the first sentence. It says the same as "promote practices that have been shown to have validity using the scientific method"

The last paragraph, relating to philosophy, loses me. Philosophy and EBM seem to be the opposite of each other, but this sentence seems to be saying they are both? I find this very confusing. I hope these comments are useful. Chris Day (talk) 23:16, 14 November 2007 (CST)

It looks like there's some redundancy in it, from an outsider's point of view. Even when it says that "Evidence-based medicine is also a philosophy, however, that seeks to validate practices by finding proof", it seems to read that A is something that relies on B, but A also seeks to prove B. I would not use the word "philosophy" but rather restate it as "Part of the ultimate goal of evidence-based medicine is to validate practices by establishing proof of the results." --Robert W King 23:41, 14 November 2007 (CST)

Isn't that exactly the same as the first sentence in the paragraph? Here I slightly reworded it and you'll see what I mean.

"Part of the ultimate goal of evidence-based medicine is to validate practices by using the scientic method."

It seems to me that the whole paragraph distills down to the first sentence:

"Evidence-based medicine seeks to promote practices that have been shown to have validity using the scientific method."

Chris Day (talk) 23:50, 14 November 2007 (CST)

If the rest of the paragraph is superfluous because of redundancy, I'd probably just remove the errant content! --Robert W King 00:24, 15 November 2007 (CST)
That's what I have done. Let's see what the health editors think. Chris Day (talk) 00:35, 15 November 2007 (CST)

Industry and publication bias

Reading this again, it struck me that an important part of meta-analysis and appraisal is neglected. Most studies where they are misleading are I think misleading because of flaws in design or conduct or analysis. I don't know that I've ever seen a study where legitimate criticisms can't be raised, that might affect the interpretation. A good meta-analysis grades the quality of the trials, weighting the outcomes by quality, and I think attempts to come to a global recommendation on the basis that while individual trials might be imperfect for different reasons, when collectively they come to a common conclusion that conclusion is probably reliable.
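The weighting described above is usually implemented as inverse-variance (precision) weighting, so that large, precise trials dominate the pooled result. A minimal fixed-effect sketch in Python; the effect sizes and standard errors below are hypothetical, purely for illustration:

```python
import math

def pooled_estimate(effects, standard_errors):
    """Fixed-effect (inverse-variance) pooling: each trial is weighted
    by 1/SE^2, so larger, more precise trials count for more."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios with standard errors
effects = [-0.40, -0.25, -0.10]
ses = [0.20, 0.10, 0.30]
est, se = pooled_estimate(effects, ses)
```

A quality-weighted meta-analysis extends this idea by shrinking the weight of methodologically weaker trials; random-effects models additionally allow for genuine between-trial variation.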

The issue of publication bias works two ways. First, negative or inconclusive results are less likely to be reported. Second, positive results may be more likely to be reported when they are confirmatory of already published findings even when the quality of the trial is poor.

Overall, industry-sponsored trials are given a rough ride here. It should be remembered I think that, without industry sponsorship, there would be far fewer trials in the first place. The quality of studies very much depends on the integrity and competence of the academic or clinical scientists conducting them. We as academics can't blame industry for our own shortcomings. I do think that the major pharmaceutical companies try to find academic partners whose integrity and competence are unimpeachable; it's very much in their interests to do so, whatever the outcome of the trials.

Gareth Leng 03:59, 16 November 2007 (CST)

There are many "jargon terms" introduced here, e.g. relative risk ratio, relative risk reduction, absolute measures, absolute risk reduction, number needed to treat, number needed to screen, number needed to harm.

I wonder if it would be sensible to add a glossary as a subpage that gave definitions of these? Or is there some other solution? Perhaps there's a case for making a stub for each of these, with a brief definition and an external link. Gareth Leng 07:17, 16 November 2007 (CST)
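For what it's worth, the arithmetic behind several of these terms is simple enough to state compactly. A hedged sketch, where the 10% and 5% event rates are hypothetical, chosen only to make the numbers come out cleanly:

```python
def effect_measures(control_event_rate, treated_event_rate):
    """Standard EBM effect measures derived from two event rates."""
    rr = treated_event_rate / control_event_rate   # relative risk
    rrr = 1.0 - rr                                 # relative risk reduction
    arr = control_event_rate - treated_event_rate  # absolute risk reduction
    nnt = 1.0 / arr                                # number needed to treat
    return rr, rrr, arr, nnt

# e.g. events in 10% of controls versus 5% of treated patients
rr, rrr, arr, nnt = effect_measures(0.10, 0.05)
# rr = 0.5, rrr = 0.5, arr = 0.05, nnt = 20
```

Number needed to harm is the same 1/ARR calculation applied to an adverse outcome, and number needed to screen applies it to a screening programme.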

I like the glossary idea on subpages. It goes along with the definitions template that Larry created as well. --Matt Innis (Talk) 07:56, 16 November 2007 (CST)

Delayed approval

Although this article is outside of my sphere of competence, I recommend delaying the approval for at least 7 days, so that changes can be made. There are too many comments here from senior scientists, which may not be taken into account if the approval occurs as scheduled. Please indicate your support or opposition for my proposal of a short delay. --Martin Baldwin-Edwards 08:40, 16 November 2007 (CST)

I think 7 days may not be enough unless we are able to fill in the blank spaces toward the bottom and clean up the criticism section. I have added the Healing Arts Workgroup as this article affects them as well. --Matt Innis (Talk) 07:46, 16 November 2007 (CST)

I agree that some delay is sensible. I added a stub for Odds ratio as an example for my comment above; sadly, I noticed too late that this is a term omitted (dooh). Gareth Leng 08:49, 16 November 2007 (CST)

Notice what I did with odds ratio. You put the name in the r template like this {{r|odds ratio}} and then click on the little 'e' and put in the definition. Then it shows up anywhere we put that on any page about odds ratio. --Matt Innis (Talk) 08:31, 16 November 2007 (CST)
Thanks. Gareth Leng 09:35, 16 November 2007 (CST)

Cut this?

I cut this short section. It is in the criticism section, and I couldn't identify in what respect there is any criticism. I guess I think that this may be interesting but is probably only tangential to the article?

Complexity theory

Complexity theory is proposed as further explaining the nature of medical knowledge.[2][3]

Gareth Leng 09:35, 16 November 2007 (CST)

Restored, renamed, and expanded this section with context. See what you think. - Robert Badgett 08:57, 16 November 2007 (CST)
Complexity theory needs to be defined or explained. --Matt Innis (Talk) 09:06, 16 November 2007 (CST)
Liked what you did, Robert; also agree with Matt - maybe, though, it's just again that Complexity theory needs a stub rather than a definition here. Gareth Leng 11:54, 16 November 2007 (CST)
That might work, depending on how you do it, but I like having at least some clarity here, and then they can click on the link for more, especially for those more scientific terms whose meanings cannot be easily inferred by the average reader. --D. Matt Innis 11:44, 16 November 2007 (CST)

??

"were more likely to adopt COX-2 drugs before the drugs were recalled by the FDA"

Well yes, they would be, wouldn't they? But were they recalled by the FDA? Gareth Leng 08:54, 16 November 2007 (CST)

Quality of references

I have started to do some checking of the references. I looked at this: "A randomized controlled trial supports the efficiency of this approach.[7]", which is a reference used 3 times. This is a study of 32 medical students assigned to one of 2 search protocols. Frankly, it has statistical weaknesses; most obviously, the analysis is based on the numbers of answers to questions, not on the individual performances; as the individuals are independent but their answers are obviously not, this seems inappropriate. I don't mean to rubbish this small trial, only to say that I think, especially in this article, we should set the bar for citing studies as evidence at an appropriately high level - i.e. at a level appropriate for the topic. Using small or poorly controlled studies as evidence to support conclusions about EBM is surely not what we want to do? I really would recommend trimming the references in the article down to a sustainable core of unimpeachably strong studies. Gareth Leng

Agreed. Considering we have the expertise here, we don't need to reference things that are reasonably understandable unless there is a conflict or questionable synthesis of the information. --Matt Innis (Talk) 09:47, 16 November 2007 (CST)

Disagree. I think trimming this is one of the same mistakes that we accuse EBM of making. We accuse EBM of only acting on RCT data and not allowing lesser data with expert judgment (note the new section someone added about parachutes). I missed that the trial did not adjust for intra-user correlation. I suggest noting that the trial had issues, but not deleting the trial. If you delete this trial, then you are left without guidance on how to teach searching, so we are in the situation that we are not sure whether to use parachutes because there are no RCTs of parachutes. The results of this trial are very plausible based on other studies of how much time various search strategies require, so if you take a Bayesian approach to significance testing, this trial is ok, even with its statistical problems.
I do not see a reason to be parsimonious with references. Documenting sources for arguments, and not throwing out unsupported opinions, is a major function of CZ. - Robert Badgett 11:41, 18 November 2007 (CST)

alt med

I added a blurb about alt med being evaluated using EBM as well. Feel free to clarify or clean it up. I think it is important that this method might be the way to evaluate the claims made by techniques that are subject to bias. --Matt Innis (Talk) 09:40, 16 November 2007 (CST)

COX-2 drugs = COX-2 inhibitors?

Robert, can you rephrase COX-2 "drugs" to be more descriptive?

done - Robert Badgett 10:57, 18 November 2007 (CST)

Server timing

In case anyone is wondering, the timing of the server is way off and the clocks are not valid, so the order of edits in the history does not necessarily follow when they were actually made; i.e., Gareth's last edit was actually made before mine, but shows an hour later. --Matt Innis (Talk) 09:42, 16 November 2007 (CST)

file drawer

This sentence:

  • In performing a meta-analyses, a file drawer[15] or a funnel plot analysis[16][17] may help detect underlying publication bias among the studies in the meta-analysis [1].

sure could use a quick explanation of file drawer and funnel plot. --D. Matt Innis 10:25, 16 November 2007 (CST)
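For context, the "file drawer" idea is often quantified with Rosenthal's fail-safe N: the number of unpublished null-result studies that would have to be sitting in file drawers to drag a pooled result below significance. A minimal sketch, using made-up z-scores for illustration:

```python
import math

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies averaging
    z = 0 would reduce the combined (Stouffer) z below the one-tailed
    significance threshold z_alpha."""
    k = len(z_scores)
    n = (sum(z_scores) ** 2) / (z_alpha ** 2) - k
    return max(0, math.ceil(n))

# Five hypothetical trials, each only modestly positive
n_missing = fail_safe_n([1.8, 2.1, 1.5, 2.4, 1.9])
```

A funnel plot attacks the same problem graphically: effect size is plotted against study precision, and asymmetry in the resulting funnel suggests that small negative studies are missing from the literature.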

Really ready for approval?

I just corrected a grammar error in the intro. It looked like something had been deleted, leaving a dangling "and". Also, the very last paragraph in the intro is the following.

"Evidence-Based Health Care or evidence-based practice extends the concept of EBM to all health professions, including purchasing and management [2]."

Here the reference cited does not seem to give any information with regard to purchasing and management and evidence-based practice. If it does, it is not obvious to me as a layperson. Also, this reference does not use the <ref></ref> format. I know these are minor issues, but if these are in the intro, what is the rest of the article like? Chris Day (talk) 17:38, 17 November 2007 (CST)

Those references are much better. Is there any chance to have fewer than seven? It seems like overkill, or is each article very different? Can it be narrowed down to the best two or three? Chris Day (talk) 11:32, 18 November 2007 (CST)

Addition of a radical critique

I was encouraged to try and find a place for another critique of EBM, which relates to the (highly condemnable!) use of common sense under the EBM paradigm (http://en.citizendium.org/wiki/Evidence-based_medicine#EBM_not_recognizing_the_limits_of_clinical_epidemiology). Thanks, Matt, I needed some courage. I dared to be bold and to formulate a general synthesis. It looks like the final criticism (about the fallibility of knowledge), but I think it provides historical and political perspective. I think that the very last part is important, even though it is dangerously funny. Some work should be done to contrast those two similar critiques. Pierre-Alain Gouanvic 02:09, 18 November 2007 (CST)

Criticisms of EBM - are these actually criticisms of nefarious expropriators of EBM?

Two of the sections in criticisms, "Unethical use of placebos" and "Ulterior motives", might be more appropriate to view as criticisms of those who try to use the EBM paradigm to their advantage. Does anyone see a better way to organize the section to reflect this?

Also, as the EBM article fills out, we need to think about some of the details shifting to other articles, specifically the randomized controlled trial and clinical practice guideline articles. - Robert Badgett 10:19, 18 November 2007 (CST)

Need consistency

I have noticed that in the text there are examples of "evidence-based medicine", "Evidence-Based Medicine" and EBM. Does the article intend to use the acronym throughout? That seems to be the case from the intro. Should all cases of "evidence-based medicine" and "Evidence-Based Medicine" be changed to EBM? Chris Day (talk) 11:44, 18 November 2007 (CST)

Apply section

In this section is the following sentence:

"Both patients and healthcare professionals have difficulties with health numeracy and probabilistic reasoning.[29]"

How do these problems in numeracy result in a misapplication of EBM? This needs to be more explicit in the article. There is a reference for this, so what's the conclusion of the reference with respect to the misapplication of EBM? Chris Day (talk) 14:20, 18 November 2007 (CST)

Also in this section is the following sentence:

"Specialists may be less discriminating in their choice of journal reading. [39]"

This does not seem to tie in with the rest of the section. I was going to rewrite it, but I'm not sure what point is being made here. How does this fact relate to the over- or under-use of any evidence-based method? I would expect from the context (specialists tend to overuse EBM) that it is trying to suggest that results and usage of treatments from EBM are not generally reported in specialist journals? But this cannot be right given the sentence above. Wouldn't a specialist be more aware of EBM if they are less discriminating in the journals they read? Chris Day (talk) 12:11, 18 November 2007 (CST)

  1. Michelson J (2004). "Critique of (im)pure reason: evidence-based medicine and common sense". Journal of Evaluation in Clinical Practice 10 (2): 157–61. DOI:10.1111/j.1365-2753.2003.00478.x. PMID 15189382.
  2. Sweeney, Kieran (2006). Complexity in Primary Care: Understanding Its Value. Abingdon: Radcliffe Medical Press. ISBN 1-85775-724-6.
  3. Holt, Tim A (2004). Complexity for Clinicians. Abingdon: Radcliffe Medical Press. ISBN 1-85775-855-2.