CAM and Evidence-Based Medicine

Mark Tonelli, MD, has problems with evidence-based medicine (EBM). He has published a few articles detailing his issues, and he makes some legitimate points. We at Science-Based Medicine (SBM) have a few issues with the execution of EBM as well, so I am sympathetic to constructive criticism.

In an article titled “Integrating evidence into clinical practice: an alternative to evidence-based approaches,” the abstract states:

Evidence-based medicine (EBM) has thus far failed to adequately account for the appropriate incorporation of other potential warrants for medical decision making into clinical practice. In particular, EBM has struggled with the value and integration of other kinds of medical knowledge, such as those derived from clinical experience or based on pathophysiologic rationale. The general priority given to empirical evidence derived from clinical research in all EBM approaches is not epistemically tenable. A casuistic alternative to EBM approaches recognizes that five distinct topics, 1) empirical evidence, 2) experiential evidence, 3) pathophysiologic rationale, 4) patient goals and values, and 5) system features are potentially relevant to any clinical decision. No single topic has a general priority over any other and the relative importance of a topic will depend upon the circumstances of the particular case. The skilled clinician must weigh these potentially conflicting evidentiary and non-evidentiary warrants for action, employing both practical and theoretical reasoning, in order to arrive at the best choice for an individual patient.

I certainly agree that clinical evidence (what he is referring to by “empirical” evidence above) is not, and should not be, the sole type of knowledge that is incorporated into clinical decision-making. However, I think this criticism is a bit of a straw man, at least with regard to items 2, 4, and 5. The goals and values of the patient are definitely part of clinical decision-making, even in a rigorously evidence-based practice. We are, after all, treating people, not diseases. When I was in medical school this was called the biopsychosocial model of medicine. Now it is also not uncommon for quality of life measures and overall satisfaction to be incorporated as outcome measures in clinical trials, blurring the lines between empiricism and personal goals and values.

So while I agree that patient values and goals absolutely need to be taken into consideration when practicing medicine, I don’t see this as a new idea or one that is at odds with EBM, nor entirely distinct from empiricism. By including this as he does, however, there is the implication that EBM excludes such considerations, and I do not believe that is fair.

Where we likely mostly agree is on number 3 – pathophysiological rationale. I could expand this to include all of basic science – medical practices should be plausible. I also think he has a legitimate point in that EBM gives too much emphasis to clinical science and shortchanges basic science. But it is interesting to note that the EBM grading system for recommendations does allow for extrapolation (i.e., grade B = consistent level 2 or 3 studies, or extrapolations from level 1 studies). Extrapolation involves considering pathophysiology and mechanism of action. While extrapolation (rather than direct evidence) downgrades the recommendation by one category (which is appropriate), it does not exclude it altogether.

Further, I think the real value of considering pathophysiology lies not in supporting plausible treatments, but in being extra cautious about implausible ones. When the basic science dictates that a proposed treatment is highly implausible, the bar for clinical evidence should be raised proportionately. I don’t think this is what Tonelli had in mind, however, as we will see.

Item #2, experiential evidence, is highly problematic. While experience is great for some things, like recognizing diagnoses, being sensitive to the subtleties of history taking, and interfacing with patients – it is highly misleading when it comes to determining safety and efficacy. The simple fact is that personal experience is too limited, quirky, and uncontrolled, and is overwhelmingly more likely to simply confirm our biases than to actually lead us in the direction of truth.

In another related article (actually published in 2001, earlier than the 2006 paper above), Tonelli clarifies:

Empirical evidence, when it exists, is viewed as the “best” evidence on which to make a clinical decision, superseding clinical experience and physiologic rationale. But these latter forms of medical knowledge differ in kind, not degree, from empirical evidence and do not belong on a graded hierarchy.

He is partly correct here – these other forms of evidence are not necessarily below, but are tangential to, empirical evidence. But I think Tonelli is missing the context of EBM. EBM is not a method for solely determining clinical practice (clinical decision-making) but for determining safety and efficacy, which is one factor that informs practice. Values, the system, and the human side of medicine also go into clinical practice, but they should not be used to determine efficacy. So it seems his criticism is based upon a straw man constructed of his own confusion.

I might have been inclined to give Tonelli some benefit of the doubt, were it not for this:

The methods for obtaining knowledge in a healing art must be coherent with that art’s underlying understanding and theory of illness. Thus, the method of EBM and the knowledge gained from population-based studies may not be the best way to assess certain CAM practices, which view illness and healing within the context of a particular individual only. In addition, many alternative approaches center on the notion of non-measurable but perceptible aspects of illness and health (e.g., Qi) that preclude study within the current framework of controlled clinical trials. Still, the methods of developing knowledge within CAM currently have limitations and are subject to bias and varied interpretation. CAM must develop and defend a rational and coherent method for assessing causality and efficacy, though not necessarily one based on the results of controlled clinical trials. Orthodox medicine should consider abandoning demands that CAM become evidence-based, at least as “evidence” is currently narrowly defined, but insist instead upon a more complete and coherent description and defense of the alternative epistemic methods and tools of these disciplines.

This casts a new light on all of Tonelli’s other publications. It seems he is making an elaborate argument for the inclusion of other kinds of evidence (other than rigorous, controlled, clinical studies) as support for fanciful but ideologically appealing treatments.

This is a refrain that is becoming common in the CAM community – that we need to redefine “evidence”, not restrict ourselves to narrow definitions of evidence, and that CAM modalities cannot be properly studied by traditional scientific methods. There is always a flavor that CAM must free itself from the tyranny of scientific evidence.

What is it, exactly, about scientific methods that they feel is incompatible with CAM methods – being thorough, counting all the data, controlling for variables, minimizing the effects of bias, carefully defining terms and outcomes, or being statistically rigorous? Even individualized treatments can be studied rigorously – so that is an insufficient excuse. In the end, the call to expand the definition of evidence is just a deceptive way of asking for sloppy methods of research, because CAM modalities generally do not hold up under rigorous standards.

We don’t need to redefine or expand the methods of science – we need to return common sense to medicine.
