Yes, Jacqueline: EBM ought to be Synonymous with SBM

“Ridiculing RCTs and EBM”

Last week Val Jones posted a short piece on her BetterHealth blog in which she expressed her appreciation for a well-known spoof that had appeared in the British Medical Journal (BMJ) in 2003:

Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials

Dr. Val included the spoof’s abstract in her post linked above. The parachute article was intended to be humorous, and it was. It was a satire, of course. Its point was to call attention to excesses associated with the Evidence-Based Medicine (EBM) movement, especially the claim that in the absence of randomized, controlled trials (RCTs), it is not possible to comment upon the safety or effectiveness of a treatment—other than to declare the treatment unproven.

A thoughtful blogger who goes by the pseudonym Laika Spoetnik took issue both with Val’s short post and with the parachute article itself, in a post titled “NotSoFunny – Ridiculing RCTs and EBM.”

Laika, whose real name is Jacqueline, identifies herself as a PhD biologist whose “work is split 75%-25% between two jobs: one as a clinical librarian in the Medical Library and one as a Trial Search Coordinator (TSC) for the Dutch Cochrane Centre.” In her post she recalled an experience that would make anyone’s blood boil:

I remember it well. As a young researcher I presented my findings in one of my first talks, at the end of which the chair killed my work with a remark that made the whole room of scientists laugh, but was really beside the point…

This was not my only encounter with scientists who try to win the debate by making fun of a theory, a finding or …people. But it is not only the witty scientist who is to *blame*, it is also the uncritical audience that just swallows it.

I have similar feelings with some journal articles or blog posts that try to ridicule EBM – or any other theory or approach. Funny, perhaps, but often misunderstood and misused by “the audience”.

 Jacqueline had this to say about the parachute article:

I found the article only mildly amusing. It is so unrealistic, that it becomes absurd. Not that I don’t enjoy absurdities at times, but absurdities should not assume a life of their own.  In this way it doesn’t evoke a true discussion, but only worsens the prejudice some people already have.

Jacqueline argued that two inaccurate prejudices about EBM are that it is “cookbook medicine” and that “RCTs are required for evidence.” Regarding the latter, she made reasonable arguments against the usefulness or ethics of RCTs for “prognostic questions,” “etiologic or harm questions,” or “diagnostic accuracy studies.” She continued:

But even in the case of interventions, we can settle for less than a RCT. Evidence is not [either] present or not, but exists on a hierarchy. RCT’s (if well performed) are the most robust, but if not available we have to rely on “lower” evidence.

BMJ Clinical Evidence even made a list of clinical questions unlikely to be answered by RCT’s. In this case Clinical Evidence searches and includes the best appropriate form of evidence.

  1. where there are good reasons to think the intervention is not likely to be beneficial or is likely to be harmful;
  2. where the outcome is very rare (e.g. a 1/10000 fatal adverse reaction);
  3. where the condition is very rare; [etc., for a total of 6 more categories]

In asserting her view of another inaccurate prejudice about EBM, Jacqueline took Dr. Val and Science-Based Medicine to task:

Informed health decisions should be based on good science rather than EBM (alone).

Dr. Val: “EBM has been an over-reliance on “methodolatry” - resulting in conclusions made without consideration of prior probability, laws of physics, or plain common sense. (….) Which is why Steve Novella and the Science Based Medicine team have proposed that our quest for reliable information (upon which to make informed health decisions) should be based on good science rather than EBM alone.”

Methodolatry is the profane worship of the randomized clinical trial as the only valid method of investigation. This is disproved in the previous sections.

The name “Science Based Medicine” suggests that it is opposed to “Evidence Based Medicine”. At their blog David Gorski explains: “We at SBM believe that medicine based on science is the best medicine and tirelessly promote science-based medicine through discussion of the role of science and medicine.”

While this may apply to a certain extent to quack[ery] or homeopathy (the focus of SBM) there are many examples of the opposite: that science or common sense led to interventions that were ineffective or even damaging, including: […]

As a matter of fact many side-effects are not foreseen and few in vitro or animal experiments have led to successful new treatments.

At the end it is most relevant to the patient that “it works” (and the benefits outweigh the harms).

Furthermore EBM is not -or should not be- without consideration of prior probability, laws of physics, or plain common sense. To me SBM and EBM are not mutually exclusive.

Jacqueline finished by quoting a few comments that had appeared on the BMJ website after the parachute article. Some of them (not all, I’m happy to report) revealed that their authors lacked a sense of humor. Another argued that “EBM is not RCTs.” Still others argued that RCTs are valuable for precisely the reason illustrated by Jacqueline’s examples listed above: that even some seemingly safe and effective treatments—based on science or common sense or clinical experience—have eventually been shown, when subjected to RCTs, to behave otherwise. No one at SBM would dispute the point.

Science-Based Medicine is Not Opposed to Evidence-Based Medicine

I am confident in asserting that we at SBM are in nearly complete agreement with Jacqueline regarding how EBM ought to be practiced. We are, I’m sure, also in agreement that many objections to EBM are specious. Among these, soundly criticized on this site, are special pleadings and bizarre post-modern arguments. The name “Science-Based Medicine” does not suggest that we are opposed to EBM. What it does suggest is that several of us consider EBM to be incomplete in its gathering of evidence, incomplete in ways that Jacqueline herself touched upon. I explained this in a series of posts at the inception of SBM in 2008 (wait for the link), and I discussed it further at TAM7 last summer. Accordingly, Managing Editor David Gorski invited me to respond to Jacqueline’s article. I am happy to do so because, in addition to clarifying the issues for her, it is important to review the topic periodically: The problems with EBM haven’t gone away, but readers’ memories are finite.

Let me begin by asserting that everyone here agrees that large RCTs are the best tools for minimizing bias in trials of promising treatments, and that RCTs have repeatedly demonstrated their power to refute treatment claims based solely on physiology, animal studies, small human trials, clinical judgment, or whatever. I made that very point in my talk at TAM7, offering the Cardiac Arrhythmia Suppression Trial and the Women’s Health Initiative as examples. We also agree that there are some situations in which RCTs, whether for logistical, ethical, or other reasons, ought not to be used or would not yield useful information even if attempted. Parachutes are an example, but there are subtler ones, e.g., the efficacy of pandemic flu vaccines or whether the MMR vaccine causes autism. As we shall see, however, the list of exceptions offered by Jacqueline and BMJ Clinical Evidence is neither a formal part of EBM nor universally accepted by EBM practitioners.

To reiterate: The most important contribution of EBM has been to formally emphasize that even a high prior probability is not always sufficient to establish the usefulness of a treatment—parachutes being exceptions.

EBM’s Scientific Blind Spot

Now, however, we come to an important problem with EBM, a problem not merely of misinterpretations of its tenets (although such are common), but of the tenets themselves. Although a reasonably high prior probability may not be a sufficient basis for incorporating a treatment into general use, it is a necessary one. It is, moreover, a necessary basis for seriously considering such a treatment at all; that is, for both scientific and ethical reasons it is a prerequisite for performing a randomized, controlled human trial. Rather than explain these points here and now, I ask you, Dear Reader, to indulge me by following this link to a post in which I have already done so in some detail. I’ll wait here patiently.

……………….

Are you back? OK. Now you know that we at SBM are in total agreement with Jacqueline that EBM “should not be without consideration of prior probability, laws of physics, or plain common sense,” and that SBM and EBM should not only be mutually inclusive, they should be synonymous. You also know, however, that Jacqueline was mistaken to claim that EBM already conforms to those ideals. It does not, and its failure to do so is written right into its Levels of Evidence scheme—the exceptions that she offered, including those quoted from BMJ Clinical Evidence, notwithstanding. You know all of this because you’ve now seen several examples (there are many more) from that wellspring of EBM reviews, Jacqueline’s own Cochrane Collaboration. (There is another, more subtle reason that prior probability is overlooked in the EBM literature, but pursuing it is an optional exercise for the purposes of today’s discussion.)

EBM and Unintended Mischief

The problems caused by EBM’s scientific blind spot are not limited to the embarrassment of Cochrane reviews suggesting potential clinical value for inert treatments that have been definitively refuted by basic science, although that would be sufficient to argue for EBM reform. The Levels of Evidence scheme has resulted in dangerous or unpleasant treatments being wished upon human subjects in the form of RCTs, case-control studies, or case series even when existing clinical or scientific evidence should have been more than enough to put such claims to rest. The Trial to Assess Chelation Therapy (TACT)—the largest, most expensive, and most unethical trial yet funded by the NCCAM—was originally justified by these words in an editorial in the American Heart Journal in 2000, co-authored by Gervasio Lamas, who would later become the TACT Principal Investigator:

The modern standard for accepting any therapy as effective requires that there be scientific evidence of safety and efficacy in a fair comparison of the new therapy to conventional care. Such evidence, when widely disseminated, leads to changes in clinical practice, ultimately benefitting patients. However, the absence of a clinical trial does not disprove potential efficacy, and a well-performed but too small “negative” trial may not have the power to exclude a small or moderate benefit of therapy. In other words, the absence of evidence of efficacy does not constitute evidence of absence of efficacy. These concepts constitute the crux of the lingering controversy over chelation therapy…

Such an argument, with its obvious appeal to the formal tenets of EBM, was made and accepted by the NIH in spite of overwhelming evidence against the safety and effectiveness of Na2EDTA chelation treatments for atherosclerotic vascular disease, including the several “small” disconfirming RCTs, comprising approximately 270 subjects, to which Dr. Lamas alluded. It was also accepted in spite of its violating both the Helsinki Declaration and the NIH’s own policy stipulating that preliminary RCTs should demonstrate efficacy prior to a Phase III trial being performed.

A 2006 Cochrane Review of Laetrile for cancer would, if its recommendations were realized, stand the rationale for RCTs on its head:

The most informative way to understand whether Laetrile is of any use in the treatment of cancer, is to review clinical trials and scientific publications. Unfortunately no studies were found that met the inclusion criteria for this review.

Authors’ conclusions

The claim that Laetrile has beneficial effects for cancer patients is not supported by data from controlled clinical trials. This systematic review has clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment.

Why does this stand the rationale for RCTs on its head? A definitive case series led by the Mayo Clinic in the early 1980s had overwhelmingly demonstrated, to the satisfaction of all reasonable physicians and biomedical scientists, that not only were the therapeutic claims for Laetrile baseless, but that the substance is dangerous. The subjects did so poorly that there would have been no room for a meaningful advantage in outcome with active treatment compared to placebo—as we have recently seen in another trial of a quack cancer treatment. The Mayo case series “closed the book on Laetrile,” the most expensive health fraud in American history at the time, only to have it reopened more than 20 years later by well-meaning Cochrane reviewers who seemed oblivious of the point of an RCT.

A couple of years ago I was surprised to find that one of the authors of that review was Edzard Ernst, a high-powered academic who over the years has undergone a welcome transition from cautious supporter to vocal critic of much “CAM” research and many “CAM” methods. He is now a valuable member of our new organization, the Institute for Science in Medicine, and we are very happy to have him. I believe that the lateness of his conversion to healthy skepticism was due, in large part, to his allegiance to the formal tenets of EBM. I recommend a short debate published in 2003 in Dr. Ernst’s Focus on Alternative and Complementary Therapies (FACT), pitting Jacqueline’s countryman Cees Renckens against Dr. Ernst himself. Dr. Ernst responded to Dr. Renckens’s plea to apply science to “CAM” claims with this statement:

In the context of EBM, a priori plausibility has become less and less important. The aim of EBM is to establish whether a treatment works, not how it works or how plausible it is that it may work. The main tool for finding out is the RCT. It is obvious that the principles of EBM and those of a priori plausibility can, at times, clash, and they often clash spectacularly in the realm of CAM.

I’ve discussed that debate before on SBM, and I consider it exemplary of what is wrong with how EBM weighs the import of prior probability. Dr. Ernst, if you are reading this, I’d be interested to know whether your views have changed. I hope that you no longer believe that human subjects ought to be subjected to a randomized, controlled trial of Laetrile!

Finally, for the purposes of today’s discussion, let me reiterate another point that must be considered in the context of establishing, via the RCT, whether a treatment works: When RCTs are performed on ineffective treatments with low prior probabilities, they tend not to yield merely ‘negative’ findings, as most physicians steeped in EBM would presume; they tend, in the aggregate, to yield equivocal findings, which are then touted by advocates as evidence favoring such treatments, or at the very least favoring more trials—a position that even skeptical EBM practitioners have little choice but to accept, with no end in sight. Numerous such examples have been discussed on this website.
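To make the arithmetic behind that claim concrete, here is a minimal sketch of mine, using purely hypothetical numbers (80% statistical power, the conventional 5% false-positive rate, and a range of prior probabilities; none of these figures come from any trial discussed in this post). It simply applies Bayes’ theorem to ask how likely a treatment is to really work, given one “positive” trial:

    # Purely illustrative: how prior probability governs what a "positive"
    # trial result actually means. All numbers here are hypothetical.

    def posterior_probability(prior, power=0.80, alpha=0.05):
        """P(treatment truly works | the trial is 'positive'), by Bayes' theorem."""
        true_positive = power * prior          # treatment works and the trial detects it
        false_positive = alpha * (1 - prior)   # treatment is inert, yet the trial is "positive"
        return true_positive / (true_positive + false_positive)

    for prior in (0.5, 0.1, 0.01, 0.001):
        print(f"prior {prior:>5}:  P(works | positive trial) = "
              f"{posterior_probability(prior):.2f}")

    # prior   0.5:  P(works | positive trial) = 0.94
    # prior   0.1:  P(works | positive trial) = 0.64
    # prior  0.01:  P(works | positive trial) = 0.14
    # prior 0.001:  P(works | positive trial) = 0.02

On those assumptions, a statistically significant result for a highly implausible treatment is still far more likely to be a false positive than a true effect; repeat such trials and the literature fills with a scatter of “positive” and “negative” results that reads as equivocal rather than as the clean refutation one might expect.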

The first sentence that I ever posted on SBM, a quotation from homeopath David Reilly, was a perfect illustration of this misunderstanding:

Either homeopathy works or controlled trials don’t!

Dr. Reilly was correct, of course, but not in the way that he supposed. If there is anything that the history of parapsychology can teach the biomedical world, it is the point just made: that human RCTs, as good as they are at minimizing bias or chance deviations from population parameters, cannot ever be expected to provide, by themselves, objective measures of truth. There is still ample room for erroneous conclusions. Without using broader knowledge (science) to guide our thinking, we will plunge headlong into a thicket of errors—exactly as happened in parapsychology for decades and is now being repeated by its offspring, “CAM” research.

Conclusion

These are the reasons that we call our blog “Science-Based Medicine.” It is not that we are opposed to EBM, nor is it that we believe EBM and SBM to be mutually exclusive. On the contrary: EBM is currently a subset of SBM, because EBM by itself is incomplete. We eagerly await the time when EBM considers all the evidence and has finally earned its name. When that happens, the two terms will be interchangeable.

 

