Rambling Musings on Using the Medical Literature

For those who are new to the blog, I am nobody from nowhere. I am a clinician, taking care of patients with infectious diseases at several hospitals in the Portland area. I am not part of an academic center (although we are affiliated with OHSU and have a medicine residency program). I have not done any research since I was a fellow, 20 years ago. I was an excellent example of the Peter Principle; there was no bench experiment that I could not screw up.

My principal weapon in patient care is the medical literature, accessed throughout the day thanks to Google and PubMed. The medical literature is enormous. There are more than 21,000,000 articles indexed in PubMed, over a million of them if the search term ‘infection’ is used, 45,000 of those from last year alone.

I probably read as much of the ID literature as any specialist. Preparing for my Puscast podcast, I skim several hundred titles every two weeks, usually select around 80 references of interest, and read most of them with varying degrees of depth. Yet I am still sipping at a fire hose of information.

The old definition of a specialist is someone who knows more and more about less and less until they know everything about nothing. I often feel I know less and less about more and more, until someday I will know nothing about everything. Yet I am considered knowledgeable by the American Board of Internal Medicine (ABIM), which wasted huge amounts of my time and a serious chunk of my cash, and which has declared, after years of testing, that I am recertified in my specialty. I am still Board Certified, but the nearly pointless exercise has left me certified bored. But I can rant for hours on Bored Certification and how out of touch with the practice of medicine the ABIM is.

My concept of an expert is a combination of experience and understanding of the literature. I used to say mastery of the literature, but no one can master a beast that large; I am just riding on the Great A’Tuin of medical writings. Experience comes with time, and I have read that it takes 10 years to become competent in a field. Whether true or not, it matches my experience. I remember as a resident reading notes on patients I had cared for as an intern, and being appalled at what an ignorant doofus I was. In my first year of practice I had a patient who died of miliary tuberculosis, and the diagnosis was, unfortunately, made at autopsy. It was an atypical manifestation of a rare (in the US) disease. About a decade later the case was presented as an unknown to a visiting professor; I had completely forgotten the case, but I piped up from the audience to pontificate on how this had to be miliary TB. Afterwards I was shown the chart and nice documentation of how clueless I had been a decade earlier. When it comes to being a diagnostician, there is no substitute for experience.

When it comes to treatment? That is where I tell the residents that the three most dangerous words in medicine are ‘In. My. Experience.’ You cannot trust experience when deciding on therapy, especially for relatively unusual diseases. Sometimes I will ask a doc why they use a given antibiotic, usually in a situation where it is being used in a way that is, shall we say, old fashioned. Often the response is “I like it,” as if the choice of a drug is like choosing a beer.

I rely on the literature — such as it is, and limited by my lack of an Ethernet jack in my brain — in deciding the best course of therapy for a patient. The literature is always unsatisfactory. That has always been known. Even with the best studies, there is always the question of whether the literature applies to your patient and their particular co-morbidities and, perhaps, genetics. As an example, it is becoming evident that the literature on the presentation and treatment of Cryptococcus, which is based on experience with C. neoformans, is not applicable to C. gattii, a distinct species of the fungus newly established in the Pacific Northwest. So how to use a literature that may not be totally relevant to my local conditions? I wing it. It is an educated and experienced winging, but winging it I do.

Given the breadth and depth of the literature, it is nice to have systematic reviews, meta-analyses, and guidelines. As a practicing physician, I find them helpful: they provide an overarching understanding, a conceptual framework, for a disease or a treatment. They are the Reader’s Digest abridged version of a topic, and the references are invaluable. Most of the relevant literature is collected in these reviews, which makes it easier, especially in the era of the Googles and on-line references, to find the original papers.

All three have their flaws, and if you are well versed in a field, you recognize the issues and try to compensate.

As was noted in the recent Archives, the recommendations of the Infectious Diseases Society of America are not necessarily based on the best of evidence. Really? I’m shocked. Next up: water is wet, fire is hot, and the Archives confirms the obvious.

Results: In the 41 analyzed guidelines, 4218 individual recommendations were found and tabulated. Fourteen percent of the recommendations were classified as level I, 31% as level II, and 55% as level III evidence. Among class A recommendations (good evidence for support), 23% were level I (≥1 randomized controlled trial) and 37% were based on expert opinion only (level III). Updated guidelines expanded the absolute number of individual recommendations substantially. However, few were due to a sizable increase in level I evidence; most additional recommendations had level II and III evidence.

Conclusions: More than half of the current recommendations of the IDSA are based on level III evidence only. Until more data from well-designed controlled clinical trials become available, physicians should remain cautious when using current guidelines as the sole source guiding patient care decisions.
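For a sense of scale, here is the arithmetic those percentages imply, as a quick Python sketch. The total (4218) and the percentages come from the quote above; the per-level counts are my own rounding, and the level labels are paraphrases, not figures or wording from the paper.

```python
# Back-of-the-envelope arithmetic on the quoted Archives figures.
# Total and percentages are from the quote; derived counts are approximate.
total = 4218
levels = {
    "I (>=1 randomized controlled trial)": 0.14,
    "II (nonrandomized or observational studies)": 0.31,
    "III (expert opinion only)": 0.55,
}
for level, fraction in levels.items():
    print(f"Level {level}: ~{round(total * fraction)} recommendations")
# Level III alone: ~2320 recommendations resting on expert opinion.
```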

Big duh. Anyone who is a specialist understands the weaknesses in all guidelines, but we also understand their importance. When I was a fellow, one of my attendings was, and still is, one of the foremost experts in the US on Candida; another’s areas of expertise are S. aureus infections and endocarditis.

Both have spent a career thinking deeply about their respective areas of expertise. You learn that while no one is perfect, the breadth and depth of their knowledge and experience give their recommendations extra weight. Who would you want at the controls of your plane in unexpected and unusual weather conditions: an experienced pilot, or someone who spent a few days on the X-Plane simulator? The same goes for guidelines. When someone with a lifetime of work in a field helps write a guideline, you pay attention to their expertise. You know the recommendations are not necessarily right, but odds are their opinions are better than mine, just as my opinion is usually better than a hospitalist’s, at least as far as infections are concerned. With residents, I try to make a point of differentiating when my recommendation is no better than the next doc’s, and when my recommendation is the Truth, big T, based on the best understanding of the literature at the moment.

This attitude of trusting authority, held by many in medicine, goes against the University of Google approach, where a day of searching and a quick misreading of the abstracts renders everyone an expert. I wonder if other fields are plagued with these quick pseudo-experts. Law is, when the accused attempt to defend themselves.

I certainly would be in favor of more money being spent on infectious disease research, and, one hopes, infectious disease doctors. In a perfect world, every disease would be subjected to careful, extensive clinical trials, and I would know, for example, the best therapy for invasive Aspergillus pneumonia in a neutropenic leukemia patient. Until that time, I am, in part, going to rely on the guidelines written by those who have spent a career thinking about the diseases I have to treat. To quote Dr. Powers:

“Guidelines may provide a starting point for searching for information, but they are not the finish line…Evaluating evidence is about assessing probability,” Dr. Powers commented in a news release. “Perhaps the main point we should take from the studies on quality of evidence is to be wary of falling into the trap of ‘cookbook medicine,’” Dr. Powers continues. “Although the evidence and recommendations in guidelines may change across time, providers will always have a need to know how to think about clinical problems, not just what to think.”

I was struck by a recent Medscape headline:

Cochrane Review Stirs Controversy Over Statins in Primary Prevention

Having been irritated of late by Cochrane reviews in my area of expertise, I clicked the link. The first three paragraphs are:

A new Cochrane review has provoked controversy by concluding that there is not enough evidence to recommend the widespread use of statins in the primary prevention of heart disease.

The authors of the new Cochrane meta-analysis, led by Dr Fiona Taylor (London School of Hygiene and Tropical Medicine, UK), issued a press release questioning the benefit of statins in primary prevention and suggesting that the previous data showing benefit may have been biased by industry-funded studies. This has led to headlines in many UK newspapers saying that the drugs are being overused and that millions of people are needlessly exposing themselves to potential side effects.

This has angered researchers who have conducted other large statin meta-analyses, who say the drugs are beneficial, even in the lowest-risk individuals, and their risk of side effects is negligible. They maintain that the Cochrane reviewers have misrepresented the data, which they say could have serious negative consequences for many patients currently taking these agents.

Newsweek and The Atlantic both refer to the Cochrane review as a “study.” A review is not what I would consider a study, a term usually synonymous with a clinical trial. The use of the term makes it sound like the Cochrane folks were doing a clinical trial, with patients being randomized to one treatment or another. My sloppy, nonscientific poll of people (all in the medical field, but that is who I have contact with) suggests that no one considers a review of clinical trials to be a study. Reviewing a novel is not the same as writing one.

Sloppy and potentially misleading language from major news outlets. What a surprise.

I have always liked meta-analyses for the same reason I like guidelines: they provide an overarching conceptual framework for understanding a topic. But only a fool would make clinical decisions based upon a meta-analysis alone. Yet meta-analyses seem to be creeping to the top of the rankings of clinical information to be believed.

There are issues with meta-analyses.

The studies included in a meta-analysis are often of suboptimal quality. Many authors spend time bemoaning the lack of quality studies they are about to stuff into their study grinder. Then, despite knowing that the input is of poor quality, they go ahead and make a sausage. The theory, as I said last week, is that if you collect many individual cow pies into one big pile, the manure transmogrifies into gold. I still think of it as a case of GIGO: Garbage In, Garbage Out.
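To put a number on the cow-pie theory, here is a minimal simulation, my own illustration with invented effect sizes, not data from any real meta-analysis. If every small study shares the same systematic bias, pooling them yields an estimate that is precise and confidently wrong: pooling averages away random error, not bias.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # the treatment actually does nothing
BIAS = 0.3          # systematic bias shared by every small study
STUDY_SE = 0.25     # standard error of each small, noisy study

# Twenty small studies, each estimating truth + shared bias + random noise
estimates = [TRUE_EFFECT + BIAS + random.gauss(0, STUDY_SE) for _ in range(20)]

# With equal standard errors, fixed-effect pooling is just the mean,
# and the pooled standard error shrinks by sqrt(n)
pooled = statistics.mean(estimates)
pooled_se = STUDY_SE / len(estimates) ** 0.5

print(f"Pooled effect: {pooled:.2f} (95% CI +/- {1.96 * pooled_se:.2f})")
print(f"True effect:   {TRUE_EFFECT:.2f}")
# The pooled CI is narrow and excludes the truth: garbage in, garbage out.
```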

It has always been my understanding that a meta-analysis was used in lieu of a quality clinical trial. Once you had a few high-quality studies, you could ignore the conclusions of a meta-analysis.

Evaluations of the validity of meta-analysis conclusions have demonstrated that the results of a meta-analysis often fail to predict the results of subsequent good clinical trials. The JREF million is safe from the Cochrane, I suppose. Their conclusions are no more reliable than the studies they collect and no more valid than the rest of the medical literature.

We identified 12 large randomized, controlled trials and 19 meta-analyses addressing the same questions. For a total of 40 primary and secondary outcomes, agreement between the meta-analyses and the large clinical trials was only fair (kappa = 0.35; 95 percent confidence interval, 0.06 to 0.64). The positive predictive value of the meta-analyses was 68 percent, and the negative predictive value 67 percent. However, the difference in point estimates between the randomized trials and the meta-analyses was statistically significant for only 5 of the 40 comparisons (12 percent). Furthermore, in each case of disagreement a statistically significant effect of treatment was found by one method, whereas no statistically significant effect was found by the other.
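For those who don’t speak kappa: it is observed agreement corrected for the agreement you would expect by chance alone. A sketch of the calculation is below; the 2×2 counts are hypothetical, chosen only so the statistics land on the quoted values, and are not the paper’s actual table.

```python
# Hypothetical 2x2 table for the 40 outcomes: did the meta-analysis and
# the subsequent large trial agree? Counts are chosen to reproduce the
# quoted statistics, not taken from the paper.
agree_pos = 13   # meta-analysis positive, trial positive
meta_only = 6    # meta-analysis positive, trial negative
trial_only = 7   # meta-analysis negative, trial positive
agree_neg = 14   # meta-analysis negative, trial negative
total = agree_pos + meta_only + trial_only + agree_neg  # 40

# Predictive values: how often the meta-analysis call matched the trial
ppv = agree_pos / (agree_pos + meta_only)    # 13/19
npv = agree_neg / (trial_only + agree_neg)   # 14/21

# Cohen's kappa: observed agreement corrected for chance agreement
p_obs = (agree_pos + agree_neg) / total
p_meta_pos = (agree_pos + meta_only) / total
p_trial_pos = (agree_pos + trial_only) / total
p_chance = p_meta_pos * p_trial_pos + (1 - p_meta_pos) * (1 - p_trial_pos)
kappa = (p_obs - p_chance) / (1 - p_chance)

print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}, kappa = {kappa:.2f}")
# -> PPV = 68%, NPV = 67%, kappa = 0.35: better than chance, but only 'fair'
```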

Once there was a quality definitive trial or three, the meta-analysis became, I thought, moot. A quality clinical trial trumps the meta. I guess. I am not so certain that is the attitude anymore, given the freak-out in the media about Cochrane and statins.

It seems that the producers of meta-analyses have something in common with the March of Dimes. Polio was conquered, but rather than folding up their tents and stealing away, they continue to march. That may be a good thing too, as there could be a polio resurgence if some anti-vaccine wackaloons have their way.

If there is a definitive trial, rather than declaring the question settled, the new, perhaps higher-quality, study is folded in with the prior studies and a new meta-analysis is generated. But the newer studies are diluted by the older, less robust trials, so the more reliable results are lost in the wash. The best is drowned in a sea of mediocrity.
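The dilution is easy to see in the inverse-variance weighting most meta-analyses use. The numbers below are invented to show the mechanism, not drawn from any real review: ten small, old, flattering trials can outvote one large, rigorous, negative one.

```python
# Invented numbers: ten old, small, flattering trials plus one large,
# rigorous trial showing no effect. Each entry is (effect estimate, SE).
old_trials = [(0.40, 0.20)] * 10
new_trial = (0.00, 0.10)

studies = old_trials + [new_trial]
weights = [1 / se**2 for _, se in studies]   # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

print(f"Pooled effect: {pooled:.2f}")                     # ~0.29, still 'positive'
print(f"Weight of the new trial: {weights[-1] / sum(weights):.0%}")  # ~29%
# The definitive trial gets under a third of the vote; the old pile wins the wash.
```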

For example, I see no need for a meta-analysis on the efficacy of Echinacea. The last several trials, combined with basic science/prior probability, provide sufficient evidence to conclude that Echinacea does not work. Good trials win. Ha.

As a practicing specialist, no matter how much I read, I rely in part on guidelines, meta-analyses, and systematic reviews as nice overviews, flawed stopgaps awaiting the large, high-quality clinical trials that, like Godot, may never come.

I have sick patients who need treatment. I need to know what to do. I have to fight the battles with the weapons I have. I have the medical literature and I am not afraid to use it.
