July 2012

Out on a limb: Power in practice

Jordana Bieze Foster, Editor

Healthcare researchers don’t get a lot of instant gratification. Typically it takes years for a study’s findings to have an impact on clinical practice, and often that doesn’t happen until those findings have been replicated by additional studies.

It makes sense. Practitioners are busy. They don't have a lot of time to sort through all of the latest research and figure out which findings are significant enough to justify a change in the way they practice. In fact, helping practitioners cut through the clutter to find research news that's not just interesting but also clinically relevant is, to me, one of LER's most important functions.

Because every once in a while, we’re reminded of how powerful a single study can be when its conclusions somehow find their way into the clinical community’s collective consciousness.

This time, it’s the 2002 randomized clinical trial in The New England Journal of Medicine that found arthroscopy to be no more effective than a sham procedure in patients with knee osteoarthritis. I’m sure most of you remember when that study was published. I know I do. I was editor of BioMechanics magazine at the time, and it was the top news story in our September 2002 issue. The strength of the evidence, plus the controversial decision to perform an invasive sham procedure on patients despite knowing it would offer them no benefit, made that study unquestionably newsworthy. But we had no way of knowing how, if at all, the findings might affect surgeons in the real world.

Now we have some idea. A Cleveland Clinic study just published in the American Journal of Sports Medicine found that utilization of knee arthroscopy for patients with OA decreased from 2.36 cases per surgeon in 2001—just prior to the publication of the NEJM study—to 1.4 cases per surgeon in 2009 (see “Arthroscopy attrition”).

Normally one wouldn’t think to credit a single study for such a dramatic shift in utilization. But in this case, it’s really the only explanation. To my knowledge, no other research team has attempted to replicate the NEJM study protocol in the past decade. So the original study itself must have been the catalyst, which means practitioners didn’t just hear about it. They also thought about it.

The senior author of the NEJM paper suggests surgeons must have been impressed with the methodological quality of their study. The senior author of the Cleveland Clinic analysis suggests that the “broad and sustained” media coverage of the original study might have made the difference.

I suspect both are right. Media coverage itself doesn’t make a study relevant. And the most revolutionary study in the world won’t change a thing if researchers are the only ones who know about it.

Practitioners, not researchers, define practice patterns. But practitioners with access to groundbreaking research define their practice patterns accordingly. That’s good for patients. And that’s what makes my job worth doing.


