A negative review of homeopathy published in 2005 reverberates still in blogs, journals, and press releases.
A reader comments, and I wax philosophical. But let’s start with a summary of the issues.
In 2005, a comparison of homeopathy and allopathy (mainstream medicine) studies was published in The Lancet.
- 110 homeopathy and 110 matched mainstream-medicine studies were compared.
- Smaller and lower-quality studies showed more benefits than larger and higher-quality studies.
- When the analysis was restricted to the largest studies (about 6% of the total studies), the authors concluded that homeopathy was inferior.
The authors justified their conclusion by noting that “This finding is compatible with the notion that the clinical effects of homeopathy are placebo effects.”
Yes, it’s always comforting when your findings conform to popular opinion.
After that article appeared, the Swiss Association of Homeopathic Physicians published an open letter to the editor that was critical of the conclusions. This year, others have weighed in with their criticisms here and here, which focus on the lack of transparency in divulging the criteria used to select and evaluate the studies.
It’s complicated and technical, and enough to make you go for a massage. However, I find that this recent article by 2 reviewers in Germany quickly cuts to the problem: the methods used to analyze the homeopathy studies probably led to the erroneous, albeit comforting, conclusions.
- The studies selected for the review differed greatly in:
  - Number of patients treated
  - Type of homeopathy used
  - Type of publication (some studies were unpublished)
  - Medical conditions treated
- Overall, homeopathy showed a significant benefit compared to placebo.
- But restricting the analysis to successively larger studies resulted in progressively less statistical significance for homeopathy.
- Ultimately, negative conclusions from the analysis of the 8 largest homeopathy studies were influenced by 1 negative study of arnica to prevent muscle soreness in 400 long-distance runners.
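To see how a single large trial can swamp a pooled result, here is a minimal sketch of fixed-effect (inverse-variance) meta-analysis, the standard pooling method in which each study is weighted by the inverse of its squared standard error. The numbers below are made up for illustration; they are not the actual Lancet data.

```python
# Hypothetical illustration: fixed-effect (inverse-variance) pooling.
# Each study's weight is 1/SE^2, so one large, precise study with a
# null result can pull the pooled estimate toward zero.

def pooled_effect(effects, ses):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1.0 / se**2 for se in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Seven hypothetical smaller studies favoring treatment
# (negative effect = benefit over placebo)
effects = [-0.5, -0.4, -0.6, -0.3, -0.5, -0.4, -0.45]
ses     = [0.25, 0.30, 0.28, 0.27, 0.30, 0.26, 0.29]

print(round(pooled_effect(effects, ses), 3))  # clearly negative

# Add one large, precise null study (analogous to the 400-runner
# arnica trial): small SE means a dominant weight in the pool.
effects.append(0.0)
ses.append(0.08)

print(round(pooled_effect(effects, ses), 3))  # pulled toward zero
```

The point is not that weighting by precision is wrong, but that when an analysis is restricted to a handful of large trials, the pooled conclusion can hinge almost entirely on one of them.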
The bottom line?
Were the authors lazy, unknowledgeable, or beset by bias?
Maybe, but it’s really not important. The failure here is with the journal in meeting its responsibility as gatekeeper for high-quality, peer-reviewed studies.
The Lancet is really more of a newspaper than a medical journal. Rapid publication is one of its attractive features for researchers. Considering the number of methodological deficiencies in the article, one wonders if the editors even bothered to have it peer reviewed.
11/12/08 18:32 JR