Magic shouldn’t be offered up as a mechanism in a scientific paper.
Well, that took a while. Five years after Ars’ Chris Lee pointed out that the authors of a homeopathy paper were doing little more than offering up “magic” as an explanation for their results, the editors of the journal it was published in have retracted it. The retraction comes over the extensive objections of the paper’s authors, who continued to believe their work was solid. But really, the back-and-forth between the editors and authors has gotten bogged down in details that miss the real problem with the original paper.
The work described in the now-retracted paper involved a small clinical trial for depression treatment with three groups of participants. One group received a standard treatment, another a placebo. The third group received a homeopathic remedy—meaning they received water. According to the analysis in the paper, the water was more effective than either placebo or the standard treatment. But as Chris noted in his original criticism, the authors leap to the conclusion that the water itself must therefore be what caused the improvement.
The problem with this is that it ignores some equally viable explanations, such as a statistical fluke in a very small study (only about 45 people per group) or that it was the time spent with the homeopathic practitioner that made the difference, not the water. These are problems with the interpretation of the results rather than with the data. (This probably explains why the paper ended up published by PLOS ONE, where reviewers are asked to simply look at the quality of the data rather than the significance of the results.)
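To see how easily a fluke can arise at this scale, here's a minimal simulation (a sketch, not a reanalysis of the paper—the group size of 45, the assumption of normally distributed outcomes, and the ~1.99 significance cutoff are all mine). It repeatedly runs a trial in which all three arms draw from the same distribution, so there's no real effect, and counts how often the "remedy" arm still looks significantly different from one of the other arms:

```python
import random
import math

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def simulate(n_trials=2000, n_per_group=45, seed=1):
    """Count how often the 'remedy' arm looks significant by chance
    when all three arms are drawn from the SAME distribution."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_trials):
        remedy   = [rng.gauss(0, 1) for _ in range(n_per_group)]
        placebo  = [rng.gauss(0, 1) for _ in range(n_per_group)]
        standard = [rng.gauss(0, 1) for _ in range(n_per_group)]
        # |t| > 1.99 is roughly the two-sided 5% cutoff for ~88 df
        if (abs(t_stat(remedy, placebo)) > 1.99
                or abs(t_stat(remedy, standard)) > 1.99):
            false_positives += 1
    return false_positives / n_trials

rate = simulate()
print(f"Spurious 'remedy works' rate: {rate:.1%}")
```

With two comparisons against the remedy arm, the spurious-success rate lands near 10 percent rather than the nominal 5—one in ten imaginary trials "shows" the water working, with no effect present at all.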
That doesn’t mean there weren’t potential problems with the data. According to the retraction notice, other researchers criticized specific aspects of the research, which caused the journal to convene a panel that included three editors, an outside academic, and a statistical expert. They considered one of the issues Chris noted—the inability to exclude a placebo effect from the homeopathic process. But they also looked into how the authors chose different “treatments” that involved variations on preparing the water. There were also questions about how the participants had been diagnosed in the first place.
The authors of the original paper were given the chance to respond, and they did so. But PLOS ONE’s committee found their response insufficient, leading to the retraction.
In responding to the retraction, the authors say they’ve provided more than enough information for anyone skilled in homeopathy to repeat the study—they’d all apparently know precisely how to prepare water based on a patient’s symptoms.
But their response also kind of gives the game away. “The PLOS ONE Editors did not explain in what ways they considered our study design to be inadequate,” they wrote. “Rather, they simply stated that because the homeopathic treatments included [different] potencies of the homeopathic medicines [aka water], any positive effect seen must have been a placebo effect.” For the authors, their study design was meant to rule out a placebo effect so that any difference would show the effect of homeopathy. They’re upset that the editors didn’t see things that way.
And that, rather than the specific complaints about the methodology, is the actual problem here. Control groups don’t tell you anything about the specific mechanism that’s driving any changes in the experimental group. They just let you identify when the experimental conditions produce a different result. The cause of that difference is a matter of interpretation, informed by what we know from other scientific studies. If you see a difference, you have to consider all scientifically plausible mechanisms to account for it.
Based on what we know from other work, the PLOS ONE editors are right to consider “homeopathy generates a stronger placebo effect than a pill” a plausible mechanism. And they’re right to not consider “water behaves magically” remotely plausible. The paper’s authors prefer the latter, so their work doesn’t belong in a scientific journal.
What’s somewhat frustrating is that the editors never articulate that. Instead, their retraction notice largely focuses on experimental details, as if the paper could be fixed by elaborating its Materials and Methods section. That gives a misleading picture of the issues here. While there’s some consolation in reaching the right result—the paper is officially retracted—it would be more helpful if the result had come about for the right reasons.