The take-home message from several scientific efforts to answer that question: The effectiveness of antidepressants has been overstated, and the benefit might be limited to far fewer patients than were actually using the drugs. But that may not be the final answer.
More people in the United States are on antidepressants, as a percentage of the population, than in any other country in the world. And yet the drugs’ efficacy has been hotly debated.
Some believe that the short-term benefits are much more modest than widely thought, and that harms may outweigh benefits in the long run. Others believe that they work, and that they can be life-changing.
Settling this debate has been much harder than you might think.
It’s not that we lack research. Many, many studies of antidepressants can be found in the peer-reviewed literature. The problem is that this has been a prime example of publication bias: Positive studies are likely to be released, with negative ones more likely to be buried in a drawer.
In 2008, a group of researchers made this point by doing a meta-analysis of antidepressant trials that were registered with the Food and Drug Administration as evidence in support of approvals for marketing or changes in labeling. Companies had to submit the results of registered trials to the FDA regardless of the result. These trials also tend to have less data massaging — such as the cherry-picking of outcomes — than might be possible in journals.
The researchers found 74 studies, with more than 12,500 patients, for drugs approved between 1987 and 2004. About half of these trials had “positive” results, in that the antidepressant performed better than a placebo; the other half were “negative.” But if you looked only in the published literature, you’d get a much different picture. Nearly all of the positive studies are there. Only three of the negative studies appear in the literature as negative. Twenty-two were never published, and 11 were published but repackaged so that they appeared positive.
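The distortion those counts produce is easy to work through. A minimal sketch, using only the figures reported above and one simplifying assumption (the article says "nearly all" positive studies were published; for round numbers this treats all of them as published):

```python
# Counts from the 2008 FDA-registry meta-analysis described above.
total_trials = 74
negative = 22 + 11 + 3        # never published + repackaged as positive + published as negative
positive = total_trials - negative  # 38, i.e. "about half"

# Simplifying assumption: all positive trials reached the journals.
published_positive_looking = positive + 11   # spun negatives read as positive
published_negative_looking = 3
published = published_positive_looking + published_negative_looking

true_rate = positive / total_trials
apparent_rate = published_positive_looking / published
print(f"True positive rate across all trials: {true_rate:.0%}")
print(f"Apparent rate in the published literature: {apparent_rate:.0%}")
```

The gap — roughly half the trials were positive, yet the journals suggest almost all were — is publication bias in one calculation.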
A second meta-analysis published that year also used FDA data instead of the peer-reviewed literature, but asked a different question. Researchers wondered whether the drugs’ effectiveness was related to the baseline depression levels of a trial’s participants. The results suggested yes. The effectiveness of antidepressants was limited for those with moderate depression, and small even for those with severe depression.
The take-home message from these two studies was that the effectiveness of antidepressants had been overstated, and that the benefit might be limited to far fewer patients than were actually using the drugs.
“An Evidence Myth,” and some good news
These points, and more, were made in a paper written by John Ioannidis in the journal Philosophy, Ethics, and Humanities in Medicine in 2008. He argued that the study designs and populations selected, especially the short length of many studies, biased them toward positive results. He argued that while many studies achieved statistical significance, they failed to achieve clinical significance. He argued that we knew too little about long-term harms, and that we were being presented with biased information by looking only at published data.
This paper — “Effectiveness of Antidepressants: An Evidence Myth Constructed From a Thousand Randomized Trials?” — sowed lingering doubts about the use of antidepressants and the conduct of medical research. But recently, the most comprehensive antidepressants study to date was published, and it appears to be a thorough effort to overcome the hurdles of the past.
Researchers, including Ioannidis this time, searched the medical literature, regulatory agency websites and international registers for both published and unpublished double-blind randomized controlled trials, through the beginning of 2016.
They looked for both placebo-controlled and head-to-head trials of 21 antidepressants used to treat adults for major depressive disorder. They used a “network meta-analysis technique,” which allows multiple treatments to be compared both within individual trials directly and across trials indirectly to a common comparator. They examined not only how well the drugs worked, but also how tolerated the treatment was — what they called acceptability.
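The core trick of a network meta-analysis — comparing two drugs that were never tested head to head, via a comparator both were tested against — can be sketched in a few lines. The drug names and effect sizes below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical standardized effect sizes of two drugs, each versus placebo.
effect_vs_placebo = {"drug_A": 0.40, "drug_B": 0.25}

def indirect_comparison(a: str, b: str, effects: dict) -> float:
    """Estimate A vs. B indirectly: each drug's benefit over the shared
    placebo arm, differenced. This is the common-comparator logic that
    lets a network meta-analysis rank treatments never trialed together."""
    return effects[a] - effects[b]

diff = indirect_comparison("drug_A", "drug_B", effect_vs_placebo)
print(f"Indirect estimate, A vs. B: {diff:+.2f}")
```

Real network meta-analyses also weight each comparison by its precision and check that direct and indirect estimates agree, but the subtraction above is the underlying idea.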
They found 522 trials that included more than 116,000 participants. Of those, 86 were unpublished studies found on trial registries and company websites. An additional 15 were discovered through personal communication or by hand-searching review articles. The authors went an extra step and asked for unpublished data on the studies they found, getting it for more than half the included trials.
The reassuring news is that all of the antidepressants were more effective than placebos. They varied modestly in terms of efficacy and acceptability, so each patient and doctor should discuss potential benefits and harms of individual drugs.
Further good news is that smaller trials did not have substantially different results from larger trials.
It also did not appear that industry sponsorship of trials correlated with significant differences in response or dropout rates. But — and this is a big “but” — the vast majority of trials are funded by industry. As a result, this meta-analysis may not have had enough data on non-industry trials to accurately determine whether a difference exists.
There were also signs of “novelty” bias: Antidepressants seemed to perform better when they were newly released in the market but seemed to lose efficacy and acceptability in later years.
The bad news is that even though there were statistically significant differences, the effect sizes were still mostly modest. The benefits also applied only to people who were suffering from major depression, specifically in the short term. In other words, this study provides evidence that when people are found to have acute major depression, treatment with antidepressants works to improve outcomes in the first two months of therapy.
Because we lack good data, we still do not know how well antidepressants work for those with milder symptoms that fall short of major depression, especially if patients have been on the drugs for months or even years. Many people probably fall into that category, yet are still regularly prescribed antidepressants for extended periods. We don’t know how much of the benefit received from such use is a placebo effect versus a biological one.
Some caveats on conclusions
I asked Ioannidis if the results of this new study were as radical as many news articles had suggested. He confirmed that this was a much larger meta-analysis — with about 10 times more information — than the ones from a decade ago, with more unpublished data and more antidepressants covered. He’s also hopeful that future studies will be even better at informing individual-level responses, which might help show whether some patients benefit substantially even when others don’t seem to benefit at all.
But he thought that some of the exuberance in the news media might be a little overblown. “I am afraid that some news stories gave very crude interpretations that may be misleading, especially when their titles were too absolute, like ‘the drugs work’, ‘the debate is over’ and so forth,” he said. “The clinical (as opposed to statistical) significance of the treatment effects that we detected will continue to be contested, and it is still important to find ways that one can identify the specific patients who get the maximum benefit.”
Even with so much research on antidepressants, there are still many unanswered questions. It’s unclear whether drug companies would be interested in pursuing those answers, or indeed why they would be. The drugs are already being widely used, and no regulatory agency is requiring more data. If patients want answers, they will need to demand the research themselves.