For the longest time, the field of psychiatry remained silent about the STAR*D scandal. Ed Pigott and colleagues first published a deconstruction of the study in 2010, detailing the protocol violations that the STAR*D investigators had employed to inflate the cumulative remission rate, and even after Pigott and collaborators published a RIAT reanalysis of the study findings this past July, there was silence from psychiatry regarding this scandal.
Now that silence has finally been broken, and in a powerful way.
The first crack in that code of silence occurred on December 1, when the STAR*D investigators, in a letter published by the American Journal of Psychiatry, sought to defend their actions. They did so with a claim—that Pigott and colleagues had created “post-hoc” criteria in order to remove good responders from their analysis—that was easily shown to be a lie. As such, it simply served to deepen the scandal, and further impugn the credibility of the American Journal of Psychiatry, and by extension, the American Psychiatric Association, which is the publisher of the journal.
But then the Psychiatric Times reported on Pigott’s RIAT re-analysis in its December issue, and this was a report of a very different kind. The article, written by John Miller, editor-in-chief of Psychiatric Times, prompted readers to consider the possible extraordinary harm done.
Pigott’s reanalysis was featured on the cover of that issue.
In his essay, Miller repeatedly stressed that ever since 2006, the STAR*D study had stood “out as a beacon guiding treatment decisions.” And while he didn’t conclude that Pigott’s reanalysis was proof the STAR*D results were grossly inflated, he described the paper as a “well-researched publication,” and he reviewed several of the protocol violations that Pigott and colleagues had identified.
Most important, he emphasized that psychiatry needed to turn its attention to the Pigott paper:
“In my clinical opinion, it is urgent for the field of psychiatry to reconcile the significant differences in remission rates for patients with MDD as published in the original STAR*D article in 2006 with the reanalysis just published in the BMJ article this year.”
And he succinctly identified what was now at stake:
“For us in psychiatry, if the BMJ authors are correct, this is a huge setback, as all of the publications and policy decisions based on the STAR*D findings that became clinical dogma since 2006 will need to be reviewed, revisited, and possibly retracted.”
That sentence tells of how the STAR*D study was a pivotal moment in American medical history. The published findings told of drug treatment that led to two-thirds of all patients getting well, their symptoms having vanished by the end of the four stages of treatment. This was evidence that the treatment “worked,” and worked well, at least for the majority of patients.
Pigott’s reanalysis tells of drug treatment that failed to help two-thirds of patients, even after multiple drugs and drug combinations had been tried. Equally important, his work tells of how the study failed to provide evidence that such treatment helped patients stay well.
That is the narrative clash at stake here. The published findings supported the public understanding that antidepressants are an effective treatment. Pigott’s reanalysis told of a failed paradigm of care. Thus, the question ultimately posed by Miller’s thoughtful essay: What if the STAR*D authors had told the truth? How might psychiatric care—and societal use of these drugs—have changed?
Winding Back The Clock
As is well known, it was the arrival of Prozac on the market in 1988 that kicked off the dramatic increase in the prescribing of antidepressants. The drug was marketed as an antidote to a disease. However, the creation of a “Prozac Nation” might not have been successful without the help of the NIMH, which launched an “educational campaign” at that time designed to change the public’s understanding of depression. Without that campaign, the “disease model” of depression may never have taken hold in the public mind.
Prior to the arrival of Prozac, a NIMH survey found that only 12% of American adults would take a pill to treat depression. Seventy-eight percent said they “would live with it until it passed,” confident that they could handle it on their own. This was the understanding that the NIMH decided needed to be changed.
Five months after Prozac came to market, the NIMH launched a “Depression Awareness, Recognition and Treatment” campaign. The purpose of DART was “to change public attitudes so that there is greater acceptance of depression as a disorder rather than a weakness.” The public needed to understand that depression regularly went “underdiagnosed and undertreated,” and that it could be a “fatal disease” if left untreated. The DART literature told of how antidepressants produced recovery rates of “70% to 80% in comparison with 20% to 40% for placebo.”
To conduct this campaign, the NIMH enlisted “labor, religious, educational groups” and businesses to help spread its message. The NIMH ran advertisements in the media, and Eli Lilly helped pay for the printing and distribution of 8 million DART brochures titled “Depression: What You Need to Know.” This pamphlet informed readers of the particular merits of “serotonergic” drugs for the disease. “By making these materials on depressive illness available, accessible in physicians’ offices all over the country, important information is effectively reaching the public in settings which encourage questions, discussion, treatment, or referral,” said NIMH director Lewis Judd.1
With this “educational” campaign underway, and Eli Lilly promoting Prozac as a new kind of medication, one that acted specifically on the serotonin system, the media touted it as a “breakthrough” medication that could quickly banish the blues. Antidepressants, The New York Times informed its readers in 1990, “work by restoring the balance of neurotransmitter activity in the brain, correcting an abnormal excess or inhibition of the electrochemical signals that control mood, thoughts, appetite, pain and other sensations.” Prozac, a psychiatrist told The New York Times, “is not like alcohol or Valium. It’s like antibiotics.”
This was the story being told to the public, and prescribing of antidepressants took off. In 1987, fewer than 2% of adults had used an antidepressant in the past month. Over the next 20 years, this percentage grew four-fold, to 8%. That was the state of antidepressant usage when the STAR*D results were published.
The Trail of Evidence Leading to the STAR*D Study
Although the medical journals were filled by that time with reports of the efficacy of SSRI antidepressants, those results came from industry-funded trials. What researchers understood, however, was that those studies didn’t capture the effectiveness of the drugs in “real-world” patients. The industry-funded trials excluded those with long-term depression, suicidal thoughts, or with comorbid problems (anxiety, substance abuse, etc.). Various studies found that 60 to 90 percent of “real-world patients” would be ineligible to participate in the industry trials because of the exclusion criteria.
This led John Rush, a psychiatrist at the University of Texas Southwestern Medical Center in Dallas, to proclaim that “both shorter- and longer-term clinical outcomes of representative outpatients with nonpsychotic major depressive disorders treated in daily practice in either the private or public sectors are yet to be well-defined.” Prior to the launch of STAR*D, Rush obtained NIMH funding to conduct a small trial in “real-world” outpatients. The 118 patients were treated with antidepressants together with emotional and clinical support “specifically designed to maximize clinical outcomes.”
The results, which Rush reported in 2004, were dispiriting. Only 26% of the real-world patients responded to the antidepressant during the first year of treatment (meaning that their symptoms decreased by at least 50% on a rating scale), and only about half of that group had a “sustained response.” Only 6% of the patients saw their depression fully remit and stay away during the year-long study. These “findings reveal remarkably low response and remission rates,” Rush concluded.
There were other discouraging findings that kept popping up at this time in government-funded research. The NIMH funded a trial that compared Zoloft, Zoloft plus exercise, and exercise alone, and at the end of 10 months, 70% in the exercise-alone group were well, compared to fewer than 50% in either of the groups treated with Zoloft. At least in this 2000 study, Zoloft appeared to detract from the benefits of exercise.
Two years later, in a National Institutes of Health study that compared Zoloft to St. John’s wort and to placebo, 24% of the patients treated with St. John’s wort had a “full response” compared to 25% of the Zoloft patients and 32% of the placebo group. “This study fails to support the efficacy of H perforatum in moderately severe depression,” the investigators concluded, glossing over the fact that it also failed to support the efficacy of Zoloft as a treatment for moderately severe depression.
The public heard little of these results. However, there was recognition within the NIMH that the industry-funded trials didn’t necessarily provide evidence of the efficacy of antidepressants in real-world patients, which prompted it to launch the STAR*D study, touting it as the “largest and longest study ever done to evaluate depression treatment.” Rush was named the lead investigator of the $35 million effort.
“Given the dearth of controlled data [in real-world patient groups], results should have substantial public health and scientific significance, since they are obtained in representative participant groups/settings, using clinical management tools that can easily be applied in daily practice,” the STAR*D investigators wrote. The results, the NIMH promised, would be “rapidly disseminated.”
That is the “evidence” trail that led up to the STAR*D study. The effectiveness of antidepressants in real-world patients was understood to be “unknown,” and now this study would provide an answer to that question. The findings were expected to guide clinical care from that point forward.
The Results of the STAR*D Study
To be eligible for the STAR*D study, patients needed a baseline score of 14 or higher on the HAM-D scale, a measure that told of patients who were at least “moderately” depressed. The enrolled patients would be given up to four tries to remit on one or more antidepressant medications, and those who remitted—after any one of the four steps—would then be encouraged to participate in a year-long follow-up. The follow-up was designed to assess the efficacy of maintenance antidepressant therapy in keeping patients well.
Although there were 4,041 patients enrolled into the study, 931 didn’t meet eligibility criteria, either because their baseline HAM-D score was less than 14 or because they lacked any baseline HAM-D score. This left 3,110 “evaluable” patients, and Pigott and colleagues, in their analysis of patient-level data, determined that at the end of four steps of treatment, 1,089 in this group had remitted.
Although only 1,089 evaluable patients had remitted, there were 1,518 who entered the follow-up study “in remission.” More than one-third were from the group of 931 patients who either weren’t depressed enough to be eligible for the trial or lacked a baseline HAM-D score, as these patients had still been “treated,” and those who scored as remitted at the end of a treatment step were invited to participate in the year-long follow-up. The inclusion of this group could have been expected to boost the stay-well rate, particularly since 99 had scored in remission at baseline.
During the next 12 months, the remitted patients received regular clinical care. The protocol allowed clinicians to adjust the dosage of their medication, change medications, or combine medications. Patients were scored as relapsed if, during one of the periodic assessments, they scored 14 or higher on the HAM-D scale. The expectation was that maintenance antidepressant care would result in at least 70% of the remitted patients staying well.
However, at the end of 12 months, only 108 of the 1,518 patients were still in the study and well. All of the others had either dropped out or relapsed.
Such were the results from the STAR*D trial. Only 35% of the patients depressed enough to be eligible for the trial remitted at the end of one of the four treatment steps, and very few of the remitted patients stayed well and in the study to the end.
Indeed, as Pigott and collaborators reported in their 2010 paper, of the 4,041 patients who had entered the study, only 108 were well and still in the study at its end, a documented stay-well rate of less than 3%. These “findings argue for a reappraisal of the current recommended standard of care of depression,” they wrote.
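For readers who want to check that accounting, here is a minimal sketch in Python using only the figures cited above; the variable names are mine, for illustration, and the numbers come from this article rather than from the patient-level dataset itself.

```python
# A small sketch of the accounting in Pigott's reanalysis, as summarized above.
# All figures are those cited in this article.

enrolled = 4_041      # patients enrolled in STAR*D
ineligible = 931      # baseline HAM-D below 14, or no baseline HAM-D score
evaluable = enrolled - ineligible       # 3,110 "evaluable" patients
remitted = 1_089      # remitted within the four treatment steps, per protocol
stayed_well = 108     # still in the study and well at 12 months

print(f"Evaluable patients: {evaluable}")                          # 3110
print(f"Protocol-adherent remission rate: {remitted / evaluable:.0%}")   # ~35%
print(f"Documented stay-well rate: {stayed_well / enrolled:.1%}")  # ~2.7%, i.e. less than 3%
```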
The What-If Scenario
These results did not fit into the narrative that had been created by industry-funded trials and by the NIMH’s DART program, which told of recovery rates of “70% to 80%.” Instead, the results were more in line with the dispiriting results from other NIMH-funded research—the St. John’s wort study, the exercise study, and Rush’s smaller study of 118 real-world patients.
The one-year results were particularly unsettling. How could these results be explained? If the STAR*D investigators had publicized them, the public would have been prompted to ask a question that needed to be answered: What was the natural course of depression? In the absence of treatment, what percentage of depressed patients could expect to get well and stay well?
A deep dive into the research literature can provide an answer.2
The tracking of long-term outcomes of depressed patients dates back to the work of German psychiatrist Emil Kraepelin. In the late 1800s he had systematically studied the long-term outcomes of psychotic patients at an asylum in Estonia, and during this research, he identified 450 patients with “psychotic depression” (but no mania). He reported that 60% of this group had but a single episode of depression, and only 13% had three or more episodes.
Other investigators during the first half of the 20th century reported similar outcomes. In 1931, Horatio Pollock, of the New York State Department of Mental Hygiene, conducted a long-term study of 2,700 depressed patients hospitalized from 1909 to 1920. He found that more than half of those admitted for a first episode had but a single attack, and only 17% had three or more attacks. A Swedish physician, Gunnar Lundquist, followed 216 patients that had been treated for a first episode of depression, and he determined that 49% never experienced a second attack, and that another 21% had only one other episode. After a person has recovered from a depressive episode, Lundquist wrote, he “has the same capacity for work and prospects of getting on in life as before the onset of the disease.”
These good outcomes spilled over into the first years of the antidepressant era. In 1972, Samuel Guze and Eli Robins at Washington University Medical School in St. Louis reviewed the scientific literature and determined that in follow-up studies that lasted 10 years, 50% of people hospitalized for a first episode of depression recovered and had no recurrence of their illness. Only a small minority of those with unipolar depression—one in ten—became chronically ill, Guze and Robins concluded.
This was the scientific evidence that led NIMH officials during the 1960s and 1970s to speak optimistically about the long-term course of depression. “Depression is, on the whole, one of the psychiatric conditions with the best prognosis for eventual recovery, with or without treatment. Most depressions are self-limited,” wrote Jonathan Cole, head of the NIMH’s Psychopharmacology Service Center, in 1964. Other prominent figures in the United States, such as Nathan Kline, echoed this belief. “In the treatment of depression, one always has as an ally the fact that most depressions terminate in spontaneous remissions. This means that in many cases regardless of what one does the patients eventually will begin to get better.”
In 1974, Dean Schuyler, head of the depression section at the NIMH, recapped this understanding of depression in a succinct way. Spontaneous recovery rates were so high, exceeding 50% within a few months, that it was difficult to “judge the efficacy of a drug, a treatment [electroshock] or psychotherapy in depressed patients.” Most depressive episodes, he wrote, “will run their course and terminate with virtually complete recovery without specific intervention.”
There was, however, a smattering of articles that had appeared in Europe by this time suggesting that antidepressants were altering the long-term course of depression, and not in a helpful way. Physicians in Germany, Yugoslavia, and Bulgaria wrote that it appeared that the drugs were causing a “chronification” of the disease.
In 1974, a Dutch physician, J.D. Van Scheyen, examined the case histories of 94 depressed patients to see if this might be so. Some had taken antidepressants, and some had not, and when Van Scheyen looked at the outcomes at the end of five years, the difference between the two groups was startling. “It was evident, particularly in the female patients, that more systematic long-term antidepressant medication, with or without ECT, exerts a paradoxical effect on the recurrent nature of the vital depression. In other words, this therapeutic approach was associated with an increase in recurrent rate and a decrease in cycle duration . . . Should [this increase] be regarded as an untoward long-term side effect of treatment with tricyclic antidepressants?”
Those scattered voices didn’t seem to make much of a mark on U.S. psychiatrists. Yet, in 1980, when the American Psychiatric Association published the third edition of its Diagnostic and Statistical Manual of Mental Disorders, it newly conceptualized depression and other major mental disorders as diseases of the brain, which suggested they were chronic conditions.
“They should be considered medical illnesses just as diabetes, heart disease, and cancer are,” wrote Nancy Andreasen, in her bestselling 1984 book, The Broken Brain. The thought was that “each different illness has a different specific cause,” she said, adding that researchers were now homing in on those causes. “There are many hints mental illness is due to chemical imbalances in the brain and that treatment involves correcting these chemical imbalances.”
This new conception told of how depression was a permanent feature of a person’s makeup, and that it required medication to keep the symptoms at bay. And during the next 20 years, studies of the longer-term outcomes of depressed patients found that it indeed ran a chronic course. Many first-episode patients didn’t respond to an antidepressant, others responded but then relapsed when they quit taking their medication, and others relapsed while taking the medication. The American Psychiatric Association’s 1999 Textbook of Psychiatry summed up the long-term outcomes literature that now existed: “Only 15% of people with unipolar depression experience a single bout of the illness,” and for the remaining 85%, with each new episode, remissions become “less complete and new recurrences develop with less provocation.”
This new course was notably worse than it had been in the pre-antidepressant era. What made this all the more remarkable was that those earlier studies had been of people who had been hospitalized for depression, and the modern epidemiological studies often tracked outpatient outcomes. What could explain this change in the long-term course of depression? A NIMH panel of experts reviewed this history and concluded that the old epidemiological studies were flawed. “Improved approaches to the descriptions and classification of [mood] disorders and new epidemiologic studies [have] demonstrated the recurrent and chronic nature of these illnesses, and the extent to which they represent a continual source of distress and dysfunction for affected individuals,” they wrote.
Depression was at last being understood; that was the story psychiatry embraced, and textbooks were rewritten to tell about this advance in knowledge. Not long ago, noted the 1999 edition of the American Psychiatric Association’s textbook, it was believed that “most patients would eventually recover from a major depressive episode. However, more extensive studies have disproved this assumption.” It was now known that “depression is a highly recurrent and pernicious disorder.”
That was the narrative that American psychiatry embraced to explain the bleak outcomes. Yet, as a few researchers noted, this was the course of medicated depression. Italian psychiatrist Giovanni Fava, in a series of articles dating back to 1994, raised a different possibility: perhaps antidepressants “sensitized” the brain to depression, and this was the cause of the poor long-term outcomes in modern times. He wrote:
“Antidepressant drugs in depression might be beneficial in the short term, but worsen the progression of the disease in the long-term, by increasing the biochemical vulnerability to depression . . . Use of antidepressant drugs may propel the illness to a more malignant and treatment unresponsive course.”
In a 1998 letter to the Journal of Clinical Psychiatry, three physicians from the University of Louisville echoed this sentiment. “Long-term antidepressant use may be depressogenic,” they wrote. “It is possible that antidepressant agents modify the hardwiring of neuronal synapses [which] not only render antidepressants ineffective but also induce a resident, refractory depressive state.”
This possibility was never promoted to the American press, and after Fava first broached the subject in 1994, Donald Klein from Columbia University told Psychiatric News that this subject was not going to be investigated. “The industry is not interested [in this question], the NIMH is not interested, and the FDA is not interested,” he said. “Nobody is interested.”
Even so, there were a handful of studies conducted during the 1990s and early 2000s that provided support for Fava’s hypothesis. Such research told of rising disability rates due to mood disorders in various countries and of better outcomes for depressed patients who eschewed taking medications.
The NIMH, for its part, funded two studies that sought to flesh out the course of “unmedicated depression.” In one, University of Iowa psychiatrist William Coryell and colleagues studied the six-year “naturalistic” outcomes of 547 people who had suffered a bout of depression, and they found that those who were treated for the illness were three times more likely than the untreated group to suffer a “cessation” of their “principal social role,” and nearly seven times more likely to become “incapacitated.” Moreover, while many of the treated patients saw their economic status markedly decline during the six years, only 17 percent of the unmedicated group saw their incomes drop, and 59 percent saw their incomes rise. “The untreated individuals here had milder and shorter-lived illnesses [than those who were treated], and, despite the absence of treatment, did not show significant changes in socioeconomic status in the long term,” Coryell wrote.
The second NIMH study was led by Michael Posternak, a psychiatrist at Brown University. “Unfortunately,” he wrote, “we have little direct knowledge regarding the untreated course of major depression.” To assess what untreated depression might be like in modern times, he and his collaborators identified 84 patients enrolled in the NIMH’s Psychobiology of Depression program who, after recovering from an initial bout of depression, subsequently relapsed but did not then go back on medication. Although these patients were not a “never exposed” group, Posternak and colleagues could still track their “untreated” recovery from this second episode of depression.
They reported that 23% recovered within one month, 67% within six months, and 85% within a year. This, they noted, was consistent with the outcomes reported in the pre-antidepressant era. “If as many as 85% of depressed individuals who go without somatic treatment spontaneously recover within one year, it would be extremely difficult for any intervention to demonstrate a superior result to this.”
Posternak published his findings in 2006, a few months before the STAR*D investigators published their summary results. There was an opportunity here for the NIMH to publicize both sets of 12-month results at the same time, and if it had, it would have made for a bombshell.
What would the U.S. public have made of the two contrasting outcomes?
The public, of course, was not informed of these dueling outcomes. Nor did they know of the brief history cited here, of how an episodic illness had been transformed into a chronic one during the antidepressant era. There was one other study published in 2006 that could have added to this discussion.
In the early 1990s, researchers determined that 10% to 15% of depressed patients were “treatment resistant.” By 2006, this figure had climbed to 40%. Treatment-resistant depression had been on the march ever since SSRIs were introduced, and it was present in the STAR*D findings. Remission rates notably dropped after patients failed to remit in one of the first two steps.
A few years later, psychiatrist Rif El-Mallakh, who was known as an expert in mood disorders, reviewed the outcomes literature and, noting the rise of treatment-resistant depression since Prozac was introduced, concluded that SSRIs could induce a “tardive dysphoria.” He wrote:
“A chronic and treatment-resistant depressive state is proposed to occur in individuals who are exposed to potent antagonists of serotonin reuptake pumps for prolonged time periods. Due to the delay in the onset of this chronic depressive state, it is labeled tardive dysphoria (TDp). TDp manifests as a chronic dysphoric state that is initially transiently relieved by – but ultimately becomes unresponsive to – antidepressant medication. Serotoninergic antidepressants may be of particular importance in the development of TDp.”
Such is the history that provides a context for understanding the “what if” moment in 2006, when the STAR*D investigators published their summary findings. A dive into the scientific literature revealed that depression had transformed from an episodic illness into a chronic one during the antidepressant era; that treatment-resistant depression had been on the march since the introduction of SSRIs; that a large NIMH study had found better six-year outcomes for the unmedicated patients; and that researchers had hypothesized that antidepressants “sensitized” the brain to depression. Posternak’s finding of an 85% recovery rate for depressed patients who didn’t take antidepressants was consistent with outcomes in the pre-antidepressant era.
If the STAR*D investigators had honestly reported their findings to the public, the public narrative about the efficacy of antidepressants would have undergone a dramatic revision. The breakthrough medications of the 1990s would have been seen in a new light, one that surely would have dampened enthusiasm for their use.
The Fabrication that Took Over the American Mind
The STAR*D scandal is not a story of the STAR*D investigators adding a smattering of remitters to their total by slight violations of the protocol. It is a story of research misconduct on a grand scale.
If the STAR*D investigators had adhered to their protocol, they would have reported that 1,089 of 3,110 evaluable patients had remitted at the end of one of the four steps. Instead, in their final summary report, they told of a 67% cumulative remission rate out of an evaluable population of 3,671 patients.
That percentage told of 2,460 patients having remitted. Here is how they inflated the remission rate to such a great extent:
- They counted remissions in the 931 patients who weren’t depressed enough to be eligible for the trial or else lacked a baseline HAM-D score. This added 570 to their remission count.
- They switched from using the HAM-D scale to assess outcomes to the QIDS, the latter an unblinded assessment that the protocol explicitly stated should not be used to assess remissions. This added 195 to their remission count.
- They theorized that if those who dropped out had remitted at the same rate as those who stayed in the trial, then another 606 patients would have remitted. (The protocol stated that dropouts should be counted as treatment failures.)
In sum, 56% of the 2,460 reported remissions either resulted from research misconduct or were simply imagined.
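As a back-of-the-envelope check, the short sketch below reproduces that arithmetic from the figures cited above; again, the variable names are illustrative, and the numbers are those reported in this article, not the underlying data.

```python
# A minimal check of the inflation arithmetic described above.

reported_rate = 0.67          # cumulative remission rate in the 2006 summary report
reported_population = 3_671   # "evaluable" population used in that report
reported_remitters = round(reported_rate * reported_population)   # ~2,460

added_ineligible = 570   # remissions counted among the 931 ineligible patients
added_qids = 195         # remissions scored on the QIDS rather than the HAM-D
added_imputed = 606      # theorized remissions among dropouts

added_total = added_ineligible + added_qids + added_imputed        # 1,371

print(f"Reported remitters: {reported_remitters}")                 # 2460
print(f"Added by protocol violations or imputation: {added_total}")            # 1371
print(f"Share of reported remissions: {added_total / reported_remitters:.0%}") # ~56%
print(f"Protocol-adherent remitters: {reported_remitters - added_total}")      # 1089
```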
As egregious as that was, their hiding of the poor one-year outcomes was equally bad. The premise of the study was that maintenance therapy would lead to a good one-year outcome for the patients who had remitted. Instead, at the end of 12 months, the researchers were faced with this data: only 108 of the 4,041 patients who entered the study remitted and then stayed well and in the trial to its end.
The STAR*D investigators didn’t discuss this finding in their summary paper, and neither did the NIMH when it announced the study results. “Over the course of all four levels, almost 70 percent of those who didn’t withdraw from the study became symptom free,” the NIMH informed the public.
This became the data soundbite from the STAR*D study. Subsequent research articles told of this outcome, and within clinic settings, it spurred prescribers to try and try again if an initial antidepressant didn’t work.
“If the treatment attempt fails, patients should not give up,” said NIMH director Thomas Insel. “By remaining in treatment and working closely with clinicians to tailor the most appropriate steps, many patients may find the best single or combination treatment that will enable them to become symptom free.”
The media, for its part, regularly cited the “nearly 70%” soundbite as evidence of the real-world effectiveness of antidepressants. The New Yorker, famed for its fact-checking, did so in a 2010 article. In 2022, after the public learned that the low-serotonin theory had been debunked, The New York Times rushed to reassure readers that while antidepressants didn’t “work the way many people think,” the STAR*D study provided evidence that the drugs did, in fact, “work” for most people. This was the “largest study of antidepressants to date,” and “nearly 70 percent of people had become symptom-free by the fourth antidepressant.”
And then, for good measure, the Times quoted Yale psychiatrist Gerard Sanacora: “If you look at the STAR*D, better than 60 percent of those patients actually had a very good response after going through those various levels of treatment.”
That was the impact of the STAR*D fraud. The study told of a failed paradigm of care, but the public—and prescribers of the drugs—heard of findings that told of how the majority of patients “actually had a very good response.”
Today, more than 12% of American adults have taken an antidepressant in the previous month, up from 8% when the STAR*D study was published. The STAR*D soundbite helped fuel that continued increase, while the actual STAR*D results foretold of harm done if use of these drugs wasn’t curtailed. Meanwhile, a growing chorus of patients tell of falling into a dysphoric state, of losing their sexual functioning, and of difficulty coming off antidepressants, all of them prescribed these agents at a time when, if the STAR*D report were to be believed, antidepressants led to “nearly 70% of patients becoming symptom free.”
Breaking the Silence
Ultimately, the STAR*D scandal is a story of institutional failure. The original sin may have been committed by the STAR*D investigators, but once Pigott and colleagues began publishing their deconstructions of the study, starting in 2010, the American Journal of Psychiatry, the American Psychiatric Association, and the National Institute of Mental Health were duty-bound to do what Miller urged in his Psychiatric Times essay. His exhortation bears repeating here:
“In my clinical opinion, it is urgent for the field of psychiatry to reconcile the significant differences in remission rates for patients with MDD as published in the original STAR*D article in 2006 with the reanalysis just published in the BMJ article this year.”
Miller’s article reveals a commitment to ascertaining “truth,” and to having that inquiry inform prescribers and the public alike. The failure of the American Journal of Psychiatry, the American Psychiatric Association, and the National Institute of Mental Health to launch such an inquiry tells of how they preferred to maintain a narrative that told of the effectiveness of antidepressants rather than risk an inquiry that would puncture that societal belief.
The mainstream media has failed in this regard too. Mad in America readers have known of this scandal ever since we founded our website in 2012. So too have readers of Psychiatry Under the Influence, the book that I co-wrote with Lisa Cosgrove, which was published in 2015. But as far as I can tell, no major newspaper has reported on Pigott’s work, and that has remained true since Pigott and colleagues published their RIAT reanalysis in late July. The major media has remained silent.
Indeed, the BMJ information page for Pigott’s article states there have been eight “blogs” written about the RIAT reanalysis. Four of the eight are BMJ blogs reporting on the “most read” BMJ articles, and not per se about the scandal. The other four are MIA articles—our initial science review of the article, and three subsequent reports about the scandal.
Fortunately, with its cover story, the Psychiatric Times has broken the silence. Perhaps the mainstream media has shied away from the story because it was waiting for a journal, or institution, to tell of Pigott’s work, and in that way validate it as credible. That is what the Psychiatric Times has done with its thoughtful essay, an article that—since it broke the years of silence—could also be described as brave.
And now, if the major media picks up on this story, they will have the chance to report on what arguably is the worst—and most harmful—scandal in American medical history.