[P]sychopharmacology...is...deeply indebted to...a remarkable series of accidental discoveries made in the fifteen or so years following the end of the Second World War.
In 1949, John Cade published an article in the Medical Journal of Australia describing his discovery that lithium sedated people who experienced mania. Cade had been testing his theory that manic people were suffering from an excess of uric acid by injecting patients’ urine into guinea pigs, who subsequently died. When Cade diluted the uric acid by adding lithium, the guinea pigs fared better; when he injected them with lithium alone, they became sedated. He noticed the same effect when he tested lithium on himself, and then on his patients. Nearly twenty years after he first recommended lithium to treat manic depression, it became the standard treatment for the disorder.
In the nineteen-forties and fifties, schizophrenic patients in some asylums were treated with cold-induced “hibernation”—a state from which they often emerged lucid and calm. In one French hospital, the protocol also called for chlorpromazine, a new drug thought to increase the hibernation effect. One day, some nurses ran out of ice and administered the drug on its own. When it calmed the patients, chlorpromazine, later named Thorazine, was recognized in 1952 as the first drug treatment for schizophrenia—a development that encouraged doctors to believe that they could use drugs to manage patients outside the asylum, and thus shutter their institutions.
In 1956, the Swiss firm Geigy wanted in on the antipsychotics market, and it asked a researcher and asylum doctor, Roland Kuhn, to test out a drug that, like Thorazine, was an antihistamine—and thus was expected to have a sedating effect. The results were not what Kuhn desired: when the schizophrenic patients took the drug, imipramine, they became more agitated, and one of them, according to a member of the research team, “rode, in his nightshirt, to a nearby village, singing lustily.” He added, “This was not really a very good PR exercise for the hospital.” But it was the inspiration for Kuhn and his team to reason that “if the flat mood of schizophrenia could be lifted by the drug, then could not a depressed mood be elevated also?” Under the brand name Tofranil, imipramine went on to become the first antidepressant—and one of the first blockbuster psychiatric drugs.
American researchers were also interested in antihistamines. In 1957, Leo Sternbach, a chemist for Hoffmann-La Roche who had spent his career researching them, was about to throw away the last of a series of compounds he had been testing that had proven to be pharmacologically inert. But in the interest of completeness, he was convinced to test the last sample. “We thought the expected negative pharmacological results would cap our work on this series of compounds,” one of his colleagues later recounted. But the drug turned out to have muscle-relaxing and sedative properties. Instead of becoming the last in a list of failures, it became the first in a series of spectacular successes—the benzodiazepines, of which Sternbach’s Librium and Valium were the flagships.
By 1960, the major classes of psychiatric drugs—among them, mood stabilizers, antipsychotics, antidepressants, and anti-anxiety drugs, known as anxiolytics—had been discovered and were on their way to becoming a seventy-billion-dollar market. Having been discovered by accident, however, they lacked one important element: a theory that accounted for why they worked (or, in many cases, did not).
Despite this continued failure to understand how psychiatric drugs work, doctors continue to tell patients that their troubles are the result of chemical imbalances in their brains. As Frank Ayd pointed out, this explanation helps reassure patients even as it encourages them to take their medicine, and it fits in perfectly with our expectation that doctors will seek out and destroy the chemical villains responsible for all of our suffering, both physical and mental. The theory may not work as science, but it is a devastatingly effective myth.
Whether or not truthiness, as one might call it, is good medicine remains to be seen. No one knows how important placebo effects are to successful treatment, or how exactly to implement them, a topic Michael Specter wrote about in the magazine in 2011. But the dry pipeline of new drugs bemoaned by Friedman is an indication that the drug industry has begun to lose faith in the myth it did so much to create. As Steven Hyman, the former head of the National Institute of Mental Health, wrote last year, the notion that “disease mechanisms could … be inferred from drug action” has succeeded mostly in “capturing the imagination of researchers” and has become “something of a scientific curse.” Bedazzled by the prospect of unraveling the mysteries of psychic suffering, researchers have spent recent decades on a fool’s errand—chasing down chemical imbalances that don’t exist. And the result, as Friedman put it, is that “it is hard to think of a single truly novel psychotropic drug that has emerged in the last thirty years.”
Despite the BRAIN initiative recently announced by the Obama Administration, and the N.I.M.H.’s renewed efforts to stimulate research on the neurocircuitry of mental disorders, there is nothing on the horizon with which to replace the old story. Without a new explanatory framework, drug-company scientists don’t even know where to begin, so it makes no sense for the industry to stay in the psychiatric-drug business. And if loyalists like Hyman and Friedman continue to say out loud what they have been saying to each other for many years—that, as Friedman told Times readers, “just because an S.S.R.I. antidepressant increases serotonin in the brain and improves mood, that does not mean that serotonin deficiency is the cause of the disease”—then consumers might also lose faith in the myth of the chemical imbalance.