March 9, 2010
In recent years, it's become clear that much of our individual behavior depends on the dynamics of our social network. It doesn't matter whether we're talking about obesity or happiness: both flow through other people, like a virus or a meme. Last year, in Wired, I profiled James Fowler and Nicholas Christakis, who have conducted several fascinating studies demonstrating the power of social networks:
There's something strange about watching life unfold as a social network. It's easy to forget that every link is a human relationship and every circle a waistline. The messy melodrama of life--all the failed diets and fading friendships--becomes a sterile cartoon.
But that's exactly the point. All that drama obscures a profound truth about human society. By studying Framingham as an interconnected network rather than a mass of individuals, Christakis and Fowler made a remarkable discovery: Obesity spread like a virus. Weight gain had a stunning infection rate. If one person became obese, the likelihood that his friend would follow suit increased by 171 percent. (This means that the network is far more predictive of obesity than the presence of genes associated with the condition.) By the time the animation is finished, the screen is full of swollen yellow beads, like blobs of fat on the surface of chicken soup.
The data exposed not only the contagious nature of obesity but the power of social networks to influence individual behavior. This effect extends over great distances--a fact revealed by tracking original subjects who moved away from Framingham. "Your friends who live far away have just as big an impact on your behavior as friends who live next door," Fowler says. "Think about it this way: Even if you see a friend only once a year, that friend will still change your sense of what's appropriate. And that new norm will influence what you do." An obese sibling hundreds of miles away can cause us to eat more. The individual is a romantic myth; indeed, no man is an island.
In their latest paper, published this week in PNAS, Christakis and Fowler re-analyzed an earlier set of experiments led by Ernst Fehr and Simon Gachter, which investigated "altruistic punishment," or why we're willing to punish others even at a cost to ourselves.
Christakis and Fowler demonstrate that when one of the students in those experiments gave money to help someone else - that is, when they cooperated - the recipients of that cash became more likely to give their own money away in the next round. (Every unit of money shared in round 1 led to an extra 0.19 units being shared in round 2, and 0.05 units in round 3.) This leads, of course, to a cascade of generosity, in which the itch to cooperate spreads first to three people, then to the nine people those three interact with, and then to the remaining individuals in subsequent waves of the experiment.
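To make the arithmetic of that cascade concrete, here's a minimal sketch in Python. It is not the authors' model - the initial gift of 10 units, and the assumption that each reported multiplier (0.19 extra units one round out, 0.05 two rounds out) applies per unit originally given, are mine, purely for illustration.

```python
# Toy illustration of the "cascade of generosity" numbers quoted above.
# Assumptions (mine, not the paper's): an initial gift of 10 units, and the
# reported multipliers applied per unit originally given.

def cascade(initial_gift, multipliers=(0.19, 0.05)):
    """Return the first-round giving plus the extra giving it triggers later."""
    return [initial_gift] + [initial_gift * m for m in multipliers]

if __name__ == "__main__":
    waves = cascade(10.0)
    for round_number, amount in enumerate(waves, start=1):
        print(f"round {round_number}: {amount:.2f} units given")
    print(f"extra downstream giving: {sum(waves[1:]):.2f} units")
```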
The paper itself is filled with optimistic sentences, but it's worth pointing out that 1) selfishness is also contagious and 2) there's a big difference between lab experiments played with strangers and the messy social networks of real life. That said, altruistic cascades like this make me happy:
We report a chain of 10 kidney transplantations, initiated in July 2007 by a single altruistic donor (i.e., a donor without a designated recipient) and coordinated over a period of 8 months by two large paired-donation registries. These transplantations involved six transplantation centers in five states. In the case of five of the transplantations, the donors and their coregistered recipients underwent surgery simultaneously. In the other five cases, "bridge donors" continued the chain as many as 5 months after the coregistered recipients in their own pairs had received transplants. This report of a chain of paired kidney donations, in which the transplantations were not necessarily performed simultaneously, illustrates the potential of this strategy.
Posted by Jonah Lehrer at 11:41 AM
March 8, 2010
One of the hazards of writing a book on decision-making is getting questions about decisions that are far beyond the purview of science (or, at the very least, way beyond my pay grade). Here, for instance, is a question that often arrives in my inbox, or gets shouted out during talks:
"How should we make decisions about whom to marry? If the brain is so smart, why do half of all marriages end in divorce?"
Needless to say, there is no simple answer to this question. (And if I had a half-way decent answer, I'd be writing a book on marriage.) But I've recently been reading some interesting research on close, interpersonal relationships (much of it by Ellen Berscheid, at the University of Minnesota) and I'm mostly convinced that there's a fundamental mismatch between the emotional state we expect to feel for a potential spouse - we want to "fall wildly in love," experiencing that ecstatic stew of passion, desire, altruism, jealousy, etc. - and the emotional state that actually determines a successful marriage over time. Berscheid defines this more important emotion as "companionate love," or "the affection we feel for those with whom our lives are deeply intertwined." Jonathan Haidt, a social psychologist at the University of Virginia, compares this steady emotion, which grows over time, to its unsteady (but sexier and more cinematic) precursor: "If the metaphor for passionate love is fire, the metaphor for companionate love is vines growing, intertwining, and gradually binding two people together."
What's wrong with seeking passion? Don't we need to experience that dopaminergic surge of early love, in which the entire universe has been reduced to a single person? ("It is the east, and Juliet is the sun.") The only problem with this romantic myth is that passion is temporary. It inevitably decays with time. This is not a knock against passion - this is a basic fact of our nervous system. We adapt to our pleasures; we habituate to delight. In other words, the same thing happens to passionate love that happens to Christmas presents. We're so impossibly happy and then, within a matter of days or weeks or months, we take it all for granted.
I can't help but think that Shakespeare was trying to warn us about the fickleness of passionate love even as he was inventing its literary template. Romeo and Juliet, after all, begins with Romeo in a disconsolate funk. But he's not upset about Juliet. He hasn't even met Juliet. He's miserable over Rosaline. And so, while the rest of the tragedy is an ode to young lovers and impossible passions, Shakespeare has prefaced the action with a warning: passion is erratic. The same randy Romeo who compares you to the sun was in love with someone else last night.
What makes this mismatch even more dangerous is our tendency to confuse physical attractiveness with personal goodness. In a classic 1972 paper, "What is beautiful is good," Berscheid and colleagues demonstrated that we instinctively believe that prettier people "have more socially desirable personality traits" and "lead better lives". Furthermore, this phenomenon works in both directions, so that people who have been "prejudged" to be more or less physically attractive, but don't know they've been judged that way, still behave in a more "friendly, likeable and sociable manner". This suggests that our emphasis on attractiveness, lust and beauty - these are the variables that we associate with passionate love - can actually distort our perception of more important personality variables. Because we'll habituate to those hips, and that sexy smile won't be sexy forever. And then we'll no longer confuse beauty with goodness, or believe that our attractive boyfriend is also really nice.
The point is not that passionate love isn't an important signal. It surely is - that rush of dopamine is trying to tell us something. But a successful marriage has to endure long past the peak of passion. It has to survive the rigors of adaptation and intimacy, which are features of romantic relationships that don't get valorized in Hollywood, Bollywood or Shakespeare.
Posted by Jonah Lehrer at 12:43 PM
March 5, 2010
I think it's worth addressing this article one last time. Dr. Ronald Pies (professor of psychiatry at SUNY Upstate Medical University in Syracuse) has written three eloquent and extremely critical blog posts about the article and the analytic-rumination hypothesis. Here's his latest riposte:
Writer Jonah Lehrer caused quite a stir with his recent article in the New York Times Magazine, with the unfortunate title, "Depression's Upside." I have a detailed rejoinder to this misleading article posted on the Psychcentral website. The fault is not entirely Mr. Lehrer's however; his sources included a psychiatrist and a psychologist, who have recently presented a strained and dubious argument claiming that major depression has certain "adaptive" advantages. Lehrer apparently spent little or no time talking to mood disorder specialists who see thousands of severely depressed patients each year.
I'd like to rebut that last point. I talked to numerous working psychiatrists - several of whom are quoted in the article - and, not surprisingly, got a wide range of reactions to the analytic-rumination hypothesis. Some thought it was interesting and might make sense for people with mild and moderate depression; others, like Peter Kramer, thought it was utter rubbish. (As Kramer says in the article, "It's a ladder with a series of weak rungs.") Dr. Pies implies that every psychiatrist shares his viewpoint, but that's clearly not the case. See, for instance, this recent Louis Menand review for more.
Let's take a seemingly straightforward fact that Dr. Pies has cited in all three of his critiques of the article:
I would not deny that depression, like other challenges in life, may be "instructive" for some proportion of individuals-though probably a minority. I have very serious doubts (as do most of my colleagues) that major depression is "adaptive" in any significant way, though perhaps very brief and mild bouts of depression could confer some modest advantages in an evolutionary sense; eg, by increasing one's empathy toward others, which could be highly adaptive in obvious ways. [cf. "A broken heart prepares man for the service of God, but dejection corrodes service."-- Rabbi Bunam of Pzysha].
This could be true, in theory, for more severe depression, but there, the maladaptive aspects of the illness would likely outweigh any modest advantages by a huge margin; eg, the 15% mortality rate in major depression (naturalistic studies), mostly by suicide.
Dr. Pies doesn't cite the specific study, so it's unclear what he's referring to. But it's also worth pointing out that numerous studies have found no relationship between depression and increased mortality. See here, here and here. I'm not trying to dispute the correlation between major depressive disorder and mortality, which I think is pretty clear, especially when it comes to cardiovascular illness. I'm merely trying to show that even a fact as "obvious" as the link between depression and mortality gets complicated and contested very quickly. (Things get even more complicated, of course, when the conversation turns to things like the cognitive deficits of depression.) Here's the summary of a large review on the subject:
There were 57 studies found; 29 (51%) were positive, 13 (23%) negative, and 15 (26%) mixed. Twenty-one studies (37%) ranked among the better studies on the strength of evidence scale used in this study, but there are too few comparable, well-controlled studies to provide a sound estimate of the mortality risk associated with depression. Only six studies controlled for more than one of the four major mediating factors. Suicide accounted for less than 20% of the deaths in psychiatric samples, and less than 1% in medical and community samples. Depression seems to increase the risk of death by cardiovascular disease, especially in men, but depression does not seem to increase the risk of death by cancer. Variability in methods prevents a more rigorous meta-analysis of risk.
Dr. Pies has also argued that it was irresponsible to write about this speculative theory, since it might lead people to neglect treatment. Just to be clear: Neither I, nor Dr. Thomson, ever suggest that people shouldn't seek help for depression. That's just not in the article. Dr. Thomson is critical of what he regards as the "overprescription" of anti-depressants, but that's hardly a novel criticism of modern psychiatry. In fact, one can believe that the analytic-rumination hypothesis is a deeply flawed idea - and there are many good reasons for believing so - and still believe that we're too reliant on medications that aren't better than placebos for treating mild to moderate cases of depression. (Dr. Thomson, for instance, believes that we need more therapy, just better focused on solving real life problems.) But this was not an article about how to treat depression. This was an article about a new theory that attempts to explain why a disorder that feels so goddamn awful is also so common.
As I note repeatedly in the article, this hypothesis remains entirely speculative, with no direct evidence to support it. Given the dismal history of psychiatric speculations - we have no idea, for instance, why SSRIs work, when they do work - the odds are stacked strongly against it. But let's not pretend that modern psychiatry is such a settled science that it can't tolerate a controversial new idea.
Posted by Jonah Lehrer at 4:58 PM
March 4, 2010
The ultimatum game is a simple experiment with profound implications. The game goes like this: one person (the proposer) is given ten dollars and told to share it with another person (the responder). The proposer can divide the money however they like, but if the responder rejects the offer then both players end up with nothing.
When economists first started playing this game in the early 1980s, they assumed that this elementary exchange would always generate the same outcome. The proposer would offer the responder approximately $1 - a minimal amount - and the responder would accept it. After all, $1 is better than nothing, and a rejection leaves both players worse off. Such an outcome would be a clear demonstration of our innate selfishness and rationality.
However, the researchers soon realized that their predictions were all wrong. Instead of swallowing their pride and pocketing a small profit, responders typically rejected any offer they perceived as unfair. Furthermore, proposers anticipated this angry rejection and typically tendered an offer around $5. It was such a stunning result that nobody really believed it.
But when other scientists repeated the experiment the same thing happened. People play this game the same way all over the world, and studies have observed similar patterns of irrationality in Japan, Russia, Germany, France and Indonesia. No matter where the game was played*, people almost always made fair offers. As the economist Robert Frank notes, "Seen through the lens of modern self-interest theory, such behavior is the human equivalent of planets traveling in square orbits."
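For readers who like to see the payoff structure spelled out, here is a minimal sketch of a single round of the game. The responder's "rejection threshold" is an illustrative assumption, not a figure from any particular study.

```python
# A minimal sketch of one round of the ultimatum game described above.
# The $10 pot matches the description; the responder's rejection threshold
# is an illustrative assumption, not an empirical value.

POT = 10  # dollars given to the proposer

def play_round(offer, rejection_threshold):
    """Return (proposer_payoff, responder_payoff) for a single round."""
    if offer >= rejection_threshold:
        return POT - offer, offer  # offer accepted
    return 0, 0                    # offer rejected: both players get nothing

if __name__ == "__main__":
    # The textbook prediction: a $1 offer to a purely "rational" responder.
    print(play_round(offer=1, rejection_threshold=0.01))  # (9, 1)
    # What experimenters actually see: lowball offers feel unfair and get rejected.
    print(play_round(offer=1, rejection_threshold=3))     # (0, 0)
    # Which is why proposers tend to tender something close to an even split.
    print(play_round(offer=5, rejection_threshold=3))     # (5, 5)
```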
There are, of course, many possible explanations for the ultimatum game. Perhaps we are programmed to protect our reputation, and don't want other people to think of us as too greedy. Perhaps we anticipate the anger of the other person - they'll be pissed off if they get treated unfairly - and so we make a fair offer out of practical necessity. Or maybe we're all instinctive socialists, hard wired to prefer equitable outcomes.
Interestingly, that last explanation has just gotten some experimental support. (This doesn't mean that the other explanations aren't valid, though. Human behavior rarely has simple causes.) In a paper published last week in Nature, a team of Caltech and Trinity College psychologists and neuroeconomists looked at how the brain's response to various monetary rewards is altered by the context of inequality.
The experiment had a straightforward design. Subjects were slipped into brain scanners and given various cash rewards, such as a gain of $20 or $5. They were also told about rewards given to a stranger. Sometimes, the stranger got more money and sometimes the subject got more. Such is life.
But there was a crucial twist. Before the scanning began, each subject was randomly assigned to one of two conditions: some participants were given "a large monetary endowment" ($50), while the others had to start from scratch.
Here's where the results get interesting: the reaction of the reward circuitry in the brain (especially the ventral striatum and VMPFC) depended on the starting financial position of the subjects. For instance, people who started out with empty pockets (the "poor" condition) got much more excited when given, say, $20 in cash than people who started out with $50. They also showed less interest when money was given to a stranger.
So far, so obvious: if we have nothing, then every little something becomes valuable; the meaning of money is relative. What was much more surprising, however, is that this same contextual effect also held true for people who began in the position of wealth. Subjects who were given $50 to start showed more reward circuit activity when their "poor" partner got cash than when they were given an equivalent amount of cash themselves. As the neuroeconomist Colin Camerer noted in the Caltech press release: "Their brains liked it when others got money more than they liked it when they themselves got money."
This is pretty weird, if only because it shows that even our neural response to cold, hard cash - the most uncomplicated of gains - is influenced by social context. We think we're so selfish and self-interested, but our ventral striatum has clearly internalized a little Marx.
That said, these results are still open to interpretation. One possibility discussed by the scientists is that the response of the reward areas in "rich" subjects represents a reduction in "discomfort" over having more, for seemingly arbitrary reasons. (Such are the burdens of being blessed.) This suggests that inequality is inherently unpleasant, at least when we know that the inequality is due to random chance.
But what if we believe that the inequality is deserved? Does the aversion still exist when subjects believe they deserve to be "rich"? In the ultimatum game, for instance, people suddenly start behaving selfishly - they keep the vast majority of the money - when they are given a test before the game begins. (They assume the test has determined who is a proposer and who is a responder.) The lesson, then, is that while we are inequality averse, it's a fragile kind of aversion. Even a hint of meritocracy can erase our guilt.
*A reader alerts me to a correction. The Machiguenga of the Peruvian Amazon play quite differently, as demonstrated here.
Posted by Jonah Lehrer at 12:40 PM
March 3, 2010
One of the interesting subplots of this new research on the intellectual benefits of sadness - it seems to bolster our attention and make us more analytical - is that it helps illuminate the intertwined relationship of mood and cognition. For decades, we saw the mind as an information processing machine; the brain was just a bloody computer with lipid bi-layer microchips. The problem with this metaphor is that machines don't have feelings, which led us to overlook the role of feelings in shaping how we think.
Here's an experiment I described in the depression article:
Last year Forgas ventured beyond the lab and began conducting studies in a small stationery store in suburban Sydney, Australia. The experiment itself was simple: Forgas placed a variety of trinkets, like toy soldiers, plastic animals and miniature cars, near the checkout counter. As shoppers exited, Forgas tested their memory, asking them to list as many of the items as possible. To control for the effect of mood, Forgas conducted the survey on gray, rainy days -- he accentuated the weather by playing Verdi's "Requiem" -- and on sunny days, using a soundtrack of Gilbert and Sullivan. The results were clear: shoppers in the "low mood" condition remembered nearly four times as many of the trinkets. The wet weather made them sad, and their sadness made them more aware and attentive.
Of course, this doesn't mean people in sunny climates always think worse, or that sadness is always the ideal mental state. While negative moods might promote focused attention and rigorous analysis, there's good evidence that happiness promotes a more freewheeling kind of information processing, which leads to more creative insights. Consider the following problem: I'm going to give you three different words, and you have to come up with a single word that can form a compound word or phrase with all three. The three words are: AGE, MILE and SAND.
What's the answer? Look here.* If you solved this problem, the answer probably arrived in a flash of insight, popping abruptly into consciousness. According to Mark Jung-Beeman and John Kounios, two scientists who have studied the neuroscience of aha moments, the brain is more likely to solve problems via insight when the mind is relaxed, happy and perhaps a little distracted. (They've found, for instance, that subjects in a positive mood solve approximately 20 percent more insight problems than control subjects.) I wrote about their research in the New Yorker in 2008.
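As an aside for the algorithmically inclined: this kind of remote-associate puzzle can also be cracked the slow, analytical way, by brute force. Here's a toy sketch of that approach; the candidate words and the set of "known compounds" are hypothetical stand-ins for a real dictionary.

```python
# A toy brute-force solver for remote-associate puzzles like AGE / MILE / SAND.
# The candidate words and the "known compounds" below are hypothetical
# stand-ins for a real dictionary lookup.

KNOWN_COMPOUNDS = {
    "stoneage", "milestone", "sandstone",
    "sandpaper", "newspaper", "papermill",
}

CANDIDATES = ["stone", "paper", "box", "man"]

def forms_compound(cue, candidate):
    """True if the cue and candidate join (in either order) into a known compound."""
    return cue + candidate in KNOWN_COMPOUNDS or candidate + cue in KNOWN_COMPOUNDS

def solve(cues, candidates=CANDIDATES):
    """Return every candidate that forms a compound with all of the cues."""
    return [c for c in candidates if all(forms_compound(cue, c) for cue in cues)]

if __name__ == "__main__":
    print(solve(["age", "mile", "sand"]))  # ['stone']
```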
Why do happiness and relaxation make us better at solving remote associate problems? Beeman and Kounios describe the insight process as a delicate mental balancing act. (They've watched hundreds of undergraduates solve these word problems in fMRI machines and while wearing EEG headgear.) At first, the brain needs to control itself, which is why areas involved with executive control, like the prefrontal cortex and anterior cingulate, are activated. The scarce resource of attention is lavished on a single problem. But then, once the brain is sufficiently focused, the cortex needs to relax, to seek out the more remote associations in the right hemisphere that will provide the insight.
And that's why relaxation and happiness are so helpful: these moods make us more likely to direct the spotlight of attention inwards, so that we become better able to eavesdrop on the quiet yet innovative thoughts we often overlook. (That's why so many of my best ideas often come during warm showers.) In contrast, when people are diligently focused (and perhaps a little melancholy), their attention tends to be directed outwards, towards the details of the problem they're trying to solve. While this pattern of attention is necessary when solving problems analytically, it actually prevents us from detecting those unlikely connections that lead to insights and epiphanies. (William James referred to insights as emanating from the peripheral "fringe" of consciousness, which is why they're so easy to ignore when we're staring straight ahead.)
The moral is that emotions influence how we process and pay attention to information, and that different kinds of cognitive tasks benefit from different moods. When we're editing our prose, or playing chess, or working through a math problem, we probably benefit from a little melancholy, since that makes us more attentive to details and mistakes. In contrast, when we're trying to come up with an idea for a novel, or have hit a dead end with our analytical approach to a problem, then maybe we should take a warm shower and relax. The answer is more likely to arrive when we stop thinking about our problem. (It should also be noted, of course, that the same mental states can be induced with drugs, which is why so many artists experiment with Benzedrine, marijuana, etc. They self-medicate to achieve the ideal mental state.)
If you're interested in thinking more about the tangled relationship of mood, cognition and creativity, I'd definitely recommend The Midnight Disease, by Alice Flaherty. It's a fascinating glimpse into the terrors of manic depression, and how an awful, awful mental illness can lead to a surfeit of creative production.
Update: I've received a few emails asking how this research on creative insights squares with the correlation between unipolar/bipolar depression and artistic success, at least as documented by Kay Redfield Jamison and Nancy Andreasen. My own hunch is that, while we indulge in romantic myths about poems being generated during daydreams and long walks (see, for instance, Coleridge and "Kubla Khan"), the reality of artistic production is far less leisurely. Good art is a relentless grind, requiring an inexhaustible attention to details and mistakes. Perhaps this is why a depressive mindset can be so helpful, and why so many successful artists suffered from manic depression, in which periods of euphoric free-association are offset by prolonged and horrific states of anguish. But that's all utter speculation.
*Stone: milestone, sandstone, Stone Age.
Posted by Jonah Lehrer at 9:42 AM
March 2, 2010
I've received a few emails along this line:
"How does this new theory about depression enhancing problem-solving relate to all the studies that have shown cognitive deficits in people with depression?"
That's a really good question. I tried to address this issue quickly in the article - I referenced the fact that the "cognitive deficits disappear when test subjects are first distracted from their depression and thus better able to focus on the exercise" - but I think it's worth spending a little more time on the scientific literature. The key point here is that the deficits are "unstable": they can be made to disappear once subjects are distracted from their ruminations. (As Andrews told me, "Depressed subjects are trying to cope with a major life issue...It shouldn't be too surprising that they have difficulty memorizing a string of random numbers.") Look, for instance, at a 2002 study that compared clinically depressed subjects to non-depressed controls on a test of executive function - random number generation. The deficits of the depressed subjects were erased when they were distracted from their ruminations. Here's the study's conclusion:
The aspects of executive function involved in random number generation are not fundamentally impaired in depressed patients. In depressed patients, the rumination induction seems to trigger the continued generation of ruminative stimulus independent thoughts, which interferes with concurrent executive processing.
In other words, the emotional pain of depressed subjects consumed scarce mental resources - the mind is a bounded machine - which meant they didn't have enough attention left over to think about anything else, especially some artificial lab task. When we're wracked with pain, everything but the pain is irrelevant.
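As a side note on why random number generation is used as an executive-function task in the first place: producing a "random-looking" sequence requires constantly suppressing easy patterns, which is exactly the kind of attention-hungry work that rumination crowds out. The toy scoring below is my own illustrative proxy, not the measure used in the study.

```python
# Toy illustration of why generating "random" digits taxes executive control:
# without effortful monitoring, we drift into repeats and counting runs.
# This stereotypy score is an illustrative proxy, not the study's measure.

def stereotypy_score(digits):
    """Fraction of adjacent pairs that are repeats or +1 counting steps (mod 10)."""
    pairs = list(zip(digits, digits[1:]))
    lazy_pairs = sum(1 for a, b in pairs if b == a or b == (a + 1) % 10)
    return lazy_pairs / len(pairs)

if __name__ == "__main__":
    effortful = [3, 9, 1, 7, 4, 0, 8, 2, 6, 5]  # takes attention to produce
    habitual = [1, 2, 3, 4, 5, 5, 6, 7, 8, 9]   # what we drift toward
    print(f"effortful sequence: {stereotypy_score(effortful):.2f}")  # 0.00
    print(f"habitual sequence:  {stereotypy_score(habitual):.2f}")   # 1.00
```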
I also think it's worth pointing out that this latest hypothesis builds (as always in science) on numerous earlier conjectures. One important precursor for Andrews and Thomson's idea was theoretical work done on "psychic pain," by the evolutionary biologists Randy and Nancy Thornhill and by Randy Nesse, a psychiatrist at the University of Michigan. In essence, these scientists argued that emotional pain serves the same biological need as physical pain. When we break a bone in the foot, the discomfort keeps us from walking, which allows the bone to heal. The pain is also a learning signal, teaching us to avoid the dangerous behavior that caused the injury in the first place.
But there has been no shortage of clever conjectures on why depression exists. If you'd like an overview of the literature, I'd suggest reading this paper by Paul Watson, an evolutionary biologist at the University of New Mexico, whose work also influenced Andrews and Thomson. As I explicitly stated in the article, all of these theories remain just that: theoretical. There is very little direct evidence in support of any of them.
Finally, I think it's worth repeating the obvious, which is that depression is an extremely varied mental illness. Although we only have one psychiatric diagnosis - major depressive disorder - that diagnosis covers a tremendous range of symptoms. (It's also in constant flux, and will likely be altered yet again in the next DSM revision.) As I noted yesterday, one of the most cited papers in the field found that MDD exists on a spectrum of severity, ranging from mild (10.4 percent of patients) to very severe (12.9 percent), with the vast majority of patients somewhere in between. I tried to make this heterogeneity clear in the article, because I think it represents a real challenge for any theory that attempts to explain depression, either from an evolutionary perspective or from a neuroscientific perspective.
Although Nesse says he admires the analytic-rumination hypothesis, he adds that it fails to capture the heterogeneity of depressive disorder. Andrews and Thomson compare depression to a fever helping to fight off infection, but Nesse says a more accurate metaphor is chronic pain, which can arise for innumerable reasons. "Sometimes, the pain is going to have an organic source," he says. "Maybe you've slipped a disc or pinched a nerve, in which case you've got to solve that underlying problem. But much of the time there is no origin for the pain. The pain itself is the dysfunction."
Andrews and Thomson respond to such criticisms by acknowledging that depression is a vast continuum, a catch-all term for a spectrum of symptoms. While the analytic-rumination hypothesis might explain those patients reacting to an "acute stressor," it can't account for those whose suffering has no discernible cause or whose sadness refuses to lift for years at a time.
Personally, I think these are the two most important paragraphs in the article. One of the most challenging aspects of studying depression is the vast amount of contradiction in the literature. Virtually every claim comes with a contradictory claim, which is also supported by evidence. I tend to believe this confusion will persist until our definitions of depression become more precise, so that intense sadness and paralyzing, chronic, suicidal despair are no longer lumped together in the same psychiatric category.
Thank you again for all your comments and emails.
Posted by Jonah Lehrer at 12:30 PM
March 1, 2010
First of all, thank you to everyone who took the time to write and comment on my recent article on depression. I really appreciated all the insightful emails and I'm trying to respond to every one. In the meantime, I wanted to address some important criticisms of the analytic-rumination hypothesis and of my article, which were raised by an academic psychiatrist. I've reproduced his criticisms, and my replies, below:
First, you write "depression is everywhere, as inescapable as the common cold". No, this is flatly wrong. Major depression is estimated by an absurdly broad range of epidemiological studies to have a slightly less than 20% lifetime prevalence.
I talked to more than a dozen mental health professionals about this precise issue. The general consensus, as near as I could tell, was that the 20 percent range is actually at the low end of the current estimates for lifetime prevalence for MDD. One of the problems with estimating "lifetime prevalence" is that most surveys are based on retrospective memories, as people try to reconstruct their state of mind decades earlier. There's good evidence that such surveys reliably underestimate the number of people who suffer from major depressive disorder. Consider this recent study, which conducted interviews with a random sample of Canadians during a relatively brief time span. Here are their conclusions:
The annual prevalence of MDD ranged between 4% and 5% of the population during each assessment, consistent with existing literature. However, 19.7% of the population had at least one major depressive episode during follow-up. This included 24.2% of women and 14.2% of men. These estimates are nearly twice as high as the lifetime prevalence of major depressive episodes reported by cross-sectional studies during same time interval. CONCLUSION: In this study, prospectively observed cumulative prevalence over a relatively brief interval of time exceeded lifetime prevalence estimates by a considerable extent. This supports the idea that lifetime prevalence estimates are vulnerable to recall bias and that existing estimates are too low for this reason.
And this is merely the latest longitudinal study to estimate a significantly higher percentage of people suffering from major depressive disorder. One longitudinal study of adolescents living in New Zealand showed that 37% met the criteria of either the DSM-III-R or the DSM-IV-TR for a lifetime episode of major depression.
Obviously, it will always be difficult to precisely estimate the percentage of people suffering from a condition over a long period of time. For one thing, the diagnosis of major depressive disorder is itself in flux. However, I think there are good reasons to believe that the standard estimate of 20 percent is at the low end of the spectrum, especially given current trends. Since 1980, the diagnosis of depression has been rapidly increasing across every segment of the population. To take but one example: between 1992 and 1998 there was a 107 percent increase in depression among the elderly.
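To see why prospective, cumulative counts come out so much higher than one-shot retrospective surveys, it helps to look at the bare arithmetic. The sketch below assumes a uniform, independent risk of an episode each year - real risk is neither uniform nor independent, since episodes cluster in the same people - so it is only meant to show the shape of the compounding, not to estimate true lifetime prevalence.

```python
# Back-of-the-envelope: how a modest annual prevalence compounds into a much
# larger cumulative figure over time. Assumes a uniform, independent per-year
# risk, which real depression risk is not; illustration only.

def cumulative_prevalence(annual_risk, years):
    """Probability of at least one episode over `years`, assuming independence."""
    return 1 - (1 - annual_risk) ** years

if __name__ == "__main__":
    for years in (1, 3, 5, 10, 20):
        share = cumulative_prevalence(0.045, years)  # 4.5%/year, mid-range of 4-5%
        print(f"{years:>2} years -> {share:.1%} have had at least one episode")
```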
Here's another criticism:
Next - and this is most surprising to me - you rehash their arguments about the VLPFC as though it exists as a kind of anatomic source of depression. Surely you are aware - in fact I know you are because you have written at other times about this very topic - about just how complicated our current evidence base is in the area of depression. I mean even just to mention the well known deficits in hippocampal neurogenesis, the excess activity of the amygdala, or any number of other known reciprocal interactions between subcortical and prefrontal cortices would help an educated lay reader understand that while the VLPFC may be particularly active in ruminative processes, it still needs to be understood in the context of broader functional neuroanatomic findings in depression.
I'm sympathetic to this criticism. In an ideal world, I would have spent another thousand words or so outlining the neuroscience of depression; there is always more to say about a subject as rich and complex as mental illness. As noted above, there are many changes visible in the brains of people suffering from major depressive disorder. (I've written at length in other publications about many of these changes, particularly the reduction of neurogenesis.) It's also worth pointing out that many of these changes don't appear to be unique to major depressive disorder, but are rather part of a larger response to chronic stress. As we now know, chronic stress is toxic for the brain, and tends to shrink the hippocampus and swell the amygdala. (Of course, like just about every result in neuro-psychiatry, this claim remains controversial.) Alas, this was not an article about the neuroscience of depression, and so I was only able to discuss the proposed neural substrate of rumination in the VLPFC as it pertained to the analytic-rumination hypothesis. I never claim, of course, that the VLPFC is the neural signature of MDD.
I've also received several emails from psychiatrists who criticize my reference to the DSM. Here's a sample email:
I've begun e-mailing every author I encounter who uses a phrase like "diagnostic bible" to describe the DSM to ask them to please think a little further on the subject. If there is any use for the DSM at all in the real world, it arises from the intention to create a heuristic document. That is, the very purpose of DSM is to closely describe our current understanding of diagnostic entities so that they may be tested against reality. The purpose is not to hand down scripture from psychiatric gods to psychiatric priests; the purpose is to provide a little handbook that a large group of explorers can use to map out a territory. In psychiatry, we are still trying to figure out whether this or that psychiatric diagnosis accurately describes anything in the real world, or whether it is simply a projection of the mind of an individual or a culture. DSM, at its best, recognizes that we are largely still groping in the dark, trying to discern what is illness and what is health, whether treatment is needed or can be accomplished.
I think this is right, and I regret my use of the phrase "bible of psychiatry". I had no intention of suggesting that the DSM is a document of faith, only that it's an authoritative resource for modern psychiatry. But it was a thoughtless, cliched description.
Needless to say, there are many more criticisms to make, both of the ideas described in the article and of the article itself. As you can imagine, this is a difficult subject to write about, in large part because the facts themselves are so contested. As demonstrated in this widely cited survey, patients with major depressive disorder exist on a continuum of severity, from mild to severe, making it ridiculous to suggest that there is, or should be, only one form of treatment. If a treatment works for the individual patient, that is the only fact that matters. Everything else is mere theory.
Posted by Jonah Lehrer at 12:21 PM
February 28, 2010
I'm a terrible sleeper, which is perhaps why I got invited to contribute to a NY Times group blog on "insomnia, sleep and the nocturnal life". Here is my first contribution, which focuses on the work of Dan Wegner:
My insomnia always begins with me falling asleep. I've been reading the same paragraph for the last five minutes -- the text is suddenly impossibly dense -- and I can feel the book getting heavier and heavier in my hands. Gravity is tugging on my eyelids.
And then, just as my mind turns itself off, I twitch awake. I'm filled with disappointment. I was so close to a night of sweet nothingness, but now I'm back, eyes wide open in the dark. I dread the hours of boredom; I'm already worried about the tiredness of tomorrow.
Why did my brain wake itself up? What interrupted my slumber? To understand this frustrating mental process, let's play a simple game with only one rule: Don't think about white bears. You can think about anything else, but you can't think about that. Ready? Take a deep breath, focus, and banish the animals from your head.
You just lost the game. Everyone does. As Dostoevsky observed in "Winter Notes on Summer Impressions": "Try to avoid thinking of a white bear, and you will see that the cursed thing will come to mind every minute." In fact, whenever we try not to think about something, that something gets trapped in the mind, stuck in the recursive loop of self-consciousness. Our attempt at repression turns into an odd fixation.
This human frailty has profound consequences. Dan Wegner, a psychologist at Harvard, refers to the failure as an "ironic" mental process. Whenever we establish a mental goal -- such as trying not to think about white bears, or sex, or a stressful event -- the goal is accompanied by an inevitable follow-up thought, as the brain checks to see if we're making progress. The end result, of course, is that we obsess over the one thing we're trying to avoid. As Wegner notes, "The mind appears to search, unconsciously and automatically, for whatever thought, action, or emotion the person is trying to control. ... This ironic monitoring process can actually create the mental contents for which it is searching."
These ironic thoughts reveal an essential feature of the human mind, which is that it doesn't just think: it constantly thinks about how it thinks. We're insufferably self-aware, like some post-modern novel, so that the brain can't go for more than a few seconds before it starts calling attention to itself. This even applies to thoughts we're trying to avoid, which is why those white bears are so inescapable.
What does this have to do with sleep? For me, insomnia is my white bear. My conscious goal is to fall asleep, which then causes my unconscious to continually check up on whether or not I'm achieving my goal. And so, after passing out for 30 seconds, I'm woken up by my perverse brain. (Most animals lack such self-aware thoughts, which is why our pets never have trouble taking a nap.)
In a study published in 1996 in the journal Behaviour Research and Therapy, Wegner and colleagues investigated the ironic monitoring process in the context of sleep. The experiment was simple: 110 undergraduates were randomly divided into two groups. The first group was told to fall asleep "whenever you want," while the second group was instructed to fall asleep "as fast as you can." To make matters more interesting, the scientists also varied the background music, with some students falling asleep to a loud John Philip Sousa march and others drifting off to "sleep-conducive new age music."
Here's where the data gets interesting: subjects who were instructed to fall asleep quickly took far longer to fall asleep, at least while listening to Sousa's marching music. Because they became anxious about being able to fall asleep to the upbeat tune, all of their effort backfired, so that they would lie awake in frustration. Instead of just letting themselves drift off into dreamland, they kept on checking to see if they were still awake, and that quick mental check woke them up.
Wegner and colleagues suggest that this paradoxical thought process can explain a large amount of chronic insomnia, which occurs after we get anxious about not achieving our goal. The end result is a downward spiral, in which our worry makes it harder to pass out, which only leads to more worry, and more ironic frustration. I wake myself up because I'm trying too hard to fall asleep.
One of the paradoxical implications of this research is that reading this article probably made your insomnia worse. So did that Ambien advertisement on television, or the brief conversation you had with a friend about lying awake in bed, or that newspaper article about the mental benefits of R.E.M. sleep. Because insomnia is triggered, at least in part, by anxiety about insomnia, the worst thing we can do is think about not being able to sleep; the diagnosis exacerbates the disease. And that's why this frustrating condition will never have a perfect medical cure. Insomnia is ultimately a side-effect of our consciousness, the price we pay for being so incessantly self-aware. It is, perhaps, the quintessential human frailty, a reminder that the Promethean talent of the human mind -- this strange ability to think about itself -- is both a blessing and a burden.
Posted by Jonah Lehrer at 9:32 PM
February 26, 2010
I've got an article on the upside of depression in the latest New York Times Magazine. If you'd like to learn more about this controversial theory, I'd suggest reading the original paper, "The Bright Side of Being Blue: Depression as an adaptation for analyzing complex problems," by Paul Andrews and Andy Thomson. Here's my lede:
The Victorians had many names for depression, and Charles Darwin used them all. There were his "fits" brought on by "excitements," "flurries" leading to an "uncomfortable palpitation of the heart" and "air fatigues" that triggered his "head symptoms." In one particularly pitiful letter, written to a specialist in "psychological medicine," he confessed to "extreme spasmodic daily and nightly flatulence" and "hysterical crying" whenever Emma, his devoted wife, left him alone.
While there has been endless speculation about Darwin's mysterious ailment -- his symptoms have been attributed to everything from lactose intolerance to Chagas disease -- Darwin himself was most troubled by his recurring mental problems. His depression left him "not able to do anything one day out of three," choking on his "bitter mortification." He despaired of the weakness of mind that ran in his family. "The 'race is for the strong,' " Darwin wrote. "I shall probably do little more but be content to admire the strides others made in Science."
Darwin, of course, was wrong; his recurring fits didn't prevent him from succeeding in science. Instead, the pain may actually have accelerated the pace of his research, allowing him to withdraw from the world and concentrate entirely on his work. His letters are filled with references to the salvation of study, which allowed him to temporarily escape his gloomy moods. "Work is the only thing which makes life endurable to me," Darwin wrote and later remarked that it was his "sole enjoyment in life."
For Darwin, depression was a clarifying force, focusing the mind on its most essential problems. In his autobiography, he speculated on the purpose of such misery; his evolutionary theory was shadowed by his own life story. "Pain or suffering of any kind," he wrote, "if long continued, causes depression and lessens the power of action, yet it is well adapted to make a creature guard itself against any great or sudden evil." And so sorrow was explained away, because pleasure was not enough. Sometimes, Darwin wrote, it is the sadness that informs as it "leads an animal to pursue that course of action which is most beneficial." The darkness was a kind of light.
The mystery of depression is not that it exists -- the mind, like the flesh, is prone to malfunction. Instead, the paradox of depression has long been its prevalence. While most mental illnesses are extremely rare -- schizophrenia, for example, is seen in less than 1 percent of the population -- depression is everywhere, as inescapable as the common cold. Every year, approximately 7 percent of us will be afflicted to some degree by the awful mental state that William Styron described as a "gray drizzle of horror . . . a storm of murk." Obsessed with our pain, we will retreat from everything. We will stop eating, unless we start eating too much. Sex will lose its appeal; sleep will become a frustrating pursuit. We will always be tired, even though we will do less and less. We will think a lot about death.
The persistence of this affliction -- and the fact that it seemed to be heritable -- posed a serious challenge to Darwin's new evolutionary theory. If depression was a disorder, then evolution had made a tragic mistake, allowing an illness that impedes reproduction -- it leads people to stop having sex and consider suicide -- to spread throughout the population. For some unknown reason, the modern human mind is tilted toward sadness and, as we've now come to think, needs drugs to rescue itself.
The alternative, of course, is that depression has a secret purpose and our medical interventions are making a bad situation even worse. Like a fever that helps the immune system fight off infection -- increased body temperature sends white blood cells into overdrive -- depression might be an unpleasant yet adaptive response to affliction. Maybe Darwin was right. We suffer -- we suffer terribly -- but we don't suffer in vain.
Posted by Jonah Lehrer at 1:08 PM