Does psychotherapy work for depressive symptoms in cancer patients?

The futility of debating bad science in letters to the editor

My post at PLOS Mind the Brain summarizes criticisms of a meta-analysis of psychological interventions for depressive symptoms in cancer patients that was organized by the Society of Behavioral Medicine. The authors systematically searched the literature, but found too few studies to justify their sweeping conclusion: that the strength of evidence for psychotherapy for depressive symptoms among cancer patients warrants widespread dissemination of existing treatments and their implementation in routine care.


More evidence letters to the editor are obsolete. Viva PubMed Commons!

Like a lot of professionals concerned about delivering effective psychosocial care to cancer patients, I wish that the evidence were that strong. But I look to the available evidence to see if my wish is justified. It is not. Finding that the available evidence is not extensive or strong can justify funding more research. Prematurely claiming that we have sufficient evidence can lead to scarce research resources being shifted elsewhere and questions going unanswered.

I previously provided my concerns to the authors in a letter to the editor published in JNCI, and the authors responded. It was not that long ago, but since then I have become convinced of the futility of letters to the editor as a form of post-publication peer review. The problems are that

  • Journals place such stringent limitations on the number of words in a letter that at best only one or two criticisms can be delivered effectively.
  • Whereas letters often receive stringent review, authors’ responses are not necessarily peer-reviewed by anyone, as long as libel is avoided.
  • The authors can write whatever they want, knowing that no further response from critics is allowed.
  • A letter to the editor is usually published months after the article has appeared, even when it is submitted promptly. Readers are unlikely to bother to look up the original article or even to recall it.
  • Readers newly accessing the article that has been critiqued may not even become aware that there has been a letter to the editor offering decisive criticism of what is claimed in the target article.

Restrictions on letters to the editor remain, but they have become obsolete with the development of journal webpages and the possibility of rapid responses that do not cut into the journal’s page allocations. Some restrictions on letters are simply a matter of inertia. Editors of traditional journals published in paper editions generally did not like letters to the editor because they consumed scarce page allocations and were unlikely to generate the citations that original articles would. That is no longer a problem. A main source of resistance to letters to the editor remains, in my experience, that editors are quite defensive and loath to accept letters that reflect badly on their decisions. And they consider publishing such letters disloyal to reviewers if the reviewers are made to look bad for overlooking obvious flaws.

Bottom line is that I no longer consider letters to the editor very effective contributions to post-publication peer review. I once promoted them as a way of teaching critical appraisal skills, but I regret it. Fortunately, we now have personal and lab blogs, PubMed Commons, and group blogs, like Mental Elf, for junior persons.

Anyway, in my letter to the editor, I criticized the authors’ misclassification of collaborative care interventions for depression as psychotherapy. Collaborative care interventions are designed to improve the overall quality of care for depression being provided, including the availability of medication and the quality with which it is prescribed, monitored, and followed up. They involve reorganizing whole systems of care.

Lots more than psychotherapy is being provided in a collaborative care intervention and not all patients in the intervention arm get psychotherapy. And just who gets therapy within the collaborative care intervention is not determined by further random assignment.

Many of the patients assigned to the intervention arm receive antidepressant medication, either in conjunction with therapy or alone. Furthermore, many patients assigned to the control group in collaborative care intervention studies have to pay for any treatment and may have other difficulties accessing quality care besides not being able to pay for it. Comparisons between collaborative care interventions and control groups thus do not produce meaningful estimates of the effect sizes for psychotherapy. Among the 60 or so collaborative care studies and probably 30 or more systematic reviews and meta-analyses, I have never seen collaborative care considered to be psychotherapy for the purpose of calculating effect sizes.

The authors replied to my criticism:

First, we contend that including collaborative care RCTs …was well reasoned. Our goal (p. 991) was to examine the efficacy of RCTs testing various therapeutic approaches rather than specific psychotherapies. Collaborative care (CC) interventions are well suited for primary care (1) and are gaining traction in oncology (2). Secondary processes in CC, such as education about depression, are common components of psychotherapy (3). In the three CC trials, patients were randomly assigned to CC or usual care. We emphasized (p. 1000) that patients do not invariably receive psychotherapy in a CC model but rather can receive psychotherapy, medication, or both. Most CC patients received psychotherapy, with or without medication. Having treatment options better represents the naturalistic context and fosters successful dissemination to practice.

This reply brings to mind an expression I picked up from an old college housemate: you’re shuckin’ me, man.

Surely the authors know this is nonsense and would not say it in other contexts. Let’s ignore that the three collaborative care interventions that they included were not sustained in the clinical settings after conclusion of their demonstration research projects. That combinations of psychotherapy and medication occur in the natural environment is irrelevant to evaluating whether psychotherapy is efficacious. That requires randomized trials in which the availability of psychotherapy is the key controlled variable. It is not in a collaborative care RCT.

I further criticized the inclusion of studies with sample sizes so small that it was statistically improbable that they would detect a positive effect even if they were evaluating a potent active treatment.

They replied:

 Allowing RCTs with relatively small sample sizes, which coincides with our inclusion of pharmacologic studies and the well-documented knowledge of substantial attrition in pharmacologic RCTs for major depressive disorder (6), reflects a decision about which active debate exists in the meta-analytic literature (7). Our use of Hedges’ g, which corrects for small sample bias, and findings from our elected safeguards of examining publication bias, the fail-safe N, and whether the psychotherapeutic RCT effects varied as a function of trial attrition all suggest a stable overall effect size.

Again, this is shuckin’ and jivin’. With such a small number of studies, Hedges’ g is an ineffective correction. As I noted in my longer blog post, meta-analysis experts reject the validity of the fail-safe N. It is explicitly discouraged by training materials for the Cochrane Collaboration, which is the first place that many researchers, clinicians, and policymakers go to find authoritative meta-analyses.

Art Garfunkel’s Mr Shuck ‘N Jive http://tinyurl.com/k7wbwo4
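
To make concrete what Hedges’ g does and does not fix, here is a minimal sketch using standard textbook formulas for the bias correction and the approximate standard error; the effect size of 0.5 and the per-arm sample sizes are illustrative assumptions, not numbers from the meta-analysis. The correction nudges the point estimate, but the confidence interval around a small trial stays wide:

```python
# A minimal sketch of why Hedges' g does not rescue small trials.
# It corrects the small-sample *bias* of Cohen's d, but the standard
# error, and hence the confidence interval, still reflects sample size.
import math

def hedges_g(d, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)  # small-sample correction factor
    return j * d

def se_g(g, n1, n2):
    """Approximate large-sample standard error of g."""
    return math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))

for n in (10, 25, 100):  # hypothetical per-arm sample sizes
    g = hedges_g(0.5, n, n)
    half_ci = 1.96 * se_g(g, n, n)
    print(f"n={n:3d} per arm: g={g:.3f}, 95% CI half-width {half_ci:.3f}")
```

At 10 patients per arm, the 95% confidence interval spans nearly a full standard deviation on either side of the estimate, no matter how carefully the point estimate has been bias-corrected.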

Finally, I challenged the authors’ inclusion of two separate effect sizes from the same study, violating the requirement of statistical independence.

The authors replied:

 As we stated (p. 992), because interventions were distinct, we calculated two separate effect sizes for trials containing two intervention groups, which violates the assumption of independent effect sizes. We conducted sensitivity analyses to address this issue; separate analyses including only the largest or the smallest effect size from those studies did not substantially influence the findings (p. 999).

Analyses with such small numbers of small studies do not correct a violation of the basic assumptions of the statistics of meta-analysis. Furthermore, if the authors are going to include two effect sizes from the same study, why not include three? As I noted in my longer blog posts, one of the conditions counted as an intervention group was supportive therapy, normally considered a control group in psychotherapy research. If it had been treated as a control/comparison for CBT, the effect size for CBT would have been negative, because patients receiving supportive therapy actually had better outcomes. I would not want to make too much of this finding because the trial was underpowered, but once again the authors are simply wrong in counting two, not one, of the three effect sizes that were available. Their selection of the two serves their bias in seeking evidence consistent with a strong effect for psychotherapy.
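
To make the independence problem concrete, here is a minimal sketch, with hypothetical effect sizes and standard errors rather than the meta-analysis’s data, of what happens when two comparisons that share a single control group are pooled as if they were independent:

```python
# A minimal sketch (hypothetical numbers) of why two effect sizes from
# one three-arm trial are not independent: both comparisons reuse the
# same control group, so inverse-variance pooling treats the control
# data as if it had been collected twice and understates the pooled SE.
import math

def pooled(studies):
    """Fixed-effect inverse-variance pooled estimate and its SE."""
    weights = [1 / se**2 for _, se in studies]
    est = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

# Hypothetical: CBT vs control and supportive therapy vs control,
# both computed against the SAME small control arm.
cbt_vs_control = (0.40, 0.45)
support_vs_control = (0.50, 0.45)

est, se = pooled([cbt_vs_control, support_vs_control])
print(f"Naive double-counted pooling: {est:.2f} (SE {se:.2f})")
# Because the two estimates are positively correlated through the shared
# control arm, the honest standard error is larger than this naive one;
# correct handling (e.g., combining the arms) widens the interval.
```

With only five studies in the entire meta-analysis, even one double-counted trial materially shifts the pooled estimate and its apparent precision.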

I get frustrated again just reviewing the exchange that was precipitated by my letter to the editor. I think I should return to it the next time I get an overwhelming urge to write a letter to the editor, rather than just blog or post something at PubMed Commons.

Importantly, PubMed Commons allows ongoing, continuous peer review for the life of an article. Letters to the editor allow only a duel with authors guaranteed the last shot.

Why get into a duel with the authors guaranteed the last shot? Go to PubMed Commons for a fairer match.

But forget two-shot duels: participate yourself in post-publication peer review by expressing your opinion about this article at PubMed Commons. I have; come see.

And for links to the actual RCTs discussed here, go here.

Five studies claimed to demonstrate efficacy of psychotherapy for depression in cancer patients

At my primary blog site, PLOS Mind the Brain, I am critically discussing an article; this post provides some resources to help readers form their own opinions.

Hart, S. L., Hoyt, M. A., Diefenbach, M., Anderson, D. R., Kilbourn, K. M., Craft, L. L., … & Stanton, A. L. (2012). Meta-analysis of efficacy of interventions for elevated depressive symptoms in adults diagnosed with cancer. Journal of the National Cancer Institute, 104(13), 990-1004.

The authors declare psychotherapy to be superior to control conditions in relieving the depressive symptoms of cancer patients and ready to be rolled out into routine care.

I hope that is the case, but I depend on evidence for my judgment. I withhold judgment when the evidence is insufficient. Indeed, if we decide too soon that we have enough evidence, we are discouraged from doing the research necessary to produce more.

The meta-analysis is seriously flawed, and the authors’ evaluation of the literature is premature. Their conclusion serves to promote the services of the sponsoring organization’s membership with the claim of being evidence-based, without the interventions having earned this status.

These authors claim to have found only five relevant studies, reported in six papers, after an exhaustive review of the literature. Three of the studies (1-3 below) are quite inappropriate for addressing whether psychotherapy is effective with cancer patients. They involve substantial reorganizations of care and provision of medication. Any effects of psychotherapy cannot be separated out.

In two of the three collaborative care studies, patients in the intervention group, but not the control group, got free treatment. Differences in whether patients had to pay for treatment probably explain the very low utilization by the control group. The remaining two studies are pitifully small as a basis for making firm recommendations.

In the PLOS Mind the Brain post, I referred to these studies by their numbers below.

1. Strong, V., Waters, R., Hibberd, C., Murray, G., Wall, L., Walker, J., … & Sharpe, M. (2008). Management of depression for people with cancer (SMaRT oncology 1): a randomised trial. The Lancet, 372(9632), 40-48.

http://www.lancet.com/journals/lancet/article/PIIS0140-6736(08)60991-5

This is one of the articles that should not have been counted as psychotherapy. I do not find other articles in the literature where the study is counted as psychotherapy. I am sure that the authors would agree with me that they did a lot more than provide psychotherapy.

The study aimed to assess the efficacy and cost of Depression Care for People with Cancer, a nurse-delivered complex intervention that was designed to treat major depressive disorder in patients who have cancer.

The intervention is described as:

In addition to usual care, patients in the intervention group were offered a maximum of 10 one-to-one sessions over 3 months, preferably at the cancer centre but by telephone or at patients’ homes if they were unable to attend the centre.

The intervention, Depression Care for People with Cancer, included education about depression and its treatment (including antidepressant medication); problem-solving treatment to teach the patients coping strategies designed to overcome feelings of helplessness; and communication about management of major depressive disorder with each patient’s oncologist and primary-care doctor. For 3 months after the treatment sessions, progress was monitored by monthly telephone calls. This monitoring used the nine-item Patient Health Questionnaire (PHQ-9) to assess the severity of depression.

The intervention produced an overall modest reduction in depressive symptoms. This treatment effect was sustained at 6 and 12 months. The intervention also improved anxiety and fatigue but not pain or physical functioning.

The authors conclude that the intervention offers a model for the management of major depressive disorder in patients with cancer and other medical disorders who are attending specialist medical services that is feasible, acceptable, and potentially cost effective.

2a. Ell, K., Xie, B., Quon, B., Quinn, D. I., Dwight-Johnson, M., & Lee, P. J. (2008). Randomized controlled trial of collaborative care management of depression among low-income patients with cancer. Journal of Clinical Oncology, 26(27), 4488-4496.

http://jco.ascopubs.org/content/26/27/4488.full.pdf

This is another collaborative care study that should not have been counted as psychotherapy. It is reported in two articles, the second being a longer-term follow-up.

It examined the effectiveness of the Alleviating Depression Among Patients With Cancer (ADAPt-C) collaborative care management for major depression or dysthymia.

Intervention patients had access for up to 12 months to a depression clinical specialist (supervised by a psychiatrist) who offered education, structured psychotherapy, and maintenance/relapse prevention support. The psychiatrist prescribed antidepressant medications for patients preferring or assessed to require medication.

Study patients were 472 low-income, predominantly female Hispanic patients with cancer age ≥ 18 years with major depression (49%), dysthymia (5%), or both (46%). Patients were randomly assigned to intervention (n = 242) or enhanced usual care (EUC; n = 230).

At 12 months, 63% of intervention patients had a 50% or greater reduction in depressive symptoms from baseline as assessed by the Patient Health Questionnaire-9 (PHQ-9) depression scale compared with 50% of EUC patients (odds ratio [OR] = 1.98; 95% CI, 1.16 to 3.38; P = .01).

The study concludes that ADAPt-C collaborative care is feasible and results in significant reduction in depressive symptoms, improvement in quality of life, and lower pain levels compared with EUC for patients with depressive disorders in a low-income, predominantly Hispanic population in public sector oncology clinics.

2b. Ell, K., Xie, B., Kapetanovic, S., Quinn, D. I., Lee, P. J., Wells, A., & Chou, C. P. (2011). One-year follow-up of collaborative depression care for low-income, predominantly Hispanic patients with cancer. Psychiatric Services, 62(2), 162-170.

http://journals.psychiatryonline.org/article.aspx?articleid=102180

This is the follow-up report for the collaborative care study described above. It should not be counted as providing an estimate of the effect size for psychotherapy.

The study assessed longer-term outcomes of low-income patients with cancer (predominantly female and Hispanic) after treatment in a collaborative model of depression care or in enhanced usual care.

This RCT, conducted in “safety-net oncology clinics,” recruited 472 patients with major depression symptoms. Patients randomly assigned to a 12-month intervention (a depression care manager and psychiatrist provided problem-solving therapy, antidepressants, and symptom monitoring and relapse prevention) or enhanced usual care (control group) were interviewed at 18 and 24 months after enrollment.

At 24 months, 46% of patients in the intervention group and 32% in the control group had a ≥50% decrease in depression score over baseline (odds ratio=2.09, 95% confidence interval=1.13-3.86; p=.02); intervention patients had significantly better social (p=.03) and functional (p=.01) well-being. Treatment receipt among intervention patients declined (72%, 21%, and 18% at 12, 18, and 24 months, respectively); few control group patients reported receiving any treatment (10%, 6%, and 13%, respectively). Significant differences in receipt of counseling or antidepressants disappeared at 24 months. Depression recurrence was similar between groups (intervention, 36%; control, 39%).

The study concludes that collaborative care reduced depression symptoms and enhanced quality of life; however, the results call for ongoing depression symptom monitoring and treatment for low-income cancer survivors.

3. Dwight-Johnson, M., Ell, K., & Lee, P. J. (2005). Can collaborative care address the needs of low-income Latinas with comorbid depression and cancer? Results from a randomized pilot study. Psychosomatics, 46(3), 224-232.

http://www.sciencedirect.com/science/article/pii/S0033318205700852

This is a modest pilot study that served as the basis for the large-scale study reported above.

55 low-income Latina patients with breast or cervical cancer and comorbid depression were randomly assigned to receive collaborative care as part of the Multifaceted Oncology Depression Program or usual care. Relative to patients in the usual care condition, patients receiving collaborative care were more likely to show ≥50% improvement in depressive symptoms as measured by the Personal Health Questionnaire (OR = 4.51, 95% CI=1.07–18.93). Patients in the collaborative care program were also more likely to show improvement in emotional well-being (increase of 2.15) as measured by the Functional Assessment of Cancer Therapy Scale than were those receiving usual care (decrease of 0.50) (group difference=2.65, 95% CI: 0.18–5.12). Despite health system, provider, and patient barriers to care, these initial results suggest that patients in public sector oncology clinics can benefit from onsite depression treatment.

If we exclude the three collaborative care studies above, as we should, we are left with these two modest studies.

Savard, J., Simard, S., Giguère, I., Ivers, H., Morin, C. M., Maunsell, E., … & Marceau, D. (2006). Randomized clinical trial on cognitive therapy for depression in women with metastatic breast cancer: psychological and immunological effects. Palliative & Supportive Care, 4(3), 219-237.

http://journals.cambridge.org/abstract_S1478951506060305

Forty-five women were randomly assigned to either individual cognitive therapy (CT) or a waiting-list control (WLC) condition. CT was composed of eight weekly sessions and three booster sessions administered at 3-week intervals following the end of treatment.

Patients treated with CT had significantly lower scores on the Hamilton Depression Rating Scale at posttreatment compared to untreated patients, as well as reductions in associated symptoms, including anxiety, fatigue, and insomnia. These effects were well sustained at the 3- and 6-month follow-up evaluations. CT for depression did not appear to have a significant impact on immune functioning.

Evans, R. L., & Connis, R. T. (1995). Comparison of brief group therapies for depressed cancer patients receiving radiation treatment. Public Health Reports, 110(3), 306.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1382125/pdf/pubhealthrep00054-0076.pdf

A total of 72 depressed cancer patients were randomly assigned to one of three conditions–cognitive-behavioral treatment, social support, or a no-treatment control condition. Before and after intervention and at 6-month followup, study participants were individually assessed by using measures of symptom distress. Relative to the comparison group, both the cognitive-behavioral and social support therapies resulted in less depression, hostility, and somatization.

The social support intervention also resulted in fewer psychiatric symptoms and reduced maladaptive interpersonal sensitivity and anxiety. It was concluded that both group therapies can reduce symptoms of distress for depressed persons undergoing radiation treatment for cancer. Both forms of therapy resulted in improvements in psychosocial function (compared with no treatment at all), but social support groups demonstrated more changes that were evident at 6-month followup.

This article considered the support group to be an active intervention. Many studies of psychotherapy would consider such a group to be a control condition. We certainly did in a study of primary care patients. If the support group is reclassified as a comparison/control condition for the cognitive-behavioral treatment, then the cognitive-behavioral treatment yields a negative effect size.

Yup, that’s all folks. Do you think these studies are sufficient evidence to justify sweeping conclusions and rolling out psychotherapy into routine cancer care without further ado?


How committed is British clinical psychology to science and evidence?

Although some psychoanalysts claim I am part of a plot of CBT proponents against them, I am not and have never been a CBT therapist. Indeed, Aaron T. Beck and I became friends because of his overtures and appreciation for my sustained critique of his cognitive therapy for depression.

I respect the commitment of American CBT researchers and clinicians to evidence and to both evidence- and science- based arguments. They are at the forefront of advocacy for evidence-based practice in the United States.

I have never thought much about differences between the CBT community in the US and in the UK. I had limited exposure to CBT in the UK, other than a visit to the University of Exeter where we had some lively debate followed by exceptionally good partying. I was left thinking that the CBT community in the UK is a bit different from that in the United States. They were a bit more nondefensive in hearing criticism (after all, they had invited me to give a keynote) and passionate in countering it. I had so much fun that I felt guilty accepting my speaker’s fee.

None of my past experience prepared me for the debate in the UK concerning CBT for persons with unmedicated schizophrenia.

When I took aim at both the press coverage and the Lancet article in a blog post, it was promptly hacked by someone who had intimate knowledge of the study and first claimed to be one of the authors before retracting that claim. The problem was so bad that I complained to a UK Internet provider, which traced the source of the hacking to the neighborhood of one of the authors of that study. Apparently, this IP address had been the object of numerous complaints from others.

Now there is an embarrassing blog post that further demonstrates a lack of commitment to science and evidence by British clinical psychology.

A national scandal: psychological therapies for psychosis are helpful, but unavailable

The authors are Peter Kinderman and Anne Cooke, and they approvingly cite the recent Lancet study as indicating that CBT is a promising alternative to antipsychotic medication for treatment of schizophrenia.

 In view of the downsides of antipsychotics it comes as something of a relief that there is a possible alternative.  Psychological approaches such as cognitive behaviour therapy (or CBTp, the ‘p’ standing for psychosis) have become increasingly popular. NICE (the National Institute for Care Excellence) is sufficiently convinced of the effectiveness of these approaches to recommend that they should be offered to everyone with a diagnosis of schizophrenia.  Traditionally they have been offered in addition to drugs, but a recent trial suggests that they might also be promising as an alternative.

If only that were true, a lot of us would be happier. But the study is basically a null trial dressed up as positive, starting with the hyped abstract.

There are formal standards for evaluating the process of producing guidelines and their output recommendations. Judged by these criteria, the NICE recommendations are woefully deficient and represent the consensus of professionals with self-interests at stake, rather than carefully assembled and considered best evidence, keeping in mind all of the limitations of this evidence. However, British psychologists seem hell-bent on waxing enthusiastic about these NICE guidelines while abandoning any balance, suggesting there must be an either/or choice between CBT and antipsychotic medication.

Kinderman repeatedly argues from authority. I challenged him to refute the arguments contained in my PLOS Mind the Brain blog posts (1, 2) about the Lancet trial not producing usable data, but I do not expect a response. He is a rather retiring fellow, once evidence is introduced.

Turning to the Maudsley debate, Kinderman finds it most worthy of comment that

all four debaters were white, male, middle class academics.

I thought that was a funny thing for a Brit white boy to say. Certainly he would not get away with this in Philly. But then I looked on Google and found that, like Steve Martin in The Jerk, Kinderman had been a poor black child born to sharecroppers.

Kinderman also dismissed skeptics of CBT for psychosis as “NHS clinicians with offices overflowing with drug company freebies.” I do not know if the audience at the Maudsley debate was taking notes with pens provided by the pharmaceutical companies; I only have the audio podcast. I think that Kinderman’s raising of conflict of interest is an excellent idea, but it should be extended to the many psychologists who benefit from premature marketing of CBT for psychosis and to those who hope for increased demand for their services.

Kinderman repeatedly challenges evidence with anecdote. He presents himself as a champion of the Service Users (SU) perspective and suggests that random reports from the community trump systematic assessment of their response to treatment.

I wonder how service users feel about Kinderman and promoters of CBT appropriating their voice. And what do they think about Kinderman setting such a tone for the debate with his attacks on critics? Surely, he must have thought that they were watching.

Kinderman brings to mind Henry McGrath, a British advocate of complementary and alternative medicine with whom I clashed at the 2013 International Psycho-Oncology Conference in Rotterdam.

Me: Do you not think it is outrageous when practitioners of complementary and alternative medicine cite users’ experience with alternative treatments like the coffee enemas that Prince Charles favors? Do you not worry that this kind of nonsense can discourage desperate cancer patients from seeking effective but unpleasant treatments?

Henry: But I have certainly heard from users that they have gotten considerable benefit from coffee enemas. I cannot speak to their effectiveness.

Me: Really? Do they prefer cappuccino, lattes, or merely double espressos?

Some of Kinderman’s blog is simply silliness and can be read in good fun. But then he says:

The problem with many trials, and therefore with meta-analyses too, is that professionals decide in advance what they are going to measure and what counts as a ‘good’ outcome.

This is a rejection of the uncontroversial idea that investigators in clinical trials should commit themselves to primary outcomes and analyses in preregistration before the trials are done or the meta-analyses are conducted. Psychotherapy research is plagued by investigators administering a battery of outcome measures and then choosing, after the trial, only the most favorable to emphasize.
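
To see why this matters, here is a quick simulation with entirely made-up data, not any trial’s: when there is no true treatment effect at all, the chance that at least one of ten outcome measures crosses p < .05 by luck alone approaches 40 percent.

```python
# A quick simulation of the cherry-picking problem: with ten outcome
# measures and NO true treatment effect, the chance that at least one
# looks "significant" approaches 1 - 0.95^10, i.e., roughly 40%.
import math
import random
import statistics

random.seed(1)

def null_trial(n=30, outcomes=10, z_crit=1.96):
    """Does any of `outcomes` truly null measures cross p < .05?"""
    for _ in range(outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]  # treatment arm
        b = [random.gauss(0, 1) for _ in range(n)]  # control arm
        se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        if abs(statistics.mean(a) - statistics.mean(b)) / se > z_crit:
            return True
    return False

trials = 2000
hits = sum(null_trial() for _ in range(trials))
print(f"At least one 'significant' outcome: {hits / trials:.0%}")
```

Preregistering a single primary outcome is precisely the safeguard against this, which is why dismissing it as professionals deciding “in advance” what counts gets the logic backwards.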

Does Kinderman have no appreciation for what Archie Cochrane accomplished? Or for Ben Goldacre’s campaigning for pharmaceutical companies to preregister and report all trials?

Kinderman titles his blog “a national scandal…” From across the pond, I think that the British national scandal is that there is such ignorance among British clinical psychologists about basic science- and evidence-based argumentation. He ends with a lament that

Despite being born to black sharecropper parents, Peter Kinderman appears to be white as an adult.

The issue is not one of overselling, it’s that psychological therapies are shamefully underprovided.

Maybe, Peter, that is what the accumulation of evidence will justify, but you are doing very poorly with the evidence at hand.

The comments by Liverpool psychologist Richard Bentall are more disconcerting than Kinderman’s. He grunts “Duh” and dismisses arguments and evidence with “You don’t understand” without specifying what is misunderstood. Like Kinderman and Cooke, Bentall relies heavily on argument by declaration. This or that point with which he disagrees is demolished by declaring it demolished. I get the sense that for these Brits at least, reading their blogs and comments is supposed to be strictly a spectator sport for readers.

Reacting to the first postings of criticism of his pontifications, Bentall angrily responded:

I’m a bit tired of this debate, which is being pursued by a lot of people who don’t understand the pragmatics of RCTs, or who have never themselves tried to do anything to improve the well-being of psychiatric patients, or who are not clinicians and have never been in the position of not knowing what to do when faced with a patient who seems to be doing badly on standard treatments (if Keith Laws and I arrive together at St Peter’s Gate and there’s only room for one more entrant I won’t be too anxious). In a such a situation, a sensible clinician will ask herself ‘is CBT worth a shot?’ and the answer will be ‘yes’.

There is so much that Bentall says that I would like to dispute, or at least demand the evidence for his claims so everyone can decide for themselves. For instance, he claims that treatment as usual is conservative and reduces the possibility of finding an effect for an active treatment. Really? Where is the evidence?

Rory Byrne, one of the authors of the Lancet study, unleashed a lot of vituperation in the comments, but then someone other than Byrne quickly deleted his comments.

Another commentator stated:

 One of the biggest problems is that we are so dominated by a medical discourse that it is as if we can employ no other language in our critique of psychotherapeutic endeavour. It may be the case that some researchers believe that it is only by ‘medicalising’ research and attempting to think of psychotherapy as being akin to a drug, that they think their efforts will be taken seriously. Doubtless, they are perhaps more likely to gain research monies if they do this. Like it or not, psychotherapy is not a medical treatment and should not be evaluated as such.

Really? A solid RCT represents not medicalization, but the fairest testing of whether an intervention works. A well-done psychotherapy RCT is well-done experimental psychology.

Update: After being away from his blog for a while, Peter Kinderman has reemerged with a complaint:

 I am a little disturbed by the tone of some of the comments, and I think that personal invective is inappropriate and should be avoided…Let’s retain some dignity here.

A call for dignity? Peter, how disingenuous of you. You were the one who shamelessly appropriated the voices of those who are not “white, male, academics” as well as “users of services.” It was you who sweepingly characterized skeptics about the efficacy of CBTp as whores for the pharmaceutical industry. I think you should reflect on the tone that you set and the confusing message you give to consumers and their families.


BMJ Rapid Responses, PubMed Commons, and Blogging

Why I said no to revising my submission to BMJ Rapid Response

I said that I would submit my last blog post to BMJ Rapid Responses and I did. But here is the response that I received:

Dear Professor Coyne,

Thank you for your rapid response (copied below).

We would like to post if you would you resubmit concentrating on the science. The talk of spinning data suggests that the authors have a hidden agenda, which we cannot post without proof and is anyway difficult to prove and tends to add more heat than light to a debate..

Thank you very much for your help.

Best wishes,

Sharon

Sharon, thank you for consideration of my Rapid Response. After giving your condition serious thought, I have decided not to resubmit.

Posting a Rapid Response at the BMJ website has advantages beyond the obvious one of being considered for publication in the print edition. Anyone going to the BMJ website and looking at an article will see that a response has been made and can choose to read it. BMJ has been an important innovator in this respect. It should serve as a model for the many journals lagging far behind in post-publication peer review.

Sharon, as you are well aware, I have taken advantage of the BMJ Rapid Response option numerous times. On some occasions, you have suggested that I had to tone something down in order to be posted. I have always done so and, in hindsight, sometimes with beneficial effects.

Rapid Responses used to be the only game in town for someone wanting to respond to a BMJ article. A commentator had to comply with requests to edit comments or risk the post disappearing into oblivion.

But then blogs came along as an alternative offering a more unfettered opportunity to comment. Blogs have usually carried the disadvantages of not being peer-reviewed before posting, not having the same access afforded a Rapid Response attached to an article, and not being archived or indexed. But things are changing, and some of these disadvantages are beginning to be eliminated.

Then PubMed Commons emerged, which allowed posting of comments that would be available to anyone accessing a BMJ article, or any other of 23 million articles, through PubMed. It has fewer restrictions and grants more discretion to commentators. It also allows commentators to post links to blog posts, where points can be elaborated.

PubMed Commons is a challenging alternative to letters to the editor and also takes away control of comments from editors and journals. It competes with BMJ Rapid Responses, but I would not be surprised if it stimulated other journals to adopt the model of BMJ to better accommodate post publication peer review.

It may seem the high ground to insist that Rapid Responses stick to the science, but it is naïve and misleading. BMJ, like Nature and Science, does not simply publish data, and certainly not all of the data that the projects it publishes generate. The data that are presented are selected, framed, and interpreted. Ironically, the author of the BMJ article claiming the link between fast food outlets and obesity has stated elsewhere that you can never take authors’ role out of the data they choose to present and interpret in their articles, and that authors have a strong hand in creating or knocking down “obesogenic realities.”

As for the issue of “spin,” the concept can be operationalized and scientifically studied. Numerous examples of spin can be cited. For instance, it has been shown that much exaggeration in media coverage can be traced to spin in the abstracts of scientific articles. Over a series of blog posts, I showed that was the case with a recent Lancet study of CBT for schizophrenia (1, 2, 3). And before that, there was that churnaled study of a breakthrough blood test for postpartum depression.

It is in the interest of science and the public to point out spin in the abstracts and bodies of articles and, in the case of this particular BMJ article, to point to discrepancies between what is said in the abstract and what is said in the results section. Too often, readers and the media rely only on abstracts, and few people bother to compare abstracts with what was actually said in the text, which may substantially contradict or at least qualify what is said in the abstracts.

My intended Rapid Response raised methodological and statistical points about key aspects of the article on fast food outlets and obesity, but it was also intended to alert readers to the spin in the abstract of the BMJ article and to invite readers to decide for themselves.

But I did more than complain about the author; I complained about policies of BMJ that serve to encourage, reward, and maybe even insist on spin. I have in mind the requirement that cross-sectional observational studies claim clinical or public health importance, particularly in the absence of a requirement that they preregister hypotheses, outcomes to be examined, candidate covariates, and analytic plans.

So this time, at least, I will sacrifice the advantages that go with posting a Rapid Response at the BMJ website, but I will keep writing a series of blog posts about the article. When the post at PLOS Mind the Brain has been uploaded, I will then post a PubMed Commons comment. Anyone accessing the BMJ article through PubMed will have the opportunity to view and respond to my comment and to learn about it in my blog.

PubMed Commons is truly revolutionary. It builds on the earlier act of PubMed making MEDLINE abstracts universally accessible, no longer limited to academics with the gateway of university libraries. Previously, journals, maybe even the BMJ, fought to use copyright of abstracts to keep this from happening. I recall cheering the dramatic moment on American national TV when Vice President Al Gore publicly made use of PubMed to learn more about his wife’s illness.

BMJ Rapid Responses has also been revolutionary in its conception. But now BMJ must respond to the challenge posed by PubMed Commons or risk being left behind. I am sure that the journal is up to the task, and I again thank you for offering the opportunity to revise my proposed BMJ Rapid Response.

Yours,

Jim Coyne

Takeaway food outlet/obesity link: Spun data, faulty policy recommendations in BMJ

As I will review in an upcoming post at PLOS Mind the Brain, the link between the availability of fast food outlets and obesity is weak and probably explained by a variety of other factors, including poverty and restrictions on opportunities to purchase, store, cook, and consume food from supermarkets, as well as competing demands and preferences.

My Mind the Brain post will discuss some of the reactions that surprised me when I tweeted about this BMJ article.

My expression of skepticism about the ethics or effectiveness of restricting people’s access to fast food outlets encountered mostly hostile responses. I even lost a few followers. I realized that, because it was early morning and I was in Europe, most of the responses were coming from the UK or continental Europe while Americans were still sleeping.

I later posted something on Facebook, including a link to the article. There the responses were a more sympathetic “take your hands off my slice of pizza” coming from Americans. There was much more of a sense that eating is a basic human need, access to food is a basic human right, and restrictions set on that right can become a matter of social justice, particularly when the restrictions occur without the involvement of those who are most affected. Of course, eating is very different from smoking or the misunderstood Second Amendment right “to bear arms.”

I underestimated the cultural differences between the UK and the US in willingness to impose top-down restrictions on people’s freedom of choice. We are, after all, as Winston Churchill observed, two nations separated by a common language.

[The following was uploaded as a Rapid Response at BMJ, awaiting approval March 28, 2014]

There are incentives for researchers to produce findings that support decisions to which policy makers are already committed. Top down decisions to control obesity by restricting access to fast food outlets are a prime example.

Burgoine and colleagues (1) previously demonstrated how researchers can serve up illusory ‘obesogenic realities’ to order with arbitrary methodological and statistical decisions and selective reporting. Now Burgoine and other colleagues (2) have presented a prime example of this, declaring in their abstract: “Government strategies to promote healthier diets through planning restrictions for takeaway food could be most effective if focused around the workplace.”

This unqualified recommendation is based on a cross-sectional observational survey of a rural area of England. One needs to read the abstract carefully in conjunction with the actual results to recognize the spin being applied to results that even then do not justify this bold recommendation. The abstract selectively reports findings concerning exposure to fast food outlets and consumption of fast food in the work environment. These effects amount to only 5.3 g per day when individuals in the lowest quartile of exposure are compared with those in the highest. In the results section, we learn that the association for the home environment is more modest and not dose-response. Further, the association for exposure along commuting routes is nonsignificant.

These patterns are examined in multiple linear regression analyses vulnerable to the addition or exclusion of arbitrarily selected and poorly measured control variables. For instance, in the results section we learn: “In models that omitted supermarket exposure as a covariate, the associations between combined takeaway food outlet exposure, consumption of takeaway food, and body mass index were attenuated towards the null (web figs 5 and 6, upper right panels, respectively).”

The authors are to be faulted for failing to present basic bivariate relationships, for relying on complex multiple linear regression models of cross-sectional data that poorly capture environments or individuals, and for an abstract that does not present the results in a way that is accurate or representative.

However, the British Medical Journal shares blame for promoting faulty policy recommendations from weak evidence. The journal encourages authors of observational studies to identify clinical and public health implications without noting the strength of evidence or the need for randomized trials evaluating interventions to intervene between observational studies and policy recommendations. Readers need to ask: would BMJ have accepted this article if the authors had not made policy recommendations unwarranted by their data? If the article were nonetheless accepted, would similar media coverage have been generated by an appropriately modest acknowledgment that, within the limits of their data, the authors did not find strong support for limiting fast food outlets as a way of controlling obesity?

Authors of observational studies commonly make recommendations for clinical practice and policy without calling for an RCT (3). BMJ should stop its complicity by reconsidering its policy of calling for clinical and public health implications from observational studies. If the journal policy must persist, then a policy should be implemented of requiring preregistration of design and analysis plans for observational studies (4), just as is done with clinical trials. And the limitations of cross-sectional observational data should be frankly acknowledged, starting in abstracts.

1. Burgoine T, Alvanides S, Lake AA. Creating ‘obesogenic realities’; do our methodological choices make a difference when measuring the food environment? Int J Health Geogr 2013;12:1-9.
2. Burgoine T, Forouhi NG, Griffin SJ, Wareham NJ, Monsivais P. Associations between exposure to takeaway food outlets, takeaway food consumption, and body weight in Cambridgeshire, UK: population based, cross sectional study. BMJ 2014;348.
3. Prasad V, Jorgenson J, Ioannidis J, Cifu A. Observational studies often make clinical practice recommendations: an empirical evaluation of authors’ attitudes. J Clin Epidemiol 2013;66(4):361-366.
4. Thomas L, Peterson ED. The value of statistical analysis plans in observational research: defining high-quality research from the start. JAMA 2012;308(8):773-774.

Who’s to blame for inaccurate media coverage of study of therapy for persons with schizophrenia?

“I’m in competition with literally hundreds of stories every day, political and economic stories of compelling interest…we have to almost overstate, we have to come as close as we came within the boundaries of truth to dramatic, compelling statement. A weak statement will go no place.”

—Journalist interviewed for JA Winsten, Science and media: the boundaries of truth

Hyped, misleading media coverage of a study in Lancet of CBT for persons with unmedicated schizophrenia left lots of clinicians, policymakers, and especially persons with schizophrenia and their family members confused.

Did the study actually show that psychotherapy was as effective as medication for schizophrenia? NO!

Did the study demonstrate that persons with schizophrenia could actually forgo medication with nasty side effects and modest effectiveness and just get on with their lives with the help of CBT? NO!

In this blog post, I will scrutinize that media coverage and then distribute blame for its inaccuracies.

At PLOS Mind the Brain, I’ve been providing a detailed analysis of this complex study that was not particularly transparently reported. I will continue to do so shortly. You can consult that analysis here, but briefly:

The small-scale, exploratory study was a significant, but not overwhelming contribution to the literature. It showed that persons with unmedicated schizophrenia could be recruited to a clinical trial for psychotherapy. But it also highlighted the problems of trying to conduct such a study.

  • Difficulties getting enough participants resulted in a very mixed sample, combining young persons who had not yet been diagnosed with schizophrenia but who were in early intervention programs with older patients who were refusing medication after a long history of living with schizophrenia.
  • The treatment as usual condition combined settings with enhanced services, including family therapy and cognitive therapy, with other settings where anyone who refused medication might be denied any services. The resulting composite “treatment as usual” was not very usual and so did not make for a generalizable comparison.
  • Many of the participants in both the CBT and control groups ended up accepting medication before the end of the trial, complicating any effort to distinguish the effects of CBT from those of medication.
  • The trial was too small and had too many participants lost to follow-up to be used to determine effect sizes for CBT.
  • But, if we must: at the end of the treatment, there were no significant differences between persons randomized to CBT and those remaining in routine care!

The official press release from the University of Manchester was remarkably restrained in its claims, starting with its title:

Cognitive therapy “safe and acceptable” to treat schizophrenia

There were no claims of comparisons with medication.

And the press release contained a quote from the restrained and tentative editorial in Lancet written by Oliver Howes of the Institute of Psychiatry, London:

“Morrison and colleagues’ findings provide proof of concept that cognitive therapy is an alternative to antipsychotic treatment. Clearly this outcome will need further testing, but, if further work supports the relative effectiveness of cognitive therapy, a comparison between such therapy and antipsychotic treatment will be needed to inform patient choice. If positive, findings from such a comparison would be a step change in the treatment of schizophrenia, providing patients with a viable alternative to antipsychotic treatment for the first time, something that is sorely needed.”

But unfortunately the rest of this excellent editorial was, like the Lancet report of the study itself, locked behind a pay wall. Then came the BBC.

BBC coverage

Many of us first learned of this trial from a BBC story that headlined

“Talk therapy as good as drugs for schizophrenia.”

Inexplicably, when the BBC story was accessed a few days later, the headline had been changed to

“Talk therapy moderately effective for schizophrenia.”

There was no explanation, and the rest of the news item was not changed. Creepy. Orwellian.

The news item contained a direct quote from Tony Morrison, the principal investigator for the study:

We found cognitive behavioural therapy did reduce symptoms and it also improved personal and social function and we demonstrated very comprehensively it is a safe and effective therapy.

Wait a minute: Was it really demonstrated very comprehensively that CBT was an effective therapy in this trial? Are we talking about the same Lancet study?

The quote is an inaccurate summary of the findings of the study. But it is quite consistent with misrepresentations of the results of the study in the abstract. Here, as elsewhere in the media coverage that would be rolling out, almost no one seemed to scrutinize the actual results of the study; they only bought into the abstract.

Shilling at Science Media Centre: Thou shalt not shill.

Science Media Centre ran a briefing on the study for journalists and posted an

Expert reaction to CBT for schizophrenia

A joint quote [how do you get a joint quote?] from Professor Andrew Gumley, University of Glasgow, and Professor Matthias Schwannauer, University of Edinburgh, proclaimed the study “groundbreaking and landmark.” They praised its “scientific integrity,” citing the study’s pre-registration that prevented “cherrypicking” of findings to put the study in the best light.

As I’ll be showing in my next Mind the Brain post, the “pre-registration” actually occurred after data collection had started. Preregistration is supposed to enforce specificity of hypotheses and precommit the investigators to evaluating particular primary outcomes at particular times. But the preregistration of this trial avoided designating the particular timepoint at which outcome would be assessed. So it did not prevent cherry picking.

Instead, it allowed the authors to pick the unusual strategy of averaging outcome across five assessments. What they then claimed to represent the results of the trial was biased by:

  • a last assessment point when most patients were no longer followed,
  • many of the improved patients being on medication,
  • and a deterioration of the remaining control patients, rather than improvement in the intervention patients.

Again, let me remind everybody: at the end of the intervention period, the usual time point for evaluating a treatment, there were no differences between CBT and treatment as usual.
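
A toy illustration, with invented symptom scores rather than the trial’s data, shows how the averaging strategy can manufacture a “treatment effect” that is absent at the end of treatment, if the control group deteriorates only at late assessments when few patients remain in follow-up:

```python
# A toy illustration (invented numbers) of how averaging outcomes over
# all assessment points can create a "difference" that does not exist
# at the end of treatment, driven entirely by late control-group
# deterioration rather than improvement in the intervention group.
cbt     = [50, 46, 42, 42, 42]   # mean symptom scores at 5 assessments
control = [50, 46, 42, 45, 48]   # identical until late deterioration

end_of_treatment_diff = control[2] - cbt[2]       # assessment 3 = end of treatment
averaged_diff = sum(control) / 5 - sum(cbt) / 5   # averaged over all 5 points

print(f"Difference at end of treatment: {end_of_treatment_diff}")   # 0
print(f"Difference averaged over all points: {averaged_diff:.1f}")  # 1.8
```

The numbers are exaggerated for clarity, but the mechanism is the one described above.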

I have previously cited this joint quote from Gumley and Schwannauer as evidence that experts were weighing in about this study without being familiar with it. I apologize, I was wrong.

Actually, these two guys were quite familiar with the study, because they are collaborators with Tony Morrison in a new study that is exactly the kind they ended up proposing as the needed next step. They weren’t unfamiliar; they were hiding a conflict of interest in praising the study and calling for what they were already doing with Tony Morrison. Come on, Andy and Matt, no shilling allowed!

When everybody else was getting drunk with enthusiasm, somebody had to become the designated driver and stay sober.

As it has done in the past when there is a lot of media hype, NHS Choices offered an exceptionally sophisticated, restrained assessment of the study. This source missed the lack of differences between intervention and control groups at the end of treatment, but it provided an exceptionally careful review of the actual study, not just its abstract. It ended up catching a number of other important limitations that almost nobody seemed to be noticing. And it warned:

However, importantly, it does not show that it is better or equivalent to antipsychotic medication. The participants continued to have moderate levels of illness despite receiving therapy.

Media coverage got more exaggerated in its headlines.

Wired.UK.com: “Talking therapy could help schizophrenic sufferers that refuse drugs”

This story added a provocative direct quote from Tony Morrison in a sidebar:

“Without medication the results were almost exactly the same as the average effect size you see in antipsychotic drug trials”

This is simply an untrue and irresponsible statement.

TopNews Arab Emirates: CT Acts as Great Treatment for Schizophrenia Spectrum Disorder Patients Unwilling to take Antipsychotics

Some of the most misleading headlines appeared at sources that consumers would think they could trust:

Medical News Today Cognitive therapy ‘an effective treatment option’ for schizophrenia

West (Welfare, society, territory): Cognitive therapy better than drugs for schizophrenia.

GPonline.com: CBT for schizophrenia an effective alternative to antipsychotics, study finds

Nursing Times: Study suggests drug-free therapy hope for schizophrenia

And the prize goes to AAAS’s Science for the most horrible misrepresentation of the study and its implications:

Science: Schizophrenia: Time to Flush the Meds?

So who’s to blame? Lots of blame to go around

It has been empirically demonstrated that much of the distortion in medical and scientific journalism starts with distorted abstracts. That was certainly the case here, where the abstract gave a misleading portrayal of the findings of the study that persisted unchallenged and even got amplified. The authors can be faulted, but so can Lancet for not enforcing CONSORT for abstracts, or even its requirement that trial designs and primary outcomes be preregistered.

The authors should be further faulted for their self-promoting but baseless claims that their study indicated anything about a comparison between cognitive therapy and neuroleptics. The comparison did not occur, and the sample in this study was very different from the population studied in research on neuroleptic medication.

Furthermore, if Lancet is going to promote newsworthy studies, including in this case with podcasts, it has a responsibility to take down the paywalls keeping readers from examining the evidence for themselves. BMJ has already adopted a policy of open access for clinical trials and meta-analyses. It is time other journals followed suit.

For me, one of the most striking things about the media coverage is its boring, repetitive sameness. Lazy journalists simply churnaled or outright plagiarized what they found on the web. The competition was not in content, but in coming up with outrageous headlines.

It is absolutely shameful that some of the most outrageous headlines were associated with sources that ostensibly deserve respect. In this instance, they do not. And someone should reel in their marketing departments or whoever chooses their headlines.

There is a lot of blame to go around, but there is also room for some praise.

NHS Choices gets a prize for responsible journalism and careful research that goes beyond what the investigators themselves say about their study. I recommend that the next time the media get into a frenzy about a particular piece of medical research, consumers run and look at what NHS Choices has to say.

Stay tuned for my upcoming PLOS Mind the Brain post, which will continue the discussion of this trial. Among other things, I will show that the investigator group knew what they were doing in constructing such an inappropriate control/comparison group. They gave evidence that they believed a more suitable comparison, with befriending or simple supportive counseling, might not have revealed an effect. That is a pity, because I think investigators should go for appropriate comparisons rather than for getting an effect.

I will also identify the standards that the investigator group for this trial has applied to other research. I will show that if they applied the same standards to their own study, it would be seriously deficient except as a small, preliminary, exploratory study that cannot be used to estimate the effects of cognitive therapy. The study is nonetheless important in showing how hard it will be to do a rigorously controlled study of this most important question.

Happyism: Nick Brown’s Attaining national happiness through chemistry


Donald Trump: “I don’t pursue happiness, I pursue money.”
Phil: “Why?”
Donald Trump: “Because money buys me the things I want, such as material goods and pretty women.”
Phil: “And why do you want these things?”
Donald Trump: “Because they make me happy.”

In Happyism: The Creepy New Economics of Pleasure, Deirdre N. McCloskey provides a history of the new happyology:

From the 1950s to the 1970s, economists such as George Katona and Bernard M.S. van Praag, and sociologists such as Hadley Cantril and Norman Bradburn, and then in the 1980s, on a big scale, psychologists such as Martin E.P. Seligman, Norbert Schwarz, Frank Fujita, Richard J. Davidson, the Diener family (Ed, Carol, Marissa, and Robert), and of course Daniel Kahneman re-started the once-abandoned project of measuring happiness quantitatively. In the 1990s, some economists, a subgroup in the new “behavioral economics,” delightedly joined the psychologists in measuring happiness as self-reported declarations of one’s level of happiness, and assigning self-reported numbers to them, and adding them up, and averaging them across people and eras. Some of the quantitative hedonists have taken to recommending governmental policy for you and me on the basis of their 1-2-3 studies; and some of them are having influence in and on the Obama administration.

And a thorough shredding of the non-science:

At the most lofty level of scientific method, the hedonicists cheekily and with foreknowledge mix up a “non-interval” scale with an “interval” scale. If you like the temperature in Chicago today better than the one on January 15, you might be induced by the interviewer to assign 2.76 to today and a 1.45 to January 15. But such an assignment is of course arbitrary in God’s eyes. It is not a measure in her view of the difference even in your heart (since to her all hearts are open) between a nice day and a cold day. By contrast, an interval scale, such as Fahrenheit or Celsius temperature on the two days in question, does measure, 1-2-3. God doesn’t care which scale you use for hedonics as long as it’s an interval scale. Non-interval scales merely rank (and classifications merely arrange). We couldn’t base a physics on asking people whether today was “hot, nice, or cold” and expect to get anything quantitative out of it.

Recording the percentage of people who say they are happy will tell you something, to be sure, about how people use words. It’s worth learning. We cannot ever know whether your experience of the color red is the same as mine, no matter how many brain scans we take. (The new hedonism is allied, incidentally, with the new brain science, which merrily takes the brain for the mind.) Nor can we know what red or happiness 1-2-3 is in the mind of God, the “objective happiness” that Kahneman speaks of as though he knew it. We humans can only know what we claim to see and what we can say about it. What we can know is neither objective nor subjective, but (to coin a word) “conjective.” It is what we know together in our talk, such as our talk about our happiness. Con-jective: together thrown. No science can be about the purely objective or the purely subjective, which are both unattainable.

And here comes the knockout punch:

If a man tormented by starvation and civil war in South Sudan declares that he is “happy, no, very happy, a regular three, mind you,” we have learned something about the human spirit and its sometimes stirring, sometimes discouraging, oddity. But we inch toward madness if we go beyond people’s lips and claim to read objectively, or subjectively, their hearts in a 1-2-3 way that is comparable with their neighbors or comparable with the very same South Sudanese man when he wins an immigration lottery and gets to Albany.

Happyism: The Creepy New Economics of Pleasure is an erudite long read. Don’t be fooled: do not be drawn in by its seductive start, a light Peanuts story about Lucy and Linus, unless you have a bit of time on your hands. It may be time well spent, but is there something you’d rather be doing that would make you, well, happy?

So, maybe you don’t have that kind of time to spare, but you are still up for being provoked to think differently about happyology, perhaps by articulating your vague discomfort with the whole thing.

Then here is a very smart but mercifully short blog post by Nick Brown (@sTeamTrae), the guy who went from relative obscurity to international attention with his thorough debunking of positive psychology’s core positivity ratio.

Nick explores the murky quantitative relationship between national happiness and antidepressant consumption. It may sound ponderous, but he is brief and will keep you amused.

Thanks for sharing, Nick.

Attaining national happiness through chemistry

Nick Brown

Apparently, Scandinavia is big in the UK right now.  (Something “foreign” is always big in the UK, it seems.)  And when something is big, the backlash will be along very soon, as exemplified by this Guardian article that I came across last week.  So far, so predictable.  But while I was reading the introductory section of that article, before getting to the dissection of the dark side of each individual Nordic country, this stood out:

[T]he Danes … claim to be the happiest people in the world, but why no mention of the fact they are second only to Iceland when it comes to consuming anti-depressants?

Hold on a minute.  When those international happiness/wellbeing surveys come out each year, Denmark is always pretty close to the top.  So I checked a couple of surveys of happiness (well-being, etc.).  The United Nations World Happiness Report 2013 [PDF] ranks Denmark #1.  The OECD Better Life Index 2013 rather coyly does not give an immediate overall ranking, but by taking the default option of weighting all the various factors equally, you get a list headed by Australia, Sweden, and Canada, with Denmark in seventh place (still not too shabby).  The OECD report adds helpful commentary; for example, here you can read that “Denmark, Iceland and Japan feel the most positive in the OECD area, while Turkey, Estonia and Hungary show lower levels of happiness.”

So what’s with the antidepressants?  Well, it turns out that the OECD has been researching that as well.  Here is a chart listing antidepressant consumption in 2011, in standardised doses per head of population per day, for 23 OECD member states.  Let’s see.  Denmark is pretty near the top.  (It’s not second, as mentioned in the Guardian article above, because the author of that piece was using the 2010 chart.)  And the other top consumers?  Iceland first (remember, Iceland is among the “most positive [countries] in the OECD area”), followed by Australia, Canada, and (after Denmark) Sweden.  That’s right: the top three countries for happiness according to the UN are among the top five consumers of antidepressants in the OECD’s survey(*).  And those countries showing “lower levels of happiness”?  Two of the three (Estonia and Hungary) are on the antidepressant list – right near the bottom.  Perhaps they’d be happier if they just took some more pills?

I decided to see if I could apply a little science here.  I wanted to quantify the relationship between antidepressant consumption and happiness/wellbeing/etc.  So I built a dataset (available on request) with a rank-order number for each of the 23 countries in the antidepressant survey, on each of several measures.  Then I asked SPSS to give me the Spearman’s rho correlation between consumption of antidepressants and each of these measures.  Here are the results:

[Figure: table of Spearman’s rho correlations between antidepressant consumption and the happiness/well-being measures]

Note that this is not a case of “everything being correlated with everything else” (Paul Meehl).  Only certain measures from the OECD survey are significantly correlated with antidepressant consumption.  (I encourage you to explore the measures that I didn’t include.)
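For readers who want to try the exercise themselves, here is a minimal sketch in Python rather than SPSS. This is my own illustration, not Nick’s actual workflow; the file name and column names are hypothetical placeholders for his dataset.

```python
# Minimal sketch of the rank-correlation exercise described above.
# Assumes a hypothetical CSV, "countries.csv", with one row per country:
# an antidepressant consumption column plus several well-being measures.
# File and column names are illustrative, not the actual dataset.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("countries.csv")  # the 23 OECD countries with consumption data

measures = ["un_happiness_rank", "life_satisfaction", "self_reported_health"]
for m in measures:
    # Spearman's rho is computed on ranks, so feeding it raw scores or
    # rank-order numbers gives the same answer.
    rho, p = spearmanr(df["antidepressant_doses_per_day"], df[m])
    print(f"{m}: rho = {rho:.2f}, p = {p:.3f}")
```

With only 23 countries, the p-values deserve gentle handling; the point of the exercise is the sign and rough size of rho, not a confirmatory test.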

Ah, I hear you say, but this is only correlational.  When two variables, A and B, are correlated, there are usually several possible explanations.  A might cause B, B might cause A, or A and B might be caused by C.  So in this case, consuming antidepressants might make people feel happy and healthy; or, being happy and healthy might make people consume antidepressants; or, some other social factor might cause people to consume antidepressants and report that they feel happy and healthy.  I’ll let you decide which of those sounds plausible to you.
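To make that third possibility concrete, here is a toy simulation (again mine, not from the post) in which an unobserved shared factor C produces a sizeable A-B correlation even though neither A nor B causes the other:

```python
# Toy illustration of confounding: A and B are each driven by C plus
# independent noise, so they correlate without any causal link between them.
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=10_000)      # some unobserved social factor
a = c + rng.normal(size=10_000)  # e.g., antidepressant consumption
b = c + rng.normal(size=10_000)  # e.g., reported happiness

# With equal noise and signal variance, the expected correlation is 0.5.
print(np.corrcoef(a, b)[0, 1])
```

The simulated correlation of about .5 arises entirely from C, which is exactly why correlational national-level data cannot settle which story is right.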

Now, what does this prove?  Probably not very much; I’m not going to make any truth claims on the basis of some cute numbers.  After all, it’s been “shown” that autism correlates better than .99 with sales of organic food.  But here’s a thought experiment for you: Imagine what the positive psychology people would be telling us if the results had been the other way around — that is, if Australia and Denmark and Canada had the lowest levels of antidepressant consumption.  Do you think it’s just remotely possible that we might have heard something about that by now?

(*) The OECD has 34 member states, of which some, such as the USA and Switzerland, do not appear in the antidepressant consumption report.  All correlations reported in this post are based on comparisons in rank order among the 23 countries for which antidepressant consumption data are available.

Antidepressant consumption data here [XLS]
UN World Happiness Report 2013 here [PDF]
OECD Better Life Index 2013 data here