Five studies claimed to demonstrate efficacy of psychotherapy for depression in cancer patients

At my primary blog site, PLOS Mind the Brain, I am critically discussing an article; this post provides some resources to help readers form their own opinions.

Hart, S. L., Hoyt, M. A., Diefenbach, M., Anderson, D. R., Kilbourn, K. M., Craft, L. L., … & Stanton, A. L. (2012). Meta-analysis of efficacy of interventions for elevated depressive symptoms in adults diagnosed with cancer. Journal of the National Cancer Institute, 104(13), 990-1004.

The authors declare psychotherapy to be superior to control conditions in relieving the depressive symptoms of cancer patients and ready to be rolled out into routine care.

I hope that is the case, but I depend on evidence for my judgment. I withhold judgment when the evidence is insufficient. Indeed, if we decide too soon that we have enough evidence, we are discouraged from doing the research necessary to produce more evidence.

The meta-analysis is seriously flawed and the authors’ evaluation of the literature is premature. Their conclusion serves to promote the services of the sponsoring organization’s membership with the claim of being evidence-based, without the interventions having earned this status.

These authors claim to have found only five relevant studies, reported in six papers, after an exhaustive review of the literature. Three of the studies (1-3 below) are quite inappropriate for addressing whether psychotherapy is effective with cancer patients. They involve substantial reorganizations of care and provision of medication. Any effects of psychotherapy cannot be separated out.

In two of the three collaborative care studies, patients in the intervention group, but not the control group, received free treatment. Differences in whether patients had to pay for treatment probably explain the very low utilization by the control group. The remaining two studies are far too small to support firm recommendations.

In the PLOS Mind the Brain post, I referred to these studies by their numbers below.

1. Strong, V., Waters, R., Hibberd, C., Murray, G., Wall, L., Walker, J., … & Sharpe, M. (2008). Management of depression for people with cancer (SMaRT oncology 1): a randomised trial. The Lancet, 372(9632), 40-48.

http://www.lancet.com/journals/lancet/article/PIIS0140-6736(08)60991-5

This is one of the articles that should not have been counted as psychotherapy. I find no other articles in the literature that count this study as psychotherapy. I am sure that the authors would agree with me that they did a lot more than provide psychotherapy.

The study aimed to assess the efficacy and cost of Depression Care for People with Cancer, a nurse-delivered complex intervention that was designed to treat major depressive disorder in patients who have cancer.

The intervention is described as follows:

In addition to usual care, patients in the intervention group were offered a maximum of 10 one-to-one sessions over 3 months, preferably at the cancer centre but by telephone or at patients’ homes if they were unable to attend the centre.

The intervention, Depression Care for People with Cancer, included education about depression and its treatment (including antidepressant medication); problem-solving treatment to teach the patients coping strategies designed to overcome feelings of helplessness; and communication about management of major depressive disorder with each patient’s oncologist and primary-care doctor. For 3 months after the treatment sessions, progress was monitored by monthly telephone calls. This monitoring used the nine-item Patient Health Questionnaire (PHQ-9) to assess the severity of depression.

The intervention produced an overall modest reduction in depressive symptoms. This treatment effect was sustained at 6 and 12 months. The intervention also improved anxiety and fatigue but not pain or physical functioning.

The authors conclude that the intervention offers a feasible, acceptable, and potentially cost-effective model for the management of major depressive disorder in patients with cancer and other medical disorders who are attending specialist medical services.

2a. Ell, K., Xie, B., Quon, B., Quinn, D. I., Dwight-Johnson, M., & Lee, P. J. (2008). Randomized controlled trial of collaborative care management of depression among low-income patients with cancer. Journal of Clinical Oncology, 26(27), 4488-4496.

http://jco.ascopubs.org/content/26/27/4488.full.pdf

This is another collaborative care study that should not have been counted as psychotherapy. It is reported in two articles, the second being a longer-term follow-up.

It examined the effectiveness of the Alleviating Depression Among Patients With Cancer (ADAPt-C) collaborative care management for major depression or dysthymia.

Intervention patients had access for up to 12 months to a depression clinical specialist (supervised by a psychiatrist) who offered education, structured psychotherapy, and maintenance/relapse prevention support. The psychiatrist prescribed antidepressant medications for patients preferring or assessed to require medication.

Study patients were 472 low-income, predominantly female Hispanic patients with cancer age ≥ 18 years with major depression (49%), dysthymia (5%), or both (46%). Patients were randomly assigned to intervention (n = 242) or enhanced usual care (EUC; n = 230).

At 12 months, 63% of intervention patients had a 50% or greater reduction in depressive symptoms from baseline as assessed by the Patient Health Questionnaire-9 (PHQ-9) depression scale compared with 50% of EUC patients (odds ratio [OR] = 1.98; 95% CI, 1.16 to 3.38; P = .01).
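For readers who want to check the arithmetic behind such responder-rate odds ratios, here is a minimal sketch; the cell counts are reconstructed from the reported percentages and group sizes, and the published OR of 1.98 presumably reflects model adjustment, so the crude figure differs.

```python
# Crude (unadjusted) odds ratio for responder rates, reconstructed from the
# reported figures: 63% of 242 intervention vs 50% of 230 EUC patients.
# The published OR (1.98) is model-adjusted, so this crude value differs.
intervention_n, intervention_rate = 242, 0.63
control_n, control_rate = 230, 0.50

a = round(intervention_n * intervention_rate)  # intervention responders (~152)
b = intervention_n - a                         # intervention non-responders
c = round(control_n * control_rate)            # control responders (115)
d = control_n - c                              # control non-responders

crude_or = (a / b) / (c / d)
print(f"Crude OR = {crude_or:.2f}")            # roughly 1.70
```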

The study concludes that ADAPt-C collaborative care is feasible and results in significant reduction in depressive symptoms, improvement in quality of life, and lower pain levels compared with EUC for patients with depressive disorders in a low-income, predominantly Hispanic population in public sector oncology clinics.

2b. Ell, K., Xie, B., Kapetanovic, S., Quinn, D. I., Lee, P. J., Wells, A., & Chou, C. P. (2011). One-year follow-up of collaborative depression care for low-income, predominantly Hispanic patients with cancer. Psychiatric Services, 62(2), 162-170.

http://journals.psychiatryonline.org/article.aspx?articleid=102180

This is the follow-up report for the collaborative care study described above. It should not be counted as providing an estimate of the effect size for psychotherapy.

The study assessed longer-term outcomes of low-income patients with cancer (predominantly female and Hispanic) after treatment in a collaborative model of depression care or in enhanced usual care.

This RCT, conducted in “safety-net oncology clinics,” recruited 472 patients with major depression symptoms. Patients were randomly assigned to a 12-month intervention (a depression care manager and psychiatrist provided problem-solving therapy, antidepressants, and symptom monitoring and relapse prevention) or to enhanced usual care (control group) and were interviewed at 18 and 24 months after enrollment.

At 24 months, 46% of patients in the intervention group and 32% in the control group had a ≥50% decrease in depression score over baseline (odds ratio=2.09, 95% confidence interval=1.13—3.86; p=.02); intervention patients had significantly better social (p=.03) and functional (p=.01) well-being. Treatment receipt among intervention patients declined (72%, 21%, and 18% at 12, 18, and 24 months, respectively); few control group patients reported receiving any treatment (10%, 6%, and 13%, respectively). Significant differences in receipt of counseling or antidepressants disappeared at 24 months. Depression recurrence was similar between groups (intervention, 36%; control, 39%).

The study concludes that collaborative care reduced depression symptoms and enhanced quality of life; however, the results call for ongoing depression symptom monitoring and treatment for low-income cancer survivors.

3. Dwight-Johnson, M., Ell, K., & Lee, P. J. (2005). Can collaborative care address the needs of low-income Latinas with comorbid depression and cancer? Results from a randomized pilot study. Psychosomatics, 46(3), 224-232.

http://www.sciencedirect.com/science/article/pii/S0033318205700852

This is a modest pilot study that served as the basis for the large-scale study reported above.

Fifty-five low-income Latina patients with breast or cervical cancer and comorbid depression were randomly assigned to receive collaborative care as part of the Multifaceted Oncology Depression Program or usual care. Relative to patients in the usual care condition, patients receiving collaborative care were more likely to show ≥50% improvement in depressive symptoms as measured by the Personal Health Questionnaire (OR = 4.51, 95% CI=1.07–18.93). Patients in the collaborative care program were also more likely to show improvement in emotional well-being (increase of 2.15) as measured by the Functional Assessment of Cancer Therapy Scale than were those receiving usual care (decrease of 0.50) (group difference=2.65, 95% CI: 0.18–5.12). Despite health system, provider, and patient barriers to care, these initial results suggest that patients in public sector oncology clinics can benefit from onsite depression treatment.


If we exclude the three collaborative care studies above, as we should, we are left with these two modest studies.

Savard, J., Simard, S., Giguère, I., Ivers, H., Morin, C. M., Maunsell, E., … & Marceau, D. (2006). Randomized clinical trial on cognitive therapy for depression in women with metastatic breast cancer: psychological and immunological effects. Palliative & Supportive Care, 4(3), 219-237.

http://journals.cambridge.org/abstract_S1478951506060305

Forty-five women were randomly assigned to either individual cognitive therapy (CT) or a waiting-list control (WLC) condition. The CT condition comprised eight weekly sessions and three booster sessions administered at 3-week intervals following the end of treatment.

Patients treated with CT had significantly lower scores on the Hamilton Depression Rating Scale at posttreatment compared to untreated patients, as well as reductions in associated symptoms including anxiety, fatigue, and insomnia. These effects were well sustained at the 3- and 6-month follow-up evaluations. CT for depression did not appear to have a significant impact on immune functioning.

Evans, R. L., & Connis, R. T. (1995). Comparison of brief group therapies for depressed cancer patients receiving radiation treatment. Public Health Reports, 110(3), 306.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1382125/pdf/pubhealthrep00054-0076.pdf

A total of 72 depressed cancer patients were randomly assigned to one of three conditions: cognitive-behavioral treatment, social support, or a no-treatment control condition. Before and after the intervention and at 6-month follow-up, study participants were individually assessed using measures of symptom distress. Relative to the comparison group, both the cognitive-behavioral and social support therapies resulted in less depression, hostility, and somatization.

The social support intervention also resulted in fewer psychiatric symptoms and reduced maladaptive interpersonal sensitivity and anxiety. It was concluded that both group therapies can reduce symptoms of distress for depressed persons undergoing radiation treatment for cancer. Both forms of therapy resulted in improvements in psychosocial function (compared with no treatment at all), but the social support groups demonstrated more changes that were still evident at 6-month follow-up.

This article considered the support group to be an active intervention. Many studies of psychotherapy would consider such a group to be a control condition. We certainly did in a study of primary care patients. If the support group is reclassified as a comparison/control condition for cognitive-behavioral treatment, then the cognitive-behavioral treatment is left with a negative effect size.

Yup, that’s all folks. Do you think these studies are sufficient evidence to justify sweeping conclusions and rolling out psychotherapy into routine cancer care without further ado?


How committed is British clinical psychology to science and evidence?

Although some psychoanalysts claim I am part of a plot of CBT proponents against them, I am not and have never been a CBT therapist. Indeed, Aaron T. Beck and I became friends because of his overtures and his appreciation for my sustained critique of his cognitive therapy for depression.

I respect the commitment of American CBT researchers and clinicians to evidence and to both evidence- and science-based arguments. They are at the forefront of advocacy for evidence-based practice in the United States.

I had never thought much about differences between the CBT community in the US and in the UK. I had limited exposure to CBT in the UK, other than a visit to the University of Exeter where we had some lively debate followed by exceptionally good partying. I was left thinking that the CBT community in the UK is a bit different from that in the United States. They were more nondefensive in hearing criticism (after all, they had invited me to give a keynote) and passionate in countering it. I had so much fun that I felt guilty accepting my speaker’s fee.

None of my past experience prepared me for the debate in the UK concerning CBT for persons with unmedicated schizophrenia.

When I took aim at both the press coverage and the Lancet article in a blog post, it was promptly hacked by someone who had intimate knowledge of the study and who first claimed to be one of the authors before retracting that claim. The problem was so bad that I complained to a UK Internet provider, which traced the source of the hacking to the neighborhood of one of the authors of that study. Apparently, this IP address had been the object of numerous complaints from others.

Now there is an embarrassing blog post that further demonstrates a lack of commitment to science and evidence by British clinical psychology.

A national scandal: psychological therapies for psychosis are helpful, but unavailable

The authors are Peter Kinderman and Anne Cooke, and they approvingly cite the recent Lancet study as indicating that CBT is a promising alternative to antipsychotic medication for treatment of schizophrenia.

 In view of the downsides of antipsychotics it comes as something of a relief that there is a possible alternative.  Psychological approaches such as cognitive behaviour therapy (or CBTp, the ‘p’ standing for psychosis) have become increasingly popular. NICE (the National Institute for Care Excellence) is sufficiently convinced of the effectiveness of these approaches to recommend that they should be offered to everyone with a diagnosis of schizophrenia.  Traditionally they have been offered in addition to drugs, but a recent trial suggests that they might also be promising as an alternative.

If only that were true, a lot of us would be happier. But the study is basically a null trial dressed up as positive, starting with the hyped abstract.

There are formal standards for evaluating the process of producing guidelines and their output recommendations. Judged by these criteria, the NICE recommendations are woefully deficient and represent the consensus of professionals with self-interests at stake, rather than carefully assembled and considered best evidence, with all of its limitations kept in mind. Yet British psychologists seem hell-bent on waxing enthusiastic about these NICE guidelines while abandoning any balance, suggesting there must be an either/or choice between CBT and antipsychotic medication.

Kinderman repeatedly argues from authority. I challenged him to refute the arguments contained in my PLOS Mind the Brain blog posts (1, 2) about the Lancet trial not producing usable data, but I cannot expect a response. He is a rather retiring fellow once evidence is introduced.

Turning to the Maudsley debate, Kinderman finds it most worthy of comment that

all four debaters were white, male, middle class academics.

I thought that was a funny thing for a white British boy to say. Certainly he would not get away with this in Philly. But then I looked on Google and found that, like Steve Martin in The Jerk, Kinderman had been a poor black child born to sharecroppers.

Kinderman also dismissed skeptics of CBT for psychosis as “NHS clinicians with offices overflowing with drug company freebies.” I do not know if the audience at the Maudsley debate was taking notes with pens provided by the pharmaceutical companies; I only have the audio podcast. I think that Kinderman’s raising of conflict of interest is an excellent idea, but it should be extended to the many psychologists who benefit from premature marketing of CBT for psychosis and those who hope for increases in demand for their services.

Kinderman repeatedly challenges evidence with anecdote. He presents himself as a champion of the Service Users (SU) perspective and suggests that random reports from the community trump systematic assessment of their response to treatment.

I wonder how service users feel about Kinderman and promoters of CBT appropriating their voice. And what do they think about Kinderman setting such a tone for the debate with his attacks on critics? Surely, he must have thought that they were watching.

Kinderman brings to mind Henry McGrath, a British advocate of complementary and alternative medicine with whom I clashed at the 2013 International Psycho-Oncology Conference in Rotterdam.

Me: Do you not think it is outrageous when practitioners of complementary and alternative medicine cite users’ experience with alternative treatments like the coffee enemas that Prince Charles favors? Do you not worry this kind of nonsense can discourage desperate cancer patients from seeking effective, but unpleasant, treatments?

Henry: But I have certainly heard from users that they have gotten considerable benefit from coffee enemas. I cannot speak to their effectiveness.

Me: Really? Do they prefer cappuccino, lattes, or merely double espressos?

Some of Kinderman’s blog is simply silliness and can be read in good fun. But then he says

The problem with many trials, and therefore with meta-analyses too, is that professionals decide in advance what they are going to measure and what counts as a ‘good’ outcome.

This is a rejection of the uncontroversial idea that investigators in clinical trials should commit themselves to primary outcomes and analyses in preregistration that occurs before the trials are done or the meta-analyses are conducted. Psychotherapy research is plagued by investigators having a battery of outcome measures and then choosing after the trial only the most favorable to emphasize.

Does Kinderman have no appreciation for what Archie Cochrane accomplished? Ben Goldacre’s campaigning for pharmaceutical companies to preregister and report all trials?

Kinderman titles his blog “a national scandal…” From across the pond, I think that the British national scandal is that there is such ignorance among British clinical psychologists about basic science- and evidence-based argumentation. He ends with a lament that

[Photo caption: Despite being born to black sharecropper parents, Peter Kinderman appears to be white as an adult.]

The issue is not one of overselling, it’s that psychological therapies are shamefully underprovided.

Maybe, Peter, that is what the accumulation of evidence will justify, but you are doing very poorly with the evidence at hand.

The comments by Liverpool psychologist Richard Bentall are more disconcerting than Kinderman’s. He grunts “Duh” and dismisses arguments and evidence with “You don’t understand” without specifying what is misunderstood. Like Kinderman and Cooke, Bentall relies heavily on argument by declaration. This or that point with which he disagrees is demolished by declaring it demolished. I get the sense that for these Brits, at least, reading their blogs and comments is supposed to be strictly a spectator sport for readers.

Reacting to the first postings of criticism of his pontifications, Bentall angrily responded

I’m a bit tired of this debate, which is being pursued by a lot of people who don’t understand the pragmatics of RCTs, or who have never themselves tried to do anything to improve the well-being of psychiatric patients, or who are not clinicians and have never been in the position of not knowing what to do when faced with a patient who seems to be doing badly on standard treatments (if Keith Laws and I arrive together at St Peter’s Gate and there’s only room for one more entrant I won’t be too anxious). In a such a situation, a sensible clinician will ask herself ‘is CBT worth a shot?’ and the answer will be ‘yes’.

There is so much that Bentall says that I would like to dispute, or at least to demand the evidence for his claims so everyone can decide for themselves. For instance, he claims that treatment as usual is conservative and reduces the possibility of finding an effect for an active treatment. Really? Where is the evidence?

Rory Byrne, one of the authors of the Lancet study, unleashed a lot of vituperation in the comments, but then someone other than Byrne quickly deleted his comments.

Another commentator stated

 One of the biggest problems is that we are so dominated by a medical discourse that it is as if we can employ no other language in our critique of psychotherapeutic endeavour. It may be the case that some researchers believe that it is only by ‘medicalising’ research and attempting to think of psychotherapy as being akin to a drug, that they think their efforts will be taken seriously. Doubtless, they are perhaps more likely to gain research monies if they do this. Like it or not, psychotherapy is not a medical treatment and should not be evaluated as such.

Really? A solid RCT represents not medicalization, but the fairest test of whether an intervention works. A well-done psychotherapy RCT is well-done experimental psychology.

Update: after being away from his blog for a while, Peter Kinderman has reemerged with a complaint

 I am a little disturbed by the tone of some of the comments, and I think that personal invective is inappropriate and should be avoided…Let’s retain some dignity here.

A call for dignity? Peter, how disingenuous of you. You were the one who shamelessly appropriated the voices of those who are not “white, male, academics,” as well as those of “users of services.” It was you who sweepingly characterized skeptics about the efficacy of CBTp as whores for the pharmaceutical industry. I think you should reflect on the tone that you set and the confusing message you give to consumers and their families.


BMJ Rapid Responses, PubMed Commons, and Blogging

Why I said no to revising my submission to BMJ Rapid Response

I said that I would submit my last blog post to BMJ Rapid Responses, and I did. But here is the response that I received:

Dear Professor Coyne,

Thank you for your rapid response (copied below).

We would like to post if you would you resubmit concentrating on the science. The talk of spinning data suggests that the authors have a hidden agenda, which we cannot post without proof and is anyway difficult to prove and tends to add more heat than light to a debate..

Thank you very much for your help.

Best wishes,

Sharon

Sharon, thank you for consideration of my Rapid Response. After giving your condition serious thought, I have decided not to resubmit.

Posting a Rapid Response at the BMJ website has advantages beyond the obvious one of being considered for publication in the print edition. Anyone going to the BMJ website and looking at an article will see that a response has been made and can choose to read it. BMJ has been an important innovator in this respect. It should serve as a model for the many journals lagging far behind in post-publication peer review.

Sharon, as you are well aware, I have taken advantage of the BMJ Rapid Response option numerous times. On some of those occasions, you have suggested that I tone something down in order to be posted. I have always done so and, in hindsight, sometimes with beneficial effects.

Rapid Responses used to be the only game in town for someone wanting to respond to a BMJ article. A commentator had to comply with requests to edit comments or risk the post disappearing into oblivion.

But then blogs came along as an alternative, offering a more unfettered opportunity to comment. Blogs have usually carried the disadvantages of not receiving peer review before or after posting, not having the same access afforded a Rapid Response attached to an article, and not being archived or indexed. But things are changing, and some of these disadvantages are beginning to be eliminated.

Then PubMed Commons emerged, which allowed posting of comments that would be available to anyone accessing a BMJ article, or any of the other 23 million articles, through PubMed. It has fewer restrictions and grants more discretion to commentators. It also allows commentators to post links to blog posts, where points can be elaborated.

PubMed Commons is a challenging alternative to letters to the editor and also takes away control of comments from editors and journals. It competes with BMJ Rapid Responses, but I would not be surprised if it stimulated other journals to adopt the model of BMJ to better accommodate post publication peer review.

It may seem the high ground to insist that Rapid Responses stick to the science, but it is naïve and misleading. BMJ, like Nature and Science, does not simply publish data, and certainly not all of the data that the studies it reports generate. The data that are presented are selected, framed, and interpreted. Ironically, the author of the BMJ article claiming the link between fast food outlets and obesity has stated elsewhere that you can never take authors’ role out of the data they choose to present and interpret in their articles, and that authors have a strong hand in creating or knocking down “obesogenic realities.”

As for the issue of “spin,” the concept can be operationalized and scientifically studied. Numerous examples of spin can be cited. For instance, it has been shown that much exaggeration in media coverage can be traced to spin in the abstracts of scientific articles. Over a series of blog posts, I showed that was the case with a recent Lancet study of CBT for schizophrenia (1, 2, 3). And before that there was that churnaled study of a breakthrough blood test for postpartum depression.

It is in the interest of science and the public to point out spin in abstracts and in the body of articles, and, in the case of this particular BMJ article, to point to discrepancies between what is said in the abstract and what is said in the results section. Too often, readers and the media rely only on abstracts, and few people bother to compare abstracts with what was actually said in the text, which can substantially contradict or at least qualify what is said in the abstracts.

My intended Rapid Response raised methodological and statistical points about key aspects of the article on fast food outlets and obesity, but it was also intended to alert readers to the spin in the abstract of the BMJ article and to invite readers to decide for themselves.

But I did more than complain about the author; I complained about policies of BMJ that serve to encourage, reward, and maybe even insist on spin. I have in mind the requirement that cross-sectional observational studies claim clinical or public health importance, particularly in the absence of a requirement that they preregister hypotheses, outcomes to be examined, candidate covariates, and an analytic plan.

So this time, at least, I will sacrifice the advantages that go with posting a Rapid Response at the BMJ website, but I will keep writing a series of blog posts about the article. When the post at PLOS Mind the Brain has been uploaded, I will then post a PubMed Commons comment. Anyone accessing the BMJ article through PubMed will have the opportunity to view and respond to my comment and to learn about it in my blog.

PubMed Commons is truly revolutionary. It builds on the earlier act of PubMed making MEDLINE abstracts universally accessible, not just limited to academics with the gateway of university libraries. Previously, journals, maybe even the BMJ, fought to use copyright of abstracts to keep this from happening. I recall cheering the dramatic moment on American national TV when Vice President Al Gore publicly made use of PubMed to learn more about his wife’s illness.

BMJ Rapid Responses has also been revolutionary in its conception. But now BMJ must respond to the challenge posed by PubMed Commons or risk being left behind. I am sure that the journal is up to the task, and I again thank you for offering the opportunity to revise my proposed BMJ Rapid Response.

Yours,

Jim Coyne

Takeaway food outlet/obesity link: Spun data, faulty policy recommendations in BMJ

As I will review in an upcoming post at PLOS Mind the Brain, the link between the availability of fast food outlets and obesity is weak and probably explained by a variety of other factors, including poverty and restrictions on opportunities to purchase, store, cook, and consume food from supermarkets, as well as competing demands and preferences.

My Mind the Brain post will discuss some of the reactions that surprised me when I tweeted about this BMJ article.

My expression of skepticism about the ethics or effectiveness of restricting people’s access to fast food outlets encountered mostly hostile responses. I even lost a few followers. I realized that because it was early morning and I was in Europe, most of the responses were coming from the UK or continental Europe while Americans were still sleeping.

I later posted something on Facebook, including a link to the article. There the responses were a more sympathetic “take your hands off my slice of pizza,” coming from Americans. There was much more of a sense that eating is a basic human need, that access to food is a basic human right, and that restrictions set on that right can become a matter of social justice, particularly when the restrictions occur without the involvement of those who are most affected. Of course, eating is very different from smoking or the misunderstood Second Amendment right “to bear arms.”

I underestimated the cultural differences between the UK and the US in willingness to impose top-down restrictions on people’s freedom of choice. We are, after all, as Winston Churchill observed, two nations separated by a common language.

[The following was uploaded as a Rapid Response at BMJ, awaiting approval March 28, 2014]

There are incentives for researchers to produce findings that support decisions to which policy makers are already committed. Top-down decisions to control obesity by restricting access to fast food outlets are a prime example.

Burgoine and colleagues [1] previously demonstrated how researchers can serve up illusory ‘obesogenic realities’ to order with arbitrary methodological and statistical decisions and selective reporting. Now Burgoine and other colleagues [2] have presented a prime example of this, declaring in their abstract: “Government strategies to promote healthier diets through planning restrictions for takeaway food could be most effective if focused around the workplace.”

This unqualified recommendation is based on a cross-sectional observational survey of a rural area of England. One needs to read the abstract carefully in conjunction with the actual results to recognize the spin being applied to results that, even then, do not justify this bold recommendation. The abstract selectively reports findings concerning exposure to fast food outlets and consumption of fast food in the work environment. These effects amount to only 5.3 g per day when individuals in the lowest quartile of exposure are compared with those in the highest. In the results section, we learn that the association for the home environment is more modest and not dose-response. Further, the association along commuting routes is nonsignificant.

These patterns are examined in multiple linear regression analyses vulnerable to the addition or exclusion of arbitrarily selected and poorly measured control variables. For instance, in the results section we learn: “In models that omitted supermarket exposure as a covariate, the associations between combined takeaway food outlet exposure, consumption of takeaway food, and body mass index were attenuated towards the null (web figs 5 and 6, upper right panels, respectively).”
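The general point, that regression coefficients can swing substantially with the inclusion or exclusion of a correlated covariate, is easy to demonstrate with simulated data. The sketch below is purely illustrative: the variable names and coefficients are hypothetical, chosen only to mimic the attenuation-toward-the-null pattern quoted above, not to reproduce the BMJ study’s actual model.

```python
import numpy as np

# Hypothetical simulation of covariate sensitivity in linear regression.
# This is NOT the BMJ study's model; all relationships are assumed.
rng = np.random.default_rng(0)
n = 100_000
supermarket = rng.normal(size=n)                    # correlated covariate
takeaway = 0.8 * supermarket + rng.normal(size=n)   # exposure of interest
bmi = 0.3 * takeaway - 0.5 * supermarket + rng.normal(size=n)

def takeaway_coef(*predictors):
    """OLS coefficient on the first predictor (takeaway exposure)."""
    X = np.column_stack((np.ones(n),) + predictors)
    beta, *_ = np.linalg.lstsq(X, bmi, rcond=None)
    return beta[1]

print("adjusting for supermarket:", takeaway_coef(takeaway, supermarket))  # ~0.30
print("omitting supermarket:     ", takeaway_coef(takeaway))               # ~0.06, near null
```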

The authors are to be faulted for failing to present basic bivariate relationships, for reliance on complex multiple linear regression models of cross-sectional data that poorly capture environments or individuals, and for an abstract that does not present the results in a way that is accurate or representative.

However, BMJ shares the blame for promoting faulty policy recommendations from weak evidence. The journal encourages authors of observational studies to identify clinical and public health implications without noting the strength of evidence or the need for randomized trials of interventions to come between observational studies and policy recommendations. Readers need to ask: would BMJ have accepted this article if the authors had not made policy recommendations unwarranted by their data? If the article were nonetheless accepted, would similar media coverage have been generated by an appropriately modest acknowledgment that, within the limits of their data, the authors did not find strong support for limiting fast food outlets as a way of controlling obesity?

Authors of observational studies commonly make recommendations for clinical practice and policy without calling for an RCT [3]. BMJ should stop its complicity by reconsidering its policy of calling for clinical and public health implications from observational studies. If the journal policy must persist, then it should require preregistration of design and analysis plans for observational studies [4], just as is done with clinical trials. And frank admission of the limitations of cross-sectional observational data should be clearly made, starting in abstracts.

01. Burgoine T, Alvanides S, Lake AA. Creating ‘obesogenic realities’; do our methodological choices make a difference when measuring the food environment? Int J Health Geogr 2013;12:1-9.
02. Burgoine T, Forouhi NG, Griffin SJ, Wareham NJ, Monsivais P. Associations between exposure to takeaway food outlets, takeaway food consumption, and body weight in Cambridgeshire, UK: population based, cross sectional study. BMJ 2014;348.
03. Prasad V, Jorgenson J, Ioannidis J, Cifu A. Observational studies often make clinical practice recommendations: an empirical evaluation of authors’ attitudes. J Clin Epidemiol 2013;66(4):361-366.
04. Thomas L, Peterson ED. The value of statistical analysis plans in observational research: defining high-quality research from the start. JAMA 2012;308(8):773-774.

Who’s to blame for inaccurate media coverage of a study of therapy for persons with schizophrenia?

“I’m in competition with literally hundreds of stories every day, political and economic stories of compelling interest…we have to almost overstate, we have to come as close as we came within the boundaries of truth to dramatic, compelling statement. A weak statement will go no place.”

—-Journalist interviewed for JA Winsten, Science and media: the boundaries of truth

Hyped, misleading media coverage of a study in Lancet of CBT for persons with unmedicated schizophrenia left lots of clinicians, policymakers, and especially persons with schizophrenia and their family members confused.

Did the study actually show that psychotherapy was as effective as medication for schizophrenia? NO!

Did the study demonstrate that persons with schizophrenia could actually forgo medication with nasty side effects and modest effectiveness and just get on with their life with the help of CBT? NO!

In this blog post, I will scrutinize that media coverage and then distribute blame for its inaccuracies.

At PLOS Mind the Brain, I’ve been providing a detailed analysis of this complex study that was not particularly transparently reported. I will continue to do so shortly. You can consult that analysis here, but briefly:

The small-scale, exploratory study was a significant, but not overwhelming, contribution to the literature. It showed that persons with unmedicated schizophrenia could be recruited to a clinical trial of psychotherapy. But it also highlighted the problems of trying to conduct such a study.

  • Difficulties getting enough participants resulted in a very mixed sample, combining young persons who had not yet been diagnosed with schizophrenia but who were in early intervention programs with older patients who were refusing medication after a long history of living with schizophrenia.
  • The treatment as usual combined settings with enhanced services, including family therapy and cognitive therapy, with other settings where anyone who refused medication might be denied any services. The resulting composite “treatment as usual” was not very usual and so did not make for a generalizable comparison.
  • Many of the participants in both the CBT and control groups ended up accepting medication before the end of the trial, complicating any attempt to distinguish the effects of CBT from those of medication.
  • The trial was too small, and had too many participants lost to follow-up, to be used to determine effect sizes for CBT.
  • But, if we must, at the end of the treatment, there were no significant differences between persons randomized to CBT and those remaining in routine care!

The official press release from the University of Manchester was remarkably restrained in its claims, starting with its title

Cognitive therapy “safe and acceptable” to treat schizophrenia

There were no claims of comparisons with medication.

And the press release contained a quote from the restrained and tentative editorial in Lancet written by Oliver Howes from the Institute of Psychiatry, London:

“Morrison and colleagues’ findings provide proof of concept that cognitive therapy is an alternative to antipsychotic treatment. Clearly this outcome will need further testing, but, if further work supports the relative effectiveness of cognitive therapy, a comparison between such therapy and antipsychotic treatment will be needed to inform patient choice. If positive, findings from such a comparison would be a step change in the treatment of schizophrenia, providing patients with a viable alternative to antipsychotic treatment for the first time, something that is sorely needed.”

But unfortunately the rest of this excellent editorial was, like the Lancet report of the study itself, locked behind a pay wall. Then came the BBC.

BBC coverage

Many of us first learned of this trial from a BBC story that ran under the headline

“Talk therapy as good as drugs for schizophrenia.”

Inexplicably, when the BBC story was accessed a few days later, the headline had been changed to

“Talk therapy moderately effective for schizophrenia.”

There was no explanation, and the rest of the news item was not changed. Creepy. Orwellian.

The news item contained a direct quote from Tony Morrison, the principal investigator for the study:

We found cognitive behavioural therapy did reduce symptoms and it also improved personal and social function and we demonstrated very comprehensively it is a safe and effective therapy.

Wait a minute: Was it really demonstrated very comprehensively that CBT was an effective therapy in this trial? Are we talking about the same Lancet study?

The quote is an inaccurate summary of the findings of the study. But it is quite consistent with the misrepresentations of the results in the abstract. Here, as elsewhere in the media coverage that was rolling out, almost no one seemed to scrutinize the actual results of the study; they only bought into the abstract.

Shilling at Science Media Centre: Thou shalt not shill.

Science Media Centre ran a briefing of the study for journalists and posted an

Expert reaction to CBT for schizophrenia

A joint quote [how do you get a joint quote?] from Professor Andrew Gumley, University of Glasgow, and Professor Matthias Schwannauer, University of Edinburgh, proclaimed the study “groundbreaking and landmark.” They praised its “scientific integrity,” citing the study’s pre-registration that prevented “cherrypicking” of findings to put the study in the best light.

As I’ll be showing in my next Mind the Brain post, the “pre-registration” actually occurred after data collection had started. Preregistration is supposed to enforce specificity of hypotheses and precommit the investigators to evaluating particular primary outcomes at particular times. But the preregistration of this trial avoided designating the particular timepoint at which outcome would be assessed. So it did not prevent cherry picking.

Instead, it allowed the authors to pick the unusual strategy of averaging outcome across five assessments. What they then claimed to be the results of the trial was biased by

  • a last assessment point when most patients were no longer followed,
  • many of the improved patients being on medication,
  • and results affected by a deterioration of the remaining control patients, not improvement in the intervention patients.

Again, let me remind everybody: at the end of the intervention period (the usual time point for evaluating a treatment), there were no differences between CBT and treatment as usual. The toy simulation below illustrates how averaging across assessments can manufacture a difference that is absent at end of treatment.
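The numbers here are entirely hypothetical and are meant only to show the mechanism, not to re-analyze the Lancet trial. The two groups are identical through end of treatment by construction, but late deterioration in the control group is enough to make the averaged-over-time comparison look like a treatment effect.

```python
import numpy as np
from scipy import stats

# Toy simulation with made-up numbers: groups identical at end of treatment,
# but the control group deteriorates at later follow-up assessments.
rng = np.random.default_rng(1)
n, timepoints = 40, 5            # per group; assessments 0-2 = treatment, 3-4 = follow-up
cbt = 50 + rng.normal(0, 8, (n, timepoints))
control = 50 + rng.normal(0, 8, (n, timepoints))
control[:, 3:] += 8              # late deterioration (higher score = worse)

end_of_treatment = stats.ttest_ind(cbt[:, 2], control[:, 2])
averaged = stats.ttest_ind(cbt.mean(axis=1), control.mean(axis=1))
print("end-of-treatment p =", end_of_treatment.pvalue)   # null by construction
print("averaged-over-time p =", averaged.pvalue)         # typically 'significant'
```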

I have previously cited this joint quote from Gumley and Schwannauer as evidence that experts were weighing in about this study without being familiar with it. I apologize, I was wrong.

Actually, these two guys were quite familiar with the study, because they are collaborators with Tony Morrison on a new study that is exactly the kind they ended up proposing as the needed next step. They were not unfamiliar; they were hiding a conflict of interest in praising the study and calling for what they were already doing with Tony Morrison. Come on, Andy and Matt, no shilling allowed!

When everybody else was getting drunk with enthusiasm, somebody had to become the designated driver and stay sober.

As it has done in the past when there is a lot of media hype, NHS Choices offered an exceptionally sophisticated, restrained assessment of the study. This source missed the lack of differences between intervention and control group at the end of the treatment, but it provided a careful review of the actual study, not just its abstract. It ended up catching a number of other important limitations that almost nobody else seemed to be noticing. And it warned

However, importantly, it does not show that it is better or equivalent to antipsychotic medication. The participants continued to have moderate levels of illness despite receiving therapy.

Media coverage got more exaggerated in its headlines.

Wired.UK.com: “Talking therapy could help schizophrenic sufferers that refuse drugs”

This story added a provocative direct quote from Tony Morrison in a sidebar:

“Without medication the results were almost exactly the same as the average effect size you see in antipsychotic drug trials”

This is simply an untrue and irresponsible statement.

TopNews Arab Emirates: CT Acts as Great Treatment for Schizophrenia Spectrum Disorder Patients Unwilling to take Antipsychotics

Some of the most misleading headlines appeared at sources that consumers would think they could trust

Medical News Today Cognitive therapy ‘an effective treatment option’ for schizophrenia

West (Welfare, society, territory): Cognitive therapy better than drugs for schizophrenia.

GPonline.com: CBT for schizophrenia an effective alternative to antipsychotics, study finds

Nursing Times: Study suggests drug-free therapy hope for schizophrenia

And the prize goes to AAAS’s Science for the most horrible misrepresentation of the study and its implications

Science: Schizophrenia: Time to Flush the Meds?

So who’s to blame? Lots of blame to go around

It has been empirically demonstrated that much of the distortion in medical and scientific journalism starts with distorted abstracts. That was certainly the case here, where the abstract gave a misleading portrayal of the findings of the study that persisted unchallenged and even got amplified. The authors can be faulted, but so can Lancet, for not enforcing CONSORT for abstracts, or even its own requirement that trial design and primary outcomes be preregistered.

The authors should be further faulted for their self-promoting but baseless claims that their study indicated anything about a comparison between cognitive therapy and neuroleptics. The comparison did not occur, and the sample in this study was very different from the population studied in research on neuroleptic medication.

Furthermore, if Lancet is going to promote newsworthy studies, including, in this case, with podcasts, it has a responsibility to take down the paywalls keeping readers from examining the evidence for themselves. BMJ has already adopted a policy of open access for clinical trials and meta-analyses. It is time other journals followed suit.

For me, one of the most striking things about the media coverage is its boring, repetitive sameness. Lazy journalists simply churnaled or outright plagiarized what they found on the web. The competition was not in content, but in coming up with outrageous headlines.

It is absolutely shameful that some of the most outrageous headlines were associated with sources that ostensibly deserve respect. In this instance, they do not. And someone should reel in their marketing departments, or whoever chooses their headlines.

There is a lot of blame to go around, but there is also room for some praise.

NHS Choices gets a prize for responsible journalism and careful research that goes beyond what the investigators themselves say about their study. I recommend that the next time the media gets into a frenzy about a particular piece of medical research, consumers run and look at what NHS Choices has to say.

Stay tuned for my upcoming PLOS Mind the Brain post that will continue the discussion of this trial. Among other things, I will show that the investigator group knew what they were doing in constructing such an inappropriate control/comparison group. They gave evidence that they believed that a more suitable comparison, with befriending or simple supportive counseling, might not have revealed an effect. It is a pity, because I think that investigators should go for appropriate comparisons rather than go for an effect.

I will identify the standards that the investigator group for this trial has applied to other research. I will show that if they apply the same standards to their own study, it is seriously deficient except as a small preliminary, exploratory study that cannot be used to estimate the effects of cognitive therapy. But the study is nonetheless important in showing how hard it will be to do a rigorously controlled study for a most important question.

Happyism: Nick Brown’s Attaining national happiness through chemistry


Donald Trump: “I don’t pursue happiness, I pursue money.”
Phil: “Why?”
Donald Trump: “Because money buys me the things I want, such as material goods and pretty women.”
Phil: “And why do you want these things?”
Donald Trump: “Because they make me happy.”

In Happyism: The Creepy New Economics of Pleasure, Deirdre N. McCloskey provides a history of the new happyology

From the 1950s to the 1970s, economists such as George Katona and Bernard M.S. van Pragg, and sociologists such as Hadley Cantril and Norman Bradburn, and then in the 1980s, on a big scale, psychologists such as Martin E.P. Seligman, Norbert Schwarz, Frank Fujita, Richard J. Davidson, the Diener family (Ed, Carol, Marissa, and Robert), and of course Daniel Kahneman re-started the once-abandoned project of measuring happiness quantitatively. In the 1990s, some economists, a subgroup in the new “behavioral economics,” delightedly joined the psychologists in measuring happiness as self-reported declarations of one’s level of happiness, and assigning self-reported numbers to them, and adding them up, and averaging them across people and eras. Some of the quantitative hedonists have taken to recommending governmental policy for you and me on the basis of their 1-2-3 studies; and some of them are having influence in and on the Obama administration.

And a thorough shredding of the non-science:

At the most lofty level of scientific method, the hedonicists cheekily and with foreknowledge mix up a “non-interval” scale with an “interval” scale. If you like the temperature in Chicago today better than the one on January 15, you might be induced by the interviewer to assign 2.76 to today and a 1.45 to January 15. But such an assignment is of course arbitrary in God’s eyes. It is not a measure in her view of the difference even in your heart (since to her all hearts are open) between a nice day and a cold day. By contrast, an interval scale, such as Fahrenheit or Celsius temperature on the two days in question, does measure, 1-2-3. God doesn’t care which scale you use for hedonics as long as it’s an interval scale. Non-interval scales merely rank (and classifications merely arrange). We couldn’t base a physics on asking people whether today was “hot, nice, or cold” and expect to get anything quantitative out of it.

Recording the percentage of people who say they are happy will tell you something, to be sure, about how people use words. It’s worth learning. We cannot ever know whether your experience of the color red is the same as mine, no matter how many brain scans we take. (The new hedonism is allied, incidentally, with the new brain science, which merrily takes the brain for the mind.) Nor can we know what red or happiness 1-2-3 is in the mind of God, the “objective happiness” that Kahneman speaks of as though he knew it. We humans can only know what we claim to see and what we can say about it. What we can know is neither objective nor subjective, but (to coin a word) “conjective.” It is what we know together in our talk, such as our talk about our happiness. Con-jective: together thrown. No science can be about the purely objective or the purely subjective, which are both unattainable.

And here comes the knock out punch:

If a man tormented by starvation and civil war in South Sudan declares that he is “happy, no, very happy, a regular three, mind you,” we have learned something about the human spirit and its sometimes stirring, sometimes discouraging, oddity. But we inch toward madness if we go beyond people’s lips and claim to read objectively, or subjectively, their hearts in a 1-2-3 way that is comparable with their neighbors or comparable with the very same South Sudanese man when he wins an immigration lottery and gets to Albany.

Happyism: The Creepy New Economics of Pleasure is an erudite long read. Don’t be fooled. Do not be drawn in by its seductive start with a light Peanuts Lucy-and-Linus story unless you have a bit of time on your hands. Maybe time well spent, but is there something you’d rather be doing that would make you, well, happy?

So, maybe you don’t have that kind of time to spare, but you are still up for being provoked to think differently about happyology, or maybe for articulating your vague discomfort with the whole thing.

Then here is a very smart, but mercifully short, blog post by Nick Brown (@sTeamTrae), the guy who went from relative obscurity to international attention with his thorough debunking of positive psychology’s core positivity ratio.

Nick explores the murky quantitative relationship between national happiness and antidepressant consumption. It may sound ponderous, but he is brief and will keep you amused.

Thanks for sharing, Nick.

Attaining national happiness through chemistry

Nick Brown

Apparently, Scandinavia is big in the UK right now.  (Something “foreign” is always big in the UK, it seems.)  And when something is big, the backlash will be along very soon, as exemplified by this Guardian article that I came across last week.  So far, so predictable.  But while I was reading the introductory section of that article, before getting to the dissection of the dark side of each individual Nordic country, this stood out:

[T]he Danes … claim to be the happiest people in the world, but why no mention of the fact they are second only to Iceland when it comes to consuming anti-depressants?

Hold on a minute.  When those international happiness/wellbeing surveys come out each year, Denmark is always pretty close to the top.  So I checked a couple of surveys of happiness (well-being, etc.).  The United Nations World Happiness Report 2013 [PDF] ranks Denmark #1.  The OECD Better Life Index 2013 rather coyly does not give an immediate overall ranking, but by taking the default option of weighting all the various factors equally, you get a list headed by Australia, Sweden, and Canada, with Denmark in seventh place (still not too shabby).  The OECD report adds helpful commentary; for example, here you can read that “Denmark, Iceland and Japan feel the most positive in the OECD area, while Turkey, Estonia and Hungary show lower levels of happiness.”

So what’s with the antidepressants?  Well, it turns out that the OECD has been researching that as well.  Here is a chart listing antidepressant consumption in 2011, in standardised doses per head of population per day, for 23 OECD member states.  Let’s see.  Denmark is pretty near the top.  (It’s not second, as mentioned in the Guardian article above, because the author of that piece was using the 2010 chart.)  And the other top consumers?  Iceland first (remember, Iceland is among the “most positive [countries] in the OECD area”), followed by Australia, Canada, and (after Denmark) Sweden.  That’s right: the top three countries for happiness according to the UN are among the top five consumers of antidepressants in the OECD’s survey(*).  And those countries showing “lower levels of happiness”?  Two of the three (Estonia and Hungary) are on the antidepressant list – right near the bottom.  Perhaps they’d be happier if they just took some more pills?

I decided to see if I could apply a little science here.  I wanted to quantify the relationship between antidepressant consumption and happiness/wellbeing/etc. So I built a dataset (available on request) with a rank-order number for each of the 23 countries in the antidepressant survey, on each of several measures.  Then I asked SPSS to give me the Spearman’s rho correlation between consumption of antidepressants and each of these measures.  Here are the results [Click on image to enlarge]:

[Image: table of Spearman’s rho correlations between antidepressant consumption and the happiness/wellbeing measures]
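Brown ran the correlations in SPSS; for readers who want to try the same rank-based approach, here is a minimal equivalent in Python, with made-up ranks standing in for the real 23-country dataset:

```python
from scipy.stats import spearmanr

# Illustrative only: hypothetical ranks for a handful of countries on
# antidepressant consumption and on a wellbeing measure (1 = highest).
consumption_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
wellbeing_rank = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

rho, p = spearmanr(consumption_rank, wellbeing_rank)
print(f"Spearman's rho = {rho:.2f}, p = {p:.4f}")
```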

Note that this is not a case of “everything being correlated with everything else” (Paul Meehl).  Only certain measures from the OECD survey are significantly correlated with antidepressant consumption.  (I encourage you to explore the measures that I didn’t include.)

Ah, I hear you say, but this is only correlational.  When two variables, A and B, are correlated, there are usually several possible explanations.  A might cause B, B might cause A, or A and B might be caused by C.  So in this case, consuming antidepressants might make people feel happy and healthy; or, being happy and healthy might make people consume antidepressants; or, some other social factor might cause people to consume antidepressants and report that they feel happy and healthy.  I’ll let you decide which of those sounds plausible to you.

Now, what does this prove?  Probably not very much; I’m not going to make any truth claims on the basis of some cute numbers.  After all, it’s been “shown” that autism correlates better than .99 with sales of organic food.  But here’s a thought experiment for you: Imagine what the positive psychology people would be telling us if the results had been the other way around — that is, if Australia and Denmark and Canada had the lowest levels of antidepressant consumption.  Do you think it’s just remotely possible that we might have heard something about that by now?

(*) The OECD has 34 member states, of which some, such as the USA and Switzerland, do not appear in the antidepressant consumption report.  All correlations reported in this post are based on comparisons in rank order among the 23 countries for which antidepressant consumption data are available.

Antidepressant consumption data here [XLS]
UN World Happiness Report 2013 here [PDF]
OECD Better Life Index 2013 data here

My response to the hacking of this blog site (Updated 2/14)

Someone hiding behind the name Anon hacked my recent blog post in an attempt to force me to take the post down.

Since the incident, I have learned that the hacker had also been posting irrelevant comments on at least four other people’s blogs and was blocked by them.

This blog post is a response to the hacking. I introduce a policy for my handling of any further rapid stream of comments from one person and provide replies to some of Anon’s comments found amidst the 21 or so s/he posted.

The earlier blog post

On February 2, I uploaded a blog post Offering US$500 to authors of Lancet article… The post challenged the authors of a controversial Lancet article to accept a wager. Their article concerned a clinical trial evaluating cognitive behavior therapy for unmedicated patients with schizophrenia spectrum disorder. My wager was that I could show that an effect size of 6.5 (I originally reported it inaccurately as 6.9, but both numbers are far outside expected values) in their abstract did not represent an accurate summary of the outcome of the clinical trial.

I felt the claims made in the article in general were outrageous and irresponsible. Worse, the claims were amplified in direct quotes attributed to the authors in the media.

I sought to get the authors engaged in a discussion with a growing chorus of critics, something they had until now refused. If they did not respond to the critics or to my wager, I would seek a retraction from Lancet. The article was just that bad.

I proposed that I would pick a charity to which I would donate the US$500 if I was wrong, and they could pick a bona fide charity to which they would donate if they could not prove me wrong. The charities, science, and a public confused by the media reports would all benefit from the wager.

Since then I solicited nominations for my charity. I chose an Irish charity offering psychiatric services to the deaf. I look forward to the authors of the Lancet article choosing a bona fide charity to which they would contribute if wrong, or maybe others putting up a similar amount to get the wager going. Think of it: if you think that I am wrong, you can separate a fool from his money.

After the blog post, I continued to engage social media, joining others in calling for a retraction of the article and apologies to the media, and if not that, release of basic data from the Lancet article so others could independently evaluate it.

As of this writing, a group of us have received an offer from Lancet to write a letter to the editor, and the BBC has changed the headline of its story from “Talking therapies effective as Drugs” to “Talking therapies moderately effective.” That is progress, but there is still work to be done.

An ironic note

The claim in the abstract of the Lancet article approximated one of the effect sizes claimed in a controversial meta-analysis appearing in JAMA. That meta-analysis was seriously flawed and intended to preserve insurance payments for long-term psychodynamic/psychoanalytic therapy in Germany. My recent blog post at PLOS Mind the Brain discussed the JAMA meta-analysis at length and zeroed in on the effect size as just one indicator of voodoo statistics.

What now seems ironic in relation to my current concerns with the Lancet CBT study is that I had earlier joined with Aaron T. Beck and other colleagues in writing an extended critique of this JAMA article. Psychoanalysts condemned our critique as representing a plot to discredit psychoanalytic research from sources with a conflict of interest. They assumed we were promoters of CBT who were protecting our brand against the suggestion that it was inferior to long-term psychodynamic/psychoanalytic therapy.

Anyone who knows me can recognize the absurdity of that. Indeed, Dr. Beck and I became friends after he made overtures to me following my delivery of one of the most detailed and stringent critiques of the approach ever.

Now I find myself confronting what appeared to be another improbable effect size claimed by persons promoting CBT. Given the agitation on social media in which I am participating, maybe I will face a complaint that I am involved in some sort of plot to discredit CBT.

The hacking

The stream of comments was soon longer than my blog post. The banner on my blog webpage was confusing to visitors because it seemed to imply that the string of comments placed by the hacker was part of a new blog post by me. The banner first read

13 Thoughts on “Offering US$500 wager to authors of Lancet article: help me pick a suitable charity”

But then the 13 was changed to 18 and then 21. The rising numbers corresponded to the increasing flood of comments, posted so quickly that I was unable to insert responses attached to the individual comments. If I tried, my replies just ended up displayed out of sequence with the hacker’s. Furthermore, for some reason I was unable to respond to some of the comments at all.

The hacker’s posting of a claim that I was arguing with the study’s statistician (I am now sure that I was not) did not have a reply option.

This influx of comments was not an effort to engage me in any kind of dialogue. It left no room for that. The hacker’s comments moved to demanding that I pay the US$500, apologize to the authors of the Lancet article, and take down my blog post.

To boot, I was going out to dinner, and my failure to keep responding might have suggested that I was avoiding a debate.

Not having confronted this kind of thing before, I had to think about what would be the best policy toward someone using comments on my blog to disrupt it. I have decided that I will now limit commenters to two comments in a row without a response from me. If they go to three, they will be blocked.

I welcome advice from other bloggers as to how they handle such issues.

Attacked by a Gish Gallop technique?

Someone indicated on my Facebook wall that maybe I was being subjected to a Gish Gallop attack. I had to look that up.

The Gish Gallop, named after creationist Duane Gish, is the debating technique of drowning the opponent in such a torrent of half-truths, lies, and straw-man arguments that the opponent cannot possibly answer every falsehood in real time. The term was coined by Eugenie Scott of the National Center for Science Education. Sam Harris describes the technique as “starting 10 fires in 10 minutes.”

No, the hacker was actually making some intelligent points, but at a rate deliberately calculated to disrupt my ability to respond.

Below I am going to respond to a sampling of the hacker’s comments. The hacker actually gave me the benefit of some things to think about in crafting a future post at PLOS Mind the Brain.

effect size of 6.9
“The term ‘effect size’ is frequently used in the social sciences, particularly in the context of meta-analysis. Effect sizes typically, though not always, refer to versions of the standardized mean difference. It is recommended that the term ‘standardized mean difference’ be used in Cochrane reviews in preference to ‘effect size’ to avoid confusion with the more general medical use of the latter term as a synonym for ‘intervention effect’ or ‘effect estimate’. The particular definition of standardized mean difference used in Cochrane reviews is the effect size known in social science as Hedges’ (adjusted) g.”
http://handbook.cochrane.org/chapter_9/9_2_3_2_the_standardized_mean_difference.html

This first message was quite reasonable and invited a response from me. Maybe my accepting it allowed the hacker to post a stream of comments, raising the numbers on the misleading banner.

My reply is that I would certainly agree with this definition. But when readers encounter “effect size,” they generally assume what is intended is “standardized mean difference” unless there is some indication to the contrary. A reader encountering a claimed effect size of 6.5 in an abstract is not given much context, except that in an abstract one typically expects a conventional standardized mean difference.

After my original blog post, I noted that at another blog site Prof Shitij Kapur had indicated being similarly startled by the figure in the Lancet abstract.

However, I would like to draw attention to wording in the abstract and the editorial that need further clarification. The abstract and editorial state that there is an effect size of 6.52. There are few treatments in medicine that have effect sizes of 1, so 6.5 would be massive indeed! In actual fact the between group effect size is 0.46 (and on the full reading of the paper the authors rightly report this). However, a casual reader may take away a mistaken impression of the effect size as being 6.5 from just reading the abstract. The value 6.52 comes from the statistical modelling they used and is technically the ‘treatment effect’ in a statistical model, and NOT the effect size in the conventional use of that word.

This statement was validating, but I disagree with the 0.46. It was based on a follow-up that by its end had lost more patients than it retained. More about that later.

It is important to note that:

  • Most people who encounter an abstract in an electronic bibliographic source like PubMed do not actually click through to read or download the actual article.
  • Often people offering authoritative judgments about an article in the media express views that suggest they have not actually read the article.
  • Many exaggerations in press coverage can be traced to hype in abstracts.
  • So, articles become known by what is said in their abstracts as much as by what is said in the body.
  • Readers of the abstract assume that what is labeled as the effect size is a number provided to allow comparisons with other trials. Much of the psychotherapy intervention literature is concerned, one might even say pathologically obsessed, with effect sizes for comparisons between treatments and with checking them against the false reassurance provided by Jacob Cohen’s arbitrary designations of small, medium, and large (a sketch of the conventional computation follows this list).
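For the uninitiated, here is a hedged sketch, with made-up numbers, of what that conventional computation looks like: the difference between two group means divided by the pooled standard deviation (Cohen’s d; Hedges’ g adds a small-sample correction).

```python
# Hedged sketch with made-up numbers of a conventional standardized
# mean difference (Cohen's d). These are NOT the Lancet trial's data.
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Difference in group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical end-of-treatment symptom scores for two groups of 35.
d = cohens_d(mean1=70.0, mean2=62.0, sd1=16.0, sd2=17.0, n1=35, n2=35)
print(f"d = {d:.2f}")  # about 0.48, 'medium' by Cohen's arbitrary labels
```

Seen against that convention, a value of 6.5 would mean the groups ended more than six standard deviations apart, which is why it jumps off the page.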

The next comment was:

http://www.academia.edu/167213/Standardized_or_simple_effect_size_What_should_be_reported
For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding what effect size metric to use and how to report it are outlined. Foremost among these are: i) a preference for simple effect size over standardized effect size, and ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers

Well, if you click on the link the hacker provided, you are taken to a manuscript that proposes reliance on simple unstandardized effect sizes rather than standardized ones. That is an interesting proposition worthy of consideration, but it is not common practice. Furthermore, if authors currently want to provide a simple effect size, they would presumably label it a mean difference between groups in order to avoid confusion.

I pointed out to the hacker:

The Lancet article authors never specified that this was a simple effect size. Furthermore, it is not at all clear how the authors calculated it, particularly because at the end of follow-up fewer than half of the patients remained in what was already a small sample to start with. Assumptions of random loss to follow-up no longer apply, and so this is a misleading and even bogus effect size. I will say more about that in future blogs.

The hacker responded to my argument:

But the quotes suggest otherwise, and you know that’s true. You have shot out in ignorance and acted in a highly non-collegiate way. I suppose $500 is a lot of money. perhaps you need to appoint someone else to judge whether you need to cough up? I suggest Simon Wessely. why don’t you tweet him?

I know Simon Wessely and know that he has had death threats over his interpretation of a trial of CBT for chronic fatigue syndrome. I do not think he wants to step back into the fray to render judgments about a trial of CBT for unmedicated schizophrenics.

The hacker next posted:

One other thing. Your point seems to be that there exists a sizeable number of people who both understand effect sizes and think it possible to have a standardized effect size of 6.9? I think the two are probably mutually exclusive. If everyone in the CBT group scored zero on the PANSS after therapy then, assuming no change in the TAU group, the effect size would not be any bigger than 5. Keep on banging this drum if you like though James – you’ll be the only one to suffer.

Well, actually I got into quite an acrimonious debate with the psychoanalytic/psychodynamic community when my colleagues and I suggested that an effect size of 6.9 was prima facie evidence of nonsense. And respected authorities such as the editor of JAMA and the best-selling author-psychiatrist Peter Kramer had obviously uncritically accepted this effect size.
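To make concrete why such a number is prima facie nonsense, here is a back-of-envelope calculation in the spirit of the hacker’s own PANSS argument, using hypothetical but deliberately generous numbers.

```python
# Back-of-envelope illustration (hypothetical numbers, not trial data)
# of why a standardized mean difference of 6.5 or 6.9 is implausible.
# Suppose every treated patient improbably falls to the PANSS floor of
# 30 while a comparison group sits at a hypothetical mean of 75, with
# a generous pooled standard deviation of 15.
panss_floor = 30.0
comparison_mean = 75.0
pooled_sd = 15.0

d = (comparison_mean - panss_floor) / pooled_sd
print(f"Even this extreme scenario yields d = {d:.1f}")  # d = 3.0
```

Even a miracle cure that drives every treated patient to the scale’s floor falls far short of 6.5 on this reckoning, which is the one point on which the hacker and I actually agree.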

But I thought it was time to exit the discussion.

Postscript: After I repaired my blog site, I announced on Twitter that I would not accept uninterrupted streams of three or more comments. I immediately got a string of three from someone who identified himself as Inspector Brian Cameron. Maybe another coincidence, but a Google check reveals that it is the same name as the Scotland Yard inspector in Gaslight, the classic movie about trying to drive someone insane.

I find parallels between the kinds of reactions that criticism of CBT elicits and the kinds of reactions that criticisms of Irving Kirsch elicited a while ago. I think I am fully cognizant of the limits on the safety and efficacy of antidepressants. In some circles I am known for the fuss I make about the low quality of routine antidepressant care in the community, with its casual overprescribing and inadequate follow-up. My criticisms of Irving Kirsch’s declaration that antidepressants were no more effective than placebo drew a lot of spamming and insertions of links to strange places in the comment sections of my blog posts.

I should expect strange behavior when I poke at strongly held views from the fringe and from ideologues. One of my first blog posts at PLOS Mind the Brain concerned the antidepressant wars. Maybe when the dust settles on the CBT controversy, I can write something on the war not only to enthrone CBT over other therapies, but also to assert its superiority to medication.

Update (Friday, February 14, 2014): I realize that there has been some controversy on Twitter concerning what some consider to be my unusual offer to engage the authors of the Lancet article with a wager. Some even consider it uncollegial or unprofessional.

I did not invent the idea. Rather, I was inspired by one of the authors of the Lancet article having previously used the approach to engage authors of a meta-analysis in a discussion of the effect sizes they claimed. As I understand it, the author of the Lancet article lost 50 pounds. So, not only is there precedent, but it is disingenuous of the authors of the Lancet article to challenge others with a wager and refuse to accept one themselves. Maybe they are discouraged by their loss of the last wager.