Offering US$500 wager to authors of Lancet article: help me pick a suitable charity

In my last blog post at PLOS Mind the Brain, I discussed claims by German psychoanalysts in the Journal of the American Medical Association (JAMA) that long-term psychodynamic therapy was superior to shorter-term therapies. One of the many things that attracted my attention to the article was a claim of an effect size of 6.9 that was unprecedented in the psychotherapy literature, certainly among peer-reviewed articles. I had intended that my next blog post would discuss reactions of critics to the JAMA article and responses from the psychoanalytic/psychodynamic community, as well as an independent attempt to replicate the meta-analysis.

However, once again my plans have been disrupted by the need (stop the press!) to respond to an article in Lancet. The last time I changed my planned order for blog posts was because a Lancet article had spun a trial with basically null results into a positive one for the treatment of anorexia. In my blog post, I showed how the Lancet anorexia paper actually demonstrated how little quite intensive treatment of women with anorexia accomplished.

This time the Lancet article concerns cognitive behavior therapy for unmedicated patients with schizophrenia spectrum disorder. The abstract of the article claimed (yup!) an effect size of 6.9 [Correction prompted by Marcus in the comments: 6.52]. So, I have postponed my follow-up post about long-term psychodynamic psychotherapy.

My next blog post at Mind the Brain will cover the Lancet article and the embarrassing uncritical response that it has received from prominent English psychologists. I will argue that the article is so terribly flawed that it is suitable for retraction. At the least, an apology should be issued by the authors for having made irresponsible claims to the public via the BBC.

Stay tuned. What I am doing in this post at my secondary blog is to issue to one of the authors of the Lancet article, Professor of Psychology Tony Morrison, a wager:

I will contribute US$500 to a bona fide UK or Irish charity if he and his co-authors can convincingly demonstrate that the effect size of 6.9 [correction: 6.52] that they reported best characterizes the outcome of their clinical trial in Lancet in a way that facilitates ready comparison to published reports of other treatments for schizophrenia.

Hype and exaggeration in abstracts frequently lead to distorted coverage by the press. Moreover, many more people view abstracts, particularly in electronic bibliographic sources such as Web of Science or PubMed, than actually view or download the article. Much of the impression that professionals and the lay public have of a study's results comes only from the abstract.

I leave it to Professor Morrison to deal with his colleagues and come up with the US$500. If Professor Morrison and his co-authors cannot produce such information, or if they refuse the wager, I will seek a formal retraction from Lancet.

I am frustrated with the failure of Professor Morrison and his co-authors to respond to reasonable criticisms of their work, which has so many important implications for what persons with severe mental illness believe are their effective treatment options. Professor Morrison has stated to the BBC:

 We found cognitive behavioural therapy did reduce symptoms and it also improved personal and social function and we demonstrated very comprehensively it is a safe and effective therapy.

And then to Wired.Co.UK

I say

Bollocks!

I have carefully examined the Lancet paper, and it appears that at the end of the delivery of the intervention, there were no differences between the intervention and a poorly chosen control group. That is a similar conclusion to what Keith Laws documented.

The control group was described as treatment as usual, but the article did not report the nature and intensity of what patients actually received. Just what is treatment as usual? The patients might have been encouraged not to accept medication by enrollment in a clinical trial in which they had a 50% probability of getting cognitive behavior therapy.

Treatment as usual for this population would otherwise involve offering medication and its management, along with support, positive expectations, and monitoring, and perhaps formal psychotherapy.

More generally, there has been persuasive criticism of intervention trials that adopt "treatment as usual" or "routine care" as the control group when what patients actually receive is no treatment or quite inadequate treatment. The problem with using such conditions as control groups is that investigators have no way of determining whether what patients received in the intervention condition simply compensated for the inadequacies in the exposure of the control group to basic clinical management.

This trial has the additional disadvantage that by the end of follow-up, most patients had already been lost to assessment. The final small sample cannot be considered representative of those who were originally randomized to either the intervention or control group. But that is exactly the assumption being made by the authors in their analyses.

That is, the intervention condition might simply be delivering nonspecific positive expectations and support that should have been delivered in routine care but were not, because less intensive attention was being offered and patients were not showing up. Any differences between the intervention group and such a control condition are exaggerated, and potentially unwarranted claims are being made about the efficacy of the active ingredients of the intervention.

Update Wednesday, February 12, 2014: I have just listened a couple of times to the podcast from Professor Anthony Morrison. It left me convinced that not enough attention has been given to the control condition consisting of two very different sites. At one of the sites, patients were likely to get minimal attention because they were declining medication, even in the context of a clinical trial. The second site was unusual in providing a number of options, including family therapy, and in some instances even cognitive behavior therapy. While one of these sites might have been appropriate, combining them into a single control condition was not. It just did not make for meaningful comparisons. The issue is further complicated by most patients no longer being available for follow-up at the end of the study. Who knows, but maybe being assigned to the site offering little support to patients refusing medication was a reason for their nonrandom dropout. More on this later.

Maybe I am missing something, and I will have to pay US$500 to a good cause. Regardless, this should prove to be an interesting wager. I suppose I could offer to contribute more than US$500 out of confidence that I will not have to pay out. But I do not want to frighten off Professor Morrison and his colleagues from the wager.

Please nominate your suggestions for a bona fide UK or Irish charity. I will leave it to Professor Morrison to decide to which charity he will want to contribute.

And stay tuned for my next PLOS Mind the Brain. I am only getting warmed up in my criticism of a study that was terribly flawed in its conception, conduct, and reporting, and irresponsibly reported in the media. The authors have considerable blame to share, but so do prominent English psychologists who embarrass themselves by commenting either without actually reading the Lancet article or by leaving their critical faculties at home.

7 thoughts on "Offering US$500 wager to authors of Lancet article: help me pick a suitable charity"

  1. “patients who are encouraged to not accept medication for schizophrenia”?

    This is a very serious allegation…. can you substantiate it? Have you read the trial protocol and ethical submissions?


    • It is ambiguous what actually happened, except that by the end of follow-up, most of the few patients assigned to CBT with 75-100% improvement who were still around were now on meds (see Table 4).

      A comment from an American anti-psychiatry psychiatrist (http://www.madinamerica.com/2014/02/paradigm-shift/)

      “Not too long ago, I had a conversation with a senior researcher. I asked if he agreed – given the many problems associated with the neuroleptics – that we should be doing everything we can to identify those individuals who might get better without taking the drugs. His response was that this could never be studied. No IRB would ever approve a study.”

      I believe she is correct for the US. It would not have made it through an IRB.

      My position is that offering patients participation in a clinical trial reassures them that they will not be harmed by participating. That was not true for this study.

      BTW, in trials of CBT with medicated schizophrenia patients, there were 0 deaths among 1,000+ patients; in this trial of CBT, there were 2 deaths. You do the math.


  2. The effect size quoted (actually -6.52) is the difference in PANSS scores between the two groups across all timepoints. It’s fairly easy to see this if you look across the first line in Table 2. Admittedly, it’s confusing if you are expecting to see a standardised mean difference, but simple to roughly calculate that for yourself.


    • Perhaps, but by the end of the study most patients had been lost to follow-up, so much so that the assumption of random loss is untenable. Imputation and multilevel modeling making that assumption become inappropriate under these conditions. But that is what these authors did. One could relax the assumption that loss was random, which would be appropriate, but that would produce huge standard errors, as you lose so many degrees of freedom. There is no way the model could stand that (in fact, you would have needed 4 or 5 times as many participants to start with to have any chance of interesting results).

      Of the 17 patients assigned to CBT who remained at the end of follow-up, 3/4 with 75-100% improvement were on meds.

      CONSORT for abstracts requires that the result for the primary outcome of the trial appear in the abstract. This abstract is inaccurate and misleading for multiple reasons, including the ambiguous "effect size" when "mean difference" would be more accurate, and because readers should be able to expect standardized effect sizes for the assessment at the end of treatment, not the end of follow-up.
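      To make the distinction concrete, here is a minimal sketch using made-up numbers (not values from the trial) of how a raw mean difference on the PANSS differs from a standardized effect size such as Cohen's d:

```python
# Hypothetical illustration only: the means and SD below are invented,
# not taken from the Lancet trial.
mean_control = 66.5   # hypothetical mean PANSS total, control group
mean_cbt = 60.0       # hypothetical mean PANSS total, CBT group
sd_pooled = 15.0      # hypothetical pooled standard deviation

# A raw mean difference is expressed in PANSS points...
raw_difference = mean_control - mean_cbt   # 6.5 PANSS points

# ...while a standardized effect size divides by the pooled SD,
# allowing comparison across trials and measures.
cohens_d = raw_difference / sd_pooled      # ~0.43, a small-to-moderate effect

print(raw_difference, round(cohens_d, 2))
```

      With these illustrative numbers, a "6.5" raw difference corresponds to a standardized effect of roughly 0.4, which is why reporting an unlabeled "effect size" of 6.52 invites misreading.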

