Who’s to blame for inaccurate media coverage of study of therapy for persons with schizophrenia?

“I’m in competition with literally hundreds of stories every day, political and economic stories of compelling interest…we have to almost overstate, we have to come as close as we came within the boundaries of truth to dramatic, compelling statement. A weak statement will go no place.”

—Journalist interviewed for J.A. Winsten, "Science and the media: the boundaries of truth"

Hyped, misleading media coverage of a study in Lancet of CBT for persons with unmedicated schizophrenia left lots of clinicians, policymakers, and especially persons with schizophrenia and their family members confused.

Did the study actually show that psychotherapy was as effective as medication for schizophrenia? NO!

Did the study demonstrate that persons with schizophrenia could actually forgo medication with nasty side effects and modest effectiveness and just get on with their life with the help of CBT? NO!

In this blog post, I will scrutinize that media coverage and then distribute blame for its inaccuracies.

At PLOS Mind the Brain, I've been providing a detailed analysis of this complex study, which was not reported particularly transparently, and I will continue that analysis shortly. You can consult it here, but briefly:

The small-scale, exploratory study was a significant, but not overwhelming contribution to the literature. It showed that persons with unmedicated schizophrenia could be recruited to a clinical trial for psychotherapy. But it also highlighted the problems of trying to conduct such a study.

  • Difficulties getting enough participants resulted in a very mixed sample combining young persons who had not yet been diagnosed with schizophrenia but who were in early intervention programs with older patients who were refusing medication after a long history of living with schizophrenia.
  • The treatment as usual combined settings with enhanced services including family therapy and cognitive therapy with other settings where anyone who refused medication might be denied any services. The resulting composite “treatment as usual” was not very usual and so did not make for a generalizable comparison.
  • Many of the participants in both the CBT and control groups ended up accepting medication before the end of the trial, complicating any attempt to distinguish the effects of CBT from those of medication.
  • The trial was too small, and lost too many participants to follow-up, to be used to determine effect sizes for CBT.
  • But, if we must, at the end of the treatment, there were no significant differences between persons randomized to CBT and those remaining in routine care!

The official press release from University of Manchester was remarkably restrained in its claims, starting with its title:

Cognitive therapy “safe and acceptable” to treat schizophrenia

There were no claims of comparisons with medication.

And the press release contained a quote from the restrained and tentative editorial in Lancet written by Oliver Howes from the Institute of Psychiatry, London:

“Morrison and colleagues’ findings provide proof of concept that cognitive therapy is an alternative to antipsychotic treatment. Clearly this outcome will need further testing, but, if further work supports the relative effectiveness of cognitive therapy, a comparison between such therapy and antipsychotic treatment will be needed to inform patient choice. If positive, findings from such a comparison would be a step change in the treatment of schizophrenia, providing patients with a viable alternative to antipsychotic treatment for the first time, something that is sorely needed.”

But unfortunately the rest of this excellent editorial was, like the Lancet report of the study itself, locked behind a pay wall. Then came the BBC.

BBC coverage

Many of us first learned of this trial from a BBC story with the headline:

“Talk therapy as good as drugs for schizophrenia.”

Inexplicably, when the BBC story was accessed a few days later, the headline had been changed to

“Talk therapy moderately effective for schizophrenia.”

There was no explanation, and the rest of the news item was not changed. Creepy. Orwellian.

The news item contained a direct quote from Tony Morrison, the principal investigator for the study:

We found cognitive behavioural therapy did reduce symptoms and it also improved personal and social function and we demonstrated very comprehensively it is a safe and effective therapy.

Wait a minute: Was it really demonstrated very comprehensively that CBT was an effective therapy in this trial? Are we talking about the same Lancet study?

The quote is an inaccurate summary of the findings of the study, but it is quite consistent with the misrepresentation of the results in the article's abstract. Here, as elsewhere in the media coverage that would follow, almost no one seemed to scrutinize the actual results of the study; they simply bought into the abstract.

Shilling at Science Media Centre: Thou shalt not shill.

Science Media Centre ran a briefing of the study for journalists and posted an

Expert reaction to CBT for schizophrenia

A joint quote [how do you get a joint quote?] from Professor Andrew Gumley, University of Glasgow, and Professor Matthias Schwannauer, University of Edinburgh, proclaimed the study "groundbreaking and landmark." They praised its "scientific integrity," citing the study's pre-registration, which they said prevented "cherrypicking" of findings to put the study in the best light.

As I'll be showing in my next Mind the Brain post, the "pre-registration" actually occurred after data collection had started. Preregistration is supposed to enforce specificity of hypotheses and precommit the investigators to evaluating particular primary outcomes at particular times. But the preregistration of this trial avoided designating the particular timepoint at which outcome would be assessed, so it did not prevent cherry picking.

Instead, it allowed the authors to pick the unusual strategy of averaging outcome across five assessments. What they then claimed to be the results of the trial was biased by:

  • a last assessment point by which most patients were no longer being followed,
  • the many improved patients who were on medication,
  • and a deterioration among the remaining control patients, rather than improvement in the intervention patients.

Again, let me remind everybody: at the end of the intervention period, the usual time point for evaluating a treatment, there were no differences between CBT and treatment as usual.
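To see why this matters, here is a minimal sketch with hypothetical group means (not the trial's actual data): a treatment group that does not differ from controls at the end of treatment can still show a favourable "overall" effect averaged across all assessments, purely because the controls deteriorate during follow-up.

```python
# Hypothetical symptom-score means (higher = worse); NOT the trial data.
cbt     = [70, 60, 58, 58, 57, 56]   # stable after end of treatment
control = [70, 61, 58, 63, 68, 71]   # same at end of treatment, then deteriorates

END = 2  # index of the end-of-treatment assessment

endpoint_diff = cbt[END] - control[END]  # 0: null result when treatment ends
overall_diff = sum(c - t for c, t in zip(cbt, control)) / len(cbt)

print(endpoint_diff)           # 0
print(round(overall_diff, 2))  # -5.33: an apparent "overall" benefit
```

The "overall" advantage here comes entirely from what happens to the controls after the intervention is over, which is exactly the pattern at issue in this trial.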

I have previously cited this joint quote from Gumley and Schwannauer as evidence that experts were weighing in on this study without being familiar with it. I apologize; I was wrong.

Actually, these two guys were quite familiar with the study: they are collaborators with Tony Morrison on a new study that is exactly the kind they ended up proposing as the needed next step. They weren't unfamiliar; they were hiding a conflict of interest in praising the study and calling for what they were already doing with Tony Morrison. Come on, Andy and Matt, no shilling allowed!

When everybody else was getting drunk with enthusiasm, somebody had to become the designated driver and stay sober.

As it has done in the past when there is a lot of media hype, NHS Choices offered an exceptionally sophisticated, restrained assessment of the study. This source missed the lack of differences between the intervention and control groups at the end of treatment, but it provided a careful review of the actual study, not just its abstract, and it caught a number of other important limitations that almost nobody else seemed to be noticing. And it warned:

However, importantly, it does not show that it is better or equivalent to antipsychotic medication. The participants continued to have moderate levels of illness despite receiving therapy.

Media coverage got more exaggerated in its headlines.

Wired.UK.com: “Talking therapy could help schizophrenic sufferers that refuse drugs”

This story added a provocative direct quote from Tony Morrison in a sidebar:

“Without medication the results were almost exactly the same as the average effect size you see in antipsychotic drug trials”

This is simply an untrue and irresponsible statement.

TopNews Arab Emirates: CT Acts as Great Treatment for Schizophrenia Spectrum Disorder Patients Unwilling to take Antipsychotics

Some of the most misleading headlines appeared at sources that consumers would think they could trust:

Medical News Today Cognitive therapy ‘an effective treatment option’ for schizophrenia

West (Welfare, society, territory): Cognitive therapy better than drugs for schizophrenia.

GPonline.com: CBT for schizophrenia an effective alternative to antipsychotics, study finds

Nursing Times: Study suggests drug-free therapy hope for schizophrenia

And the prize goes to AAAS's Science for the most horrible misrepresentation of the study and its implications:

Science: Schizophrenia: Time to Flush the Meds?

So who’s to blame? Lots of blame to go around

It has been empirically demonstrated that much of the distortion in medical and scientific journalism starts with distorted abstracts. That was certainly the case here: the abstract gave a misleading portrayal of the findings, and that portrayal persisted unchallenged and even got amplified. The authors can be faulted, but so can Lancet, for not enforcing CONSORT for abstracts, or even its own requirement that trial design and primary outcomes be preregistered.

The authors should be further faulted for their self-promoting but baseless claims that their study indicated anything about a comparison between cognitive therapy and neuroleptics. That comparison never occurred, and the sample in this study was very different from the populations studied in research on neuroleptic medication.

Furthermore, if Lancet is going to promote newsworthy studies, as it did in this case with podcasts, it has a responsibility to take down the paywalls keeping readers from examining the evidence for themselves. BMJ has already adopted a policy of open access for clinical trials and meta-analyses. It is time for other journals to follow suit.

For me, one of the most striking things about the media coverage was its boring, repetitive sameness. Lazy journalists simply churnalized, or outright plagiarized, what they found on the web. The competition was not in content, but in coming up with outrageous headlines.

It is absolutely shameful that some of the most outrageous headlines were associated with sources that ostensibly deserve respect. In this instance, they do not. And someone should reel in their marketing departments, or whoever chooses their headlines.

There is a lot of blame to go around, but there is also room for some praise.

NHS Choices gets a prize for responsible journalism and careful research that goes beyond what the investigators themselves say about their study. I recommend that the next time the media get into a frenzy about a particular piece of medical research, consumers run and look at what NHS Choices has to say.

Stay tuned for my upcoming PLOS Mind the Brain post that will continue discussion of this trial. Among other things, I will show that the investigator group knew what they were doing in constructing such an inappropriate control/comparison group. They gave evidence that they believed a more suitable comparison, with befriending or simple supportive counseling, might not have revealed an effect. It is a pity, because I think that investigators should go for appropriate comparisons rather than for getting an effect.

I will identify the standards that the investigator group for this trial has applied to other research. I will show that if they apply the same standards to their own study, it is seriously deficient except as a small preliminary, exploratory study that cannot be used to estimate the effects of cognitive therapy. But the study is nonetheless important in showing how hard it will be to do a rigorously controlled study for a most important question.

21 thoughts on “Who’s to blame for inaccurate media coverage of study of therapy for persons with schizophrenia?”

  1. Nice overview James🙂

    The NHS Choices Behind the Headlines service has long been a reliable reporter of the actual evidence that sits behind these frequently hyped news pieces. It’s great that you’ve highlighted them in this way. Their new Healthy Evidence forum is also well worth checking out (produced in collaboration with Sense About Science): https://healthunlocked.com/healthyevidence

    My understanding was that The Lancet made the paper and the editorial open access on, or fairly soon after Feb 6th. I think I got a tweet to that effect.

    You may recall that we also covered this story on the Mental Elf: http://www.thementalelf.net/treatment-and-prevention/medicines/antipsychotics/pilot-study-suggests-that-cbt-may-be-a-viable-alternative-to-antipsychotics-for-people-with-schizophrenia-or-does-it/

    Keep up the good work!



    • Thanks for this background and context. As an American watching from the Netherlands, I sort of stumbled into this press coverage without having a map.

      And, of course, the credit for starting all the critical scrutiny, cutting through the hype, rests with the Mental Elf. I have become a great fan, and I recommend ME to my junior and senior colleagues who need to be convinced of the value of blogs.

      I do not think we really have an equivalent to NHS Choices Behind the Headlines in the US, but we desperately need one.

      I am sure in the near future there will be other stories that start with the Mental Elf and get elaborated in my blog posts.



  2. Jim – it's worth noting exactly what the authors of the paper say in their discussion:

    “Although this effect size is small to moderate, the size on psychiatric symptoms in our study is similar to the median effect size reported for overall symptoms in a large meta-analysis of 15 antipsychotic drugs versus placebo (median 0•44)”

    This is a green flag for anyone reporting on the study – the authors make a direct comparison of the relative ‘efficacy’ of CBT versus antipsychotic medication. It is of course nonsense on multiple levels.

    First, because drug studies in the Leucht et al meta-analysis compared drugs with placebo, while the CBT study here compares CBT with doing absolutely nothing (not even a placebo). This would clearly advantage CBT in any comparison.

    Second, the median effect size for antipsychotics hides the fact that for seven of the tested antipsychotics the ES was larger (clozapine 0•88, 0•73–1•03; amisulpride 0•66, 0•53–0•78; olanzapine 0•59, 0•53–0•65; risperidone 0•56, 0•50–0•63; paliperidone 0•50, 0•39–0•60; zotepine 0•49, 0•31–0•66) – and in the first four here almost certainly significantly larger than the questionable ‘mean’ reported for CBT. And in the remaining antipsychotics examined by Leucht et al, the effect size did not differ from the 0.46 reported by Morrison et al.

    Third, the Leucht et al meta-analysis runs to trial endpoint – if we look at endpoint in the CBT study (rather than the 0.46 averaged across the whole trial including follow-up), we see the effect sizes for CBT in the Morrison study were: PANSS total = -0.37 (95% CI -0.96 to 0.22); PANSS positive = -0.18 (95% CI -0.77 to 0.40); and PANSS negative = -0.45 (95% CI -1.04 to 0.14) – all three are nonsignificant. Compare this with all 15 antipsychotics examined by Leucht – every single one has a significant impact on symptoms.

    Finally, the Leucht et al meta-analysis is quite clear in suggesting that the “Results for our primary outcome challenge the dogma that the efficacy of all antipsychotic drugs is the same.” – so why lump them together in a comparison with CBT?

    So, the authors make the comparison themselves and it is one that is misleading on multiple levels
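A mechanical way to check the significance claims in the comment above: an effect is nonsignificant at the 5% level whenever its 95% CI includes zero (i.e. the two limits have opposite signs). A short sketch using the endpoint effect sizes quoted above:

```python
# Endpoint effect sizes and 95% CIs as quoted in the comment above.
endpoint_effects = {
    "PANSS total":    (-0.37, -0.96, 0.22),
    "PANSS positive": (-0.18, -0.77, 0.40),
    "PANSS negative": (-0.45, -1.04, 0.14),
}

def ci_excludes_zero(lo: float, hi: float) -> bool:
    """True if the 95% CI excludes zero, i.e. the effect is significant at p < .05."""
    return lo * hi > 0

for name, (es, lo, hi) in endpoint_effects.items():
    verdict = "significant" if ci_excludes_zero(lo, hi) else "not significant"
    print(f"{name}: d = {es} -> {verdict}")
```

All three intervals span zero, so by the usual convention none of the endpoint differences is significant.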


  3. “But, if we must, at the end of the treatment, there were no significant differences between persons randomized to CBT and those remaining in routine care!”

    At the end of therapy, but not at follow-up. What we see is an initial improvement in the TAU group larger than in the CBTp group, then a return to baseline, and a sustained improvement for the CBTp group; they cross at the end of therapy.

    Professor Laws is cherry picking badly and misleading. Please simply provide the data:

    TAU: PANSS baseline 73; 3 months 67 (-6); 6 months 63 (-10); 9 months 68 (-5); 12 months 70 (-3); 15 months 71 (-2)
    CT: baseline 63; 3 months 60 (-3); 6 months 58 (-5); 9 months 59 (-4); 12 months 55 (-8); 15 months 57 (-6)

    Not great but significant, and it is significant that Professor Laws is withholding the data and choosing his own point rather than letting us see for ourselves.


    • Professor Laws can speak for himself, but I think that it is you who are cherrypicking. Where can you find precedent for ignoring results at the end of treatment in order to focus on an “overall effect” which encompasses a follow-up in which most patients were not followed? And how do you explain the fluctuation in outcomes for the minority of “treatment as usual” patients who were followed? And just which of this heterogeneous treatment as usual group are we talking about, anyway? These are voodoo statistics by any usual standard.


      • Here are the statistics:

        baseline 3months 6 months 9 months 12 months 15 months 18 months

        TAU 73 73 67 63 68 70 71
        CT 70 60 63 58 59 55 56

        This is an RCT so these are real people with real experiences, which group would you prefer a person to be in?

        I think the blog authors are being selective after the event, choosing data points in a way that makes them look nonsignificant. The whole of the study is significant: CT works at every data point, and what happens in therapy over the 3, 6 and 9 month period continues to be effective at the 12, 15 and 18 month points. This is indeed a breakthrough and should be celebrated and funded.

        I am not sure why the authors of these blogs want to undiscover something helpful and significant, but this is how they do it:

        First, why choose the 9 month point and not the sequence?

        This is the point where the TAU group most improves, though not as much as CT.

        Then compare the 9 month point TAU with the CT 18 month point?

        Or in other words, take the highest decrease in TAU and then compare it to a different point!!! This is a new methodology in science that I am unaware of.

        Then compare the first 9 months in TAU group with the CT over 18 months?

        They pick and choose the data to compare with other points!!!!

        There is movement in TAU: they get better 73 > 63 they get worse 63 > 71
        and in CT: they get better 70> 58 they stay better 58 > 56

        Somehow the authors use the 9 month point twice in two different ways. You cannot have it both ways: first say that at 9 months there is no significant difference, and then say that the later CT advantage is due to the TAU getting worse. Look: 73 > 72 = 70 > 56, or 2 = 14?!! This is not science, it is not maths. It is a methodology that allows bloggers to pick and choose their figures from the data in front of them and then flip the way they interpret the data half way through!!! So it cuts both ways.

        I would suggest that readers look at the original paper

        The truth is CBTp works.

        I am not sure if it is oversold; it is certainly underfunded.

        Let me correct one thing: the nine month point is not a rotten cherry, it is more akin to cherry blossom.

        There are no rotten cherries to pick.


    • Ben Cooper – You fail to address the points I am making here and prefer to refer to a discussion we had weeks back on the Mental Elf blog http://www.thementalelf.net/treatment-and-prevention/medicines/antipsychotics/pilot-study-suggests-that-cbt-may-be-a-viable-alternative-to-antipsychotics-for-people-with-schizophrenia-or-does-it/
      …where I invited you to go to my own post on the Lancet paper and expand your point – you failed to do so – why?

      I have also dealt with the ‘Rotten Cherry Picking’ argument of Professor Bentall here http://saraheknowles.wordpress.com/2014/02/14/find-the-gap/ – I note Prof Bentall has not replied!

      To save time I will deal with your ‘tangential’ point here, though it's all in my own blog http://keithsneuroblog.blogspot.co.uk/2014/02/my-bloody-valentine-cbt-for-unmedicated.html

      At 9 months (end of intervention) there were no significant differences between CBT and control on any PANSS measure (total, positive or negative symptoms). From the figures you cite above, you are referring to the 18-month follow-up on PANSS total.
      As my post clearly shows (and, contrary to your slur, does not hide), the effect sizes are as follows:
      PANSS total = -0.75 (95% CI -1.44 to -0.05); PANSS positive = -0.61 (95% CI -1.27 to 0.05); PANSS negative = -0.45 (95% CI -1.47 to -0.08)

      And to quote my post “PANSS positive is nonsignificant, while PANSS total and PANSS negative effect sizes are moderately sized, the lower end CIs are very close to zero (at -0.05 and -0.08) suggesting marginal significance”

      Clearly, the marginal difference at 18 months does not reflect improvement in the CBT group but deterioration in the controls. The CBT group don't differ at 9 and 18 months; rather, the controls show deterioration between 9 and 18 months – look at their Table 2 (CBT at 9 months: 57.95; at 18 months: 56.47; control at 9 months: 63.26; at 18 months: 71.24).

      Hardly surprising that controls who chose not to be medicated, received not even a placebo, and were often simply ‘discharged’ from care … show changes as large as the CBT group’s, upwards (see at 9 months!) and then downwards (at 18 months).

      So, what exactly is your point? …You have none!
      I am withholding no data – all of my effect size calculations are replicable by anyone who goes to the Lancet paper – the data that should be in the public domain are the Morrison et al data!
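The deterioration argument reduces to simple arithmetic on the Table 2 PANSS means quoted above:

```python
# Group-mean PANSS totals at 9 and 18 months, as quoted from Table 2 above.
cbt_9, cbt_18   = 57.95, 56.47
ctrl_9, ctrl_18 = 63.26, 71.24

cbt_change  = cbt_18 - cbt_9     # about -1.5: essentially flat after therapy ends
ctrl_change = ctrl_18 - ctrl_9   # about +8.0: the controls deteriorate
gap_18      = ctrl_18 - cbt_18   # the between-group gap at 18 months

print(round(cbt_change, 2), round(ctrl_change, 2), round(gap_18, 2))
# The 18-month gap is driven almost entirely by the controls' worsening,
# not by further improvement in the CBT group.
```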


  4. If I remember correctly, the CBT trial looked not only at symptoms via the PANSS but also at other effects such as social isolation. Also, if you consider the health effects of antipsychotics, perhaps giving clients more than the current “Hobson's choice” might be beneficial. Just a thought.


    • Ard – yes, the study measured many other variables – so you can take your ‘pick’, but to quote the authors themselves on their other measures: “Therapy did not significantly affect the amount of distress associated with delusional beliefs or voice hearing, or levels of depression, social anxiety, and self-rated recovery.”
      So, the patients perceived no reduction in distress and certainly no sense of recovery following CBT!


      • If we are going to quote then… “our findings show that cognitive therapy significantly reduced the severity of psychiatric symptoms in this population. Additionally, cognitive therapy significantly improved personal and social functioning and some dimensions of delusional (cognitive) and voice hearing (cognitive and physical).”

        I’m not a statistician, but the paper seems to me to try to iron out some of its flaws, for example with the assessment of the PANSS, and by recognising that this is only a small sample.

        CBT might not be a panacea; I would be open to finding out whether ACT, compassion-focused therapy, psychosomatic psychotherapy, or anything really, might help, but it needs to start somewhere.

        The previous research you quoted saying CBT was not useful as an adjunct to medication – was all the therapy to the same standard? You wouldn’t accept a meta-analysis for medication if the tablets weren’t all the same strength, regimens, etc. Again, just a thought. There is a very good analysis in the BMJ; the comment page is helpful in this debate… what determines a successful treatment?

        I do agree, though, with the premise of your article: that research can be misrepresented. Fortunately researchers (I hope) would go to the source rather than the media when continuing their studies.


      • Ard, there is not much to be “ironed out” when so few patients are left from those with whom the trial started. The authors did a number of things to tease out a positive effect, and some techniques would be considered statistical malpractice in such a small sample. The authors themselves have been quite critical of other studies, particularly of neuroleptic medication. You can extract from their critiques the tools to look at their own study. If we apply their standards, their study is seriously deficient and produced no positive findings.

        For instance, I could take some comments that the principal investigator, Tony Morrison, made in a talk available on YouTube and apply them to this study. Namely, with the fancy statistics that were used, most of the data for the last measurement point was made up!

        See more on my coming blog at PLOS Mind the Brain.


  5. Ard
    The real issue is not whether ‘differences’ are reported …but what they mean! As I stated above, the ‘difference’ between CBT and controls at 18 months reflects a symptom increase in controls between 9 and 18 months (and not a CBT ‘improvement’). Furthermore, the largest symptom change shown by the CBT group i.e. between baseline and 18 months, does not differ from the random ‘symptom reduction’ shown by the controls between baseline and 9 months http://keithsneuroblog.blogspot.co.uk/2014/02/my-bloody-valentine-cbt-for-unmedicated.html
    So, no evidence exists to show that CBT has produced any benefit over normal random symptom fluctuation in unmedicated individuals

    Re the small sample – the authors made a power calculation to derive their sample size – the problem for them is that the authors did, and still do, spin the (drug-relative) ‘tolerability’ of CBT. Their protocol estimated at worst a 25% dropout and they got a 50% dropout (indeed, 40% of the CBT group dropped out during the intervention itself). We might also note that they lost patients recruited to the study because of the extremely high number of serious adverse events – over 10% – and this is probably a large underestimate, as they lost so many patients and have no idea what happened to them! Nonetheless, it did include 2 deaths – approximately 10 times higher than reported in 50 RCTs examining CBT with medicated patients. This hardly reads like a psychological intervention that is a highly tolerable alternative to medication.

    You say “, you wouldn’t accept a cryptanalysis for medication of the tablets weren’t all the same strength, regimes etc” – The authors running RCTs have responsibility for the implementation of their own psychological intervention. So I’m not sure what your point is here?

    I was delighted to see you use the phrase du jour “CBT might not be a panacea” and that you are “open to find out whether ACT, compassion focused, psychosomatic psychotherapy, anything really, that might help, but it needs to start somewhere.” – I agree …we should be open to alternatives, but while the UK government (through NICE) and researchers invest their/our time and money in CBT for psychosis, then alternatives take much much longer to emerge, and we will continue to pursue CBTp – an intervention that has ‘hit the buffer’!

    Re ‘misrepresentation’, crucially in this case – referring to the paper itself is no help. It seems to me that part of the focus of this post is how it implicates the authors in fostering misrepresentation – as I quoted above, they make a clear and spurious parallel between CBT and antipsychotics – this is what I would predict the press to pick up on – and I suspect the authors would have thought likewise (especially given their experience with media attention over the pilot study) – so, the media might rightfully look back questioningly at the spinning of the authors themselves!


  6. Ben Cooper,
    What I say is very simple: a) at the end of the CBT intervention, the groups do not differ b) at the 18 month follow up, the CBT & TAU groups differ
    – but my point is that the difference does not reflect the impact of CBT – it reflects deterioration in the TAU group (you have yourself shown above that no improvement occurs in the CBT group between 9 and 18 months)
    It may also reflect the impact of medication -for example, 2 of 3 patients who showed the greatest symptom improvement (75-100% reduction) in the CBT group …were on medication!
    …So, you might prefer to be in the CBT group because you would be improving on medication

    I have presented some statistics to support my argument that CBT has no discernible impact here – why don’t you do the same to show I am wrong or how you are right

    …just saying “the truth is CBTp works” is not an argument and not convincing


  7. Quiz question “Is science a moveable feast?”
    1) Is Professor Laws using a new scientific methodology?
    2) if not why not? What is he doing?

    baseline 3months 6 months 9 months 12 months 15 months 18 months
    TAU 73 73 67 63 68 70 71
    CT 70 60 63 58 59 55 56

    TAU: they get better 73 > 63 they get worse 63 > 71
    and in CT: they get better 70> 58 they stay better 58 > 56

    You cannot have it both ways. This is not science. The science is in the Morrison paper.

    This statement is clearly based on

    The real issue is not whether ‘differences’ are reported …but what they mean! As I stated above, the ‘difference’ between CBT and controls at 18 months reflects a symptom increase in controls between 9 and 18 months (and not a CBT ‘improvement’). Furthermore, the largest symptom change shown by the CBT group i.e. between baseline and 18 months, does not differ from the random ‘symptom reduction’ shown by the controls between baseline and 9 months …So, no evidence exists to show that CBT has produced any benefit over normal random symptom fluctuation in unmedicated individuals”

    CBTp works this is science


  8. Pick flip and mix it
    I do not think you are being scientific in your critique of CBTp
    This is why:
    What you are doing is changing points and criteria as you want.

    In your blog on the Morrison paper and elsewhere you use this argument:
    “What I say is very simple: a) at the end of the CBT intervention, the groups do not differ b) at the 18 month follow up, the CBT & TAU groups differ
    – but my point is that the difference does not reflect the impact of CBT – it reflects deterioration in the TAU group”

    This is a statistical game of choosing a dividing point and applying different criteria to the two halves…
    In this way you use the fluctuations in TAU in two different ways.
    The decrease in PANSS (baseline – 9 months) you use one way, ascribing it to the same cause as the CBTp change.
    The increase in PANSS (9 – 18 months) you use a different way: it is deterioration in TAU.
    Then: “hey presto”, no evidence for CBTp.
    To be very clear and as transparent as possible:
    TAU 73 moving to 63 then back up to 72 is clearly not the same as
    CBTp 70 moving to 58 then down again to 56.
    The Morrison result still remains significant on the basis of science. CBTp works.
    Let me say it again to be clear:
    73 moving to 63 then back up to 72 is clearly not the same as 70 moving to 58 then down again to 56.
    Dividing the data half way and flipping the statistics by a change of attribution is not known to me as a scientific methodology. Improvement then deterioration could be a) variation or b) improvement then deterioration; you can’t cut up data and apply two different reasons as suits your argument. Where would that get us in science?
    This methodology is some strange entitlement to change other people’s discovery into “undiscovery” and then blog about it as if it were science; it is not scientific to do this.

    If researchers were allowed to do this then we would be back in the dark ages. I would be jumping up and down if CBTp researchers did this; they don’t. You could use this methodology to “undiscover” all sorts of other things. If an antipsychotic control group deteriorated, would it show the antipsychotic didn’t work?

    Come come


  9. Sorry, Ben. You don’t know what you are writing about. First, Lancet, like most quality medical journals, requires that authors preregister their primary outcomes and points of assessment before they enroll their first participant. Somehow, Morrison and colleagues got away with violating this requirement by registering after they started recruiting. Furthermore, they failed to register which outcome at which time point they would use to evaluate their intervention. Second, the last point at which Morrison and colleagues attempted to assess all participants was at the end of the intervention, at nine months. Even with attrition and imputation of data, the differences between the intervention group and control group were not significant. Nonetheless, Morrison and colleagues continued to collect outcome data and enter it into an “overall” assessment. By the end of the assessment period, most participants were no longer available. I would agree with Morrison and colleagues’ assessment of what happens in other trials when that situation is reached: namely, the investigators are now operating with most of their data being made up via highly questionable multivariate techniques that require much larger samples to be valid.

    In the interim period between the end of the intervention and the last assessment, scores in the intervention group were largely stable, and any differences observed at the end (with the made-up data included) were due to an unexplainable deterioration in the control group. Furthermore, most of the remaining patients in the study had received medication, which further complicates any evaluation.

    By conventional standards, this was a null trial, despite the inappropriate claims of the authors to the press and despite your conviction that CBT works for patients with psychosis.


  10. The critical analysis you are proposing is based on sampling the control group and applying it across the study to negate a significant result.

    This is clearly not science.

    Limitations to the study are expressed by Morrison and colleagues, but they do not entitle critics to sample a control when you know the movements of the control.

    This argument allows for a 10 point improvement in PANSS, but the 8 point movement back towards baseline is excluded from the analysis.

    I would agree with the authors that a larger trial would be helpful


    • Ben – as James said, there is no substance to the point you make, but you seem intent on making it multiple times.
      One last time – it’s simple transitive logic (A is CBT and B is controls; 9 and 18 are months):
      If A9 = B9, and A9 = A18, but B18 > B9, then the only identifiable change is in B, and clearly
      B9 = A18.

      True, the study is flawed – I would venture that it is possibly the most flawed RCT of CBT for psychosis ever published. CBT for psychosis authors expressing a few massively obvious (indeed, built-in) limitations in the vain hope of getting more funding for a bigger trial is common practice, and funding bodies should start to wise up to it.

