NJ Psychological Association challenges APA Clinical Practice Guideline for the Treatment of PTSD



The APA guidelines can be found here.

From: Charity Wilkinson <wilkinson.charity@gmail.com>

Subject: [abct-members] APA PTSD Clinical Practice Guideline Being Questioned by NJPA

Date: December 22, 2017 at 7:44:44 PM CST

To: ABCT Member List <abct-members@lists.abct.org>

Reply-To: ABCT Member List <abct-members@lists.abct.org>

Dear Colleagues,

I’m writing to bring to your attention that the NJ Psychological Association issued a statement today indicating that it has sent a message to the APA expressing concern about the Clinical Practice Guideline for the Treatment of PTSD. This action was taken after a group of over 75 psychologists in NJ signed a letter opposing the Guideline. Though many of us sent statements to the NJPA supporting the Guideline, our statements were ignored.

The NJPA’s statement advocates for psychologists practicing from psychodynamic and other orientations who believe that their work has been wrongfully excluded. They have indicated that they fear the loss of their livelihood, the refusal of insurance companies to fund their work, and the loss of opportunity for clients to receive psychodynamic and other treatments that were not included. The statement also suggests that all treatments yield results and that RCTs should not have been weighted as strongly in the development of the Guideline.

I would ask that ABCT members and perhaps leadership create a statement in support of the APA PTSD Guideline.

Thank you for your consideration.


Charity Wilkinson-Truong

This is why APA has been so reluctant to take a stand and set guidelines about what is evidence-based psychotherapy and what is not.

See my post from a while ago (2012):

Troubles in the Branding of Psychotherapies as “Evidence Supported”


Probing the claim a black, working-class man would have to call 80 psychotherapists to get an appointment.

A study of returned calls from psychotherapists responding to requests for first appointments got lots of attention on social media, but were its claims accurate?

A recent paper reporting results of calls to psychotherapists for a first appointment got lots of attention on social media after a story in The Atlantic made provocative statements about its results.

Some of the claims in the Atlantic article resonated with readers’ assumptions about how difficult it is to get an appointment:

Even for those with insurance, getting mental healthcare means fighting through phone tag, payment confusion, and even outright discrimination

A lot of the attention to the Atlantic article was due to the prominent display of the claim that a black, working-class man would have to call 80 psychotherapists to get an appointment.

Sure, it is plausible that a black working-class man would have a harder time getting an appointment, but would it really take 80 calls to get a first appointment?

The Atlantic article zeroed in on the interaction between gender, class, and race, which was presented as more complex in the actual report of the study:

Among working-class callers, the study showed equal rates of appointment offers between white and black callers; if perceived race were causing class misidentification by therapists, then one would instead expect to see lower appointment offers for black working-class callers. If anything, the true race differences within the middle class may be slightly smaller than observed, and the class differences among blacks may be slightly larger than observed. Ultimately, the sizeable and statistically significant effects support the conclusion that there is a true disadvantage to black middle-class help seekers and all working-class help seekers, relative to middle-class whites.

So maybe class mattered more than race.

The apparent strength of findings might reflect methodological weaknesses of the study and the author’s stereotypes as much as the prejudices of the therapists who were called. The Atlantic article noted:

Heather Kugelmass, a doctoral student in sociology at Princeton University, selected 320 therapists from the directory of Empire Blue Cross Blue Shield’s HMO plan in New York City. She then had voice actors call them and leave voicemail messages saying they were depressed and anxious. They asked for a weekday evening appointment. She distinguished between different income groups by altering the vocabulary and grammar in the scripts, and she used studies on African-American vernacular and Black-accented English to craft the African-American callers’ scripts. The lower-income white callers spoke in a heavy, New York City accent. All of the callers mentioned they  had the insurance that the therapists purportedly accepted.

The Atlantic article acknowledged:

And it’s hard to purposefully make a person sound poor or black. In the working-class white script, for example, the actor said “hiya doc,” instead of “hello,” and mentioned “on the website I seen your name.” The working-class black script included flourishes that bordered on cartoonish, like “a’ight?” and “my numba.”

The Atlantic article drew some strong reactions, including one from a psychologist in Australia, where there are different expectations of psychotherapists.

But a Minnesota psychiatrist offered a more sympathetic view of the therapists, noting that insurance companies and managed care share some of the responsibility for the difficulties those in need have in getting a first psychotherapy appointment.


Finally, the Atlantic article provided some relevant statistics:

Between 30 and 50 percent of psychologists run their own practices, which allows them to largely control their own schedules, client rosters, and insurance networks. About 30 percent appear to accept no insurance at all, according to the American Psychological Association, a trade group for psychologists.


More than half of all counties in the U.S. have no practicing psychiatrists, psychologists, or social workers. In any given year, about one in five Americans has a mental illness, according to the National Alliance on Mental Illness, but nearly 60 percent of those people don’t get services.

And some good quotes, like:

“If it’s a market where you pretty much have to pay for yourself, the rich are always going to win,” Stanford University psychiatry professor Keith Humphreys told KQED recently.

The original study reintroduced the concept of the YAVIS patient desired by therapists, something I have discussed with respect to psychosocial care of cancer patients. The article said:

Research suggests that psychotherapists (hereafter also called “therapists”) favor help seekers with the “YAVIS” attributes: young, attractive, verbal, intelligent, and successful (Tryon 1986). Consistent with the YAVIS hypothesis, Teasdale and Hill (2006) found that therapists prefer “psychologically minded” clients and those who share similar values and attitudes. These effects were independent of the demographic characteristics (including race) of the help seekers, but the results were survey based, so social desirability pressures may have influenced the results. In another study, black patients were rated by psychiatrists as “less psychologically minded” as well as “less articulate, competent, [and] introspective” than otherwise equivalent white patients (Geller 1988:124).

The Atlantic article

Not White, Not Rich, and Seeking Therapy

The original study

Kugelmass H. “Sorry, I’m Not Accepting New Patients”: An Audit Study of Access to Mental Health Care. Journal of Health and Social Behavior. 2016 Jun;57(2):168-83.


Through a phone-based field experiment, I investigated the effect of mental help seekers’ race, class, and gender on the accessibility of psychotherapists. Three hundred and twenty psychotherapists each received voicemail messages from one black middle-class and one white middle-class help seeker, or from one black working-class and one white working-class help seeker, requesting an appointment. The results revealed an otherwise invisible form of discrimination. Middle-class help seekers had appointment offer rates almost three times higher than their working-class counterparts. Race differences emerged only among middle-class help-seekers, with blacks considerably less likely than whites to be offered an appointment. Average appointment offer rates were equivalent across gender, but women were favored over men for appointment offers in their preferred time range.

Preorders are being accepted for e-books providing skeptical looks at mindfulness and positive psychology, and arming citizen scientists with critical thinking skills.

I will also be offering scientific writing courses on the web as I have been doing face-to-face for almost a decade. I want to give researchers the tools to get into the journals where their work will get the attention it deserves.

Sign up at my website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.


How APA’s rating of acceptance and commitment therapy for psychosis got downgraded from “strong” to “modest” efficacy

A few years ago my blogging caused a downgrading of ACT for psychosis that stuck. This shows the meaninglessness of APA ratings of psychotherapies as evidence-supported.

Steve Hayes came into my Twitter feed urging me to take a fresh look at the evidence for the efficacy of acceptance and commitment therapy (ACT).

I clicked on the link he provided and I was underwhelmed.

I was particularly struck by the ratings of ACT by the American Psychological Association Division 12.

I also noticed that ACT for psychosis was still rated only modestly supported.

A few years ago ACT for psychosis was rated “strongly supported.” This rating was promptly downgraded to “modestly supported” after I exposed a single study as having been p-hacked, in a series of blog posts and in discussions on Facebook.

That incident sheds light on the invalidity of ratings by the American Psychological Association Division 12 of the evidence-supported status of therapies.

Steve Hayes’ Tweet


Clicking on the link Hayes provided took me to

State of the ACT Evidence



The APA ratings were prominently displayed above a continuously updated list of reviews and studies.

American Psychological Association, Society of Clinical Psychology (Div. 12), Research Supported Psychological Treatments:

Chronic Pain – Strong Research Support
Depression – Modest Research Support
Mixed anxiety – Modest Research Support
Obsessive-Compulsive Disorder – Modest Research Support
Psychosis – Modest Research Support
For more information on what the “modest” and “strong” labels mean, click here

Only ACT for Chronic Pain was rated as having strong support. But that rating seemed to be contradicted by the newest systematic review that was listed:

Simpson PA, Mars T, Esteves JE. A systematic review of randomised controlled trials using Acceptance and commitment therapy as an intervention in the management of non-malignant, chronic pain in adults. International Journal of Osteopathic Medicine. 2017 Jun 30;24:18-31.

That review was unable to provide a meta-analysis because of the poor quality and heterogeneity of the 10 studies that were available.

My previous complaints about how evidence for treatments is evaluated by APA

There are low thresholds for professional groups such as the American Psychological Association Division 12 or governmental organizations such as the US Substance Abuse and Mental Health Services Administration (SAMHSA) declaring treatments to be “evidence-supported.” Seldom are any treatments deemed ineffective or harmful by these groups.

Professional groups have conflicts of interest in wanting their members to be able to claim the treatments they practice are evidence-supported, while not wanting to restrict practitioner choice with labels of treatment as ineffective. Other sources of evaluation like SAMHSA depend heavily and uncritically on what promoters of particular psychotherapies submit in applications for “evidence supported status.”

My account of how my blogging precipitated a downgrading of ACT for psychosis

Now you see it, now you don’t: “Strong evidence” for the efficacy of acceptance and commitment therapy for psychosis

On September 3, 2012 the APA Division 12 website announced a rating of “strong evidence” for the efficacy of acceptance and commitment therapy for psychosis. I was quite skeptical. I posted links on Facebook and Twitter to a series of blog posts (1, 2, 3) in which I had previously debunked the study claiming to demonstrate that a few sessions of ACT significantly reduced rehospitalization of psychotic patients.

David Klonsky, a friend on FB who maintains the Division 12 treatment website, quickly contacted me and indicated that he would reevaluate the listing after reading my blog posts and that he had already contacted the section editor to get her evaluation. Within a day, the labeling was changed to “designation under re-review as of 9/3/12” and it is now (10/16/12) “modest research support.”

My exposure of a small but classic study of ACT for psychosis as having been p-hacked

The initial designation of ACT as having “strong evidence” for psychosis was based mainly on a single, well-promoted study, claims for which made it all the way to Time magazine when it was first published.

Bach, P., & Hayes, S.C. (2002). The use of acceptance and commitment therapy to prevent the rehospitalization of psychotic patients: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 70, 1129-1139.

Of course, the designation of strong evidence requires support from two randomized trials, but the second trial was a modest attempt at replication of this study and was explicitly labeled a pilot study.

The Bach and Hayes article had been cited 175 times as of 10/21/12 according to ISI Web of Science, mainly for claims that appear in its abstract: patients receiving up to four sessions of an ACT intervention had “a rate of rehospitalization half that of TAU [treatment as usual] participants over a four-month follow-up [italics added].” This would truly be a powerful intervention, if these claims were true. And my check of the literature suggests that these claims are almost universally accepted. I have never seen any skepticism expressed in peer-reviewed journals about the extraordinary claim of cutting rehospitalization in half.

  • It is not clear that rehospitalization was originally set as the primary outcome, so there is a possible issue of a shifting primary outcome, a common tactic in repackaging a null trial as positive. Many biomedical journals require that investigators publish their protocols with a designated primary outcome before they enter the first patient into a trial; that is a strictly enforced requirement for later publication of the trial’s results. But that is not yet usually done for RCTs testing psychotherapies. The article is based on a dissertation. I retrieved a copy and found that its title seemed to suggest that symptoms, not rehospitalization, were the primary outcome: Acceptance and Commitment Therapy in the Treatment of Symptoms of Psychosis.
  • Although 40 patients were assigned to each group, analyses involved only 35 per group. The investigators simply dropped patients with negative outcomes that are arguably at least as serious as rehospitalization: committing suicide or going to jail. Think about it: what should we make of a therapy that prevented rehospitalization but led to the jailing and suicide of mental patients? This is not only a departure from intention-to-treat analyses; the loss of patients is nonrandom and potentially quite relevant to the evaluation of the trial. Exclusion of these patients has a substantial impact on the interpretation of results: the 5 patients missing from the ACT group represented 71% of the reported rehospitalizations in that group, and the 5 patients missing from the TAU group represented 36% of the reported rehospitalizations in theirs.
  • Rehospitalization is not a typical primary outcome for a psychotherapy study. But if we suspend judgment for a moment as to whether it was the primary outcome for this study, ignore the lack of intention-to-treat analyses, and accept 35 patients per group, there is still no simple, significant difference between groups in rehospitalization. The claim of “half” is based on voodoo statistics.
  • The trial did assess the frequency of psychotic symptoms, an outcome closer to what one would rely on to compare this trial with the results of other interventions. Yet oddly, patients receiving the ACT intervention actually reported symptoms more frequently, twice as often as patients in TAU. The study also assessed how distressing hallucinations or delusions were to patients, what would be considered a patient-oriented outcome, but there were no differences on this variable. One would think that these outcomes would be very important to clinical and policy decision-making, and these results are not encouraging.

Another study, which had been cited 64 times [at the time] according to ISI Web of Science, rounded out the pair needed for a designation of strong support:

Gaudiano, B.A., & Herbert, J.D. (2006). Acute treatment of inpatients with psychotic symptoms using acceptance and commitment therapy: Pilot results. Behaviour Research and Therapy, 44, 415-437.

Appropriately framed as a pilot study, this study started with 40 patients and only delivered three sessions of ACT. The comparison condition was enhanced treatment as usual consisting of psychopharmacology, case management, and psychotherapy, as well as milieu therapy. Follow-up data were available for all but 2 patients. But this study is hardly the basis for rounding out a judgment of ACT as efficacious for psychosis.

  • There were assessments with multiple conventional psychotic-symptom and functioning measures, as well as ACT-specific measures. The only conventional measure to achieve significance was distress related to hallucinations, and there were no differences in the ACT-specific measures. There were no significant differences in rehospitalization.

  • The abstract puts a positive spin on these findings: “At discharge from the hospital, results suggested short-term advantages in affective symptoms, overall improvement, social impairment, and distress associated with hallucinations. In addition, more participants in the ACT condition reached clinically significant symptom improvement at discharge. Although four-month rehospitalization rates were lower in the ACT group, these differences did not reach statistical significance.”

I noted at the time:

The provisional designation of ACT as having strong evidence of efficacy for psychosis could have had important consequences. Clinicians and policymakers could decide that merely providing three sessions of ACT is a sufficient and empirically validated approach to keep chronic mental patients from returning to the hospital and maybe even make discharge decisions based on whether patients had received ACT. But the evidence just isn’t there that ACT prevents rehospitalization, and when the claim is evaluated against what is known about the efficacy of psychotherapy for psychotics, it appears to be an unreasonable claim bordering on the absurd.



Patients writing about their health condition were abused by a peer reviewer and silenced by The BMJ


Recently I blogged about authors who were informed by The BMJ that they must keep a review confidential despite having submitted their manuscript under the journal’s laudable policy of open peer review. I did not actually post the review in its entirety because doing so might make it more difficult for the editors of The BMJ to provide timely, appropriate amends, including an apology to the authors.

I contacted a number of The BMJ editorial staff, starting with the action editor who handled the manuscript. I either got no response or was told that the silencing of the authors over their mistreatment was not their doing and not something in which they were willing to intervene.

I am now providing the review, which is patently unprofessional, despite coming from a psychiatrist.

The reviewer challenges whether the authors relied on self-diagnosis of chronic fatigue syndrome, perhaps bolstered by doctor shopping until they found agreement.

There were two reviews of this manuscript sent to the authors. Reviewer 1 was succinct and positive:

This is a very well-written and incisive analysis. I would however like to invite authors to comment whether the selection of patients by Oxford criteria might have also partially exaggerated the benefit of trial interventions with graded exercises and CBT.

Should patients submitting manuscripts concerning health conditions provide proof of their diagnoses, such as medical records or letters from their physicians?

Should The BMJ apologize to these patients and their academic collaborator co-authors, given that no such apology has been forthcoming from the Action Editor?

The Editor, Navjoyt Ladher, was trained at King’s College London, where Simon Wessely and Trudie Chalder are faculty.

The reviewer trained at King’s College as well. He collaborated with Simon Wessely on articles concerning chronic fatigue syndrome. I am not revealing his name, but in his review, he indicates that he is a psychiatrist who has mostly practiced in the west of Scotland.

I start by calling attention to some noteworthy points in this 2,700-word review. The numbers refer to 13 highlighted passages in the review that follows.

  1. The reviewer recommends the manuscript be published without the authors being given the opportunity to revise. The intent is that publication would draw Rapid Responses protesting what patients with chronic fatigue syndrome have to put up with, including from patients who, like the authors, have the condition.
  2. The reviewer dislikes this paper and yet still wants it to be published.
  3. The reviewer claims the manuscript insults and demeans other patients.
  4. If the paper is published and the PACE investigators don’t respond as the reviewer hopes, the reviewer will post a comment to the authors “Shame on you.”
  5. The authors should just move on and be done with the PACE trial.
  6. The reviewer notes that the paper is billed as a collaboration between patients and scientists, but questions whether any of the authors qualify as “clinicians” or “scientists.”
  7. The reviewer expresses doubts that the patients meet criteria for chronic fatigue syndrome.
  8. The reviewer reiterates the doubt the patients meet criteria for chronic fatigue syndrome and suggests that they were erroneously self-diagnosed.
  9. The reviewer suggests that the authors were erroneously self-diagnosed and went doctor-shopping until they found agreement.
  10. After earlier mentioning that he had not obtained the author’s published review, he questions whether it is a major review.
  11. The reviewer asserts that the PACE investigators can defend the recovery rates they claimed in the PACE trial.
  12. The reviewer questions whether the authors are merely writing about themselves rather than persons with diagnosed chronic fatigue syndrome.
  13. The reviewer claims the authors insult patients with genuine chronic fatigue syndrome when they challenge Wessely’s model emphasizing “fearful cognitions.”

Reviewer 2

One of the nice things about providing a referee’s report (and it is even nicer when I am the recipient) is when a fresh look at a manuscript provides simple suggestions that lead to clear improvements in the eventual published paper.  For the 1st time in the three decades that I have been refereeing scientific papers  [1]I am going to recommend publication of a manuscript but only on the condition that no changes are made based on any of my comments or questions or criticisms or praise.  This is because the tone of this paper and certain of its content provide a fascinating illustration of some of the problems that surround the management of people with severe and prolonged fatigue states.  As well as making some quite interesting points these authors provide examples of what people with severe and prolonged fatigue have to put up with – even from other people who themselves have severe and prolonged fatigue.  If this manuscript is published in the BMJ this will give the chance for correspondents in the Rapid Responses section to point out some of its flaws and hopefully a valuable debate will follow that will contribute to a more thoughtful approach to this whole difficult field. 

There has been and continues to be a great deal of scientific controversy surrounding the PACE trial of the treatment of chronic fatigue syndrome(s) using graded exercise therapy vs cognitive behavioural therapy vs adaptive pacing therapy all in addition to standardised care delivered by doctors with specialist experience of chronic fatigue syndrome(s).  The back and forth scientific controversy makes fascinating reading as does this manuscript by …

In my opinion, the differing scientific interpretations of this trial have little or nothing to do with the participants’ scientific training and expertise.  Rather, scientific stances are dependent on people’s personal background and/or their clinical training and/or their clinical experience of assessing and treating patients with severe fatigue states and/or their own personal experience of ill health and/or the illness experience of family members and/or their personal experience of clinical care especially the care they have received from doctors. I will try – briefly and I hope not too boringly – to weave into this referee’s report some of my own background which will perhaps give some understanding of why, overall, [2] I dislike this paper and yet still want it to be published in a widely-read journal.

Whether to publish this paper will not be a straightforward decision for the editors.  It is certainly not an original contribution to the scientific literature.  A paper with similar content by some of the same authors appeared only a couple of months ago in the journal “Fatigue, Biomedicine Health & Behaviour”.  However, if it is published in the BMJ then this would provide a readily accessible update on some of the continuing controversies surrounding the diagnosis, prognosis and treatment of people with severe and prolonged fatigue.  Because of the Rapid Responses/Correspondence section of the BMJ it will give the investigators in the PACE trial the opportunity to put their different scientific interpretations to a wider audience.  They have already published these different interpretations in “Fatigue, Biomedicine, Health & Behaviour”.  I am still waiting for that paper from my librarian but I am certain that these scientists are more than able to defend their trial.  An up to date defence of the PACE trial with balanced consideration of its strengths and weaknesses will be very helpful for clinicians and policy makers if it is published in the readily accessible BMJ.com.  This will, in turn, make it easier when clinicians and health service managers try to improve services for the wide range of patients with severe and prolonged fatigue.

The main reason I would like this paper to be accepted is because of some careless use of language by the authors.  This can happen to anybody and I am sure these authors do not really mean what they say.  [3]They have however insulted and demeaned a subgroup of people with severe and prolonged fatigue (see below).  This is in spite of the fact that [the authors] are themselves in poor health due to severely fatiguing illnesses.  [4] I am sure this will be picked up by the PACE investigators and other readers of the BMJ and highlighted in the Rapid Responses.  If not, then I will write in and tell the authors that they should know better and I think I would end with a “Shame on you”.   

If this manuscript is published in the BMJ and the authors receive the appropriate criticism (and credit) for some the things they have said I hope they will then get on with applying their skills and intelligence and insights to some proper work that will have a chance of genuinely benefiting patients with chronic fatigue syndromes and other overlapping disorders.  [5]It is about time that they moved on from their obsessive (in the non-psychiatric use of the term) poring over the results of a good (albeit imperfect) randomised controlled trial (…and name me one perfect trial that exists in any clinical field).

Here are some comments on the manuscript that may be helpful to the editors in deciding whether or not to publish this non-original paper in a major general journal.  I am hoping that these comments will help draw out how interesting this paper will be to many readers of the BMJ.

1)            These authors make no attempt whatsoever to acknowledge the heterogeneity of patients who are labelled with a diagnosis of chronic fatigue syndrome (and, in my experience, the even greater heterogeneity of the smaller number of patients who are labelled using the diagnostic concept of myalgic encephalomyelitis).

2)           [6] These authors make a big deal of the fact that their paper is the result of collaboration between patients and scientists.  I am still unsure whether that should be clinical scientists.  [authors]– do you have any clinical training and experience?

3)            From what I can gather from this paper and their other writings [authors]  seem to believe that the best definition of chronic fatigue syndrome is whatever condition it is that has led to ill health in [the authors].

4)           [7]  I am sorry to hear of these authors’ ill health.  I hope they will not be upset when I say that I do not accept their diagnosis of chronic fatigue syndrome.  I do not accept at face value anybody’s declared diagnosis of chronic fatigue syndrome until I have done my own history and examination.  The reasons for this are as follows.  When I was a junior registrar in neurology in 1981/1982 our team investigated referrals – including self referrals – of patients with severe fatigue.  They received a battery of investigations from brain scans through lumbar puncture through visual evoked responses to muscle biopsy.  Every test would be normal.  We never took a social history and never carried out a mental state examination.  Come to think of it we never even took a proper past medical history.  The patients would then be told that they had a condition called myalgic encephalomyelitis and would be sent home to rest with the prognosis that they would not improve but that there may be a cure in the future with advances in neurovirology.  As I have written before .. and I have thought to myself on many occasions since – may God forgive me for the part I played in destroying the lives of some of these vulnerable patients. 

I later did clinical work with referrals with severe fatigue in three different clinical and geographical settings between 1988 and 2011.  I found then that a substantial minority (for a period it was the majority) of patients with a diagnosis of chronic fatigue syndrome and myalgic encephalomyelitis had readily diagnosable conditions using basic knowledge of general medicine and general psychiatry.  Sometimes my rediagnosis/reformulation would lead on to effective treatment.  However, some patients would reject these alternative explanations.  Some, sadly, would be angry towards me and state that I was not taking their illness seriously.  It was very disconcerting and it raised major worries about the inadequacies of my clinical communication skills when I would tell a patient with a diagnosis of chronic fatigue syndrome or, more usually, myalgic encephalomyelitis that their profound fatigue and other serious symptoms were better explained by, for example, their recurrent diabetic ketoacidosis; or the systemic effects of their known severe rheumatoid arthritis and its treatment; or their severe psychotic illness and its treatment; or profound depressive illness; or crippling panic disorder; or Reiter’s disease; or obsessive compulsive disorder with co-morbid depression; or malnutrition due to anorexia nervosa; or the temporary aftermath of newly diagnosed and treated severe thyroid disease…….I could go on and on – and then be told by the patient that I was not taking their ill health seriously.

I accept that my experience must have been unrepresentative, since I was working from psychiatric outpatient clinics and for most of this time I was based in the West of Scotland, which was the epicentre of the myalgic encephalomyelitis movement. However, I cannot believe that I am the only doctor who encountered this phenomenon. How can I be sure that [the authors] do not have ill health that could be much better categorised using any one of a range of much more straightforward diagnoses?

5)            [8] Who diagnosed [the authors’] condition? I do not expect an answer, since their medical history is and should remain confidential. However, I have to raise the possibility that they are self diagnosed. For a while at my clinic a clear majority of diagnoses of chronic fatigue syndrome or myalgic encephalomyelitis had been made by the patients themselves or by a relative, friend or neighbour, and not by any doctor or other clinician. Many (but not all) of these patients had other obvious reasons for their fatigue after taking a full history and doing a physical examination and mental state examination. I accept that such a high frequency of self diagnosis/lay diagnosis may not be found in other clinics but, once again, I cannot believe that I am the only clinician who has encountered this phenomenon.

6)           [9] If they have been diagnosed with chronic fatigue syndrome by a doctor, was this by their own doctor? There certainly used to be a phenomenon whereby patients with a self diagnosis of chronic fatigue syndrome and (even more so) myalgic encephalomyelitis would go doctor shopping until they found a doctor who agreed with their own diagnosis. In my experience many (but not all) of these patients had readily diagnosable and fairly straightforward alternative diagnoses.

7)            I notice in the manuscript that [one of the authors]  says she has “recently co-authored a major critical review of the concept of psychological causation in medicine…”.[10] [Author], I mean no disrespect, but is it for you to say that your review is a major one?  I hope you will not mind my saying that I found it to be very unbalanced and highly selective in its use of the medical literature.  Once again, I want this manuscript as it stands to appear in a widely-read journal so that interested readers will be able to get a clearer view of [this author’s]  thinking and I hope it will be criticised appropriately.

8)            I think [the authors] are right to question the number of subjects in whom there has been “recovery”. They may not be aware that the word “recovery” has been hijacked throughout mental health services by the very influential and international Recovery Movement. This is a positive example of a powerful and creative collaboration between service users and clinicians. [The authors] should look it up. The Recovery Movement has contributed to improved care for many patients, especially those with chronic psychotic illnesses. I think I was the last person involved in mental health services in Scotland – patient or clinician – to argue against some uncritical approaches of the Recovery Movement, but I ended up giving in. I find myself talking about recovery to certain patients who have no chance of recovery in the dictionary sense of the word. [11] The PACE investigators will easily be able to defend themselves in regard to recovery in their trial, since they have operationally defined what they meant by their use of this word and they have been very clear about changes they made in their definitions. Nevertheless, [an author] and colleagues do have a point about how reported “recovery rates” could be misleading.

9)    [The authors] are right that the PACE trial is not definitive.  I have only seen the PACE investigators say this on one occasion and in one paper although I may have missed some other instances.  I think this was just some careless use of language.  Give them a break.

10)   There is something unpleasant in the tone of this article (although I am also being influenced here by outpourings on the Internet).  I cannot help but get the impression that the authors were punching the air when they thought they had come up with support for their views that certain treatment methods for certain people with serious ill health were not as helpful as others (both patients and clinicians) had hoped.  [12] Are [the patient authors]  absolutely sure that they are writing about syndromes of chronic fatigue?  Are they sure they are not simply writing about themselves?

11)         [13] This brings me to the insulting language about certain patients with severe and prolonged fatigue states. [The patient authors] are absolutely certain that their condition is not influenced by psychological troubles or social stressors. This does not give them and their co-authors any right to belittle other patients and understate the severity of illness in those with a diagnosis of chronic fatigue syndrome where such factors are playing a major part in their ill health. These authors write “…the CBT programme considers patients’ concerns about exercise to be merely ‘fearful cognitions’ that need addressing”. Earlier in the article they state “This model proposes that there is no major ongoing disease process in CFS – merely deconditioning due to recent inactivity, and its various consequences”. Merely? Have they never met anybody with near 100% disability due to mere fearful cognitions? I have. Have they never spoken with patients who are at risk of dying due to mere fearful cognitions and their consequences? I have. Wilshire and her colleagues should be ashamed of themselves, and I think their shameful language should be published and then condemned. They should then be given the opportunity to write what they really mean and apologise to those patients about whom they have been so offensive. This is the main reason why I want this article to appear in a widely read general journal that has an active Correspondence/Rapid Responses section.

I hope these comments are helpful to the editors. There is such widespread and heated debate around this subject that I think it would be helpful for some of it to appear in a widely-read journal. I hope the editors will agree with me and publish this article – then stand back and see how the debate unfolds in the Rapid Responses/Correspondence. I think this will lead in the end to more people with severe and chronic fatigue getting better care packages – which will include for some cognitive behavioural therapy, for others graded exercise therapy, for yet others perhaps even adaptive pacing therapy, and for others none of the above, but with high quality informed medical care for everybody. Hopefully it will also contribute to efforts to raise funding for more research projects that will join the PACE trial in giving us guidance about how to treat and how not to treat individual patients with severe fatigue, be it explained or unexplained, simple or complicated.



Rosanne Cash: Resisting a diagnosis of medically unexplained symptoms, being found to have a brain tumor

A moving video provides Rosanne Cash’s testimonial to the power of science over superstition and pseudoscience.

The Grammy award winner suffered for over a decade from headaches eventually diagnosed as the result of a rare Chiari I malformation and syringomyelia.

Before getting successful brain surgery, she had to resist misdiagnosis by professionals and New Age healers, some of whom suggested that her as-yet-unexplained symptoms were a psychosomatic condition and even her fault.

Rosanne Cash credits her eventual diagnosis and successful treatment to the power of science and to the strategy of “persist and verify.”

Persist and verify… The power that we abdicate to others out of our insecurity — to others who insult us with their faux-intuition or their authoritarian smugness — that comes back to hurt us so deeply… But the power we wrest from our own certitude — that saves us.

Brain Pickings

Every week I look forward to the Sunday arrival of Brain Pickings, with its wonderful free curated selection of highbrow but incredibly engaging readings. You can subscribe to weekly alerts here.

This week’s selection was a reading by Rosanne Cash of a poem by Adrienne Rich, “Power,” a tribute to Marie Curie. The poem itself is a great treat, but I’m especially recommending Rosanne Cash’s first few minutes of introduction. I am confident that you will then continue to the end of the short poetry reading and hear of the heroism of Marie Curie.

A rare and misunderstood condition

You can find out more about her decade-long struggle to confirm a diagnosis that she’d already provisionally made herself, as well as the details of her condition, here.

The diagnosis of medically unexplained symptoms

Medically unexplained symptoms (MUS) is a horribly unvalidated psychiatric diagnosis that leads to a cessation of any search for a physical basis for a patient’s complaints. You can learn more about MUS in a blog post by Allen Frances and Suzy Chapman, Mislabeling Medical Illness As Mental Disorder.

Instead, the MUS label leads to speculations about the primacy of psychological factors in maintaining and exacerbating a patient’s condition.

The diagnosis is not ruled out by actually confirming that a patient has one or multiple physical health conditions, nor even by a prescription for medications that may explain some or all of the symptoms and complaints.

The diagnosis is applied on the basis of a professional deciding that the life of a patient like Rosanne has become subsumed by her preoccupation with her complaints.

Many serious physical health conditions initially manifest themselves in vague and intermittent symptoms that could lead to a diagnosis of medically unexplained symptoms.

If a friend or family member informed me that a professional had provided a diagnosis of medically unexplained symptoms, I would suggest they run from that professional and seek appropriate medical care.

Seeking treatment for not-yet-diagnosed medical conditions


Goofy and patronizing pamphlet offered to Danish patients

If Rosanne Cash had been in Denmark, she might have encountered Per Fink and his colleagues, who would have offered cognitive behavior therapy with no hope of alleviating her problems, but a lot of haranguing and undermining of her conviction that her condition had a physical basis.

Cash might have been offered ineffectual mindfulness training in the Netherlands.

If Rosanne Cash had sought help in the UK, she might have encountered neurologist Suzanne O’Sullivan. Rosanne Cash might have been offered amateurish  Freudian explanations of the source of her suffering in her early childhood experience. You can find an excellent critique by Nasim Marie Jafry of O’Sullivan’s pop book All in the Head here.

In the UK, Rosanne Cash might have gone to an NHS clinic influenced by Trudie Chalder and the PACE investigators that would’ve argued that her suffering was being maintained by false illness beliefs.

Anywhere in the world, Cash might have encountered a professional who has too much faith in a flawed Cochrane review tainted by undisclosed conflict of interest and outcome switching.

In the US, Cash might have gone to a prestigious medical center, only to be informed that her headaches  must be due to her unacknowledged child sexual abuse  – a crackpot, but award-winning theory.

She might even have encountered a trauma-informed therapist who would attempt to co-construct with Cash false recollections of early sexual abuse. If Cash protested that this abuse had not occurred, the therapist might counter that she had repressed the experience and needed more work to uncover it.

But Cash persisted, resisting and refusing to abdicate to the authoritarian smugness and quackery of professionals and New Age healers for over a decade.

Excerpts from 10-Year Ache: Singer Rosanne Cash on living with Chiari I malformation and syringomyelia 

“I’ve had headaches for as long as I can remember,” says Cash, who lives in New York City and has made a name for herself over the last four decades as a musician and a writer. She has been nominated eleven times for a Grammy and won the 1985 award for Best Female Country Vocal Performance. Her 2010 memoir, Composed, was critically acclaimed.

Cash’s headaches worsened during her second pregnancy. By 1994, they were so severe that she finally consulted with a neurologist. Still, it wasn’t until 2007 that Cash’s Chiari I malformation was accurately diagnosed. The first neurologist Cash went to thought the singer was experiencing cluster headaches—an exceedingly painful and relatively rare kind of headache that tends to occur in a cyclical pattern—but the medications she prescribed offered little help.

The second neurologist, a headache specialist, diagnosed Cash with migraines. When the headaches continued and intensified, the diagnosis changed to atypical migraines.

“This went on for a decade,” Cash says. “A decade!”


Rosanne Cash even wondered if she might have a Chiari I malformation after discovering the term online. She discussed it with the headache specialist, but an MRI came back negative. Although most experts consider MRI to be the best way of diagnosing Chiari I malformation, it isn’t flawless, according to Dr. Singh, in part because a malformation can change over time.

The singer frequently experienced neck pain and stiffness, and at times her headaches were severe enough to knock her off her feet. “Sometimes it felt like someone had hit me in the back,” she says. “Once I even dropped to my knees, the pain was so intense.” Her general practitioner determined that she had Lyme disease. “But after she treated me for Lyme disease, nothing changed,” Cash says.

During these years, Cash tried a number of treatments and approaches to managing her pain, including migraine medications, yoga, acupuncture, massage, and chiropractic adjustments. Most offered temporary help at best. “Sometimes not even temporary,” Cash recalls.

Comments from a Buddhist about mindfulness as a therapeutic practice

Recently a very insightful comment was left on my November 2016 PLOS Blog Mind the Brain post, Unintended consequences of universal mindfulness training for schoolchildren?. Comments often don’t get the attention they deserve, especially a comment left months after a blog post first appeared.

I have taken the liberty of elevating his comment to a post here, where it might get the attention it deserves. I welcome further comment from other Buddhists, knowing well that Buddhism represents a variety of perspectives and certainly is not an orthodoxy.

Without further ado, here is Nick Leggert:

Hello Everybody,
Thank you for this blog, James.

I have background both in therapeutic work with emotionally upset adolescents and their families, and in Dzogchen Buddhism, which latter, itself, is a distinctive and unusual member of the Buddhist family. From out of this background, I have long been concerned that mindfulness was being misunderstood, both psychotherapeutically, and by some Buddhists.

When we teach and practise meditation within Dzogchen, we place it within what are called ‘The Three Jewels.’ These are Buddha, Dharma, and Sangha. Simplifying necessarily but simplifying none the less, ‘Buddha’ could be called ‘meditation,’ ‘Dharma’ could be called ‘teaching,’ and ‘Sangha’ could be called ‘a community of practitioners.’

We would not teach meditation as an instrumental technique, and we would not advise it outside of Dharma and Sangha. The Three Jewels are mutually interdependent – if you like, they are also, at the same time, all aspects/facets of one Jewel. Our teachings are largely meaningless, if they are not accompanied by the practice of meditation. Meditation is actually dangerous, without the teachings. And meditators must have the ‘peer-support’ of a community of meditators, called the sangha. The community of meditators, translated into a modern sociological construct, engage in ‘Situated Learning,’ which is a form of social learning by apprenticeship; and they then pass on their learning to newcomers.

So, from our point of view, five or ten sessions of mindfulness training really don’t cut the mustard, and could be harmful. There have been hundreds of years of experiential learning amongst Buddhist meditators, so there is a vast literature of experience about the pitfalls of unguided practice and the things that can go wrong, even and especially for the most advanced meditators.

More than all this, properly understood, meditation is not primarily psychological, though it may have psychological spin-offs (‘bad’ and ‘good’). Psychology itself is a modern category, and it is a misleading categorization, in some ways, when discussing Buddhism. Meditation is closer to what a modernist would understand as experiential ontology: that is, it is a practice which makes the claim that it yields – or can yield for some – empirical insight into the nature of being – your being, my being, and the being of all things. This is deep ontology, with a phenomenological edge – and there doesn’t appear to be an obvious pot of gold to find at the end of its rainbow (because its rainbow doesn’t end). The most advanced meditators say that vistas expand into infinity, and mysteries deepen, as their practice proceeds. So to imagine meditation as like a train ride with a destination, a terminus, is actually to sabotage meditation by making it purposeful and ambitious. Meditation is a bit like cleaning your teeth, or doing the dusting: you just do it, for ever. Or it’s like learning to play a musical instrument – you must never stop practising, and it becomes a way of living. Meditation is actually quite close to breathing, in the Buddhist understanding: it is understood to be almost that necessary to any kind of fulfilled – and then socially useful – life. You don’t often stop to think about your breathing – it’s almost as if your body breathes you. Just so with meditation. After a while, meditation so infuses you that it meditates you, as you meditate. You aren’t practising any more, you are part of a practice. And very advanced meditators will be meditating, effortlessly, without thought or planning, as they act – meditation ceases to be a separate activity, on a cushion or chair. It changes the whole way you see the world and the whole way the world sees you. 
Externally, nothing looks different, except to the very experienced, but you’ve turned inside out, and you’re not really ‘there’ any more, in the way you were. This paragraph is too long, but it’s still inadequate, because words can’t fully explain. You have to do it to understand.

If it works, don’t mend it. I expect you know somebody who says they were helped by mindfulness, and that’s fine. But it’s not a good idea to confuse placebo, which is known to be mildly effective, with hard-edged psychotherapeutic efficacy. The mindfulness fad, like the CBT fad, like the frontal lobotomy fad, will come and go. Mindfulness has its place in a battery of possible approaches (which might include political change at one end of the spectrum, and joining a hiking club at the other) but it is not a panacea, and it is not the beginning of a return to Buddhism in the West.

Well, that’s what I think – and I’m generally deluded….

Warm wishes,



I will soon be offering e-books providing skeptical looks at mindfulness and positive psychology, as well as scientific writing courses on the web as I have been doing face-to-face for almost a decade.

Sign up at my new website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.


Conflicts of interest in Cochrane reports on psychological interventions


Recently I was honored to join an esteemed group of international colleagues in writing to the Cochrane Collaboration about its inattention to conflicts of interest in reviews of psychotherapeutic interventions.

The Collaboration has been particularly lax in dealing with conflicts of interest with respect to psychological interventions for “chronic fatigue syndrome” and medically unexplained symptoms. It obviously does not apply the same standards it would to industry involvement in evaluations of medical interventions. Why should psychotherapy be different?

In 2017 I will push the Cochrane Collaboration to do a better job of protecting its valuable reviews from the taint of conflicts of interest, both declared and undeclared.

For background, please see my previous posts –

An open letter to the Cochrane Collaboration: Bill Silverman lies a-moldering in his grave

Why the Cochrane Collaboration needs to clean up conflicts of interest

 I elicited a reply from David Tovey, the Editor in Chief of the Cochrane Library:

To which I responded:

My response to an invitation to improve the Cochrane Collaboration by challenging its policies

The letter with an international group of psychotherapy researchers and meta-analysts

Conflicts of interest in Cochrane reports on psychological interventions

Winfried Rief (GER), Gerhard Andersson (SWE), Juergen Barth (SWI), James Coyne (USA), Pim Cuijpers (NL), Stefan G. Hofmann (USA), Klaus Lieb (GER)

Conflicts of interest are a major threat to the validity of clinical trials, meta-analyses and Cochrane reports. Accordingly, people with close links to pharmaceutical companies such as Novartis are typically not invited to chair a Cochrane review on methylphenidate in ADHD, members of Pfizer do not chair Cochrane reports on sildenafil, etc. However, Cochrane analyses of psychological interventions allow strong conflicts of interest among the chairing experts. Conflicts of interest in psychotherapy may be responsible for controversial and heated debates that go beyond the scientific evidence, and financial involvements and interests can also be substantial.1

The influence of personal preferences on original clinical trials is a strong and robust finding. This well-known influence is often called the allegiance effect. If experts are highly identified with a specific treatment approach, their scientific reports notoriously overestimate the effect sizes of that treatment. A recent analysis showed a robust and moderate allegiance effect on outcome reports (Cohen’s d = .54).2 If meta-analyses aggregate these biased original study reports, again mainly steered by the same scientists who are over-identified with the approach, the bias is further amplified in the corresponding Cochrane analysis.

Therefore, we discourage authors with a strong allegiance to one therapeutic intervention from analyzing and summarizing their favorite approach in Cochrane reports. For example, the person who co-developed cognitive therapy (Aaron T. Beck) should not write the Cochrane analysis of cognitive therapy; Steven Hayes should not author a Cochrane analysis of Acceptance and Commitment Therapy, which he himself primarily developed and has expressed a serious interest in seeing broadly disseminated; Gerhard Andersson should not review internet interventions, since a major part of the published trials in this field originate from his group; Peter Fonagy and Falk Leichsenring should not chair Cochrane reviews of psychodynamic psychotherapies, since they have published a series of papers expressing a strong interest in these types of therapies being better acknowledged.

Two conclusions can be drawn. First, the most ambitious proponents of specific treatments should not have a major influence on Cochrane reports on those interventions; existing Cochrane reports that do not meet this standard should be excluded from the Cochrane databases. Second, conflicts of interest of any expert who contributes to Cochrane analyses of psychological interventions should be assessed by the Cochrane group and declared by the authors. Criteria for this type of conflict of interest that should be reported could include: the author developed one of the treatments examined in the meta-analysis; the author wrote a treatment manual examined in the meta-analysis; the author gave workshops or keynote lectures on one of the treatments or leads a respective psychotherapy institute; the author published comments in favor of one of the treatments or recommended one treatment over another; original studies by the author are included in the Cochrane report.

As we understand it, the Cochrane initiative is supposed to provide robust and critical information to the public and to health care providers. However, this can only be achieved if no obvious conflicts of interest of the authors are evident, or if conflicts of interest are balanced between proponents and more critical participants. While the Cochrane initiative has already made attempts to control for allegiance effects, these effects need to be controlled more rigorously. All authors should declare potential conflicts of interest for reasons of transparency, and experts with a strong allegiance to one treatment should not be included in Cochrane reports on that treatment at all.


  1. Lieb K, von der Osten-Sacken J, Stoffers-Winterling J, Reiss N, Barth J. Conflicts of interest and spin in reviews of psychological therapies: a systematic review. BMJ Open 2016; 6(4).
  2. Munder T, Brütsch O, Leonhart R, Gerger H, Barth J. Researcher allegiance in psychotherapy outcome research: An overview of reviews. Clinical Psychology Review 2013; 33: 501-11.

I will soon be offering scientific writing courses on the web as I have been doing face-to-face for almost a decade. Sign up at my new website to get notified about these courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.

Simon Wessely: Why PACE investigators aren’t keen on handing over the PLOS One data to Coyne


In what has become his characteristic style, Simon Wessely smears me with innuendo, suggesting I might try to alter the PLOS One PACE data and use the altered data to damage the careers of the investigators. He further argues that any release of the data could hurt the careers of the investigators and he understands their resistance. I say “Nonsense! I should be provided with the data as the investigators promised in publishing in PLOS One.”


Simon Wessely discreetly stays in the shadows, but he’s been very much involved in the struggle over the PACE trial, including whether the data will be released to me. I first learned from Wessely, not PLOS One, that my asking for data promised as a condition for publishing in the journal had somehow been turned into a Freedom of Information Act request.

But before that, Simon and I were in regular contact by direct messages on Twitter. I gave a talk at King’s College on biomarkers in June 2015. Simon and I later discussed getting a drink together because he was not able to be there. Simon established that he’s a wine guy, not one for scotch or beer.

When I first started tweeting about the PACE study months later, Simon contacted me, asking me not to comment on this study until I had spent months familiarizing myself with it. When that strategy didn’t work, he asked me to tone down my criticism of the PACE study. He even suggested that the PACE investigators would meet me in a public debate that Andre Tomlin of Mental Elf was trying to set up. But Andre later confirmed that the PACE investigators had already indicated there was no way that they would debate me.

Simon has continued to work behind the scenes, conveying vague threats to early career investigators who criticize PACE in print. Simon’s nudges have been followed up by further threats from the PACE investigators to these investigators’ universities.

Journalists have also been contacted by Simon, who discouraged them, in emails marked confidential, from commenting on PACE. This was tacky and manipulative because Simon’s emails came out of the blue, and Simon was suggesting that the journalists should not tell anyone about them.

Journal editors have been contacted by PACE investigators with efforts to suppress publication of criticism.

Critics have asked Psychological Medicine to publish a letter to the editor reporting the switched scoring of PACE outcomes that had substantially inflated the recovery rates reported in that journal in 2013. The editor, Robin Murray – a close colleague of Simon’s at King’s College, London – rejected the possibility of any letter based only on re-analyses. Rather, any correction would have to be based on an independent replication of the £5 million study in another sample.

Something is rotten in the UK, not just the State of Denmark.

When one highly professional and mild-mannered early career researcher requested a small amount of data from the PACE trial, the PACE investigators did a background check on her and attacked her character. Her request was labeled “vexatious” and refused.

Nonetheless, a group of patients teamed up with an early career investigator.  Relying on normative data and reanalysis of the outcomes originally specified  in the PACE protocol, they concluded: 

None of the changes made to PACE recovery criteria were adequately justified. Further, the final definition was so lax that on some criteria, it was possible to score below the level required for trial entry, yet still be counted as ‘recovered’. When recovery was defined according to the original protocol, recovery rates in the GET and CBT groups were low and not significantly higher than in the control group (4%, 7% and 3%, respectively).
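To see why a gap like 7% versus 3% can fail to reach statistical significance, here is a minimal sketch of a pooled two-proportion z-test. The arm sizes (160 per group) and "recovered" counts are purely hypothetical round numbers chosen to match the reported percentages, not figures taken from the trial:

```python
import math

def two_proportion_z(recovered_a, n_a, recovered_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = recovered_a / n_a, recovered_b / n_b
    p_pool = (recovered_a + recovered_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area
    return z, p_value

# Hypothetical: 11/160 "recovered" with CBT (~7%) vs 5/160 in control (~3%)
z, p_value = two_proportion_z(11, 160, 5, 160)
# p_value lands above the conventional .05 threshold: not significant
```

With counts this small, even a doubling of the recovery rate is compatible with chance, which is consistent with the reanalysis's conclusion that the CBT and GET groups did not significantly outperform the control group on the original protocol's criteria.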

Critics need to be protected from bullying and their efforts to secure the data need to be supported. As I have noted before, the success of attempts to correct the untrustworthiness of the scientific literature depend on critics getting access to data, especially when replications are not feasible. That’s why the situation with the withholding of the PACE data should concern everybody, not just those focused on chronic fatigue syndrome.

It’s been over a year since I requested the PLOS One data. It wasn’t through a Freedom of Information Act request. I’m determined that early in the new year either I will get the data released or the PLOS One paper will be retracted. Stay tuned.

But for now, here is a communication from Simon to a patient who had tweeted about the PACE data back in March 2016. Some of the excuses made for not sharing the data with me were tried out at the UK Lower Tribunal and soundly rejected. Nonetheless, the excuses continue to be made by the PACE investigators to the press through the Science Media Centre London, which orchestrated the team’s unsuccessful effort to get Parliament to exclude university research from Freedom of Information Act requests.

There is international consensus that the usefulness of data sharing will be seriously compromised if those of us who request data are screened for whether the original authors think we have been naughty or nice. We are not asking the original authors to play Santa.

The many things that the PACE investigators have done with their data require forensic exploratory analyses of what the data may be hiding. This is especially important because of the policy implications that they are claiming and the financial benefits they are gaining from ties to the insurance and re-insurance industries.*

 The email:

 …I doubt anyone would actually be surprised to learn that they are not keen on handing anything over to someone who says they are “at war” with the PIs, that they are “coming to get your pathetic little trial”  and so on and so forth.  Hardly disinterested academic inquiry.   But I can tell you they do want to release the data, because they have absolutely nothing at all to hide, but it does come down to trust.  I have been suggesting that the sooner they get a robust system set up, which excludes them, the easier life will be for everyone.  And I am sorry, but there have to be safeguards.  For  a start, there are legal obligations on data sharing on any doctor.   And these are not easy to fulfil.  And there is the issue of consent…you simply cannot just ride over it.  And I am afraid some of the tweets do show a total lack of understanding what you can do with data, should you be so inclined.  And if you do that, then that becomes a serious charge against whoever gave you that data.  And in the weird, paranoid, sulphurous world of ME… I am afraid that fanciful notions of someone trying to do that, just to make life difficult and indeed possibly professionally terminal, for the PIs, is something that they don’t dismiss, and neither would I.   so it takes time to get it right.  and lawyers are going to be involved – because it’s a very complex area of the law.  I have actually written on data confidentiality for the academy of medical sciences, so I know of what I speak.  No one is going to hand over any data set these days without a data sharing agreement – I think you do really believe that you get a request, and say  “OK, now, where the CD, ah yes, let me put in the post for you”.   No academic would ever do that, they would be insane, and would probably also be unemployed fairly quickly

In fact the more intemperate some people become over the this, that just makes it worse, and also makes it easier for the PIs to perfectly legitimately use the current legal framework not to share data, because they are worried about all this, and so would I be, if it was.   They are worried about active malice – there are people out there who have downloaded my powerpoints, changed them (guess how) and then circulated false versions to make me look like an ogre – which is why for many years now I never ever allow my powerpoints to be placed in any public place, as usually happens with conferences  remember there are people also, and you know as well as I do who they are, who make up quotes claiming they are from me or peter and co – so its not paranoid to worry about what such people might do with a data set.

The other problem is that speaking frankly, I would say that nearly everyone who can analyse large clinical trial datasets, doesn’t have the slightest interest in doing so.  They don’t care.  PACE looks pretty good to the professionals.  I know you don’t believe that, but it does.

Anyway, I have been suggesting quietly that the sooner they rid of the issue – get the Wellcome, MRC or the US centres that provide a data sharing service  (there are several by the way)  to take this – then they can deal with the data sharing agreements, they can decide if Jim Coyne should get It and give reasons if not, they can police the system, they can check the pre specifcied analytic strategy  (which for sure will be required, trust me – no one is going to be permitted just to do random fishing exercises, because we all know that will create utterly spurious results which will do fantastic harm) –  but guess what,  some of these august bodies are not too keen .  I wonder why……

So I think it will happen. Its not been made easier by someone develop an illness I am afraid, which is certainly stress related  (when KCL said that they were concerned about the health of their eployees they were spot on).  But it will take time.  And here I do agree with you – the sooner it taken away from the PACE team the better for everyone.

Because there isn’t a smoking gun.  Sorry,…but there isn’t.  its just a well conducted huge trial with a rather modest but still useful result that adds to the evidence for the safety and efficacy of CBT and GET, which will remain the treatment of choice until something better comes along.  Because there isn’t anything else at the moment.


*Here is the declaration of conflict of interest that accompanied the 2011 article in The Lancet:

 PDW has done voluntary and paid consultancy work for the UK Departments of Health and Work and Pensions and Swiss Re (a reinsurance company). DLC has received royalties from Wiley. JB was on the guideline development group of the National Institute for Health and Clinical Excellence guidelines for chronic fatigue syndrome and myalgic encephalomyelitis and has undertaken paid work for the insurance industry. GM has received royalties from Karnac. TC has done consultancy work for insurance companies and has received royalties from Sheldon Press and Constable and Robinson. MB has received royalties from Constable and Robinson. MS has done voluntary and paid consultancy work for government and for legal and insurance companies, and has received royalties from Oxford University Press. ALJ, BA, HLB, LVC, JCD, KAG, LP, MM, PM, HO, RW, and DW declare that they have no conflicts of interests.


Simon Wessely’s muddled views of the good psychotherapy trial: I. Misunderstanding control groups

A large clinical trial might be said to resemble an ocean liner…Very occasionally there is a shipwreck – Simon Wessely

Sir Simon Wessely is apparently still hawking cruises on a wrecked ship that can’t be salvaged. I urge refunds.

 After a long career, Wessely is in the twilight of his influence and relevance. His tired defense of the design of the PACE trial suggests that he is out of touch with contemporary thinking about psychotherapy trials and risks to their validity. But he still chastises those who disagree with his assessment of the PACE trial.

 I invite readers to read and decide….

If you haven’t read Julie Rehmeyer’s excellent article in STAT, Bad science misled millions with chronic fatigue syndrome. Here’s how we fought back, you should at least bookmark it for reading later. A mathematics and science writer who happens to be a patient, Rehmeyer has produced the right article at the right time to bring into the public eye the controversy over chronic fatigue syndrome (hereafter CFS/ME)* and what is widely seen as the demolished credibility of the £5 million PACE clinical trial of cognitive behavioral therapy (CBT) and graded exercise therapy (GET) for the condition. The article has rightfully been republished and is receiving wide commentary in conventional and social media.

Rehmeyer’s article’s uniformly warm reception was marred only by a chilly comment from Sir Simon Wessely that began:

Sorry to spoil the party but some cold facts are necessaey [sic]

This blog post is the first in a series. I will respond to some of Wessely’s odd pronouncements about evaluating clinical trials and his dismissive defense of the PACE trial.

Wessely starts by lecturing his readers that they should check the PACE trial against the CONSORT checklist, which you can find here.

The PACE trial remains an excellent trial and a model of how to deliver a complex intervention RCT. Read the 2012 Lancet paper again. Check it against the CONSORT statement. You will see it is 100% compliant.

This is a puzzling suggestion. The CONSORT checklist evaluates whether particular aspects of a clinical trial are reported in an article, not whether those aspects were competently implemented in the trial. For instance, an author could write, “Data for patients who did not respond to treatment as we hoped were discarded.” That would satisfy the criterion for disclosing whether analyses were conducted on an intention-to-treat basis, but it would be a gross violation of best research practices. As we’ll see, simply relying on CONSORT to decide whether the PACE trial was properly done would miss some egregious questionable research practices.

Wessely’s comment echoes a similarly condescending comment in his earlier Mental Elf blog, to which he provided a link in his comment on Julie’s article:

I am struck that some of the critics are not familiar with the fundamental strengths of the randomised control trial, and why medicine continues to value it so highly. Likewise, some show unfamiliarity with the core methodological components that contribute to the integrity of a clinical trial, and whose violation calls into question the findings, as compared to what one might call secondary less important features. In other words, what distinguishes a good trial whose results are likely to be sound from one in which there is a definite risk of bias.

Wessely simply doesn’t get it. The PACE trial is now being scrutinized by a large international audience who won’t tolerate being patronized.

Chris Chambers recently remarked:

 What’s happened instead is that technology has empowered the new generation to speak freely and publicly on scientific issues, to critique poor quality science and practices, to bust fraud, and to break the bounds of peer review. Twitter in particular has shaken the traditional academic hierarchy to its core. On Twitter a PhD student with thousands of followers suddenly has a greater voice than an Ivy League professor who might have no social media presence at all.

Here are a few aspects of the PACE trial’s comparison of the active treatments (CBT and GET) to a control condition that Wessely encourages us to ignore.

  1. Inadequacy of the control group. All four of the conditions in the trial involved providing patients with “standardized specialty medical care” (SSMC). One condition provided only SSMC and served as the control/comparison for evaluating CBT and GET.

The condition is grossly inadequate as a control group because it is deficient at the basic level of contact time: patients assigned to SSMC received only three medical sessions of 30 minutes’ duration. In contrast, patients assigned to CBT or GET had access to these three medical sessions of SSMC plus 15 sessions of either CBT or GET.

The Lancet article describes SSMC:

 Standardised Specialist Medical Care SSMC will be given to all participants. This will include visits to the clinic doctor with general, but not specific advice, regarding activity and rest management, such as advice to avoid the extremes of exercise and rest, as well as pharmacotherapy for specific symptoms and comorbid conditions. SSMC is standardised in the SSMC Doctor’s Manual. As well as this, SSMC participants, like all other participants, will already have received the Patient Clinic Leaflet (PCL). The PCL is a generic leaflet explaining what CFS/ME is, its likely causes, and available treatments. There will be no additional therapist involvement.

 So, the patients assigned to SSMC got a pamphlet. In general, clinical and health psychology researchers are convinced that getting a pamphlet is an inert intervention. So much so, that pamphlets are routinely provided as control interventions when researchers are intent on making their active intervention appear effective, and are routinely criticized as an inadequate control condition.

In contrast to SSMC, the active treatments were delivered with a strong induction of positive expectations. Alem Matthees, the patient who obtained release of the PACE data through a Freedom of Information (FOI) request, remarked in an email to me:

The CBT manuals for PACE assert with confidence that the therapy was safe and powerful (etc), and aimed the therapies at changing patients’ perceptions about their symptoms. Similarly, the GET manuals stated exercise reverses the pathophysiology responsible for symptoms and that most patients feel “much better” after therapy. No such equivalent in the other manuals. So it is difficult to separate any genuine benefit from methodological artefacts arising from placebo response and other reporting bias. If we assume subjective measures are important (which they are) and there is some genuine benefit (which there probably is), we must still consider the use of objective measures.

  2. The study was not blinded. Patients assigned to either CBT or GET knew they were getting more treatment. In contrast, patients assigned to the control group received only the SSMC. They could see that they had gone to the bother of signing up for clinical research with the expectation that they would get more than SSMC, but were now left in that condition with the added burden of all the research assessments.

The lack of blinding potentiates the problems of a control condition lacking the frequency and intensity of contact provided with the active treatment.

  3. Bolstering of positive expectations with a newsletter. A newsletter sent to patients while enrollment in the trial was still ongoing strengthened positive expectations and increased a sense of obligation in patients assigned to the control group. This effort was not specified in the original protocol. If this were a drug trial being scrutinized by the FDA, it would be a blatant protocol violation.

I noted in an earlier blog:

Before the intervention phase of the trial was even completed, even before accrual of patients was complete, the investigators published a newsletter in December 2008 directed at trial participants. An article appropriately reminds participants of the upcoming two and one half year follow-up. But then it acknowledges difficulty accruing patients, but that additional funding has been received from the MRC to extend recruiting. And then glowing testimonials appear on p. 3 of the newsletter about the effects of their intervention.

“Being included in this trial has helped me tremendously. (The treatment) is now a way of life for me, I can’t imagine functioning fully without it. I have nothing but praise and thanks for everyone involved in this trial.”

“I really enjoyed being a part of the PACE Trial. It helped me to learn more about myself, especially (treatment), and control factors in my life that were damaging. It is difficult for me to gauge just how effective the treatment was because 2007 was a particularly strained, strange and difficult year for me but I feel I survived and that the trial armed me with the necessary aids to get me through. It was also hugely beneficial being part of something where people understand the symptoms and illness and I really enjoyed this aspect.”

Taken together with the acknowledgment of the difficulty accruing patients, the testimonials solicit expression of gratitude and apply pressure on participants to endorse the trial by providing a positive evaluation of their outcome in the self-report measures they were provided. Some minimal effort is made to disguise the conditions from which the testimonials come. However, references to a therapist and, in the final quote above, to “control factors in my life that were damaging” leave no doubt that the CBT and GET favored by the investigators is having positive results.

Adequacy of control groups is of crucial importance. The US Agency for Healthcare Research and Quality (AHRQ) undertook a comprehensive systematic review and meta-analysis of meditation programs for psychological stress and well-being. I covered the agency’s report and the JAMA: Internal Medicine article in a recent blog post, Mindfulness research’s huge problem with uninformative control groups. The AHRQ report and JAMA article concluded that the widespread impression that meditation is an effective way of reducing stress and improving well-being largely comes from trials with inadequately matched control groups. When meditation, including mindfulness, is compared to suitable active treatments, there is insufficient evidence of any superiority.

The same could be said for the PACE trial, but I’m just getting started. In my next blog post, I will take a critical look at Simon Wessely’s endorsement, in a second comment on Julie’s article, of outcomes switching as an admirable feature of the interpretation of clinical trials. Here is what he says:

In essence though they decided they were using a overly harsh set of criteria that didn’t match what most people would consider recovery and were incongruent with previous work so they changed their minds – before a single piece of data had been looked at of course. Nothing at all wrong in that- happens in vast numbers of trials. The problem arises, as studies have shown, when these chnaged [sic] are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right – the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies of which there are many.


Albrecht Dürer’s Knight, Death and the Devil

In a subsequent blog post, I’ll be arguing that Simon’s justification is factually incorrect and inconsistent with best research practices. Until then, for a condemnation of outcomes switching in pharmaceutical trials, see my recent blog post, Study protocol violations, outcomes switching, adverse events misreporting: A peek under the hood. There I report on criticism by prominent senior psychiatrists of bad research practices in trials of antidepressants. After writing that post, I joined the senior psychiatrists in signing a letter demanding retraction of one of the papers the blog discussed: a report of a ghostwritten trial of antidepressants characterized by protocol violations, outcomes switching, and adverse events misreporting. I have seen no evidence that the PACE trial was ghostwritten, but I think there is ample evidence that it shares these other sins.

*I am using this term with some reservation, because it is familiar to readers and because that is how a diverse set of conditions is named in the bulk of the scientific literature and in the media. But for an excellent critique of the term, see another excellent article in STAT, Why we shouldn’t call it ‘chronic fatigue syndrome’. I expect we will see the term “chronic fatigue syndrome” retired.



CBT versus psychodynamic therapy for depression: One sentence changes the whole story

A recent comparative effectiveness study in JAMA Psychiatry of CBT versus psychodynamic psychotherapy for depression was billed as a noninferiority trial.

One sentence in the results section changed the whole significance of the study.

The dodo bird verdict for the study is that everybody gets a booby prize.

The study is currently freely accessible at JAMA Psychiatry, although you may need to register (for free) to actually download the PDF.


Connolly Gibbons M, Gallop R, Thompson D, et al. Comparative Effectiveness of Cognitive Therapy and Dynamic Psychotherapy for Major Depressive Disorder in a Community Mental Health Setting: A Randomized Clinical Noninferiority Trial. JAMA Psychiatry. Published online August 03, 2016. doi:10.1001/jamapsychiatry.2016.1720.

The moderately sized study compared two active treatments without a nonspecific comparison/control group.

Results.  Among the 237 patients (59 men [24.9%]; 178 women [75.1%]; mean [SD] age, 36.2 [12.1] years) treated by 20 therapists (19 women and 1 man; mean [SD] age, 40.0 [14.6] years), 118 were randomized to DT and 119 to CT. A mean (SD) difference between treatments was found in the change on the Hamilton Rating Scale for Depression of 0.86 (7.73) scale points (95% CI, −0.70 to 2.42; Cohen d, 0.11), indicating that DT was statistically not inferior to CT. A statistically significant main effect was found for time (F(1,198) = 75.92; P = .001). No statistically significant differences were found between treatments on patient ratings of treatment credibility. Dynamic psychotherapy and CT were discriminated from each other on competence in supportive techniques (t(120) = 2.48; P = .02), competence in expressive techniques (t(120) = 4.78; P = .001), adherence to CT techniques (t(115) = −7.07; P = .001), and competence in CT (t(115) = −7.07; P = .001).

Conclusions and Relevance.  This study suggests that DT is not inferior to CT on change in depression for the treatment of MDD in a community mental health setting. The 95% CI suggests that the effects of DT are equivalent to those of CT.

In case there is any ambiguity in the message the authors wanted to convey, they reiterated:

Key Points

  • Question Is short-term dynamic psychotherapy not inferior to cognitive therapy in the treatment of major depressive disorder (MDD) in the community mental health setting?

  • Findings In this randomized noninferiority trial that included 237 adults, short-term dynamic psychotherapy was statistically significantly noninferior to cognitive therapy in decreasing depressive symptoms among patients receiving services for MDD in the community mental health setting.

  • Meaning Short-term dynamic psychotherapy and cognitive therapy may be effective in treating MDD in the community.
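The noninferiority logic behind these Key Points can be illustrated with the reported summary statistics. This is a minimal sketch, not the authors’ analysis: the trial’s prespecified noninferiority margin is not given in the excerpt, so the margin used below is a hypothetical value chosen purely for illustration.

```python
# Sketch of the noninferiority logic, using the reported summary statistics.
# The margin (2.5 HAM-D points) is a HYPOTHETICAL value for illustration only;
# the article's prespecified margin is not stated in the text quoted here.

diff = 0.86                    # reported mean DT - CT difference on the HAM-D
ci_low, ci_high = -0.70, 2.42  # reported 95% CI for that difference
sd = 7.73                      # reported SD of the difference
margin = 2.5                   # hypothetical noninferiority margin

cohen_d = diff / sd                 # standardized effect size
noninferior = ci_high < margin      # DT is noninferior if the upper CI bound
                                    # falls below the margin

print(round(cohen_d, 2))  # 0.11, matching the reported Cohen d
print(noninferior)        # True under this assumed margin
```

Note what the check does and does not establish: it bounds how much worse DT could plausibly be than CT, but says nothing about whether either treatment beats a nonspecific comparison condition.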

Despite an accompanying editorial, the study got only a moderate amount of immediate attention in social media. Here are the altmetrics:


I examined the 40 tweets available on August 6, 2016 and found only one that went beyond parroting.

I suspect that Robert Howard had discovered the one sentence in the results section that I noticed:


Nineteen patients (16.1%) in DT and 26 patients (21.8%) in the CT condition demonstrated response to treatment as measured by a 50% reduction on the HAM-D score across treatment (χ²(1) = 1.27; P = .32).

Most of the patients assigned to either group in this study failed to respond to treatment. Tipped off by this sentence, I looked for the degree of treatment exposure and found that most patients were not exposed to a sufficient intensity of treatment.
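The response-rate sentence can be checked directly from the reported counts. Here is a minimal plain-Python sketch; the uncorrected χ² reproduces the reported 1.27, while the article’s P = .32 suggests its test applied a continuity correction (an assumption on my part, since the article’s exact method is not quoted here).

```python
# Recompute response rates and the chi-square statistic from the reported
# counts: 19/118 responders with DT, 26/119 with CT (responder = a >= 50%
# reduction on the HAM-D). Uncorrected Pearson chi-square, computed by hand.

dt_resp, dt_n = 19, 118
ct_resp, ct_n = 26, 119

observed = [dt_resp, dt_n - dt_resp, ct_resp, ct_n - ct_resp]
total = dt_n + ct_n
resp_total = dt_resp + ct_resp

# Expected counts under independence: row total * column total / grand total
expected = [dt_n * resp_total / total,
            dt_n * (total - resp_total) / total,
            ct_n * resp_total / total,
            ct_n * (total - resp_total) / total]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(100 * dt_resp / dt_n, 1))  # 16.1 (% responders, DT)
print(round(100 * ct_resp / ct_n, 1))  # 21.8 (% responders, CT)
print(round(chi2, 2))                  # 1.27, matching the reported chi-square
```

Either way, the headline fact survives any choice of test: roughly four out of five patients in both arms did not respond.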

Sixty-three patients (26.6%) attended 1 or fewer sessions of psychotherapy; 122 (51.5%), 5 or fewer sessions; and 187 (78.9%), 11 or fewer sessions. We found no statistically significant difference between treatments in the number of sessions attended (t(235) = 1.47; P = .14).

 The title of the JAMA Psychiatry article noted that patients had been recruited from a community mental health center. I take this to suggest they were likely a low-income group not previously prepared for psychotherapy.

Before anyone proposes that the solution is simply to offer more therapy, note that patients were not attending even the 16 sessions already offered. My interpretation is that greater effort may be needed to get such patients to show up consistently for sessions.

My colleagues and I previously conducted an exceptionally well-resourced study in the same low-income and socially disadvantaged Philadelphia population. Our intention was to reduce risk factors for another low-birth-weight delivery among recently pregnant, low-income women. We demonstrated that we could recruit and retain these women, but it took an intensive, creative effort.

One of the risk factors we addressed was depression, and we offered antidepressant medication and free treatment at the world-renowned University of Pennsylvania Center for Cognitive Therapy. We provided free transportation and child care. Few women accessed sufficient therapy or received a sufficient dose of antidepressants. The therapists at the center complained that the women did not seem to have their lives in order and did not seem ready for psychotherapy. Personally, I think the therapists may not have been ready for such women and did not sufficiently engage them.

Back to the study under discussion: it was accompanied by an editorial whose title parroted the authors’ intended message:

Abbass AA, Town JM. Bona Fide Psychotherapy Models Are Equally Effective for Major Depressive Disorder: Future Research Directions. JAMA Psychiatry. Published online August 03, 2016. doi:10.1001/jamapsychiatry.2016.1916.

But I noticed this in the text:


Among other points, the study by Connolly Gibbons and colleagues raises the ongoing challenge facing all psychiatrists using pharmacotherapy and psychotherapy: how to improve rates of remission in real-world clinical samples. The study found that more than 80% of all participants did not respond to treatment (22% of patients receiving CBT and 16% of patients receiving STPP had response to treatment as measured by a 50% reduction in observer-rated depression). This high rate of nonresponse may be partly explained by inadequate treatment “dose” or number of sessions, clinical sample, therapist expertise, biomedical factors, and sociofamilial factors impeding outcomes

The JAMA Psychiatry article under discussion cited another, similar study conducted in the Netherlands, but did not elaborate on its findings:

Driessen E, Van HL, Don FJ, Peen J, Kool S, Westra D, Hendriksen M, Schoevers RA, Cuijpers P, Twisk JW, Dekker JJ. The efficacy of cognitive-behavioral therapy and psychodynamic therapy in the outpatient treatment of major depression: a randomized clinical trial. American Journal of Psychiatry. 2013 Sep 1.

Unlike the JAMA Psychiatry article, the abstract of the Dutch study qualified its finding of noninferiority by noting that neither therapy did particularly well:


No statistically significant treatment differences were found for any of the outcome measures. The average posttreatment remission rate was 22.7%. Noninferiority was shown for posttreatment HAM-D and patient-rated depression scores but could not be demonstrated for posttreatment remission rates or any of the follow-up measures.


The findings extend the evidence base of psychodynamic therapy for depression but also indicate that time-limited treatment is insufficient for a substantial number of patients encountered in psychiatric outpatient clinics.

I suspect that both of these randomized trials will be cited as evidence of the Dodo Bird Verdict for psychotherapy for depression: everybody’s a winner and everybody gets a prize. However, in both studies cognitive behavior therapy underperformed relative to the efficacy demonstrated in a larger body of studies. The literature for psychodynamic therapy is more limited and of lower quality.

Still, I think the message is that when you move into more difficult populations, you cannot expect the results obtained with the more carefully selected, therapy-ready patients recruited to more typical studies. Or perhaps this reflects the unrepresentativeness of patients in the larger literature.

Meanwhile, Psychiatrist Erick Turner and I have been having an exchange on Twitter concerning another noninferiority study.

[Screenshot of Erick Turner’s tweet]

Erick is referring to a perspective he shares with points I have been making regularly about noninferiority trials: they typically do not include a nonspecific comparison/control group. Without such a group, we cannot evaluate whether either of the active treatments is better than the provision of nonspecific treatment with elements of support, positive expectation, and attention.

That is also a limitation of the current study, but by peeking into the actual results, we discover that neither of the two active treatments left most patients free of depression.

What if there had been a credible attention/support condition in the present study? Would either of the two treatments that were “noninferior” to each other have shown a clinically significant advantage? What would be the implications if they had not? Would the report have made it into JAMA Psychiatry?