Guest blog reviewing Suzanne O’Sullivan’s It’s All in Your Head, with an introduction

Suzanne O’Sullivan’s It’s All in Your Head may have won the Wellcome Book Prize, but mental health professionals like Simon Wessely are notably silent about its flaws – unsurprising, since O’Sullivan advocates the same policies they promote.

An endorsement of the book from a professional who should know better represents either blind loyalty to a particular point of view, not having read the book, or simple incompetence.

“could not put it down”

“one psychosomatically informed”

“wonderful book”

I’ve been asked to comment on the book and I eventually will. Reading reviews and excerpts from it, I’m confident that I can dismiss the book without having to read it. Indeed, my blog post will be a further explanation of why – with so much out there – we need to develop strategies to screen and dismiss some items as unworthy of further attention. Life is too short…

I get a lot of requests to comment on social media. Some are based on my reputation as an evidence-minded skeptic. Some occasionally draw on my past life as a strategic family therapist who discredited the family psychosomatic model of diabetes and anorexia.

Both reputations are relevant, but I still find it hard to bring the current scientific literature to bear against O’Sullivan’s book from the materials I’ve seen. The book consists of chapters of carefully selected armchair clinical stories that amount to anecdata. But the stories also involve such outlandish, confused, and discredited assumptions that it is tough to contrast them with anything in the contemporary literature.

There were only vague references to other neurologists. It is too bad that Oliver Sacks is dead and cannot defend himself against Suzanne O’Sullivan’s far-fetched claim that her book is an homage to him.

Contemporary Freudian psychoanalysts would strongly object to the ways in which O’Sullivan compares her case histories to Freud’s. Psychoanalysts would also be embarrassed by the awareness that the patients in many of Freud’s famous case histories turned out to have neurological conditions on follow-up. At least one, Dora, was up to other mischief, as feminist Robin Lakoff and I described in our book, Father Knows Best.

Most psychoanalysts would be embarrassed by Suzanne O’Sullivan’s explaining physical complaints as expressions of emotional conflicts. The discrediting of that view was devastating to the prestige of psychoanalysis.

As I will explain in a later blog post, the view that physical symptoms are expressions of unconscious conflicts was the object of ridicule and nearly destroyed psychosomatic medicine in the second half of the 20th century. The field’s own organization recently tried to distance itself from this legacy with an editorial declaring that the journal Psychosomatic Medicine was being renamed Psychosomatic Medicine and Biobehavioral Medicine.

You can still dig out explanations in the 1950s and ’60s psychosomatic medicine literature of why women are more likely to have migraines: these women have penis envy, and a migraine headache symbolically represents a blood-engorged penis. If you brought up this explanation at the annual meeting of the American Psychosomatic Society, many people would consider you uncivil and walk away. You can be damn sure that there will be no endorsements of Suzanne O’Sullivan’s book from the American Psychosomatic Society or invitations for her to speak at their annual convention.

Suzanne O’Sullivan locates herself in a long tradition traceable back to Hippocrates. Many of Hippocrates’ ideas to which she still subscribes were hilariously wrong. For instance, she notes that her patients with unconscious emotional conflicts are women and that their physical complaints shift from organ system to organ system from one medical visit to the next.

Hippocrates’ explanation was that these women were not having enough sex: the neglected uterus traveled around the body, lodging in organs and creating problems. The solution was that some compassionate man could relieve them of their troubles by having sex with them.

I will get to these and other juicy examples. But one problem I face is that the kinds of things Suzanne O’Sullivan is talking about have few or no quality references in the current literature.

Many of these issues have died, and professionals who currently identify with psychoanalysis or psychosomatic medicine don’t want to bring them up anymore. Other issues were never addressed in the literature. Suzanne O’Sullivan’s bizarre case formulations sometimes exist only in her own head. The formulations are not tested in conversations with patients; her labels are stuck on patients with minimal observation or discussion.

I would love to see any outcome data for O’Sullivan’s patients, particularly serious physical conditions that were missed because of her discouragement of further medical surveillance and testing.

But until I get to blogging about Suzanne O’Sullivan, I can provide readers with a very thoughtful review of her book on Goodreads, written when the book first came out. The reviewer, Nasim Marie Jafry, is known for her thoughtful reviews combining the literary with the scientific. You can find her other reviews here. But she is also a talented novelist herself. Here’s what Elizabeth Baines, author of The Birth Machine and Too Many Magpies, says about Jafry’s autobiographical novel, The State of Me:

“The amazing feat of this novel is to give one a physical sense of the pain and frustration of this condition [myalgic encephalomyelitis (ME)], and yet to be bouncing with life, the inner life in the irrepressible psyche of Helen.”

I think this talent is displayed in the following review. It should encourage you to get a copy of Jafry’s novel [get the Kindle edition here] with the money you might save by not buying Suzanne O’Sullivan’s dreadful book. I would say that both books are fiction based on experience, except I am less confident that Suzanne O’Sullivan’s book is based on experience rather than preconception.

Nasim Marie Jafry reviews It’s All in Your Head: True Stories of Imaginary Illness

I imagine the publisher was excited by Dr O’Sullivan’s ‘ideas’ – I saw the words ‘groundbreaking’ and ‘controversial’ in one of the blurbs. Imaginary illness carries notions of madness across the centuries, as readers we are intrigued – and seduced. However, having read in detail the chapter ‘Rachel’, which deals with a young woman with ‘ME/CFS’, I can say that the book is certainly not groundbreaking, but rather, in the case of ME, an irresponsible recycling of a dying – very dangerous – narrative which has been perpetuated by psychiatrists since the nineties. And having dipped into the other chapters, I’m afraid I find her style to be rather unengaging and toneless, though I wonder also if that is a kind of clinical constraint.

So her *ideas* must be sparkling and new if I am to be pulled in.

While vigorously suggesting that patients with myalgic encephalomyelitis (ME) have false illness beliefs, she then bases the entire chapter on her *own* beliefs. There is no evidence whatsoever to prove that ME is psychosomatic. There is however growing robust evidence that ME is a complex, multi-systemic neuroimmune illness, and the key to unlocking the puzzle is ever nearer – biomedical researchers worldwide are excited and hopeful about finding a unique biomarker. Dr O’Sullivan acknowledges that there is evidence of immune abnormalities but then chooses to ignore them completely and goes off on her wild somatisation spree. She seems not to *want* the science to progress, so zealous is she in her beliefs.

The whole chapter on ‘Rachel’ is manipulative and incoherent, illuminating only in what it omits. I know what the gaps are, so I can see the huge holes. She wrongly says that graded exercise (GET) is the most effective treatment, even although this treatment has been thoroughly discredited – it makes patients worse. This psychologising of ME is extremely harmful to patients, as patients and true specialists have been pointing out for years.

I have had virally-triggered ME since 1983 – I was nineteen years old, an undergraduate, unlucky to get a nasty enterovirus – and was diagnosed by a consultant neurologist, after EMG and muscle biopsy and many blood tests, which confirmed abnormalities. I had been ill for eighteen months at the time of diagnosis, steadily getting worse, and, of course, had never heard of ME then, few people had (I didn’t go upstairs to my room and google). My initial treatments included a plasma exchange with immunosuppression, and anti-viral drugs. And yet Dr O’Sullivan denies hotly in her book that immunotherapy is used for ME, anywhere. She also seems unaware of the anti-cancer drug trial going on in Norway just now. The scientists have recently been in London discussing their trial at an annual ME conference, which attracts scientists from all over the world.

She also fails to mention the huge confusion caused by the different criteria for ME – the CFS (chronic fatigue syndrome) label was introduced in the late eighties in the UK and the criteria for ME were widened and diluted, with the result that anyone with unexplained ‘chronic fatigue’ was being diagnosed with ME. This conflation of ‘classic ME’ and CFS has caused a major headache for patients (no pun intended). Patients who *do* have psychiatric-based fatiguing illness are sometimes being misdiagnosed with ME. The conflation has, naturally, caused immense problems with research; moreover, severely ill/bedridden patients with actual ME are not being included in trials.

O’Sullivan also makes no reference to post-exertional malaise (PEM), which is unique to ME – exhaustion (physical and mental) after trivial exertion – she talks only generally of ‘fatigue’. She ignores the disabling cognitive dysfunction – Rachel has some concentration problems but O’Sullivan does not describe the classic ME ‘brain fog’, which all of us with ME experience as a kind of ‘dementia’. We routinely forget everyday words, we mix words up, we forget people’s names, we cannot remember simple facts, we leave taps on. Neither does she mention orthostatic intolerance, the inability to be upright or stand for long, another cardinal feature. Indeed, many people with ME have full-blown POTS (postural tachycardia syndrome). She basically excludes all the symptoms of ME in her discussion, bar ‘fatigue’. She seems to think managing ME is managing fatigue, and Rachel ‘fails’ in her management. Naughty Rachel.

I honestly wonder if Dr O’Sullivan truly believes what she has written or if she needed to pad out her book as she didn’t have enough real psychosomatic illnesses for the pot. And she knows writing about ME as a psychiatric illness will be immediately controversial – even when she is wrong. Whatever her motive, she has failed spectacularly to keep up with the research and she has insulted not only ME patients but the whole scientific community engaged in ME research.

*I just want to add that this may be one of the most revealing passages in the ME/CFS chapter:

‘In my early years training in neurology I encountered many patients with CFS, but more recently neurologists have distanced themselves from this disorder and patients are more likely to seek help from immunologists or endocrinologists. I do not currently see patients for the purpose of diagnosing or treating ME/CFS, but many of my patients with dissociative seizures have a history of ME/CFS, and there is something very interesting in that fact alone.’

There is something very interesting in the fact that Suzanne does not seem to have actually met (m)any patients with classic Ramsay-ME (in the 1990s when she was training, the Wessely/CFS/psychiatry school was just taking root, so it’s hard to know what was actually wrong with the ‘CFS’ patients she was seeing).

I reiterate: Rachel, the case study with ME/CFS, is, to my mind, an artificial construct, a composite character with the ‘behaviours’ of ME patients – internet diagnosis, increasingly helpless, ‘over-helpful’ parents – that the Wessely school adores. Rachel rejects the psychiatric treatment offered her. We never find out what happens to her, though Suzanne says: ‘The impact of our emotional well-being on our health is not a trifling problem. I only wish I could convince Rachel of this’.

O’Sullivan also fails spectacularly to describe the experience of probably all of us with ME, of pushing ourselves to ‘recover’ only to relapse catastrophically. Her apparent lack of contact with patients who actually have ME – coupled with not following the science – would perhaps explain why she felt that including ME in a book of imaginary illnesses was acceptable.

Probing an untrustworthy Cochrane review of exercise for “chronic fatigue syndrome”

Updated April 24, 2016, 9:21 AM US Eastern daylight time: An earlier version of this post had mashed together discussion of the end-of-treatment analyses with the follow-up analyses. That has now been fixed. The implications are even more serious for the credibility of this Cochrane review.

From my work in progress:


My ongoing investigation so far has revealed that a 2016 Cochrane review misrepresents how the review was done and what was found in key meta-analyses. These problems are related to an undeclared conflict of interest.

The first author and spokesperson for the review, Lillebeth Larun, is also the first author on the protocol for a Cochrane review that has not yet been published.

Larun L, Odgaard-Jensen J, Brurberg KG, Chalder T, Dybwad M, Moss-Morris RE, Sharpe M, Wallman K, Wearden A, White PD, Glasziou PP. Exercise therapy for chronic fatigue syndrome (individual patient data) (Protocol). Cochrane Database of Systematic Reviews 2014, Issue 4. Art. No.: CD011040.

At a meeting organized and financed by PACE investigator Peter White, Larun obtained privileged access to data that the PACE investigators have spent tens of thousands of pounds to keep most of us from viewing. Larun used this information to legitimize outcome switching, or p-hacking, favorable to the PACE investigators’ interests. The Cochrane review misled readers in presenting how some analyses crucial to its conclusions were conducted.

One of the crucial functions of Cochrane reviews is to protect policymakers, clinicians, researchers, and patients from the questionable research practices utilized by trial investigators to promote a particular interpretation of their results. This Cochrane review fails miserably in this respect. The Cochrane is complicit in endorsing the PACE investigators’ misinterpretation of their findings.

A number of remedies should be implemented. The first could be for Cochrane Editor in Chief and Deputy Chief Director Dr. David Tovey to call publicly for the release, for independent reanalysis, of the PACE trial data from the original Lancet outcomes paper and the follow-up data reported in Lancet Psychiatry.

Given the breach in trust with the readership of Cochrane that has occurred, Dr. Tovey should announce that the individual patient-level data used in the ongoing review will be released for independent re-analysis.

Larun should be removed from the Cochrane review that is in progress. She should recuse herself from further comment on the 2016 review. Her misrepresentations and comments thus far have tarnished the Cochrane’s reputation for unbiased assessment and correction when mistakes are made.

An expression of concern should be posted for the 2016 review.

The 2016 Cochrane review of exercise for chronic fatigue syndrome:

 Larun L, Brurberg KG, Odgaard-Jensen J, Price JR. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev. 2016; CD003200.

It added only three studies that were not included in the 2004 Cochrane review of five studies:

Wearden AJ, Dowrick C, Chew-Graham C, Bentall RP, Morriss RK, Peters S, et al. Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial. BMJ 2010; 340 (1777):1–12. [DOI: 10.1136/bmj.c1777]

Hlavaty LE, Brown MM, Jason LA. The effect of homework compliance on treatment outcomes for participants with myalgic encephalomyelitis/chronic fatigue syndrome. Rehabilitation Psychology 2011;56(3):212–8.

White PD, Goldsmith KA, Johnson AL, Potts L, Walwyn R, DeCesare JC, et al. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. The Lancet 2011; 377:823–36.

This blog post concentrates on subanalyses that are crucial to the conclusions of the 2016 review, reported on pages 68 and 69 as Analyses 1.1 and 1.2.

I welcome others to extend this scrutiny to other analyses in the review, especially those for the SF-36 (parallel Analyses 1.5 and 1.6).

Analysis 1.1. Comparison 1 Exercise therapy versus treatment as usual, relaxation or flexibility, Outcome 1 Fatigue (end of treatment).

The only subanalysis that involves new studies includes the Wearden et al. FINE trial, the White et al. PACE trial, and an earlier study, Powell et al. The meta-analysis gives 27.2% weight to Wearden et al. and 62.9% weight to White et al., for a 90.1% weight to the pair.
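For readers unfamiliar with how such percentages arise, here is a minimal sketch of fixed-effect inverse-variance weighting, the standard scheme behind the weights in a Cochrane forest plot. The mean differences and standard errors below are placeholders invented for illustration, not the actual review data; the point is only that the trials with the smallest standard errors dominate the pooled estimate.

```python
# Sketch of fixed-effect inverse-variance weighting, the scheme behind
# the percentage weights in a Cochrane forest plot. All numbers below
# are hypothetical placeholders, NOT the actual review data.

trials = {
    "Powell 2001":  {"mean_diff": -4.0, "se": 1.20},  # hypothetical
    "Wearden 2010": {"mean_diff": -1.2, "se": 0.55},  # hypothetical
    "White 2011":   {"mean_diff": -3.2, "se": 0.36},  # hypothetical
}

# Each trial's weight is the inverse of its variance (1 / se^2),
# so the most precisely estimated trials dominate.
weights = {name: 1.0 / t["se"] ** 2 for name, t in trials.items()}
total = sum(weights.values())

pooled = sum(weights[n] * trials[n]["mean_diff"] for n in trials) / total

for name in trials:
    print(f"{name}: {weights[name] / total:.1%} of the pooled estimate")
print(f"Pooled mean difference: {pooled:.2f}")
```

With standard errors in roughly these proportions, two trials carry about nine-tenths of the weight, which is why any problem with the FINE and PACE outcomes contaminates the pooled result.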

 Inclusion of the Wearden et al FINE trial in the meta-analysis

The Cochrane review evaluates risk of bias for Wearden et al. on page 49:

[Screenshot: the review’s risk-of-bias entry for Wearden et al., ‘selective reporting’]

This is untrue.

Cochrane used a ‘Likert’ scoring method (0,1,2,3), but the original Wearden et al. paper reports using the…

11 item Chalder et al fatigue scale,19 where lower scores indicate better outcomes. Each item on the fatigue scale was scored dichotomously on a four point scale (0, 0, 1, or 1).

This would seem a trivial difference, but this outcome switching will take on increasing importance as we proceed.
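To make the difference concrete, here is a minimal sketch of the two scoring schemes (my own illustration, not code from any of the trials). Each of the 11 Chalder items has four response options; bimodal scoring collapses them to 0/0/1/1, while Likert scoring counts them as 0/1/2/3, so the same answer sheet yields two different totals.

```python
# Two ways of scoring the same 11-item Chalder fatigue questionnaire.
# Response options per item run from 0 ("less than usual") to
# 3 ("much more than usual"). Illustrative sketch only.

def score_bimodal(responses):
    """Bimodal scoring (0,0,1,1): an item counts only if endorsed.
    Total range 0-11."""
    return sum(1 for r in responses if r >= 2)

def score_likert(responses):
    """Likert scoring (0,1,2,3): each option keeps its value.
    Total range 0-33."""
    return sum(responses)

# A hypothetical answer sheet: every item answered "more than usual".
answers = [2] * 11
print(score_bimodal(answers))  # 11 -> maximal fatigue under bimodal scoring
print(score_likert(answers))   # 22 -> only middling under Likert scoring
```

The same answers can look maximally fatigued under one scheme and unremarkable under the other, which is why switching schemes mid-stream is consequential.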

Based on a tip from Robert Courtney, I found the first mention of a re-scoring of the Chalder fatigue scale in the Wearden study in a BMJ Rapid Response:

 Wearden AJ, Dowrick C, Chew-Graham C, Bentall RP, Morriss RK, Peters S, et al. Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial. BMJ, Rapid Response 27 May 2010.

The explanation that was offered for the re-scoring in the Rapid Response was:

Following Bart Stouten’s suggestion that scoring the Chalder fatigue scale (1) 0123 might more reliably demonstrate the effects of pragmatic rehabilitation, we recalculated our fatigue scale scores.

“Might more reliably demonstrate…”? Where I come from, we call this outcome switching, p-hacking, a questionable research practice, or simply cheating.

In the original reporting of the trial, effects of exercise were not significant at follow-up. With the rescoring of the Chalder fatigue scale, these results now become significant.

A physician who suffers from myalgic encephalomyelitis (ME) – what both the PACE investigators and the Cochrane review term “chronic fatigue syndrome” – sent me the following comment:

I have recently published a review of the PACE trial and follow-up articles and according to the Chalder Fatigue Questionnaire, when using the original bimodal scoring I only score 4 points, meaning I was not ill enough to enter the trial, despite being bedridden with severe ME. After changing the score in the middle of the trial to Likert scoring, the same answers mean I suddenly score the minimum number of 18 to be eligible for the trial yet that same score of 18 also meant that without receiving any treatment or any change to my medical situation I was also classed as recovered on the Chalder Fatigue Questionnaire, one of the two primary outcomes of the PACE trial.

So according to the PACE trial, despite being bedridden with severe ME, I was not ill enough to take part, ill enough to take part and recovered all 3 at the same time …

Yet according to Larun et al. there’s nothing wrong with the PACE trial.
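The arithmetic behind that paradox is easy to reproduce. The answer sheet below is hypothetical (not the physician’s actual responses), and the thresholds are those described in the comments quoted in this post: bimodal ≥ 6 to enter the trial under the original scoring, Likert ≥ 18 to enter under the revised scoring, and Likert ≤ 18 for the post-hoc “recovery” threshold on fatigue.

```python
# Reproducing the too-well / ill-enough / "recovered" paradox with a
# hypothetical answer sheet. Thresholds are as described in the quoted
# comments, not taken directly from any protocol document.

answers = [1, 1, 1, 1, 1, 1, 1, 2, 3, 3, 3]  # hypothetical responses

bimodal = sum(1 for r in answers if r >= 2)  # -> 4
likert = sum(answers)                        # -> 18

print("Too well to enter (bimodal < 6):      ", bimodal < 6)   # True
print("Ill enough to enter (Likert >= 18):   ", likert >= 18)  # True
print("'Recovered' on fatigue (Likert <= 18):", likert <= 18)  # True
```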

Inclusion of the White et al PACE trial in the meta-analysis

Results of the Wearden et al. FINE trial were available to the PACE investigators when they performed the controversial switching of outcomes for their trial. This should be taken into account in interpreting Larun’s defense of the PACE investigators in response to a comment from Tom Kindlon. She stated:

 You particularly mention the risk of bias in the PACE trial regarding not providing pre-specified outcomes however the trial did pre-specify the analysis of outcomes. The primary outcomes were the same as in the original protocol, although the scoring method of one was changed and the analysis of assessing efficacy also changed from the original protocol. These changes were made as part of the detailed statistical analysis plan (itself published in full), which had been promised in the original protocol. These changes were drawn up before the analysis commenced and before examining any outcome data. In other words they were pre-specified, so it is hard to understand how the changes contributed to any potential bias.

I think that what we have seen here so far gives us good reason to side with Tom Kindlon versus Lillebeth Larun on this point.

Also relevant is an excellent PubMed Commons comment by Sam Carter, Exploring changes to PACE trial outcome measures using anonymised data from the FINE trial. His observations about the Chalder fatigue questionnaire:

White et al wrote that “we changed the original bimodal scoring of the Chalder fatigue questionnaire (range 0–11) to Likert scoring to more sensitively test our hypotheses of effectiveness” (1). However, data from the FINE trial show that Likert and bimodal scores are often contradictory and thus call into question White et al’s assumption that Likert scoring is necessarily more sensitive than bimodal scoring.

For example, of the 33 FINE trial participants who met the post-hoc PACE trial recovery threshold for fatigue at week 20 (Likert CFQ score ≤ 18), 10 had a bimodal CFQ score ≥ 6 so would still be fatigued enough to enter the PACE trial and 16 had a bimodal CFQ score ≥ 4 which is the accepted definition of abnormal fatigue.

Therefore, for this cohort, if a person met the PACE trial post-hoc recovery threshold for fatigue at week 20 they had approximately a 50% chance of still having abnormal levels of fatigue and a 30% chance of being fatigued enough to enter the PACE trial.

A further problem with the Chalder fatigue questionnaire is illustrated by the observation that the bimodal score and Likert score of 10 participants moved in opposite directions at consecutive assessments i.e. one scoring system showed improvement whilst the other showed deterioration.

Moreover, it can be seen that some FINE trial participants were confused by the wording of the questionnaire itself. For example, a healthy person should have a Likert score of 11 out of 33, yet 17 participants recorded a Likert CFQ score of 10 or less at some point (i.e. they reported less fatigue than a healthy person), and 5 participants recorded a Likert CFQ score of 0.

The discordance between Likert and bimodal scores and the marked increase in those meeting post-hoc recovery thresholds suggest that White et al’s deviation from their protocol-specified analysis is likely to have profoundly affected the reported efficacy of the PACE trial interventions.
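Carter’s percentages follow directly from the counts he reports, as a quick check confirms:

```python
# Quick check of the proportions in Carter's comment: of the 33 FINE
# participants meeting the post-hoc PACE "recovery" threshold for
# fatigue (Likert <= 18), 16 still had bimodal >= 4 (abnormal fatigue)
# and 10 had bimodal >= 6 (fatigued enough to enter the PACE trial).

met_recovery_threshold = 33
still_abnormal = 16
pace_entry_level = 10

print(f"Still abnormally fatigued:     {still_abnormal / met_recovery_threshold:.0%}")  # ~48%
print(f"Fatigued enough to enter PACE: {pace_entry_level / met_recovery_threshold:.0%}")  # ~30%
```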

Compare White et al.’s “more sensitively test our hypotheses” to Wearden et al.’s “might more reliably demonstrate…” explanation for switching outcomes.

A correction is needed to this assessment of risk of bias in the review for the White et al PACE trial.

[Screenshot: the review’s risk-of-bias entry for the White et al. study]

A figure on page 68 shows results of a subanalysis with the switched outcomes at the end of treatment.

[Forest plot: Analysis 1.1, fatigue at end of treatment]

This meta-analysis concludes that exercise therapy produced an almost 3-point drop in fatigue on the rescored Chalder scale at the end of treatment.

Analysis 1.2. Comparison 1 Exercise therapy versus treatment as usual, relaxation or flexibility, Outcome 2 Fatigue (follow-up).

A table on page 69 shows results of a subanalysis with the switched outcomes at follow up:

[Forest plot: Analysis 1.2, fatigue at follow-up]

This meta-analysis depends entirely on the revised scoring of the Chalder fatigue scale and on the FINE and PACE trials. It suggests that the three-point drop in fatigue persists at follow-up.

But Cochrane should have stuck with the original primary outcomes specified in the original trial registrations. That would have been consistent with what the Cochrane usually does, what it says it did here, and what its readers expect.

Readers were not at the meeting that the PACE investigators financed and cannot get access to the data on which the Cochrane review depends. So they depend on Cochrane as a trusted source.

I am sure the results would be different if the expected and appropriate procedures had been followed. Cochrane should alert readers with an Expression of Concern until the record can be corrected or the review retracted.

 Now what?

Is it too much to ask that Cochrane get out of bed with the PACE investigators?

What would Bill Silverman say? Rather than speculate about someone whom neither Dr. Tovey nor I ever met, I ask Dr. Tovey: “What would Lisa Bero say?”

 

My response to an invitation to improve the Cochrane Collaboration by challenging its policies

I interpret a recent Cochrane Community Blog post as inviting me to continue criticizing the Collaboration’s conflict of interest in the evaluation of “chronic fatigue syndrome” with the intent of initiating further reflection on its practices and change.

Cochrane needs to

  • Clean up conflicts of interest in its systematic reviews.
  • Issue an Expression of Concern about a flawed and conflicted review of exercise for chronic fatigue syndrome.

I will leave for a future blog post the argument that Cochrane needs to take immediate steps to get the misnamed “chronic fatigue syndrome” out of its Common Mental Disorders group. The colloquialism throws together the highly prevalent primary-care complaint of tiredness with the less common, but more serious, myalgic encephalomyelitis, which is recognized by the rest of the world as a medical condition, not a mental disorder.

But I think this blog post calls attention to enough that needs to change now.

The invitation from the Cochrane Community Blog to criticize its policies

I had a great Skype conference with Dr. David Tovey, Cochrane Editor in Chief and Deputy Chief Director. I’m grateful for his reaching out and his generous giving of his time, including reading my blog posts ahead of time.

In the email setting up the conversation, Dr. Tovey stated that Cochrane has a tradition of encouraging debate and that he believes criticism helps them to improve. That is something he is very keen to foster.

Our conversation was leisurely and wide-ranging. Dr. Tovey lived up to the expectations established in his email. He said that he was finishing up a blog post in response to issues that I and others had raised. That blog post is now available here. It leads off with:

 I didn’t know Bill Silverman, so I can’t judge whether he would be “a-mouldering in his grave”. However, I recognise that James Coyne has set down a challenge to Cochrane to explain its approach to commercial and academic conflicts of interest and also to respond to criticisms made in relation to the appraisal of the much debated PACE study.

Dr. Tovey closed his blog post with:

 Cochrane is not complacent. We recognise that both we and the world we inhabit are imperfect and that there is a heavy responsibility on us to ensure that our reviews are credible if they are to be used to guide decision making. This means that we need to continue to be responsive and open to criticism, just as the instigators of the Bill Silverman prize intended, in order “to acknowledge explicitly the value of criticism of The Cochrane Collaboration, with a view to helping to improve its work.”

As a member of a group of authors who received the Bill Silverman Prize, I am interpreting Dr. Tovey’s statement as an invitation to improve the Cochrane Collaboration by instigating and sustaining a discussion of its handling of conflicts of interest in reviews of the misnamed “chronic fatigue syndrome.”

I don’t presume that Dr. Tovey will personally respond to all of my efforts. But I will engage him and hope that my criticisms and concerns will be forwarded to appropriate deliberative bodies and receive wider discussion within the Cochrane.

For instance, I will follow up on his specific suggestion by filing a formal complaint with the Funding Arbiters and Arbitration Panel concerning a review and protocol with Lillebeth Larun as first author.

 A flawed and conflicted Cochrane systematic review

 There are numerous issues that remain unresolved in a flawed and conflicted recent Cochrane systematic review:

 Larun L, Brurberg KG, Odgaard-Jensen J, Price JR. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev. 2016; CD003200.

As well as a protocol for a future review:

Larun L, Odgaard-Jensen J, Brurberg KG, Chalder T, Dybwad M, Moss-Morris RE, Sharpe M, Wallman K, Wearden A, White PD, Glasziou PP. Exercise therapy for chronic fatigue syndrome (individual patient data) (Protocol). Cochrane Database of Systematic Reviews 2014, Issue 4. Art. No.: CD011040.

I’m pleased that Dr. Tovey took a stand against the PACE investigators and Queen Mary University of London. He agreed that sharing patient-level data for a Cochrane review on which they were authors should not be used as an excuse to avoid sharing data with others.

 Another issue raised by Coyne has also been raised with me in personal correspondence: namely the perceived use of Cochrane as a rationale for withholding clinical trials data at the level of individual patients from other individuals and organisations. Cochrane is a strong supporter and founding member of the AllTrials initiative and is committed to clinical trials transparency. Cochrane does not believe that sharing data with its researchers is an appropriate rationale for withholding the data from alternative researchers. Each application must be judged independently on its merits. Cochrane has issued a public statement that details our position on access to trial data.

I hope that Dr. Tovey’s sentiment was formally communicated to the Tribunal deliberating the PACE investigators’ appeal of a decision by the UK Information Commissioner that the trial data must be released to someone who had requested it.

I also hope that Dr. Tovey and the Cochrane recognize the implications of the PACE investigators thus far being willing to share their data only when they have authorship, and therefore some control over the interpretation of their data. As Dr. Tovey notes, simply providing data does not meet the conditions for authorship:

 It is also important that all authors within a review team meet the requirements of the International Committee of Medical Journal Editors (ICMJE) in relation to authorship.

These requirements mean that all authors must approve the final version of the manuscript before it is submitted. This allows the PACE investigators to control the conclusions of the systematic review so that they support the promotion of cognitive behavior therapy and graded exercise therapy as the most evidence-supported treatments for chronic fatigue syndrome.

A favorable evaluation by the Cochrane will greatly increase the value of the PACE group’s consultations, including recommendations that disabled persons be denied benefits if they do not participate in these “best-evidence” interventions.

I’m pleased that Dr. David Tovey reiterated the Cochrane’s strong position on disclosures of conflict of interest being necessary but not sufficient to ensure the integrity of systematic reviews:

 Cochrane is still fairly unusual within the journal world in that it specifies that in some cases declaration of interests is necessary but insufficient, and that there are individuals or groups of researchers who are not permitted to proceed with a given systematic review.

Yet, I’m concerned that in considering the threat of disclosed and undisclosed conflicts of interest, Dr. Tovey and the Cochrane narrowly focus on Pharma and medical device manufacturers, to the exclusion of other financial ties, such as the large disability re-insurance industry:

 Within the 2014 policy it was made explicit that review authors could not be employed by pharmaceutical companies, device manufacturers or individuals that were seeking or holding a patent relevant to the intervention or a comparator product. Furthermore, in all cases, review author teams are required to have a majority of non-conflicted authors and the lead author should also be non-conflicted. The policy is available freely.

[The Cochrane apparently lacks an appreciation of the politics and conflicts of interest of the PACE trial. The trial has the unusual, if not unique, distinction of being a psychotherapy trial funded in part by the UK Department for Work and Pensions, which had a hand in its design. It’s no accident that the PACE investigators include paid consultants to the re-insurance industry. For more on this mess, see The Misleading Research at the Heart of Disability Cuts.]

It also doesn’t help that the PACE investigators routinely fail to declare conflicts of interest. They failed to disclose their conflicts of interest to patients being recruited for the study. They failed again, declaring no conflicts of interest in a protocol for another systematic review until they were caught.

Dr. Tovey goes on to state:

Authors of primary studies should not extract data from their own study or studies. Instead, another author(s) or an editor(s) should extract these data, and check the interpretation against the study report and any available study registration details or protocol.

The Larun et al. systematic review of graded exercise therapy violates this requirement. The meta-analyses forming the basis of this review are not reproducible from the published registrations, original protocols, and findings of the original studies.

Dr. Tovey is incorrect on one point:

 James Coyne states that Lillebeth Larun is employed by an insurance company, but I am unclear on what basis this is determined. Undeclared conflicts of interest are a challenge for all journals, but when they are brought to our attention, they need to be verified. In any case, within Cochrane it would be a matter for the Funding Arbiters and Arbitration Panel to determine whether this was a sufficiently direct conflict to disbar her from being first author of any update.

I can’t find anywhere that I have said that Lillebeth Larun is employed by an insurance company. But I did say that she has undeclared conflicts of interest.  These echo in her distorted judgments and defensive responses to criticisms of decisions made in the review that favor the PACE investigators’ vested interest.

Accepting Dr. Tovey’s suggestion, I’ll be elaborating my concerns in a formal complaint to Cochrane’s Funding Arbiters and Arbitration Panel. But here is a selection of what I previously said:

Larun dismisses the risk of bias associated with the investigators not sticking to the primary outcomes in their original protocol. She suggested deviations from these outcomes were specified before analyses commenced. However, this was an unblinded trial and the investigators could inspect incoming data. In fact, they actually sent out a newsletter to participants giving testimonials about the benefits of the trial while they were still recruiting patients. Think of it: if someone with ties to the pharmaceutical industry could peek at incoming data and then make changes to the designated outcomes, wouldn’t that be a high risk of bias? Of course.

Larun was responding to an excellent critique of the published review by Tom Kindlon, which you can find here.

Other serious problems with the review are hidden from the casual reader. In revising the primary outcomes specified in their original protocol, the PACE investigators had access to the publicly available data from the sister FINE trial (Wearden, 2010).

 Wearden AJ, Dowrick C, Chew-Graham C, Bentall RP, Morriss RK, Peters S, Riste L, Richardson G, Lovell K, Dunn G; Fatigue Intervention by Nurses Evaluation (FINE) trial writing group and the FINE trial group. Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial. BMJ. 2010 Apr 23;340:c1777. doi: 10.1136/bmj.c1777.

These data from the FINE trial clearly indicated that the existing definition of the primary outcomes in the PACE trial registration would likely not provide evidence of the efficacy of cognitive behavior therapy or graded exercise therapy. Not surprisingly, the PACE investigators revised their scoring of primary outcomes.

Moreover, the Larun et al review misrepresents how effect sizes for the FINE trial were calculated. The review wrongly claimed that only protocol-defined and published data or outcomes were used for analysis of the Wearden 2010 study.

Robert Courtney documents in a pending comment that the review relied on an alternative unpublished set of data. As Courtney points out, the differences are not trivial.

Yet, the risk of bias table in the review for the Wearden study states:

[Screenshot: the review’s risk-of-bias entry for the Wearden study, ‘selective reporting’]

Financial support for a meeting between Dr. Lillebeth Larun and PACE investigators

The statement of funding for the 2014 protocol indicates that Peter White financed meetings at Queen Mary University in 2013. If this were a Pharma-supported systematic review, wouldn’t Larun have to disclose a conflict of interest for attendance at a meeting sponsored by the PACE investigators?

Are these meetings the source of the acknowledgment in the 2016 systematic review?

We would like to thank Peter White and Paul Glasziou for advice and additional information provided. We would also like to thank Kathy Fulcher, Richard Bentall, Alison Wearden, Karen Wallman and Rona Moss-Morris for providing additional information from trials in which they were involved.

The declared conflicts of interest of the PACE investigators in The Lancet paper constitute a high risk of bias. I am familiar with this issue because our article, which won the Bill Silverman Prize, highlighted how authors’ conflicts of interest are associated with exaggerated estimates of efficacy. The award to us was premised on our article having elicited a change in Cochrane policy. My co-author Lisa Bero wrote an excellent follow-up editorial for Cochrane on this topic.

 This is a big deal and action is needed

Note that this 2016 systematic review considered only three new studies that were not included in the 2004 review. So, the misrepresentations and incorrect calculation of effect sizes for two of the added trials – PACE and FINE – are decisive.

As it stands, the Larun et al. Cochrane review is an unreliable summary of the literature concerning exercise for “chronic fatigue syndrome.” Policymakers, clinicians, and patients should be warned. It serves the interests of politicians and re-insurance companies – and the declared and undeclared interests of the PACE investigators.

I would recommend that Dr. Lillebeth Larun recuse herself from further commentary on the 2016 systematic review until complaints about her conflicts of interest and unreproducibility of the review are resolved. The Cochrane should also publish an Expression of Concern about the review, detailing the issues that have been identified here.

Stay tuned for a future blog post concerning the need to move “chronic fatigue syndrome” out of the Cochrane Common Mental Disorders group.

 

 

Needed: more informative and trustworthy abstracts. Recommendations for some simple reforms.

An analysis of an uninformative, seriously spun abstract chosen from PLOS One shows why we need guidelines for writing and interpreting abstracts.

  •  With so much to read, and so little time, readers need to be able to quickly screen abstracts and decide whether articles are worth putting further effort into retrieving and interpreting.
  •  More informative, trustworthy abstracts are crucial to facilitating this process.
  •  Journals should adopt, publicize, and enforce standards for writing abstracts.
  •  In the interim, authors can adopt basic standards and editors and reviewers can begin insisting on them.
[Cartoon: “abstract_expression”, from Neuroskeptic]

Personally – and I speak only for myself, not any official policies of PLOS One – I’m already applying the standards and desk rejecting manuscripts that don’t comply. My decision letters explain that further consideration of a manuscript is contingent on a revision providing a more informative abstract.

Casual readers benefit from more informative abstracts, but so does everybody else.

We can think of an abstract as part way down the funneling process from a reader encountering the title of an article to downloading the actual paper to eventually citing the article.

For instance, in conducting a systematic review, a large number of abstracts are typically screened in order to identify a much smaller number for more intensive review. Although there may be some spot checking of this process, the accuracy of an abstract can be decisive in determining whether an article is further examined for inclusion in a review.

Journalists often screen abstracts to choose the articles about which they will write a story. Hype and distortion in media coverage can be linked to exaggerations in an article’s abstract. It is unclear whether that is because journalists read only abstracts or because the transparency and completeness of an abstract reliably reveal the authors’ commitment to trustworthy reporting of their study.

Abstracts for clinical trials are increasingly accompanied by trial registration. PubMed now routinely provides trial registration information so that readers can compare the abstract to the trial registration without going to the actual article.

There is little evidence that trial registrations are routinely considered in evaluating manuscripts. The problem starts with reviewers and editors failing even to access the trial registration.

In this blog post, I will show how an uninformative abstract precluded an assessment that an article published in PLOS One did not deserve much further reading because of its serious limitations. I first compare the abstract to the trial registration and then delve into the article itself. We will soon see why a more informative abstract would have led to dismissing this article out of hand.

The article

The open access article, downloadable by anybody with access to the Internet, is:

Fancourt D, Perkins R, Ascenso S, Carvalho LA, Steptoe A, Williamon A (2016) Effects of Group Drumming Interventions on Anxiety, Depression, Social Resilience and Inflammatory Immune Response among Mental Health Service Users. PLOS ONE 11(3): e0151136. doi:10.1371/journal.pone.0151136.

Abstract

Growing numbers of mental health organizations are developing community music-making interventions for service users; however, to date there has been little research into their efficacy or mechanisms of effect. This study was an exploratory examination of whether 10 weeks of group drumming could improve depression, anxiety and social resilience among service users compared with a non-music control group (with participants allocated to group by geographical location.) Significant improvements were found in the drumming group but not the control group: by week 6 there were decreases in depression (-2.14 SE 0.50 CI -3.16 to -1.11) and increases in social resilience (7.69 SE 2.00 CI 3.60 to 11.78), and by week 10 these had further improved (depression: -3.41 SE 0.62 CI -4.68 to -2.15; social resilience: 10.59 SE 1.78 CI 6.94 to 14.24) alongside significant improvements in anxiety (-2.21 SE 0.50 CI -3.24 to -1.19) and mental wellbeing (6.14 SE 0.92 CI 4.25 to 8.04). All significant changes were maintained at 3 months follow-up. Furthermore, it is now recognised that many mental health conditions are characterised by underlying inflammatory immune responses. Consequently, participants in the drumming group also provided saliva samples to test for cortisol and the cytokines interleukin (IL) 4, IL6, IL17, tumour necrosis factor alpha (TNFα), and monocyte chemoattractant protein (MCP) 1. Across the 10 weeks there was a shift away from a pro-inflammatory towards an anti-inflammatory immune profile. Consequently, this study demonstrates the psychological benefits of group drumming and also suggests underlying biological effects, supporting its therapeutic potential for mental health.

Trial registration for Creative Practice as Mutual Recovery: ClinicalTrials.gov NCT01906892

The primary outcome measure is designated as the Warwick-Edinburgh Mental Well-being Scale.

Secondary outcome measures are both psychological and biological. The psychological measures are Secker’s measure of social inclusion, the Connor-Davidson Resilience Scale (CD-RISC), and the Anxiety and Depression subscales of the Hospital Anxiety and Depression Scale (HADS). Biological secondary outcome measures include saliva levels of cortisol, immunoglobulin, and interleukins including IL6, as well as blood pressure and heart rate.

Comment and integration

We immediately see evidence of outcome switching. The Mental Well-being Scale is not mentioned in the abstract as the primary outcome. Instead, the Depression subscale of the Hospital Anxiety and Depression Scale (HADS) has been elevated to a primary outcome. Among the primary and secondary outcomes designated in the trial registration, only the HADS depression subscale and the Connor-Davidson Resilience Scale are mentioned.

A battery of cortisol and immunological measures derived from saliva is mentioned in the abstract, but there is hand-waving rather than presentation of the actual results, only the claim of “a shift away from a pro-inflammatory towards an anti-inflammatory immune profile.”

There is no mention of blood pressure or heart rate in the abstract.

What I wanted to see in the abstract, but did not.

A careful reader can figure out that this study was not a randomized trial. Rather, participants were recruited for a drumming group if they lived close enough to attend a group. A control group was somehow constructed from participants who lived further away. So, this is not a randomized trial, and there is a lack of equivalence or comparability between the intervention and control groups. This likely precludes meaningful, generalizable comparisons.

The nature of the study design certainly needs to be taken into account in interpreting the resulting data. I would have required an explicit statement in the abstract that this is a nonrandomized trial.

Checking with the trial registration, it appears the design was a compromise from what had originally been planned. Whenever I see that a study design has been compromised, I look more carefully for other ways in which compromises may have introduced bias into the results that are reported.

But I also want to know how many participants were recruited and how many were retained for the analyses. Selective retention in a control group that derives no benefit from participating in the study is another source of bias. More generally, I want to know whether all participants were retained for analysis, as well as the adequacy of the sample size, to determine if the study is so underpowered that I can drop it from further consideration.

Enter the CONSORT Elaboration for Abstracts

The well-known Consolidated Standards of Reporting Trials (CONSORT) checklist is required by many journals when submitting a manuscript reporting a clinical trial. But there is also a checklist for abstracts that has not received nearly as much attention. It can be readily modified to cover other designs for intervention studies if an item is added requiring specific designation of the design of the study the paper reports.

[Table: CONSORT for Abstracts checklist]

Comparison to the body of the article

The Design and Participants section indicates:

Control participants were recruited through the same channels in South and North London. To minimise potential bias, they met the same inclusion criteria but were not within the vicinity of the group drumming sessions to take part. The groups were matched for age, sex, ethnicity and employment status, within the constraints of our recruitment channels…

Drumming participants were not blinded as to which study condition they were in. Control participants were told they were participating in a study about music and mental health but were not aware that they could have had access to a drumming group had they lived in West London. Staff collecting saliva samples were not blinded, but laboratory analysis was blind.

This is a weak design with multiple risks of bias. There is no randomization and no blinding of intervention and control participants, who were recruited with different consent forms. It’s not at all clear that comparison with the nonequivalent control condition provides a basis for understanding the intervention condition.

Glancing at Table 1 presenting baseline differences, I noticed a substantial difference in depression scores between the intervention and control conditions, one that would cause problems for any reliance on depression as an outcome.

[Screenshot: Table 1, baseline differences between groups]
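To see why such a baseline imbalance is troublesome, here is a minimal simulation (hypothetical parameters, my own illustration): when a group starts out higher on a noisily measured outcome, regression to the mean alone produces an apparent improvement at retest, with no treatment at all.

```python
# Minimal simulation of regression to the mean: a group selected (or
# self-selected) for high baseline depression scores shows an apparent
# "improvement" at follow-up even with no intervention. All parameters
# are hypothetical and chosen only for illustration.

import random

random.seed(1)

def observed(true_score):
    """One noisy measurement of an underlying true score."""
    return true_score + random.gauss(0, 2.0)

changes = []
for _ in range(100_000):
    true = random.gauss(8.0, 2.0)   # stable underlying depression level
    baseline = observed(true)
    if baseline > 10:               # the arm that starts out more depressed
        followup = observed(true)   # NO treatment applied
        changes.append(followup - baseline)

print(f"n = {len(changes)}, mean change = {sum(changes) / len(changes):.2f}")
# Prints a clearly negative mean change (apparent improvement) despite
# nothing having been done to these simulated participants.
```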

Nonetheless, I next learned that this particular measure was used for calculating sample size:

Sample size was calculated using data from the previous six-week study with the primary endpoint of depression (HADSD) which showed an effect size of f = 0.6 [6]. Using this effect size, an a priori sample size calculation using G*Power 3.1 for a between-factors ANOVA with an alpha of 0.05, power of 0.9 and assuming two-sided tests and a correlation of 0.8 among repeated measures (2 groups, 3 timepoints) was made which showed that an overall total of 28 participants would be required (14 per group). For the control group, to allow for drop-outs of 30% (estimated based on the six-week study), 20 participants were targeted for recruitment. For the experimental group, because of the range of biological markers being tested, we decided to match sample size with our preliminary study [6], and so 39 participants were initially recruited. Recruitment was continued until these targets had been reached before being closed one week before the drumming started. Following drop-outs, 15 control and 30 drumming participants remained.

[It is beyond the focus of the present blog post, but small feasibility studies should not be used to provide effect sizes for determining the sample size of a larger, better-resourced study. If there are significant findings at all in the smaller study, they are likely to be an exaggeration of what will be found in the larger study.]
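To make that point concrete, the quoted calculation can be roughly approximated in statsmodels. Caveats: FTestAnovaPower covers a one-way between-groups ANOVA and ignores the repeated-measures correlation of 0.8 used in the paper’s G*Power run, so the totals here will not match the quoted n = 28; the sketch is only meant to show how sensitive the required sample size is to the assumed effect size.

```python
# Approximate sensitivity check on the quoted sample-size calculation.
# Caveat: this one-way between-groups ANOVA power analysis ignores the
# repeated-measures correlation used in the paper's G*Power run, so it
# will not reproduce the quoted n = 28 exactly.

from statsmodels.stats.power import FTestAnovaPower

power_calc = FTestAnovaPower()

# f = 0.6 is the feasibility-study estimate quoted in the paper;
# f = 0.3 is what the calculation looks like if that estimate is
# inflated by a factor of two, as small-study effects often are.
for f in (0.6, 0.3):
    n_total = power_calc.solve_power(effect_size=f, alpha=0.05,
                                     power=0.9, k_groups=2)
    print(f"Cohen's f = {f}: total N required ~ {n_total:.0f}")

# Required N scales with 1/f^2, so halving the effect size roughly
# quadruples the sample size the trial would have needed.
```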

Enough is enough. I am accumulating information here that would’ve been sufficient for me to drop this article from further consideration if it had appeared in the abstract. Namely, this is a nonrandomized trial with different recruitment procedures for intervention and control participants. If that were not enough, there are only 15 control patients retained for analysis. I would not expect robust and generalizable conclusions.

Results

I’m not sure I could have given an a priori prediction of what benefits, if any, a drumming group would provide for people accessing mental health services, but I wouldn’t expect any effects to be sufficiently strong to be detected in a trial with 15 control patients. Nonetheless…

The analysis of anxiety ratings showed a significant condition by time interaction (F2,84 = 3.63, p<0.05), with anxiety falling over the 10 weeks of drumming (mean -2.21, SE 0.50, CI -3.24 to -1.19) while remaining unchanged in the control condition (mean -0.33, SE 0.57, CI -1.55 to 0.88). Within-subjects contrasts showed that the time by group interaction did not reach significance at 6 weeks but did reach significance by 10 weeks (F1,42 = 5.357, p<0.05). The overall decrease in anxiety from baseline in the drumming group averaged 9% by week 6 and 20% by week 10. Fig 2A shows the within-subject change from baseline at weeks 6 and 10 in both the drumming and control conditions.

The analysis of depression ratings showed a similar pattern, with a significant condition by time interaction (F2,84 = 10.23, p<0.001). Depression fell over the 10 weeks of drumming (mean -3.41, SE 0.62, CI -4.68 to -2.15) while remaining unchanged in the control condition (mean 0.47, SE 0.52, CI -0.66 to 1.59). Within-subjects contrasts showed that the time by group interaction reached significance at 6 weeks (F1,42 = 10.038, p<0.01) and was seen even more strongly by 10 weeks (F1,42 = 17.048, p<0.001). The overall decrease in depression from baseline in the drumming group averaged 24% by week 6 and 38% by week 10. In the light of the baseline differences in depression ratings, we also analyzed change scores over time controlling for baseline levels; the difference between drumming and control conditions remained significant at week 10 (F1,41 = 5.035, p<0.05). Fig 2B shows the within-subject change from baseline at weeks 6 and 10 in both the drumming and control conditions.

So, for anxiety, which was not an original primary outcome, results reached the magical p<0.05 not at six weeks, but at 10. Results for depression at first seem more impressive – that is, until we recall the large group difference at the outset, which will not be overcome by any control for baseline characteristics. As for the originally designated primary outcome, well-being?

In the analysis of the wellbeing scores, the condition by time interaction was nearly significant (F2,82 = 2.91, p = 0.06), with changes in the drumming group (mean 6.14, SE 0.92, CI 4.25 to 8.04) but not the control group (mean 2.33, SE 1.56, CI -1.01 to 5.67). Within-subjects contrasts showed that the time by group interaction was not significant at 6 weeks but was significant by 10 weeks (F1,41 = 5.033, p<0.05). The improvement in the drumming group averaged 8% by week 6 and 16% by week 10. Fig 2D shows the within-subject change from baseline at weeks 6 and 10 in both the drumming and control conditions.

The results for biological variables are presented with a lot of selective reporting and hand-waving, but they are worth reading through to the dramatic dénouement of a nonsignificant decrease in cortisol, p=0.557. A real trend developing there, one that will need a bigger study to explore. [Sigh!]

Biological results

In order to explore the mechanisms of change in the drumming group, exploratory saliva samples were taken immediately before and after baseline and weeks 6 and 10 of drumming. Drumming significantly increased the anti-inflammatory cytokine IL4 (F2,34 = 3.830, p<0.05). Planned polynomial contrasts showed that there was a linear effect across time (F1,17 = 6.504, p<0.05), with a 9% increase in levels by week 6 and a 13% increase by week 10 (see Fig 3A). Alongside this, there was a significant change in levels of the pro-inflammatory chemokine MCP1 (F2,30 = 4.221, p<0.05), which polynomial contrasts revealed to be a quadratic effect, with a decrease of 10% by week 6 followed by a return to near baseline levels by week 10 (F1,15 = 10.793, p<0.01) (see Fig 3B). There was also a near-significant effect for IL17 (F2,30 = 2.502, p = 0.099), also shown to be a quadratic effect characterized by an initial increase of 13% followed by a small decrease returning to 6% above baseline levels (F1,15 = 4.301, p = 0.056) (see Fig 3C). No changes were found in levels of TNFα across the 10 weeks (F2,34 = 0.134, p = 0.875) nor IL6 (F2,34 = 0.808, p = 0.454), and although there was a decrease in cortisol across the 10 weeks, this was not significant (F2,42 = 0.593, p = 0.557).

Discussion

We are long beyond the days of browsing the scientific literature by going over to the library and taking a volume off the shelf. Authors and even journals that expect us to commit attention should realize that they, in turn, need to capture and retain our attention with informative abstracts that cultivate our sense of a trustworthy source.

The particular PLOS One article that I examined in detail was chosen largely at random. I’m confident that I could find numerous examples elsewhere and surely worse ones. Misleading abstracts are endemic, and an important part of promoting confused and deliberately misleading science. In this article, as I’m sure we would find elsewhere, the deviations from an accurate abstract conforming to a trial registration and a fair interpretation of the results were in the service of creating a confirmatory bias.

PLOS One prides itself on not requiring breakthrough or innovative research as a condition of publication, but it does insist on the transparent reporting of studies that, even if not perfect, are clear about their limitations. This abstract fails that test.

I think it would be great for PLOS One to lead the field forward by insisting on better abstracts and educating authors and reviewers accordingly.


In the standoff over release of the PACE PLOS One trial data, has the journal just blinked?

I just received (April 7, 2016) another communication from the Managing Editor of PLOS One reporting an emerging position on my gaining access to the PACE trial data, which was promised to be available as a condition of publishing in PLOS One.

The negotiations are not over. But is there a signal that PLOS One is prepared to render meaningless its distinctive commitment to full data availability and sharing?

Any retreat from full and unrestricted availability of data has implications for those of us who work for free as Academic Editors, those of us who submit our manuscripts to PLOS One rather than somewhere else because of its data sharing policy, and even the institutions that have been willing to subsidize hefty publication costs because of the distinctive claim that data from published articles will be available to all.

If PLOS One abandons or even waters down its commitment to data sharing, will we abandon PLOS One?

As I’ve stated before, despite being one of thousands of PLOS One Academic Editors, I have no special insight into or influence over the decision-making in this matter. Indeed, the PACE investigators have more insight into what is going on, not only because they are directly involved in the negotiations, but because PLOS One may be granting them extraordinary latitude to refuse or bureaucratize what had been my simple request for the data.

The e-mail

Dear Jim,

Thank you for the correspondence regarding the article by McCrone et al.

As you know we are actively following up on the request for data from the article. We sought advice from two editorial board members in order to establish what data we would expect the authors to share in the context of the current analysis, and we have followed up with the authors. The authors have raised concerns about the need to protect patient privacy as well as the specifications outlined in the consent obtained at the time of recruitment for the trial; our follow up with the authors is ongoing.

We plan to contact Queen Mary’s University of London to discuss our policy and position in relation to the sharing of data from this study.

I would like to take this opportunity to outline what we can and cannot engage in as part of our follow up as editors.

The policy that applies to the article indicates the following:

Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.

Availability of data and materials. PLOS is committed to ensuring the availability of data and materials that underpin any articles published in PLOS journals. PLOS’s ideal is to make all data relevant to a given article and all readily replaceable materials immediately available without restrictions (while not compromising confidentiality in the context of human-subject research).

The policy therefore outlines that data should be shared for the purposes of academic research and without compromising confidentiality in the context of human-subject research. Our goal is for authors to release as much data as possible, however the policy cannot supersede the requirements for ethical/data access oversight that may apply to the use of datasets involving human subjects.

It is not within our remit as editors to judge what the requirements for consent, patient privacy or data de-identification should be, such considerations are handled by institutional committees (such as ethics committees), which are equipped to assess and rule on such matters. In the context of our policy we will follow up on any and all requests for data underlying publications in the journal, but there may be situations where such follow up establishes that access to the data requires a process of evaluation by an ethics committee. If that is the case, our position is that the readers requesting the data should pursue such approval process in the context of a defined research proposal, in the same manner as the researchers who undertook the work described in the article did.

Commentary

In making this new statement available, I believe the scientific and larger communities have the right to know what is going on and should have the opportunity to comment. We are all stakeholders and although PLOS One does not apparently have a mechanism for soliciting such comments, I think we can be heard and influential.

Here are some preliminary observations concerning the communication from PLOS One:

It’s reassuring that the journal reaffirms its commitment to making all data relevant to the article available, but it’s worrisome that PLOS One is legitimizing the PACE investigators’ previously expressed objections: namely, that sharing the data would be inconsistent with their purported commitment to protecting privacy and to honoring the conditions laid out in obtaining patients’ consent to participate in the trial.

In a decision last winter, the UK Information Commissioner (IC) rejected arguments from the PACE investigators that fears of breaching confidentiality, because of a supposed inability to anonymize the data, precluded making the data available. As we now know, anonymization can be accomplished with readily available technologies.
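For readers wondering what "readily available technologies" look like in practice, here is a minimal de-identification sketch. It is purely illustrative – the column names are invented, and this is not the PACE team's procedure – but it shows the basic moves: drop direct identifiers, pseudonymize participant IDs with a salted hash, and coarsen quasi-identifiers such as age into bands.

```python
# Illustrative de-identification sketch (invented column names; not the PACE
# procedure): drop direct identifiers, pseudonymize IDs, and coarsen age.
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # stored separately from the released file

def pseudonymize(participant_id: str) -> str:
    """Replace a raw ID with a truncated salted hash."""
    return hashlib.sha256((SALT + participant_id).encode()).hexdigest()[:12]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=["name", "postcode", "date_of_birth"], errors="ignore")
    out["participant_id"] = out["participant_id"].astype(str).map(pseudonymize)
    out["age_band"] = pd.cut(out.pop("age"), bins=range(15, 75, 5))  # 5-year bands
    return out

# Usage: deidentify(pd.read_csv("trial_data.csv")).to_csv("shareable.csv", index=False)
```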

The recruitment and consent materials that I have reviewed do not mention any commitment to restricting access to anonymized data. Indeed, the Medical Research Council (MRC) funded a sister trial to PACE for which data have already been made readily available. I would be curious to know whether there are any relevant differences in the recruitment and consent materials between the two trials.

Furthermore, in recent arguments for rejection of a request for data from a PhD physicist, the PACE investigators did not even mention anonymizing or any concern for keeping the promises provided in obtaining patient consent as reasons for denying her the data. A letter from Peter White to the Wall Street Journal acknowledged the PACE trial data can be effectively anonymized.

Finally, the PACE investigators have disclosed that they have already shared the data with researchers outside their group.

It’s not clear whether the requirement that an ethics committee review requests for data opens new loopholes for withholding it. I’m concerned about a non sequitur being introduced here: that the application to the committee has to involve a pre-specified research plan.

Where did that requirement come from? The PACE investigators notoriously changed their endpoints from their protocol. This raises curiosity not only about what results would’ve been obtained with the original endpoints, but also about what may be uncovered by more exploratory, even forensic, analyses probing what might have led to the decision and what the PIs might be avoiding.

Having the data from published papers available allows independently checking the basis for the original investigators’ claims. This becomes particularly important – as in the case of the PACE trial – when investigators have conflicts of interest and when the conflicts of interest were not disclosed to the study participants.

However, another widely accepted rationale for data sharing is that clinical trials are expensive and impose considerable burdens on patients who participate in them. The best use of the public funds that supported trials and of the patients’ contribution is to make data available for exploring research questions that were not anticipated, without the added expense and patient burden of conducting a whole new trial. I anticipate some anomalies being uncovered that would best be examined in analyses explicitly labeled and interpreted as exploratory.

It’s a relief that the PLOS One administration seems to be avoiding any endorsement of the kinds of loyalty or character tests that were so pivotal in refusals to release the data to date. The PACE investigators, after all, turned my request for the data into a matter falling under the Freedom of Information Act and rejected the request because of my alleged vexatiousness and impure motives in wanting the data.

I’m sure that many of you are disappointed that, after all this time, an email from the Managing Editor of PLOS One did not simply provide instructions for how I and others could access the PACE trial data.

I had also assumed that my request would yield circumstances by which everyone could access the data. It now appears that anyone wanting to access the data needs to anticipate long delays and a frustrating, potentially uncertain bureaucratic process.

Am I missing anything in this communication?

We need to keep in mind that we always have the option of demanding a retraction of the PLOS One article if the investigators set unreasonable conditions for independently scrutinizing their dubious claims.

And then there is the much-anticipated final decision on April 22, 2016 concerning Queen Mary University of London’s appeal of an Information Commissioner decision requiring release of the PACE trial data reported in The Lancet.

Innocent and cynical emotional appeals for suicide prevention programs

The inevitable wastefulness of suicide prevention programs –

  • The innocent and the cynical make emotional appeals for suicide prevention programs.
  • Open-minded citizen-scientists ask for simple numbers to do some calculations. Obtaining these numbers, they come to inevitable conclusions.
  • This blog post shows you how to analyze emotional appeals for suicide prevention programs, whether they are innocent or cynical.

A recent article in The Lancet Psychiatry

Khalifeh H, Hunt IM, Appleby L, Howard LM. Suicide in perinatal and non-perinatal women in contact with psychiatric services: 15 year findings from a UK national inquiry. The Lancet Psychiatry. 2016 Jan 16.

was reviewed in the Mental Elf.

The blog post ended with a non sequitur:

Routine assertive follow-up should be provided to all women in the perinatal period to address the risk of suicide [emphasis in original].

The blog post received a number of tweets, including:

“This really needs to change urgently. Suicide during the perinatal period.”

And:

“Suicide during the perinatal period… This is exactly why we need better services”

I tweeted:

no evidence

I got an expected response:

“… Are you suggesting some suicides okay?”

If you bring up evidence when you’re dealing with an emotional, politically charged topic, you at least get misunderstood. Maybe you get maligned and have to defend yourself.

A look at the numbers

The study integrated data on all suspected suicides by women aged 16–50 in the UK between 1997 and 2012, with information on whether or not the woman had been in contact with psychiatric services in the 12 months preceding her death.

Reducing the sample of 4785 women to those in the perinatal period yielded 80 women who died by suicide during the first year after the birth of a child and 18 who died by suicide during pregnancy.

So, we are dealing with approximately five women per year in the entire UK dying by suicide in the first year postpartum, and roughly one per year dying by suicide during pregnancy.
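The arithmetic, with exact Poisson confidence intervals to underline just how unstable such small counts are (a sketch of my own, using the inquiry's 16-year window):

```python
# Annual suicide rates implied by the inquiry counts, with exact (Garwood)
# Poisson confidence intervals showing the imprecision of such small numbers.
from scipy.stats import chi2

YEARS = 2012 - 1997 + 1  # the 1997-2012 inquiry window

def poisson_ci(k, alpha=0.05):
    """Exact two-sided confidence interval for a Poisson count k."""
    lower = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

for label, deaths in [("during pregnancy", 18), ("first year postpartum", 80)]:
    lo, hi = poisson_ci(deaths)
    print(f"{label}: {deaths / YEARS:.1f} per year "
          f"(95% CI {lo / YEARS:.1f} to {hi / YEARS:.1f})")
```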

Statistical analyses with such small numbers should not be taken too seriously, but the blog post and the target paper report that women in the perinatal period who died by suicide were – compared with women dying by suicide at other times – less likely ever to have been admitted to psychiatric facilities, less likely to be prescribed any psychiatric medication at the time of death, and less likely to be receiving psychotherapy. They were more likely to have a diagnosis of depression and an illness duration of less than a year.

With such small numbers of events – deaths by suicide – it makes much more sense to look at the circumstances of the few suicides than to try to speculate from multivariate statistics. But let’s go with what the article and blog present.

Women who died by suicide during pregnancy or in the year afterwards were less likely to have been admitted to inpatient units and less likely to be receiving treatment at the end of their lives. We don’t know whether the women had made emergency room contacts as a result of attempted suicide, which is quite relevant. But it seems that some of the women were only just coming into contact with the mental health system. Were they treated in primary care beforehand and referred onward only when they became suicidal? We don’t know, but we need to know.

But more importantly, we’re talking about a single death per year in all of the UK. Any loss of life is tragic, but are we prepared to reorganize services on the off chance that we could prevent this single death? In a time of scarce resources, what would be the trade-off? My understanding is that in the UK, referrals for psychotherapy can take longer to be completed than a pregnancy, particularly if the depression is detected in the third trimester.

Multivariate analyses with such small numbers are even more dubious. Nonetheless, when controls were introduced for age, ethnicity, marital status, and employment status, only a diagnosis of depression and an illness duration of less than a year remained significant.
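To make the point concrete, here is a simulation of my own – entirely made-up data, not the inquiry's. Fit an adjusted logistic regression with roughly 18 events and four covariates that truly have no association with the outcome, and the "adjusted odds ratios" wobble from sample to sample, with wide confidence intervals:

```python
# Simulated illustration (not the inquiry data): adjusted odds ratios
# estimated from ~18 events are imprecise even when nothing is going on.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, n_events, n_covariates = 4785, 18, 4

for rep in range(3):
    # four binary covariates, ~30% prevalence, no true link to the outcome
    X = sm.add_constant((rng.random((n, n_covariates)) < 0.3).astype(float))
    y = np.zeros(n)
    y[rng.choice(n, size=n_events, replace=False)] = 1.0
    fit = sm.Logit(y, X).fit(disp=0)
    ors = np.exp(fit.params[1:])
    cis = np.exp(fit.conf_int()[1:])
    print(f"rep {rep}:",
          ["{:.2f} ({:.2f}-{:.2f})".format(o, lo, hi)
           for o, (lo, hi) in zip(ors, cis)])
```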

Again, we’re really trying to make sense of unreliable numbers. But could it be that the few deaths by suicide among pregnant women and women who had recently given birth involved unwanted pregnancies? Were there issues associated with partner or family reactions to the unwanted pregnancy contributing to a wish to die?

Would the system be better off focusing on how unwanted pregnancies are dealt with than with creating specialty programs to prevent suicide among these women?

Effectively dealing with an unwanted pregnancy could be a benefit in itself, aside from whether it reduced the single death by suicide among pregnant women per year in the UK or the five in the first year postpartum.

Having been a scientific advisor to over a decade of futile suicide prevention programs, I’m convinced that it is better to create programs that are justified by sufficient evidence of meeting goals other than reducing suicide. We might hope that they reduce the infrequent event of death by suicide, but we cannot reasonably expect to demonstrate that.

More complex multivariate analyses take us far into the realm of voodoo statistics. Nonetheless, when additional controls were introduced, such as depression diagnosis, alcohol misuse, drug issues, personality disorder, and history of psychiatric admissions, no risk factors remained significant.

The naïve and innocent

Another Mental Elf post discussed the article:

Bolton JM, Gunnell D, Turecki G. Suicide risk assessment and intervention in people with mental illness. The BMJ 2015;351:h4978

The abstract of the article is:

Suicide is the 15th most common cause of death worldwide. Although relatively uncommon in the general population, suicide rates are much higher in people with mental health problems. Clinicians often have to assess and manage suicide risk. Risk assessment is challenging for several reasons, not least because conventional approaches to risk assessment rely on patient self reporting and suicidal patients may wish to conceal their plans. Accurate methods of predicting suicide therefore remain elusive and are actively being studied. Novel approaches to risk assessment have shown promise, including empirically derived tools and implicit association tests. Service provision for suicidal patients is often substandard, particularly at times of highest need, such as after discharge from hospital or the emergency department. Although several drug based and psychotherapy based treatments exist, the best approaches to reducing the risk of suicide are still unclear. Some of the most compelling evidence supports long established treatments such as lithium and cognitive behavioral therapy. Emerging options include ketamine and internet based psychotherapies. This review summarizes the current science in suicide risk assessment and provides an overview of the interventions shown to reduce the risk of suicide, with a focus on the clinical management of people with mental disorders.

I agree that the most compelling evidence is for lithium in the case of bipolar disorder and cognitive behavior therapy in the case of depression, but the evidence is already sufficient to justify providing those treatments for bipolar disorder and depression in their own right. Any reduction in suicide is a plus.

The blog post tries to spin a hopeful picture:

Emerging methods of supporting a clinical assessment (e.g. the Implicit Association Test), which have an empirical basis, and are interview independent, may provide an important new avenue for supplementing clinical suicide risk assessment. Whilst the review may paint a somewhat bleak picture of the ‘state of the art’ of suicide risk assessment, this is not a message of hopelessness, but rather a clarion call to action for further research into how we can refine risk assessment and intervention development.

I really don’t think the Implicit Association Test is going to get around the problem of the many persons who never express an intention to die by suicide while in contact with mental health or other professionals. I seriously doubt whether it has been validated as a predictor of suicide itself, rather than of a surrogate endpoint like suicidal ideation.

The blog post ends with a realistic appraisal of the difficulty predicting suicide but slides into another emotional non sequitur:

Given the fluctuating nature of suicide risk and the fact that it is a rare event which cannot accurately be predicted, the only safe response is to take all suicidal thoughts seriously and respond appropriately. NICE advocate a needs and assets-based approach after self-harm rather than focusing on risk. We welcome the day when everyone at risk of suicide is responded to with compassion, confidence, and competence, and have a co-created safety plan to ensure their safety.


The Mental Elf blog post ended with an emotional plea. But what does this mean and what action does it call for? How practical is it? These questions need to be asked.

The cynical and opportunistic

The Mental Elf blog post claims that women who discontinue antidepressant medication during pregnancy have a significantly elevated risk of relapse of depression compared with those who continue, citing this article:

Cohen LS, Altshuler LL, Harlow BL, Nonacs R, Newport DJ, Viguera AC, Suri R, Burt VK, Hendrick V, Reminick AM, Loughead A. Relapse of major depression during pregnancy in women who maintain or discontinue antidepressant treatment. JAMA. 2006 Feb 1;295(5):499-507.

The Cohen article created a scandal and an embarrassment for JAMA. The lead author had undisclosed conflicts of interest – ties to the pharmaceutical industry – and oddly biased, unrepresentative sampling exaggerated the risk observed in the study. The sample was certainly not representative of the typical woman on antidepressants when she discovers she is pregnant.

JAMA was so embarrassed by being snookered by this and other articles by psychiatrists with undeclared conflicts of interest that, in the aftermath, an editor declined our proposal to write a systematic review of screening for depression during the perinatal period. This was confirmed in a number of phone calls with the editor. Our article had to go elsewhere, landing in a more obscure place, after evaluation by reviewers who seemed to have no familiarity with depression in the perinatal period:

Thombs BD, Arthurs E, Coronado-Montoya S, Roseman M, Delisle VC, Leavens A, Levis B, Azoulay L, Smith C, Ciofani L, Coyne JC. Depression screening and patient outcomes in pregnancy or postpartum: a systematic review. Journal of Psychosomatic Research. 2014 Jun 30;76(6):433-46.

The issue of whether a pregnant woman should initiate or continue antidepressants is a complex, personal decision about risks versus benefits, one that takes other risks into account, including a past history of depression and suicide attempts. Such a decision deserves an informed discussion with a primary care physician, though physicians are seldom willing or prepared to offer the time.

The naïve and the cynical: David Cameron pledges money for a revolution in maternal mental health


David Cameron won’t be remembered for improving mental health services. But he scores a lot of points by proposing the creation of specialized services – without really addressing the serious crisis in the routine provision of mental health services to all: inaccessibility and long delays before actual receipt of services.

The prime minister will pledge on Monday to end the postcode lottery under which three-quarters of the 40,000 women a year who experience conditions such as postnatal depression do not receive vital treatment intended to keep families together, protect babies and reduce the risk of maternal suicide.

I think what we have discussed above should raise skepticism about whether the specialized services will actually reduce maternal death by suicide.

Pledging a huge expansion of services to tackle the huge unmet need in maternal mental health, Simon Stevens, chief executive of NHS England, told the Guardian: “At the moment about 40,000 women who are pregnant or within the first year of having their baby have a severe mental health problem. But of those 40,000 only about 10,000 are at the moment getting access to specialist perinatal mental health services. Three out of four are missing out. But by the end of the decade we are going to make that a universal offer, so all 40,000 will get access to a local specialist team.”

A tug at our heartstrings but let’s watch our wallets.

If we’re talking about “severe mental health problems,” then we’re mostly talking about pre-existing conditions, most of which have been treated within routine care. The big challenge is not letting receipt of those services get disrupted by pregnancy and care for an infant or restoring the connection if indeed these services are disrupted. Given the prevalence of “severe mental health problems” and the difficulties getting pregnant women and mothers of infants to their appointments, it would seem best to facilitate strengthening their connection with pre-existing care.

Guaranteeing care for every mother who needs it by 2020 will tackle what the Maternal Mental Health Alliance – a coalition of more than 60 organisations that work on the issue – claims are “shocking gaps” in “patchy” NHS maternal mental health services.

In his speech, Cameron will announce that the NHS will put £290m into creating new community perinatal mental health teams and more beds in mother-and-baby units. They help women battling post-traumatic stress disorder, postpartum psychosis and other serious similar conditions. There are about 120 in England but experts such as Andy Bell, deputy director of the Centre for Mental Health, say 60 more are needed.

Postpartum psychosis is dramatic and carries risk to the infant as well as the mother. But it is fortunately rare, and has different causal factors than more common major depression. A good predictor is a past psychotic experience.

Suicide is the second biggest cause of maternal death after sepsis, a violent immune reaction caused by serious infection. A recent major inquiry into such deaths found that mental health problems were involved in 23% of them and that one in seven is from suicide. More than 100 suicides occurred between 2009 and 2013, it found.

Here we go again: we need to consider absolute rates of death by suicide, not relative rates. Deaths by suicide are relatively uncommon during pregnancy and in the first year postpartum.

Dr Dan Poulter, the Conservative MP who was minister for maternity care until last May, said: “It’s frankly unacceptable that mental health is the second commonest cause of maternal death.”

Dr. Poulter, exactly how is it unacceptable, and do you really think that anything you do will make a difference? What exactly are you doing by declaring it “frankly unacceptable” other than looking good politically?

“Mental health has been the Cinderella of the NHS and perinatal mental health services haven’t historically received the investment attention they needed. If they’d had that, we’d have been able to identify more mums at risk and prevent many of these deaths from suicide.

Dr. Poulter, you are naïve or cynical or both.

In his highly political role as President of the Royal College, Simon Wessely is hardly in a position to counter emotion with evidence. Instead, he is evasive and vaguely goes along with the emotion:

Prof Sir Simon Wessely, President of the Royal College of Psychiatrists, said: “Suicide in anyone is a tragedy, but the impact on a new family is probably as bad as it gets, so extending quality mental health to all, and not just some, of those new mothers with serious depression or psychosis is clearly the right thing to do.”

Yep, probably as bad “as it gets,” and so developing any special services is “clearly the right thing to do.”

As I was uploading this blog post, I came across highly relevant criticism of David Cameron’s government for being out of touch with the evidence:

Questions the Government can’t answer on mental health as it fails to tackle growing crisis 


David Cameron’s government is failing to collect the vital data needed to help tackle Britain’s growing mental health care crisis, the Mirror reveals.

Bungling Tories admit they have no idea how many young parents have committed suicide in the UK – nor how many people diagnosed with mental illness end up in prison.

Labour will highlight 30 different mental health related issues on which the Tories were unable to provide any information in a hard-hitting #mentalhealthmatters campaign.

Shadow Minister for Mental Health Luciana Berger last night branded the situation “absolutely appalling.”

Andy Bell, deputy chief executive at the Centre For Mental Health said: “It is surprising what we don’t know.”

Suggestions for dealing with the inevitable future emotional appeals for suicide prevention.

  • Ask about the absolute numbers: how many lives would be saved if everything went as planned? (A back-of-the-envelope sketch follows this list.)
  • Ask about the likelihood of these lives actually being saved given the unpredictability and infrequency of death by suicide, no matter how tragic.
  • Ask about the specialized services that would remain unutilized or underutilized if they are devoted to dealing specifically with risk of suicide.
  • Ask about what mental health needs of persons at high risk of suicide would be better addressed within existing mental health services, emphasizing continuity of care, rather than setting up separate specialized services.
  • Ask what improvements to the existing inadequacy of mental health care would be postponed if the services were diverted on the basis of emotion, not evidence.
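Here is the kind of back-of-the-envelope calculation the first question calls for. Every input below is an assumption for illustration; substitute the numbers a program's advocates actually claim:

```python
# Back-of-the-envelope sketch of "lives saved if everything goes as planned."
# All inputs are illustrative assumptions, not established figures.
annual_deaths = 6.0    # ~1 during pregnancy + ~5 postpartum per year (UK inquiry data)
coverage = 0.75        # fraction of at-risk women the new service actually reaches
risk_reduction = 0.20  # optimistic relative effect of the service on suicide

lives_saved_per_year = annual_deaths * coverage * risk_reduction
print(f"Expected lives saved per year: {lives_saved_per_year:.1f}")  # about 0.9
# That ~0.9 is the figure to weigh against what the same money would buy
# in routine mental health care.
```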


Specifically for suicide during the peripartum –

  • Ask whether those who demand specialized programs seem aware that depression is no more common among pregnant and peripartum women than among other women of the same age, and that suicide is less common.
  • Ask whether addressing the resolution of unwanted pregnancies, including those due to rape or incest, would be a more efficient use of resources in terms of overall improvement in maternal well-being than committing the same resources to specialized services attempting to prevent infrequent suicides.

Why is the topic of unwanted pregnancies so seldom raised in discussions of perinatal suicide?


‘We don’t share data.’ Why Peter White’s Wall Street Journal letter can be ignored


A few posts back, I discussed how the push for sharing of data from the PACE chronic fatigue syndrome trial got a boost from an article in the Wall Street Journal (WSJ) by Amy Dockser Marcus.

Queen Mary University of London would not make the PACE investigators available to the journalist for comment. I’m sure they now think that this was a strategic error. An article in a major US newspaper has become an embarrassment for them.

Peter White has now responded with an ineffectual letter to the editor.

Those of us pushing for release of the data should not be concerned. In this blog post I’ll explain why.

Here is White’s complete letter:

[image: Peter White’s letter to the editor of the WSJ]

The features to notice are that the PACE investigators are “unable” to release their data to “the public” because they don’t have the “consent” of the patients. Peter White concedes the data can be anonymized, so that’s not the problem. Yet, we are told, the investigators’ withholding of the data is an ethical imperative. The investigators have already shared the data with other researchers, but they can’t do so with the public.

So, the public is left with the investigators’ claim that the PACE chronic fatigue trial data are the “best available,” with cognitive-behavioral therapy and graded-exercise therapy now having more evidence than any other treatment for chronic fatigue syndrome. We just have to trust the investigators.

I’ll get back to these claims, but first a bit of context.

 Dr. Anna Wood

A number of us were interviewed for the WSJ, but the article will be remembered for its portrayal of Dr. Anna Wood. She has a doctorate in physics and happens to be a patient. She requested a small portion of the PACE objective data. She wanted to determine whether the investigators’ claims, which were based on manipulating only subjective data, could be validated against objective data. Specifically, she wanted to see whether the subjective self-report data had any relationship at all to the more meaningful six-minute walk test.

I would gladly bet the PACE investigators a decent bottle of scotch that the treatments don’t register on the six-minute walk test. That would be truly embarrassing, because most scientists consider such objective data more meaningful. But I’m sure the PACE investigators know all this. They don’t want anybody to take a look, so there’s no way that Dr. Wood will get the data.

Dr. Wood was quoted as saying that she would look at the data with “an open mind, as a scientist.”

That’s not particularly extraordinary. Except that the PACE investigators under the direction of Simon Wessely and the Science Media Centre have worked hard to enforce the meme of angry, vexatious, and even dangerous patients who should not be trusted with the data.

Richard Horton, editor of The Lancet, was trotted out with his tired old warning for the WSJ. It is an exact quote of what he has said numerous times elsewhere.

 “When you see some things written on social media [about the ME/CFS trial and the investigators], it makes you anxious that the claims for rational scientific debate based on access to the data may not be fully delivered.”

[photo: Dr. Anna Sheridan Wood]

But this objection was neutralized by the quote from Dr. Wood and a photo portraying her as a pleasant, demure professional, not a raving loony. This is a PR problem for the PACE investigators.

With the PACE investigators having opted out, Dr. Stephan Lewandowsky was recruited as a foil.


In contrast, the photo provided by the University of Bristol of Lewandowsky was a lot less appealing. Posing with his hand on his hip in an ill-fitting dark suit, dark blue shirt, and Donald Trump tie, he looks like the bouncer from Tony Soprano’s strip club, Bada Bing.

Comparing photos should be irrelevant to whether we should get the PACE data released, but such images are a part of effective communication.

Really, coming in off the street and knowing nothing more, with which of these individuals would you side?

A cultural theorist would also point out an obvious additional feature disadvantaging the PACE investigators. The American presidential campaign has brought into the limelight rough men bullying women and treating them like objects. It is particularly bad timing for the PACE investigators to have Lewandowsky with his hand on his hip squaring off against a pleasant woman.

We shouldn’t be noticing this, but we do: The struggle for the PACE data has up until now largely been a struggle between domineering old white men versus patients, who are a varied group, but disproportionately, very ill women confined to bed with insufferable pain.

 Digression: what is a foil?

In fiction, a foil is a character who contrasts with another character (usually the protagonist) in order to highlight particular qualities of the other character… The word foil comes from the old practice of backing gems with foil in order to make them shine more brightly.

Think of the Harry Potter series. Consider Draco Malfoy as the foil to the Harry Potter character.

 Harry Potter and Draco Malfoy are similar enough…to make their differences meaningful. They are both Slytherin potentials, and reasonably good students. However, while Draco chose Slytherin, Harry chose Gryffindor. Harry is brave, and Malfoy is a coward and a tool.

 Back to Peter White’s letter to the editor

Left with only a brief comment in the wake of the WSJ article having already been published, Peter White wisely abandons, at least for now, any claim that he is dealing with vexatious patients. So, it’s a matter of keeping the data from the public. But the problem is that Dr. Wood is not just the public; she’s a scientist.

Hey, Peter, I’m a scientist too – and I’m still waiting for the data that you promised would be available as a condition of publishing in PLOS One.

Peter White then concedes that anonymizing the data is not a problem. We already know that anonymizing is readily done with a variety of available techniques. But for Peter White to concede it here is a big admission that he will regret later. I’ll make a point of reminding him of it repeatedly.

Peter White: Staunch champion of Patient Rights?

 Let’s look at that ‘respect for the patients who participated in the trial.’ Peter White had gross conflicts of interest in running the trial. He and the insurance companies for whom he worked would benefit from a particular spin on the results.

The Declaration of Helsinki, the international standard for recruiting patients into clinical trials and obtaining their informed consent, requires that patients be told of such conflicts of interest. But this information was kept from the patients participating in the PACE trial, so they couldn’t take it into account in deciding whether to participate. Not much respect for patient rights there.

So, Peter White claims he is ethically bound to respect patients, but when it is convenient, he gets unbound.

It gets worse. When Peter White and colleagues collected the data they used to write their papers, they didn’t anonymize it, nor did they store it in secure places; it could be matched up against the names of participants, and some of the data actually got stolen. So again, Peter White and colleagues violated the rights of patients. But here they are claiming that dedication to those rights prevents them from sharing the data.

 Are Peter White and the PACE investigators good at sharing data with colleagues?

Readers of the WSJ have no way of evaluating Peter White’s claim. He is referring to his co-authoring of a Cochrane review. He did not just share the data: he became an author, maintaining control over what would be said about it.

In doing so, he violates some fundamental checks and balances. Trial investigators, particularly those with financial conflicts of interest, should not be involved in what should be a disinterested and dispassionate integration of their data with others’ data in a systematic review. Peter White has created a major mess for Cochrane. We await how they will deal with this violation of their own standards.

The UK Information Commissioner has considered Peter White’s arguments and rejected them.

Most readers of the WSJ have no way of knowing that Peter White has already tried out his arguments about an ethical imperative before the UK Information Commissioner, under the Freedom of Information Act, and lost.

 Here is the full summary of the decision.

But let’s jump to page 11:

The University went on to explain that as the release of individual data would cause it to break its specific agreement with the patients who consented to participate in the study on that basis, this would erode trust and cause people to withdraw from any future new studies that it might plan to undertake.

And page 14:

 “… the information sought here remains the sensitive medical information of 640 patients, who although consented to participate in a research study, received medical treatment from medical practitioners subject to obligations of confidence and as QMUL has further noted, their participation in the study was subject to specific assurances of confidentiality, raising clear expectations, reasonably held, that such information would be kept private. A disclosure without consent of the patients’ medical data would seem to breach both data protection, the medical obligation of confidence and Article 8 ECHR.”

 On page 21 the Commissioner summarizes the questions that the testimony from the University raised. On page 22 the Commissioner concluded:

 In order for section 41 to apply it is necessary for all of the relevant  elements of the test of confidence to be satisfied. Therefore if one or more of the elements is not satisfied then section 41 will not apply. The Commissioner has explained, in relation to the application of section 40(2), why he does not consider it possible to reliably identify an individual as the subject of the withheld information from its contents or if it is linked with other material available to the general public. In such circumstances he does not consider that there can be an expectation of confidence or that disclosure would cause detriment by way of an invasion of privacy. Therefore it follows that there can be no breach of confidence to action and section 41 does not apply.

 So, where does Peter White’s letter to the editor leave us?

The PACE investigators are well practiced at fielding challenges with stock answers and saying nothing new. But I think in this letter Peter White has ceded some ground in acknowledging that the data can be anonymized. Maybe he didn’t embarrass himself by dragging in the old meme, but claiming he cannot release the data to “the public” will not serve him well.

I am a scientist, but so what? Patients are significant stakeholders affected by how the data are interpreted. By contemporary standards – as promoted by The BMJ – they should be involved in the design of clinical trials, participate in their interpretation, and have access to the data.

Not that it matters, but a number of the patients who have requested the data have demonstrated capacity to analyze and interpret data in peer-reviewed articles. One of them is awaiting an April 22 decision concerning Peter White’s appeal of the Commissioner’s earlier rejection of his arguments.

Over 100 days after I made my initial request, I’m still waiting for PLOS One to hand over the PACE data or retract the paper. I will settle for either. We already know that the PACE investigators are hiding something, or they would’ve already given me (and others) the data.

 Three bits of advice from Bertrand Russell to Peter White

Peter White doesn’t like to answer to anyone outside his circles of friends and family. However, if he were still around, the British philosopher, mathematician, historian, and social critic Bertrand Russell (May 18, 1872–February 2, 1970) could offer him some useful advice:

  • Do not think it worth while to proceed by concealing evidence, for the evidence is sure to come to light.
  • Do not use power to suppress opinions you think pernicious, for if you do the opinions will suppress you.
  • Be scrupulously truthful, even if the truth is inconvenient, for it is more inconvenient when you try to conceal it.