Could I critically evaluate the published results of the PACE trial without the raw data?

Someone asked a good question at the excellent Facebook page, Psychological Methods Discussion Group.

Uli R. Schimmack: I am just wondering whether you can critically evaluate the published results without the raw data. We have seen numerous examples where even published results are sufficient to critique an article (e.g. #PizzaGate).

This blog post is my response.

Critics are disadvantaged by the investigators having switched the scoring of subjective self-report outcome measures and having suppressed objective measures of outcomes. The investigators also had some influence on the reporting of outcomes from related studies, as well as on a meta-analysis published in Cochrane. Their outcome switching is thus tainting what should be independent reviews and the integration of their findings with the rest of the literature.

Diligent critics have managed to gather data from the appendices of papers and the slides of presentations, carefully comparing between groups in the PACE trial, across papers, and with population norms. For instance:

[Figure: PACE fitness data]

CBT = Cognitive Behaviour Therapy; GET = Graded Exercise Therapy. CBT and GET are the treatments favored by the PACE investigators.

Some of these findings have been summarized:

We can estimate the normal range for this test from recently published norms based on a comparable version of this task (like PACE, it used a 10 m track length) [40]. Taking into account the PACE participants’ gender composition, average age and body mass index, and adopting the formula derived from the published norms, the lower bound of normal for this test is 589 m. None of the patients in the CBT, GET or Control groups who qualified as ‘recovered’ achieved a walking distance that approached this lower bound, even after a whole year – irrespective of whether the protocol-specified or the revised definition of recovery is used. Unfortunately, individual patient data for the other objective measures have not yet been made available, so we cannot evaluate how the ‘recovered’ patients fared on these.

Results of an independent intervention study suggested that changes in subjective measures are not reflected in objective measures of activity.

After a legal battle lasting five years, in which the investigators spent almost 250,000 pounds, a small amount of data was obtained and analyzed. On their own, these data do not allow full replication of the controversial PACE trial findings originally reported in The Lancet.

The PACE investigators are determined to keep anyone outside their circle of friends and supporters from examining their data. Peter White even petitioned the UK Parliament to exempt universities from Freedom of Information Act requests, and specifically from FOIA requests intended to obtain clinical trial data.

The PACE investigators are thus willing to dismantle the basic mechanism by which data are shared for all research in the UK in order to protect themselves.

The results of the reanalyses of the PACE data that are currently possible are reported here:

Wilshire, C., Kindlon, T., Matthees, A., & McGrath, S. (2017). Can patients with chronic fatigue syndrome really recover after graded exercise or cognitive behavioural therapy? A critical commentary and preliminary re-analysis of the PACE trial. Fatigue: Biomedicine, Health & Behavior, 1–14.

BACKGROUND: Publications from the PACE trial reported that 22% of chronic fatigue syndrome patients recovered following graded exercise therapy (GET), and 22% following a specialised form of cognitive behaviour therapy (CBT). Only 7% recovered in a control, no-therapy group. These figures were based on a definition of recovery that differed markedly from that specified in the trial protocol.

PURPOSE: To evaluate whether these recovery claims are justified by the evidence.

METHODS: Drawing on relevant normative data and other research, we critically examine the researchers’ definition of recovery, and whether the late changes they made to this definition were justified. Finally, we calculate recovery rates based on the original protocol-specified definition.

RESULTS: None of the changes made to PACE recovery criteria were adequately justified. Further, the final definition was so lax that on some criteria, it was possible to score below the level required for trial entry, yet still be counted as ‘recovered’. When recovery was defined according to the original protocol, recovery rates in the GET and CBT groups were low and not significantly higher than in the control group (4%, 7% and 3%, respectively).

CONCLUSIONS: The claim that patients can recover as a result of CBT and GET is not justified by the data, and is highly misleading to clinicians and patients considering these treatments.
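As a quick sanity check on the abstract's claim that the protocol-specified recovery rates (GET 4%, CBT 7%, control 3%) are not significantly higher than control, one can run a Pearson chi-square on each 2×2 table. The figure of roughly 160 participants per arm is my assumption based on the published PACE trial design, not a number taken from this post; exact counts are in the paper.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for observed, r, col in ((a, row1, col1), (b, row1, col2),
                             (c, row2, col1), (d, row2, col2)):
        expected = r * col / n
        chi2 += (observed - expected) ** 2 / expected
    return chi2

N = 160  # assumed participants per arm (approximate PACE arm sizes)
recovered = {"GET": round(0.04 * N),       # 6
             "CBT": round(0.07 * N),       # 11
             "control": round(0.03 * N)}   # 5

CRITICAL_05 = 3.841  # chi-square critical value, df = 1, alpha = .05
for arm in ("GET", "CBT"):
    chi2 = chi_square_2x2(recovered[arm], N - recovered[arm],
                          recovered["control"], N - recovered["control"])
    print(f"{arm} vs control: chi2 = {chi2:.2f}, "
          f"significant = {chi2 > CRITICAL_05}")
```

Under these assumed arm sizes, neither comparison comes close to the .05 critical value, consistent with the reanalysis's conclusion that protocol-defined recovery rates did not differ significantly from control.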

But we really need to have the data available for independent reanalysis, as the investigators promised as a condition of publishing a paper in PLOS One.

Instead, a group of investigators had financial interests that they did not disclose to patients during recruitment for the trial. These investigators published a study funded largely by public money in which they switched outcomes, knowing full well the implications for public and clinical policies, including benefits for the groups to which they were serving as advisors. This is not a tolerable situation.

We should believe the investigators when they assert that they risk reputational damage if they allow independent re-analyses of their data. I don't know what other plausible explanation there is for investigators going to so much trouble to shield their data from outside scrutiny.

PLOS One should make the data from analyses of the PACE trial available immediately without restriction. The data influence patients’ lives and clinical and public policy.

PLOS One should also cease its complicity in the PACE investigators' misrepresentation of their compliance with the PLOS One data-sharing policies while they continue to withhold data. The journal should prominently correct the front page of the article, which currently states:

[Image: the article's PLOS One competing interests and data availability statements]


To keep up with new blog posts, as well as speaking and writing activities of Coyne of the Realm, sign up here. A lot quicker and easier than signing up for access to the PACE trial data.

2 thoughts on “Could I critically evaluate the published results of the PACE trial without the raw data?”

  1. Thanks for the article, James. It would be very valuable to see data for the step fitness test. For the graded exercise therapy (GET) programme, this could be used as an indicator of whether an individual actually complied with the programme as instructed. We might then explore whether compliant individuals were different in any other way.

    I would also very much like to explore the relationship between self-reported health and actual function. For example, how closely related is one’s self-assessment of function to one’s actual functioning? And is this relationship tighter in some trial arms than others? There are hints from the small amount of data so far available (primarily the walking test results) that this might be the case. Curiously, the self-report/objective-outcome relationship looks weaker in the CBT arm than in the other trial arms. So perhaps patients’ self-reports are changing in a special way that doesn’t carry over very well to actual functioning. I’d have to say this is only a hint at the moment, but it is interesting.

    I see much value in the PACE data as a resource for exploring aspects of this sample that have not yet been addressed.

