What reviewers can do to improve the trustworthiness of the psychotherapy literature

Reviewers, arise: we have only an untrustworthy psychotherapy literature to lose.

Psychotherapy researchers have considerable incentives to switch outcomes, hide data, and spin reports of trials to get published in prestigious journals, promote their treatments in workshops, and secure future funding. The questionable research practices that permeate psychotherapy research cannot be changed without first challenging the questionable publication practices that allow and encourage them.

Journals must be held responsible for the untrustworthiness of what they publish concerning the efficacy and effectiveness of psychotherapy. When journals publish unreliable findings, they are failing not only their readers, but patients, clinicians, and policymakers.

Yet there are institutional agendas supporting and encouraging the questionable research practices of psychotherapy researchers. Unreliable but newsworthy reports of “breakthrough” findings attract more early citations than honest, transparent reporting of findings that are inevitably more modest than the illusions that questionable research practices and poor reporting can create. Early citations of articles lead to higher impact factors, which, rightly or wrongly, are associated with more prestige and the ability to attract reports of more ambitious, better-resourced trials, even if the reliability of the reports is in question.

Editors of journals often endorse responsible practices such as registration of trials, publishing of protocols, and CONSORT (Consolidated Standards of Reporting Trials), but do little to enforce these practices in requests for revisions and editorial decisions.

Reviewers can nonetheless lead the reform of the psychotherapy literature by making their own stand for responsible reporting.

The burden of getting a better psychotherapy literature may fall on reviewers’ insistent efforts, particularly when the journals for which they review are lax or inconsistent in enforcing standards, as they often are.

When reviewers are given no explicit encouragement by the journals, they should not be surprised when their recommendations are overruled or when they get no further requests for reviews after holding authors to best practices. But reviewers can try anyway, and decline further requests from journals that don’t enforce standards.

Recently I tried to track the progress of a psychotherapy trial from (a) its registration to (b) the publishing of its protocol to (c) the reporting of its outcomes in the peer-reviewed literature.

The trial had been reported in at least two articles. The first report, in Psychosomatic Medicine, ignored the primary outcomes declared in the protocol.

Journal of Psychosomatic Research published another report that did not acknowledge the registration, cited the first paper only minimally without noting its results, and hid some important shortcomings of the trial.

Together, these two papers entered a range of effect sizes for the same trial into the literature. Neither article by itself indicates which should be considered the primary outcome, and the two compete for that claim. Well-done meta-analyses are limited to a single effect size per study. Good luck to anyone undertaking the bewildering task of determining which of the outcomes reported in these two papers, if any, should be counted.

Overall, detecting the full range of problems in this trial, and even definitively establishing that the two reports were from the same trial, took considerable effort. The article in JPR gave neither details nor any results of the first report of the trial in PM. Although both articles claimed to adhere to CONSORT, the JPR article provided no flowchart of participants moving from recruitment through follow-up. That flowchart was included in the PM article. Yet even in PM, the authors failed to discuss that the flowchart indicated substantially lower retention of patients randomized to treatment as usual (TAU). A reader also had to scrutinize the tables in both articles to recognize the degree to which substantial differences in baseline characteristics influenced the outcome of the trial and limited its interpretability. The authors did not acknowledge this.

Overall, figuring out what happened in this trial took intense scrutiny, forensic attention to detail, and a certain clinical connoisseurship. Yet that is what is needed to evaluate what contribution the trial made to the literature, and with what important cautions because of its limitations.

There were shortcomings in the peer review of these two articles, but I don’t think we can expect unpaid reviewers to give the kind of attention to detail that I gave in my blog. Yet we can expect reviewers to notice more of the basic details related to the trustworthiness of reporting of psychotherapy trials than they now typically do.

If reviewers don’t catch certain fundamental problems that may be hiding in plain sight, they are unlikely to be detected by subsequent readers of the published paper. It is notoriously difficult to correct errors once they are published. Retractions are almost nonexistent. APA journals such as Journal of Consulting and Clinical Psychology or Health Psychology, the preferred outlets for many investigators publishing psychotherapy trials, are quite averse to publishing critical letters to the editor.

Anyone who has tried to publish letters to the editor criticizing articles in these journals knows that editors set a high bar for even considering any criticism. Authors being criticized often get veto power over what gets published about their work, either by being asked directly or by simply refusing to respond to the criticism. Some journals still hold to the policy that criticism cannot be published without a response from the authors.

It also isn’t clear that the authors of the original papers have to undergo peer review of their responses to critics. One has doubts, given the kinds of ad hominem attacks that are allowed from them and authors’ general tendency to simply ignore the key points being made by critics. And authors get the last word, with usually only a single round of criticism and response allowed.

The solution to untrustworthy findings in the psychotherapy literature cannot depend on the existing, conventional system of post-publication peer review for correction. Rather, something has to be done proactively to improve prepublication peer review.

A call to arms

If you are asked to review manuscripts reporting psychotherapy trials, I invite you to join the struggle for a more trustworthy literature. As a reviewer, you can insist that manuscripts clearly and prominently cite:

  • Trial registration.
  • Published study protocol.
  • All previously published reports of outcomes.
  • Any further reports that might subsequently be in the works.

Authors should provide clear statements, in both the cover letter and the manuscript, as to whether it is the flagship paper from the project, reporting the primary outcomes.

Reviewers should double-check the manuscript against electronic bibliographic sources such as Google Scholar and PubMed to see whether other papers from the same trial are going unreported. Google Scholar can often identify reports that don’t make it into the peer-reviewed literature as indexed in PubMed, or that have not yet been listed there.

Checking is best done by entering the names of all authors into a search. The order of authors often changes between papers, and authors are added or dropped, but presumably there will be some overlap; a sketch of such a search appears below.
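For reviewers comfortable with a bit of scripting, the PubMed side of this check can be partly automated through NCBI’s E-utilities API. What follows is a minimal sketch in Python, using only the standard library; the author names are hypothetical placeholders, and because authorship shifts between papers, it is worth repeating the search with several subsets of the author list.

```python
# A minimal sketch for finding jointly authored PubMed records via
# NCBI E-utilities. The author names below are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

def pubmed_ids_for_authors(authors, retmax=50):
    """Return PubMed IDs of records listing every author in `authors`."""
    # Joining with AND restricts results to papers shared by all the authors.
    term = " AND ".join(f"{name}[Author]" for name in authors)
    query = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    )
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{query}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["esearchresult"]["idlist"]

# Try several author subsets, since authors are added or dropped between papers.
print(pubmed_ids_for_authors(["Smith J", "Jones AB"]))
print(pubmed_ids_for_authors(["Smith J", "Chen L"]))
```

Any PubMed IDs this turns up that are not cited in the manuscript deserve a closer look; Google Scholar still has to be searched by hand for reports that never reach PubMed.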

Reviewers should check the consistency of what is identified as the outcomes in the manuscript under review against what was registered and what was said in the published protocol. Inconsistencies should be expected, but reviewers should insist that these be resolved in what could be a major revision of the manuscript. Presumably, as a reviewer, you can’t make a final recommendation for publication without this information being prominently available within the paper, and you should encourage the editor to withhold judgment.
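To make that cross-check concrete, here is a toy sketch, with entirely hypothetical outcome labels, of the set comparison a reviewer is effectively performing: any outcome reported but never declared, or declared but never reported, calls for an explanation from the authors.

```python
# A toy illustration of cross-checking outcomes; all labels are hypothetical.
registered = {"HAM-D at 12 weeks", "SF-36 physical functioning"}
protocol = {"HAM-D at 12 weeks", "SF-36 physical functioning"}
manuscript = {"HAM-D at 24 weeks", "SF-36 physical functioning", "PHQ-9"}

declared = registered | protocol
# Outcomes reported but never declared suggest possible outcome switching.
print("Reported but undeclared:", manuscript - declared)
# Declared outcomes that vanish from the report suggest selective reporting.
print("Declared but unreported:", declared - manuscript)
```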

Reviewers should alert editors to incomplete or inaccurate reporting and consider recommending a decision of “major revisions” where they would otherwise be inclined to recommend “minor revisions” or outright acceptance.

It can be a thankless task to attempt to improve the reliability of what is published in the psychotherapy literature. Editors won’t always like you, because you are operating counter to their goal of getting newsworthy reports into their journals. But next time, particularly if they disregard your critique, you can refuse to review for them and announce on social media that you are doing so.

Update 1 (January 15, 2016 8:30 am): The current nontransparent system of prepublication peer review requires reviewers to keep the review process confidential and not identify themselves after the fact as having been involved. Yet, consistent with that agreement of confidentiality, reviewers are still free to comment on published papers. When they see that journals have ignored their recommendations and allowed the publication of untrustworthy reports of psychotherapy trials, what options do they have?

They can simply go to PubPeer and post a critique of the published trial without identifying themselves as having been a reviewer. If they are lucky, they will get a thread of post-publication peer review commentary going that will influence the subsequent perception of the trial’s results. I strongly recommend this procedure. Of course, if they would like, disappointed reviewers can write a letter to the editor, but I’ve long been disillusioned with the effectiveness of that approach. Taking that route is likely only to leave them disappointed and frustrated.

Update 2 (January 15, 2016 9:00 am):

While I was working on my last update, an announcement about the PRO Initiative appeared on Twitter. I reviewed it, signed on, and found its intent quite relevant to what I am advocating here. Please consider signing on yourself.

The Peer Reviewers’ Openness (PRO) Initiative is, at its core, a simple pledge: scientists who sign up to the initiative agree that, from January 1, 2017, they will not offer to comprehensively review, or recommend the publication of, any scientific research papers for which the data, materials, and analysis code are not publicly available, or for which there is no clear reason as to why these things are not available. To date, over 200 scientists have signed the pledge.


Reviewers, just say no to journals and editors not supporting registration, transparent reporting, and, importantly, the sharing of data required by readers motivated to reevaluate for themselves what is being presented to them.


9 thoughts on “What reviewers can do to improve the trustworthiness of the psychotherapy literature”

  1. James – You raise important issues. I would make a couple of quick observations. First, I guess, some defensive folk will immediately ask “why are you singling out psychotherapy for special attention, rather than, say, medication trials?” …and this is a reasonable question, but one that is readily answered. I would venture that the lack of scrutiny of psychotherapy trials is a direct consequence of the well-documented focus on, for example, pharma studies – while drug studies hog the limelight in terms of questionable research practices, psychotherapy slips under the radar and always has done. A good example is the lack of attention to ‘harms’ in psychotherapy trials – these are at least routinely documented in drug trials, but poorly and rarely documented in psychotherapy trials. Anyway, my point is that psychotherapy operates, sometimes quite nefariously, in the shadow of the greater focus on the ills of pharma – but we need to shine that light around a bit more…
    My second point concerns the review process. My concern – and I have commented on this previously, e.g. in my post called “The Farcial Arts” (http://keithsneuroblog.blogspot.com/2014/02/the-farcial-arts-tales-of-science.html) – stems from working in the area of CBT for schizophrenia and psychosis. In this corner of the psychotherapy world, pro-CBT voices represent the majority and critical voices a small minority. This has clear and interesting implications for the review process. Almost all reviewers of CBT trials are reviewing each other’s work, which is rarely seen by critical voices (certainly such trials ‘never’ come to me or any of my colleagues); by contrast, the work of more sceptical voices is regularly reviewed by the same pro-CBT advocates. The review process in psychotherapy is highly conservative… and one upshot is that, inexplicably, very poorly conducted studies are rubber-stamped through the system.


    • Keith – thank you for your wise comments and your link to a blog post that I highly recommend. I think your comments deserve more of a response than I’m giving here. Stay tuned for that.

      I agree that pharma studies hog the limelight in terms of attention to their shortcomings. This at least encourages efforts at reform, including regulatory pressures. We know those attempts at improvement aren’t always effective, and very often they are not. However, the psychotherapy literature receives these needed reforms only when they are extended from the medical literature, and often without recognition of the distinct issues involved in evaluating a psychosocial intervention. Simply put, journals publishing psychotherapy studies more exclusively, rather than pharmacological studies, have less incentive to embrace and enforce standards for obtaining trustworthy findings.

      The issue of harms of psychotherapy needs to be better articulated in terms of what adverse effects we can expect and how they will be tracked. Certainly, one harm is that commitment to prolonged and ineffective psychotherapy sometimes keeps patients from receiving more effective treatment. It’s also an empirical question whether involvement in individual psychotherapy has effects that individuals in troubled relationships would actually seek, if they were adequately informed. Decades ago, there was lots of interest in whether going into individual therapy rather than couples therapy increased the likelihood of divorce. Whether that is a benefit or an adverse effect is a complex and ultimately personal judgment, but we at least need data, which we don’t have, to even begin to contemplate it.

      Studies of CBT for schizophrenia and psychosis suffer from poor quality that goes unchallenged because reviewers have vested interests in promoting these treatments, particularly in the absence of evidence that CBT alone is appropriate in the absence of medication. I think this literature provides a good example of what John Ioannidis labels obliged replication – authors have to make certain claims in order to get published, even if their data do not support those claims. The lack of transparency in the review process shields a lot from scrutiny. I think the review of evaluations of CBT for schizophrenia and psychosis would benefit greatly from relying on reviewers who have methodological skills but do not have a dog in the fight in terms of trying to show that CBT for schizophrenia and psychosis should be widely disseminated and considered comparable to medication.


      • Nice point, Keith. I was just discussing a similar issue on another of James’ blogs. There’s a real problem when all – or nearly all – researchers working in a field are also “believers” in a particular philosophy or approach. Examples that quickly come to mind are CBT, psychodynamic theory, psychosomatic medicine. The culture that results is one of uncritical acceptance, where researchers almost never point out flaws in another’s work. Any study that supports the belief is welcomed – the ends justify the means, as it were (or even worse, confirmation bias: getting the “right result” itself validates the study’s methodological integrity). In this type of culture, critical voices either get drowned out (as you point out), or potential critics are just never sufficiently motivated to get involved.


  2. There seem to be so many vested interests stacked against trials being reported accurately. As you say, it’s in the authors’ (self-perceived) interests to make an unsuccessful study appear successful, and in the journal editors’ (self-perceived) interests to publish successful-looking studies. The incentives are all wrong, and when that’s the case, it’s tempting to think of legislation rather than self-policing (or even voluntary policing, such as what you’re suggesting reviewers do).

    There can be no doubt that patients die because of null clinical trials having their results inflated through the kinds of practices that you document here. I think it’s time that certain poor research practices relating to clinical trials were made illegal and that researchers who engaged in them faced serious sanctions, including prison. Such serious harm can result from bad science in clinical trials.

    Lives are at stake: researchers and journal editors shouldn’t be able to treat patients like tokens in a game.


    • There are unscrupulous individuals who have built a career on bad science and seem to have no intention of stopping despite critics repeatedly pointing out the problems. At some point these individuals should properly be viewed as criminals who harm patients and defraud the taxpayer.


  3. You say that reviewers should “just say no” to journals and editors who don’t support the sharing of data. We’re all hoping, of course, that PLOS ONE will enforce its data-sharing policy on PACE, as you’ve requested. Meanwhile, you might be amused to see this excerpt from the PACE statistical analysis plan, which I came across today (irony alert!):

    “There are many reasons why study-specific statistical analysis plans should be published in full, with electronic journals offering the greatest potential for this to be commonplace. Due to space constraints, the paper providing the principal results often contains only a very limited description of the analyses that were planned or carried out. If the study protocol is published, further information is likely to be available. However, this is often insufficient to enable full replication of the analyses.”

    So there they are, going to all that trouble of publishing their plan, so that others could fully replicate their analyses. With the data that they refuse to hand over.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4226009/


  4. It sounds like the whole concept of peer review might be becoming outdated with the passing of paper journals, and what is needed instead is a whole new set of rules for the digital age. First should be requiring that the full dataset be included as a prerequisite for publication, since there are no longer space constraints. Second should be transparency regarding peer review. There are simply too many papers being published in too many journals to stick to the old ways of pre-publication peer review by a relative few, and what is needed instead are the tools to allow robust post-publication review by anyone who wishes to undertake it.

