Ethical problems of Clinical Psychology Review with Triple P Parenting

Update, May 17, 2014: One of my friends, Professor Jon Elhai, is listed on the editorial board of Clinical Psychology Review. I wrote to him to ask his opinion of my blog post. He said that I had to be mistaken; he had never agreed to join the editorial board. He is nonetheless listed there without his permission.

I have now included a link to the formal complaint to the Committee on Publication Ethics (COPE) filed by Philip Wilson, MD, PhD, the author who was subject to mistreatment when he submitted a manuscript to this journal. And here is a link to the committee’s correspondence with the Editor of Clinical Psychology Review.

The editors of Clinical Psychology Review have ethical problems.

They mishandled two manuscripts concerning meta-analyses of Triple P Parenting programs. Until these problems are resolved:

  • Anyone contemplating submitting a manuscript to that journal should have second thoughts.
  • Readers should be more skeptical about what they read in this journal and what they do not get to read because of pressures on authors to express a particular point of view.
Photo Credit: Bill Burris

Skepticism is always a good stance to take with meta-analyses. If you cannot read meta-analyses skeptically, you should not be reading them. But recent events make skepticism particularly important with Clinical Psychology Review.

A breach of confidentiality occurred in the review of a manuscript from Philip Wilson, MD that was critical of Triple P Parenting. He was subject to pressures in his workplace triggered by disclosures from someone associated with the review process. Wilson had not told administrators in his workplace about his manuscript. Without explanation, persons associated with Triple P also sent Wilson papers published after his meta-analysis had been completed.

I am unimpressed by the lack of diligence that the editors have shown in dealing with Wilson’s complaints to the journal. He alerted them to the problems and asked whether anyone in the review process had a conflict of interest. They would not reply. He has now appropriately moved his complaint to the Committee on Publication Ethics (COPE).

Promoters of Triple P Parenting subsequently published their own meta-analysis in Clinical Psychology Review without any disclosure of conflict of interest. Meta-analyses done by persons with financial stakes in the evaluation of a treatment are always highly suspect. And this meta-analysis was obviously written in response to the bad publicity generated by publication elsewhere of Wilson’s manuscript, which had previously been savaged at Clinical Psychology Review. The article cited Wilson but misrepresented his criticisms.

One of the reasons cited by the editor of Clinical Psychology Review for rejecting Wilson’s manuscript was that there are already too many reviews of Triple P Parenting. Apparently that does not apply to another meta-analysis from promoters of Triple P. Note also that one of the “too many” other meta-analyses was by authors with financial interests in Triple P.

Triple P Parenting has been highly lucrative for its promoters. Their financial interests have been put at risk by the publication of Wilson’s paper and the attention it is generating. Wilson’s paper documented the weakness of the evidence for the effectiveness of Triple P and the thorough tainting of what evidence there is by the involvement of promoters of Triple P.

Already the European Early Childhood Education Research Journal has echoed the charges made in Wilson’s paper. An editorial announced it was tightening its requirements for disclosure of conflicts of interest.

Other authors have now gone public with complaints of unfair reviews of their papers with negative findings concerning Triple P, of the difficulties they faced getting their papers published, and of pressures from its promoters. The status of Triple P as empirically supported has been seriously challenged. And it is getting easier to publish honestly reported negative findings from clinical trials. I saw a similar phenomenon when the validity of Type D personality was put into question and all of a sudden more negative trials without spin began to be published.

 Time for my disclosure.

I have not met Phil Wilson, but I was persuaded by his article in BMC Medicine to take a closer look at the Triple P literature. I concluded that he was actually being too easy on the quality of the evidence for the intervention. He noted serious methodological problems, but missed just how much claims of efficacy depended on poor quality studies that were so small and underpowered that their rate of positive findings was statistically improbable.

I blogged about this, stirring some further controversy that ultimately led to an invitation to expand the blog post into an article at BMC Medicine.

Wilson’s meta-analysis

The meta-analysis is well reasoned and carefully conducted, but scathing in its conclusion:

In volunteer populations over the short term, mothers generally report that Triple P group interventions are better than no intervention, but there is concern about these results given the high risk of bias, poor reporting and potential conflicts of interest. We found no convincing evidence that Triple P interventions work across the whole population or that any benefits are long-term. Given the substantial cost implications, commissioners should apply to parenting programs the standards used in assessing pharmaceutical interventions.

My re-evaluation

My title says it all: Triple P-Positive Parenting programs: the folly of basing social policy on underpowered flawed studies.

My abstract

Wilson et al. provided a valuable systematic and meta-analytic review of the Triple P-Positive Parenting program in which they identified substantial problems in the quality of available evidence. Their review largely escaped unscathed after Sanders et al.’s critical commentary. However, both of these sources overlook the most serious problem with the Triple P literature, namely, the over-reliance on positive but substantially underpowered trials. Such trials are particularly susceptible to risks of bias and investigator manipulation of apparent results. We offer a justification for the criterion of no fewer than 35 participants in either the intervention or control group. Applying this criterion, 19 of the 23 trials identified by Wilson et al. were eliminated. A number of these trials were so small that it would be statistically improbable that they would detect an effect even if it were present. We argued that clinicians and policymakers implementing Triple P programs incorporate evaluations to ensure that goals are being met and resources are not being squandered.

You can read the open access article, but here is the crux of my critique:

Many of the trials evaluating Triple P were quite small, with eight trials having less than 20 participants (9 to 18) in the smallest group. This is grossly inadequate to achieve the benefits of randomization and such trials are extremely vulnerable to reclassification or loss to follow-up or missing data from one or two participants. Moreover, we are given no indication how the investigators settled on an intervention or control group this small. Certainly it could not have been decided on the basis of an a priori power analysis, raising concerns of data snooping [14] having occurred. The consistently positive findings reported in the abstracts of such small studies raise further suspicions that investigators have manipulated results by hypothesizing after the results are known (harking) [15], cherry-picking and other inappropriate strategies for handling and reporting data [16]. Such small trials are statistically quite unlikely to detect even a moderate-sized effect, and that so many nonetheless get significant findings attests to a publication bias or obligatory replication [17] being enforced at some points in the publication process.
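To see why such small trials are so unlikely to detect an effect, here is a minimal power calculation sketch of my own, assuming a two-sided independent-samples t-test at α = 0.05; the effect sizes and group sizes are illustrative, not taken from any specific Triple P trial.

```python
# Illustrative power calculations for two-arm trials of different sizes.
# Assumes a two-sided independent-samples t-test at alpha = 0.05;
# the effect sizes (Cohen's d) and group sizes below are hypothetical examples.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A "small" trial: 15 participants per arm, moderate effect (d = 0.5).
print(analysis.power(effect_size=0.5, nobs1=15, alpha=0.05))   # ~0.25

# Roughly 35 per arm gets power near 0.80, and then only for a fairly large effect.
print(analysis.power(effect_size=0.7, nobs1=35, alpha=0.05))   # ~0.82

# Participants per arm required for 80% power at a moderate effect (d = 0.5).
print(analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05))  # ~64
```

On these assumptions, a trial with 9 to 18 participants per arm has well under a coin-flip’s chance of detecting even a moderate effect, which is why a run of uniformly positive results from such trials is itself a red flag.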

A dodgy history and questions about the stringency of review at Clinical Psychology Review

The meta-analysis published in Clinical Psychology Review was first distributed on the web with a statement that it was under review at Monographs of the Society for Research in Child Development.

Most journals have strict policies forbidding circulation of papers labeled as being under review by them. The American Psychological Association and many journals explicitly warn authors not to do this. This is because use of the journal’s name may lend credibility to a manuscript that might ultimately be rejected. Or it may generate confusion if the paper is later accepted, but in a highly revised form. Readers of the unpublished paper might not check to see whether its claims held up through peer review.

Apparently, the manuscript was rejected at Monographs of the Society for Research in Child Development.

You can take a look at the manuscript here. It was much too long for Clinical Psychology Review and so some cuts had to be made before submitting it there. But this earlier manuscript allows a comparison with what actually got published at Clinical Psychology Review and raises serious questions about the stringency of the review.

Prior to submitting to this first journal, the meta-analysis was publicly preregistered with the International Prospective Register of Systematic Reviews (PROSPERO) under the number CRD42012003402.

Publicly accessible preregistration of meta-analyses facilitates transparency and reduces the likelihood that authors will revise their hypotheses after peeking at the results. PLOS One routinely reminds Academic Editors and reviewers to consult the preregistration of meta-analyses. Apparently that was not done at Clinical Psychology Review, but if anyone had bothered to look, they would have found some interesting things.

  • The preregistration clearly reports important conflicts of interest on the part of the authors that were not disclosed in the Clinical Psychology Review article.
  • What was promised in the preregistration concerning comparisons of Triple P Parenting with active treatments did not occur in the Clinical Psychology Review article. The result was inflation of the effect sizes reported for Triple P Parenting.

I find it extraordinary that Clinical Psychology Review did not require disclosure of conflicts of interest in the publication of the promoters’ meta-analysis, particularly after the editors had been sensitized to the issue by the rejected paper.

We should all be uncomfortable with the appearance of a lack of integrity in the review processes at Clinical Psychology Review. I would be reluctant to submit a manuscript there until these issues are resolved. As seen in my numerous blog posts, I often raise issues about the quality of evidence that is mustered in support of particular therapies being evidence-based. I sometimes get sandbagged in blinded peer review. This often cannot be anticipated, but when instances are called to the attention of journal editors, they should do something other than stonewall.

I would not want to be in a situation similar to what Wilson has faced: having my manuscript savaged by reviewers with a conflict of interest, who then attempt to pressure my institution to silence me, and then having Clinical Psychology Review publish an article meant to counter my paper if I succeeded in getting it published elsewhere, again without disclosure of the authors’ conflicts of interest.

Given that the journal has not followed the steps laid out in Elsevier’s own Publishing Ethics Resource Kit (PERK) flowcharts, I think the task falls to the publisher to demonstrate that the situation has been resolved and that any threat the journal’s handling of these manuscripts poses to other authors has been removed.

I will be blogging at length at PLOS Mind the Brain about Sanders’ meta-analysis in Clinical Psychology Review. I think it is a case study in utter disregard for the standards of conducting and reporting a meta-analysis, and yet it got published in a reputable journal. Whatever peer review this article received, it was inadequate.

Among the problems that I will document:

The meta-analysis:

  • Substantially distorted the presentation of results in the abstract.
  • Excluded comparisons between Triple P Parenting and active interventions after promising them in the protocol. This exclusion inflated the reported effect size for Triple P.
  • Combined nonrandomized studies with RCTs in ways that inflate estimates of the effect size for Triple P (see the sketch after this list).
  • Cherry-picked and misrepresented the existing literature concerning Triple P Parenting.
  • Cherry-picked and misrepresented the existing methodological literature to create the appearance of justification for decisions made in undertaking and interpreting a meta-analysis; choices made in the Triple P meta-analysis actually contradict recommendations in the literature cited in support of those choices.
  • Promised that analyses of heterogeneity would be conducted, but these were never reported or interpreted. They could potentially have revealed that combining results from very different studies was not appropriate.
  • Failed to offer any interpretation in the text of the substantial effects of investigator involvement in intervention trials (i.e., conflict of interest) and of study size.
  • Offered glowing summaries in the discussion that contradicted what was actually found.
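To make two of these points concrete (the inflation that comes from mixing study designs, and what heterogeneity statistics would have shown), here is a minimal sketch of inverse-variance pooling with Cochran’s Q and I². The effect sizes and variances are hypothetical numbers chosen for illustration; they are not values from either meta-analysis.

```python
# Illustration of how adding nonrandomized studies (which typically show larger
# effects) inflates a pooled estimate, and how a heterogeneity statistic flags it.
# All numbers below are hypothetical, chosen only to make the point visible.
import numpy as np

def pool_fixed(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate, its SE, and I^2 (%)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (effects - pooled) ** 2)          # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical RCT effects (standardized mean differences) and their variances.
rct_d,  rct_v  = [0.15, 0.20, 0.10], [0.04, 0.05, 0.03]
# Hypothetical nonrandomized studies, with the larger effects they tend to show.
nonr_d, nonr_v = [0.60, 0.75],       [0.03, 0.04]

print(pool_fixed(rct_d, rct_v))                    # modest pooled effect, I^2 near 0
print(pool_fixed(rct_d + nonr_d, rct_v + nonr_v))  # pooled effect jumps, I^2 rises
```

A fixed-effect model is used only to keep the arithmetic transparent; the point is that both the pooled estimate and I² jump once nonrandomized studies are folded in, which is exactly what the promised heterogeneity analyses should have exposed.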

 


2 thoughts on “Ethical problems of Clinical Psychology Review with Triple P Parenting”

  1. I have been concerned about this program because California spends a lot of money intended to help people with serious mental illness on this program.
    When investigating MHSA Spending in CA on Triple P we found:
    Thirty-two of the thirty-three studies purporting to show it works were by the same people who created the program. A meta-study "found no convincing evidence that Triple P interventions work across the whole population or that any benefits are long-term." The "evidence" for it turned out to lack validity. See "Triple P-Positive Parenting programs: the folly of basing social policy on underpowered flawed studies," published in BMC Medicine, available via PubMed at http://www.ncbi.nlm.nih.gov/pubmed/23324495. Also see "How evidence-based is an 'evidence-based parenting program'? A PRISMA systematic review and meta-analysis of Triple P," available via PubMed at http://www.ncbi.nlm.nih.gov/pubmed/23121760. See the meta-study at http://www.biomedcentral.com/content/pdf/1741-7015-10-130.pdf
