The pitfalls of complaining about those in power: the Committee on Publication Ethics’ handling of an authorship dispute unsatisfactorily resolved by a university


Who’s on first?

We shouldn’t expect any other outcome when a former PhD student complains to COPE. Note how the university stacked the deck in defining rules for how these disputes are resolved.


My summary

Like a lot of these authorship disputes, the details in the official record do not reveal much. But…

COPE deftly passed responsibility for settling an issue of contested authorship back to the journal. No surprise, the journal passed responsibility back to the university, which predictably supported professors over a student.

This case history reaffirmed my expectations of COPE and of universities forced to judge between faculty and a former PhD student. But there are some interesting rules revealed in the telling.

Someone outside of academia might think, “For Pete’s sake, give the student the first authorship. What’s the big deal?” They don’t understand the logic of these kinds of situations.

Early career persons low in power risk compounding their mistreatment when formally complaining about abuse, unless they have the assistance of others in their immediate environment with more power.

Appropriation and denial of authorship often take advantage of prevailing norms in an environment favoring the more powerful over the weaker.

Early career persons challenging what they perceive as injustices may get more blowback than they expect, because they are threatening practices that many people engage in and accept without protest.

Caught in an authorship dispute, early career persons may have a sense of injustice. They may be motivated to complain, to rectify the situation, and to endure whatever cost that involves. But they should not miscalculate by expecting that the environment, whether the local institution or COPE, will respond with a shared sense of injustice.

What we know

A PhD student lost control of her project when she failed to complete her PhD.

The PhD student complained about not getting first authorship on the resulting paper.

The corresponding author, presumably the PhD student’s advisor, declared that the PhD student had not contributed to the experiments, and that neither the writing nor the analyses reported in the paper were performed by the student.

The University backed up the corresponding author by asserting that universities own the data generated by state-funded projects and that, furthermore, authorship by PhD students requires that they complete their PhD within the allotted time.

The Committee on Publication Ethics (COPE) called the decision by the University “punitive” and suggested that the journal contact the University and request a re-investigation and a resolution.

Not surprisingly, the University sided with the corresponding author.

Things to be noted

We don’t know what led up to the PhD student not completing within the allotted time, something that the corresponding author could have influenced.

Journals increasingly require that authorship be based on documented contributions to a paper, but advisers and other faculty members can control students’ ability to make such contributions. Faculty can deny students access to the writing and analysis in ways that undercut any claim to authorship, even for the student’s own project.

So, faculty members can take a student’s work and write up the research without giving the student the opportunity to participate. They can then use the student’s lack of contribution to the writing as justification for denying or diminishing authorship.

I’m not surprised that a complaint to COPE proved ineffective. The committee simply advises journals and cannot dictate what is done. I’ve seen COPE be quite passive in the face of editors abusing authors. COPE tends to pass issues back to journals, which then pass them back to institutions. The results can usually be predicted.

A case history

Authorship dispute unsatisfactorily resolved by institution [emphasis added]

The journal was contacted with a claim to first authorship of a paper currently published online ahead of print. Print publication was put on hold pending the result of the investigation. The claim to first authorship was based on the claimant stating that they had obtained most results published in the paper during their PhD studies under the supervision of the corresponding author, and contributed to the writing of the text. The claimant provided evidence of this in the form of screenshots of a submission confirmation email and subsequent rejection email from another journal for a manuscript with a similar title, a Word document labelled as the claimant’s PhD thesis and details of overlap with the published paper, and a screenshot of an email reported to have been sent by the claimant to the corresponding author in 2013 containing images used in the published paper.

The corresponding author was contacted and declared on behalf of all authors that the claimant had not contributed to the experiments or writing, and that none of the results shown in the article were performed by the claimant. They explained that the claimant was discharged from the PhD programme before successful completion. The claimant indicated that they wished to dispute this, and the institution was asked to investigate and resolve the dispute.

The institution informed the journal that the knowledge generated during state funded projects was the property of the institution, and only the institution has the ability to agree a copyright transfer in agreement with the corresponding author, and that the corresponding author had full legal and institutional support to determine the author list of papers resulting from the project. They stated that a graduate student may or may not be included as an author on papers deriving from projects to which they have contributed, and according to institutional guidelines, in order to be included as an author, a student must successfully complete their studies within a defined timeframe. The decision to remove the claimant as a co-author was confirmed to have been made because they were dismissed from the graduate programme before successful completion.

ADVICE:

The Forum noted it seems punitive on the part of the university regarding their decision to exclude the student from being an author because they did not complete their studies within a defined timeframe. If the student was in the middle of their training and had submitted a paper, would the institution have handled the case differently? Was the claimant’s role acknowledged in the published article? If not, might the claimant and authors agree to a correction to publish an acknowledgment?

Otherwise, a suggestion was to contact a higher authority at the institution—perhaps a committee on research integrity at the institution— or an oversight body and ask them to investigate and try to resolve the authorship issue. The Forum noted that it is up to the journal to set their own guidelines for authorship, and to clearly state that they follow the ICMJE and COPE guidelines, for example. The journal guidelines should take precedence.

FOLLOW UP:

Following advice from the COPE Forum, the journal approached the highest authority within the university to specifically confirm that the authorship of the paper was determined according to the criteria set by ICMJE/COPE, which they did. No further action was taken. The editor considers the case closed.

RESOLUTION:

Case Closed

Postscript

This could have gone differently.

For whatever reason, a student left the PhD program, presumably after investing a lot of time and effort. In some American programs, the student would be granted a terminal master’s degree. As a faculty member, I would probably be inclined to help the student write up a research paper so they had something to show for their time in the program. My decision would be a matter of charity, not a sense of what the student was owed.

I strongly suspect this case occurred in a European program. The evidence is that the student had a set time to complete the PhD. In the United States, that time period is often more flexible, and I doubt there would be a rule stripping a PhD student of claims on their work because it was not completed within an allotted time. Also, the student apparently was expected to publish papers during the PhD, rather than waiting until the degree was awarded.

In the United States, PhD students are considered students. Any payment of their tuition or other expenses is considered a scholarship or fellowship. Students are seen as working on their careers. In contrast, in Europe, being a PhD student is a paid position. A student receives health insurance and contributions to a retirement fund. Yet, because they are being paid for their work, the workplace (the university) retains control of that work.

After all, if you were a cook in a restaurant and were discharged, you would not expect to take home the last meal you had been preparing.

 

Rising early career female academics and second-to-last authorship

Anyone wanna talk gender now?

Are female early career academics getting less credit for work done on behalf of (usually male) faculty who get unearned senior authorships?

My posting of a link to a PLOS One article on gender differences in academic productivity at the wonderful Reviewer 2 Must Be Stopped Facebook page drew some interesting comments. I encourage more commenting at the Facebook pages and at this blog as well.

[Figure 7 from van den Besselaar & Sandström (2017), PLOS One: the vicious circle of gender bias]

My post

Always good when data enter into discussions of whether there are gender differences in productivity and impact and, if so, why? Here are some big data….

I have noticed in the Netherlands a tendency for early career women to be next-to-last author, with senior (usually male) faculty last. Knowing the circumstances of this authorship order, I think this is due to rising women taking over responsibility for more day-to-day supervision of PhD students, while the head of the lab keeps last authorship. Some in the NL accord more prestige or credit to the next-to-last author, but it does not necessarily get appreciated elsewhere. Does anyone else notice such practices in the NL or elsewhere?

A comment from a female Australian vision scientist

Yes, been there with next-to-last authorship. I had to fight for last authorship on a PhD student paper for which I did ALL the supervision work.

A comment from a female academic in engineering

I feel there is a huge flaw with assigning such importance to the last authorship position aspect, as I am aware that it is different between disciplines even within a country (in this case Sweden, which happens to be highly relevant). Similarly to what Kyle and others have brought up, in my field the fight is for the first positions and second positions (to escape the oblivion of “et al.,”) while the last position is reserved for the person who probably has most cred already but did the absolute least (this is debatable, because some regard getting money for the project as a full justification for being included as an author even if you do f*-all on the actual paper, while others don’t – at any rate, such aspects should have been defined explicitly in the assumption of who is “productive”, and I sure couldn’t see that in the Methods section). When I was up for tenure it was my first authorships that were counted.

And at the same time I am aware that the medical sciences (where my father does research) regards the last few authorship positions very highly. Even within engineering I am skeptical that we all reason the same regarding author position, since some disciplines (e.g. bioengineering/pharma) may collaborate with medical researchers who adhere to their ideal order…

So in the end, it bothers me VERY much that someone would want to “make science” out of making an incorrect blanket assumption about authorship order when it may vary across disciplines. Especially when I as a young female researcher have used the last position of a paper to send a message to a higher-up coauthor that “we all know you didn’t help at all.” That in particular really bothers me that it could be interpreted as being the most productive/meaningful scientist in the bunch. Anyone wanna talk gender now?

A PhD comic suggested by one commentator


The open access article to which I posted a link is interesting in itself and worth a look.

van den Besselaar P, Sandström U. Vicious circles of gender bias, lower positions, and lower performance: Gender differences in scholarly productivity and impact. PLOS One. 2017 Aug 25;12(8):e0183301.

An excerpt from the abstract:

“As the analysis shows, in order to have impact quantity does make a difference for male and female researchers alike—but women are vastly underrepresented in the group of most productive researchers. We discuss and test several possible explanations of this finding, using a data on personal characteristics from several Swedish universities. Gender differences in age, authorship position, and academic rank do explain quite a part of the productivity differences.”

Some key quotes from the article itself:

Several possible explanations of the gender differences in productivity have been suggested. (i) Female researchers are on average substantially younger than male researchers (see Fig 1), and the high productive researchers are to be found in the more senior (higher age) groups [5; 6]. If this would be the only factor, one would expect that the observed productivity differences would further decline (in line with the Xie & Schauman study [9]) and disappear over time. But also other structural and/or behavioral factors may underlie gender productivity differences, hampering female academic careers [7; 15] and leading to a waste of talent. (ii) Women are rather strongly overrepresented in the lower academic positions, and in positions with a temporary contract (Fig 2), positions which are generally characterized by a higher teaching load, less access to funding, less career perspectives, and less opportunities for research [16; 17; 18; 19]. Indeed, there is a positive relation between job level and productivity. This situation is less prone to gradual change, as it may be the effect of gender bias and of a sustained existence of the glass ceiling in academic institutions [15]

iii) Women may have less access to research funding, whereas winning prestigious research grants is characterized by gender biased in favor of men, and above that very influential for the grant winners’ career [15; 14]. (iv) Female researchers have a lower status within teams and collaboration networks, and get less opportunities to become an independent researcher. This is reflected in different author positions on papers. Women more often get the less prestigious positions: the last author (= team leader) is more often a male researcher, whereas female researchers more often occupy ‘in between’ author positions. This may result in a slower career of female researchers compared to the career of male researchers [8; 13]. More directly, Van den Besselaar & Sandström showed that men progress faster through the various academic ranks [22]. (v) Productivity relates to the organizational environment where a researcher works [23], and if female researchers have more problems in being hired in top environments [24], this is expected to affect productivity differences between men and women.

In fact, gender differences may be the effect of a combination of these five factors….

From the integrated results and discussion:

What about the gender differences? In Biology, Life & Medical sciences and in Science and Engineering, women in the higher productivity classes outperform the male researchers, as they have on average a higher number of CSS3 papers: the dotted curves (representing female researchers) for these fields are above the straight curves (representing male researchers). Also in Psychology & Education we see such trend, although in the highest productivity class the scores are equal. In Agriculture and Food Sciences, and in the Social Sciences, the pattern is opposite. As already said, in the Humanities and in Computer Science & Mathematics the pattern is somewhat fuzzy, but in the latter field there are no female researchers in the highest productivity class to compare with male counterparts.

From the Conclusion:

The first question we aim to answer is whether the positive relation between productivity and impact differs between male and female researchers. We showed that this is not the case, and the relation between productivity and the number of high impact papers is about the same for men and women within the distinguished productivity classes. On average, female researchers have a at least similar impact as equally productive male researchers. In fact, we found cases where the ratio between top cited papers and productivity is considerable higher for women than for men. More specifically, the disciplinary demography seems to produce this effect: the lower the share of women in a discipline, the higher their impact compared to male researchers within the same productivity class. This may refer to gendered selection and/or to gendered self-selection.

Secondly, we found that the higher productivity classes are numerically dominated by male researchers. This leads to a lower overall productivity for female researchers, which is also in our sample about 70% of male productivity. This ratio seems to be stable over time. We should however be careful with averages in Lotka distributed data, although nonparametric tests (Mann-Whitney) show that women are outperformed by male researchers if we do not take other factors into consideration.

Thirdly, we investigated whether other variables influence productivity, and therefore explain part of the gendered productivity differences. We indeed found that a variety of factors have an effect on performance, and controlling for those reduced the effect of gender on performance considerable. So, a good part of the productivity differences are due to the fact that men are older and in higher positions, and that those in higher positions are more productive. Female researchers also occupy less last author positions than men do, and this factor also has a negative effect on female productivity. That women more often are in the middle author positions than men, reflects that women have on average lower positions, and that they are less often (conceived as) leader of a team or a collaboration network. This finding reflects that male researchers show a faster career than their female counterparts.

 

 

Conflict of interest in manuscript peer-review: Expert opinions

Discussion of a recent court case concerning a peer reviewer who failed to disclose a conflict of interest has broader application.

The Committee on Publication Ethics (COPE) has defined the need for peer reviewers to declare any potential conflicts of interest in the review of a manuscript.

 Recently, Retraction Watch discussed a court case in which one issue was the failure of a reviewer to declare a conflict of interest. Retraction Watch interviewed that peer reviewer and obtained some expert opinions.


Kjetil Gundro Brurberg, the author accused of misconduct by the Editor of the Journal of Health Psychology after he nominated reviewers with obvious conflicts of interest.

These opinions are directly relevant to a decision by the Editor of the Journal of Health Psychology to reject an appeal of a previous negative decision about a manuscript submitted to be part of the Special Issue on PACEgate. The author, Kjetil Gundro Brurberg, was given an unusual opportunity to nominate additional reviewers, with the provision that he ensure the reviewers had no conflicts of interest. His nomination of three reviewers with blatant conflicts of interest led to the withdrawal of the opportunity for reconsideration and to charges of authorial misconduct by the editor.

The issue of author conflict of interest is also relevant to the controversial Cochrane Review, which Brurberg  co-authored  with others associated with the PACE trial.

Committee on Publication Ethics: Basic Principles

The Retraction Watch article started with a link to the basic principles for reviewers of the Committee on Publication Ethics (COPE), which are clearly spelled out at COPE’s website.

[Screenshot: COPE’s basic principles for peer reviewers]

The legal case in which reviewer conflict of interest came up

Retraction Watch then summarized the case [a fascinating read in itself]:

Bryan Hardin testified that he was a peer reviewer on a 2016 paper in Critical Reviews in Toxicology, which found that asbestos does not increase the risk of cancer. In the deposition, Hardin—who works at the consulting firm Veritox—also said that he has testified in asbestos litigation on behalf of automakers, such as Ford, General Motors, and Chrysler, but said he had not disclosed these relationships to the journal.

Last year, the first author of the 2016 review withdrew a paper from another journal (by the same publisher) which concluded asbestos roofing products are safe, following several criticisms — including not disclosing the approving editor’s ties to the asbestos industry. In this latest case, the journal told us it believes the review process for the paper was up to snuff, but two outside experts we consulted said they believed Hardin’s relationships — and failure to disclose them — should give the journal pause.

After obtaining a transcript of the case, Retraction Watch interviewed the reviewer and inquired whether he had disclosed a conflict of interest:

“No. If — if that’s a new expectation, I’m not aware that as a peer reviewer you’re supposed to disclose that sort of thing, but I — I don’t recall that I did.”

The reviewer further said:

“I have been a peer reviewer on more than one asbestos-related paper” and “I have been retained by ‘several’ companies involved in asbestos litigation.”

Expert opinion

In the article, Retraction Watch raised the issue of whether the reviewer’s ties to the asbestos industry should be considered a conflict of interest. An opinion was obtained from Elizabeth Wager, a member of the board of directors of The Center For Scientific Integrity, Retraction Watch’s parent non-profit organization:

“I’d definitely regard giving expert testimony as a conflict of interest that should be declared by a potential reviewer (or an author). This is also in line with the views of many journals, such as PLOS which specifically notes that “acting as an expert witness” constitutes a non-financial competing interest. In other words, it doesn’t matter whether or not you were paid for such services, just testifying suggests a strong allegiance or interest which editors deserve to know about.”

A further opinion was obtained by Retraction Watch from Trudo Lemmens, a professor of Law & Bioethics at the University of Toronto in Canada who was not involved in this case.

“In my opinion, a peer reviewer who has as an expert been working for a particular industry on a specific issue should at least disclose to the journal his ties to the industry whose interests can be affected by the publication of a paper on that topic.  And journal editors should exclude such peer reviewers from reviewing a paper on that or a related topic, or at least ensure that there are several other more independent reviews of the paper. If for one reason or another they think it is important to get a review by an expert with such a conflict of interest, they should assess that review much more critically. They should then also provide an opportunity to the authors to respond to the peer-review before making a publication decision.”

Lemmens added:

But the more cautious approach would be to exclude reviewers with such a clear COI.

Retraction Watch continued  the discussion with a further quote from Wager:

It’s up to the editor to decide whether a conflict of interest is so great as to disqualify the reviewer, but, whatever the editor decides, s/he obviously needs to be made aware of it by the invited reviewer…If a reviewer’s failure to disclose a relevant interest comes to light after publication the journal should look at the review comments again.

In the thread of comments on the Retraction Watch article

Commenter Richard David Feinman noted:

The problem assumes greater importance when it comes to competing theories in medicine and especially, my own area, medical nutrition. A editor is supposed to recognize a controversial subject and appoint reviewers from both sides of the controversy. Failure to do so constitutes de facto (or intentional) bias. The perfunctory, often inappropriate — take what you can get when you get people to work for you for nothing — guarantees that the party line will be heard and there will be little input from minority opinions.

And, if you do object, you are — if you are lucky — told to submit a letter to the editor. If they take it — no guarantee of publication — the original author has the last word and it is, in any case, like an objection in a court of law: even if sustained, the damage has been done.

What to look for in the Special Issue of Journal of Health Psychology concerning the PACE trial


 

A Special Issue of Journal of Health Psychology concerns PACE, a trial of therapies for patients with myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS) that has attracted a great deal of controversy.

Key points summarized from David F. Marks’ introductory editorial

 Background

The idea for the Special Issue started with the Journal of Health Psychology receiving a manuscript from Keith Geraghty providing a critical review of the PACE Trial.

Following peer review and acceptance of a revision, the PACE investigators were offered an opportunity to respond to Keith Geraghty with an Open Peer Commentary paper.

The PACE investigators made a number of efforts to get Geraghty’s article outright retracted, either directly or through pressures on the editorial board.

When it was clear that the journal would not cave to such pressures, the  PACE investigators then demanded a partial retraction of the Geraghty  article and that the journal issue a correction reporting that the author had an undisclosed conflict of interest: he had failed to acknowledge that he was a patient suffering from myalgic encephalomyelitis.

The PACE investigators also demanded that their commentary manuscript not be subject to peer review.

The PACE investigators received reviews of their manuscript. Editor Marks notes [Having been involved in editing this issue, I agree]:

After receiving critical reviews, the pro-PACE authors chose to make only cosmetic changes or not to revise their manuscripts in any way whatsoever. They appeared unwilling to enter into the spirit of scientific debate. They acted with a sense of entitlement not to have to respond to criticism.

The review and the response from the PACE investigators were sent to more than 40 experts on both sides of the debate for commentaries.

After the online publication of several critical commentaries, the PACE investigators were offered a further opportunity to respond to their critics in the round, but they chose not to do so. There was little response, apart from a notable declining of further comment by the PACE investigators:

As always, we would refer interested readers to our original publications and trial website where most, if not all, the issues brought up by commentators are addressed.

Conflict of interest.

[My comment: The PACE investigators and persons associated with them were quick to complain about conflicts of interest in the developing issue. As noted above, this included a complaint about Geraghty being a patient and not disclosing it. But the investigators also threatened a formal complaint to the Committee on Publication Ethics concerning my having reviewed their manuscript. There is an ironic backstory about the matter of conflict of interest, some of which is discussed in contributions to the Special Issue:]

The PACE authors themselves appear to hold strong allegiances to cognitive behavioural therapy (CBT) and graded exercise therapy (GET) – treatments they developed for ME/CFS. Stark COI have been exposed by the commentaries including the PACE authors themselves who hold a double role as advisers to the UK Government Department of Work and Pensions (DWP), a sponsor of PACE, while at the same time working as advisers to large insurance companies who have gone on record about the potential financial losses from ME/CFS being deemed a long-term physical illness. In a further twist to the debate, undeclared COI of Petrie and Weinman (2017) were alleged by two of the commentators (Agardy, 2017; Lubet, 2017). Professors Weinman and Petrie adamantly deny that their work as advisers to Atlantis Healthcare represents a COI.

[My comment: In another instance, three pro-PACE authors attempted to subvert the journal’s policy on COI by recommending reviewers who were strongly conflicted, forcing rejection of their paper. I discussed this in another blog post.]

[My comment: Nonetheless, having gotten any commentary from the PACE investigators at all was an achievement. Even with the limited responsiveness of the PACE investigators to Geraghty’s commentary, along with a supportive commentary by their close colleagues Keith J Petrie and John Weinman, the special issue represents one of the most significant exchanges between defenders of the PACE trial and critics in a long while.]

ME/CFS research has been poorly served by the PACE Trial and a fresh new approach is needed by the PACE investigators to engage critics and skeptics.

Overall, the special issue allows readers to make up their own minds about the scientific merits and demerits of the PACE Trial.

Note that the entire Special Issue is available with free access.

The individual papers

‘PACE-Gate’: When clinical trial evidence meets open data access. Keith J Geraghty

Published reports of the PACE trial overstated the effectiveness of cognitive behavioural therapy and graded exercise therapy, and did so by lowering the thresholds used to determine improvement. Reanalyses conducted by professionals and patients revealed that the treatments tested had much lower efficacy after an information tribunal ordered the release of data from the PACE trial to a patient who had requested access using a freedom of information request.

Response to the editorial by Dr Geraghty. Peter D White, Trudie Chalder, Michael Sharpe, Brian J Angus, Hannah L Baber, Jessica Bavinton, Mary Burgess, Lucy V Clark, Diane L Cox, Julia C DeCesare, Kimberley A Goldsmith, Anthony L Johnson, Paul McCrone, Gabrielle Murphy, Maurice Murphy, Hazel O’Dowd, Laura Potts, Rebecca Walwyn and David Wilks

The trial found that adding cognitive behavior therapy or graded exercise therapy to specialist medical care was as safe as, and more effective than, adding adaptive pacing therapy or specialist medical care alone. Dr Geraghty has challenged these findings. We suggest that Dr Geraghty’s views are based on misunderstandings and misrepresentations of the PACE trial.

Once again, the PACE authors respond to concerns with empty answers. David Tuller

This commentary examines how the current response once again demonstrates the ways in which the investigators avoid acknowledging the obvious problems with PACE and offer non-answers instead—arguments that fall apart quickly under scrutiny.

Investigator bias and the PACE trial. Steven Lubet

Standards for determining researcher bias are considered, and it is concluded that the PACE investigators’ impartiality might reasonably be questioned.

The problem of bias in behavioural intervention studies: Lessons from the PACE trial Carolyn Wilshire

In the PACE trial, cognitive behavioural therapy and graded exercise therapy had modest, time-limited effects on self-report measures, but little effect on more objective measures such as fitness and employment status. In non-blinded trials, the issue of reporting biases deserves greater attention in future.

PACE trial authors continue to ignore their own null effect. Mark Vink

Protocols and outcomes for the PACE trial were changed after the start of the trial, leading to exaggerated claims of efficacy for both cognitive behavior therapy and graded exercise therapy. Findings of small, self-reported improvements in subjective measures cannot be used to say the interventions are effective, particularly in the absence of improvement on objective outcome measures.

The PACE trial missteps on pacing and patient selection. Leonard A Jason

The PACE trial investigators were not successful in designing and implementing a valid pacing intervention, and patient selection ambiguity further compromised the study’s outcomes.

Do graded activity therapies cause harm in chronic fatigue syndrome? Tom Kindlon

The trial’s poor results on objective measures of fitness suggest a lack of adherence to the activity component of these therapies. Therefore, the safety findings may not apply in other clinical contexts. Outside of clinical trials, many patients report deterioration with cognitive behavioural therapy and particularly graded exercise therapy. Also, exercise physiology studies reveal abnormalities in chronic fatigue syndrome patients’ responses to exertion. Given these considerations, one cannot conclude that these interventions are safe and risk-free.

PACE team response shows a disregard for the principles of science. Jonathan Edwards

The response by White et al. fails to address the key design flaw, of an unblinded study with subjective outcome measures, apparently demonstrating a lack of understanding of basic trial design requirements. The failure of the academic community to recognise the weakness of trials of this type suggests that a major overhaul of quality control is needed.

Bias, misleading information and lack of respect for alternative views have distorted perceptions of myalgic encephalomyelitis/chronic fatigue syndrome and its treatment. Ellen Goudsmit and Sandra Howes

These interventions delivered in the PACE trial are based on a model which assumes that symptoms are perpetuated by factors such as misguided beliefs and a lack of activity. Our analysis indicates that the researchers have shown significant bias in their accounts of the literature and may also have overstated the effectiveness of the above treatments. We submit that their approach to criticisms undermines the scientific process and is inconsistent with best practice.

PACE investigators’ response is misleading regarding patient survey results. Karen D Kirke

A review of survey data published between 2001 and 2015 reveals that for most patients, graded exercise therapy leads to worsening of symptoms, cognitive behavioural therapy leads to no change in symptoms, and pacing leads to improvement. The experience of people with ME/CFS as reflected in surveys is a rich source of information, made more compelling by the consistency of results. Consequently, patient survey evidence can be used to inform practice, research and guidelines. Misrepresentation of patient experience must be vigorously challenged, to ensure that patients and health professionals make decisions about therapies based on accurate information.

 

 

 

 

Last ditch attempt to block publication of special issue of Journal of Health Psychology foiled

Publication of the special issue of Journal of Health Psychology will go forward as planned on Monday July 31.

But there was a last ditch attempt to block publication of the special issue by a powerful but unknown PACE trial advocate. It was finally foiled on Saturday morning, July 29. A weaselly coward suggested papers weren’t properly peer reviewed and that the special issue should therefore not be published.

That of course was nonsense. It is also an ironic complaint, because the PACE investigator team had demanded that their response to the commentary by Keith Geraghty be published without having to undergo peer review. And in revising their reply, the same PACE investigators rejected almost all of my detailed review.

Some threats were made to Sage Publications, the publisher of Journal of Health Psychology, which expressed a reluctance to go forward as planned. As often happens with these kinds of pressures, we weren’t told the identity of the complainant. It was clear that whoever s/he was, this person was powerful enough to grind to a halt the release of the special issue, complete with the introductory editorial that was not previously available.

I announced to my colleagues that I would take responsibility for unilaterally breaking the embargo on the press release and post a blog about it. If necessary, I would explain that I did it without permission, because I was making an announcement, not a request.

I prepared the blog on July 28, but waited till 7 AM on July 29 before going live, in hope that Sage would relent. No decision was announced by  that hour, and so I posted the blog.

The strategy was that we would get visible in social media and be prepared to create an embarrassment for Sage Publications if the publisher did not let us go forward. Actually, all of the papers were available Early Release except Editor David Marks’ great accompanying introductory editorial. We were prepared to move that editorial to a public repository, and if necessary the other articles from the special issue, as well. A few hours later, Sage agreed to go forward and publish the special issue on Monday morning.

Stay tuned for a historic issue of the journal, complete with Editor David Marks’ previously unavailable introduction.

Misconduct in an author’s nomination of reviewers for his manuscript

An author, Kjetil Gundro Brurberg, appealed the rejection of his manuscript. He was offered an opportunity to nominate additional reviewers, but was asked to ensure that they did not have conflicts of interest. What happened next…

Kjetil Gundro Brurberg happens to be an author on a controversial Cochrane review about which there are serious concerns about conflict of interest and coordinated outcome switching across trials.

He is also an author on a forthcoming Cochrane review for which Cochrane refuses to share data for independent reanalysis. This whole incident underscores the importance of Cochrane releasing the data from that review, which the organization is currently refusing to do.

Now Brurberg is misrepresenting what happened in a blog post at Mental Elf, a blog site that has consistently shown itself to not vet what is posted by bloggers.

I will document in this issue of Quick Thoughts what really transpired. But his blog post indicates that Brurberg just doesn’t get how he blew an opportunity to publish his manuscript with his bad behavior.

Giving authors the opportunity to nominate reviewers has benefits to both the authors and to editors, but it depends on trust and the option can be abused by authors.

The whole story is part of a larger narrative of how hard it is to obtain an independent critique of the PACE trial of cognitive behaviour therapy and graded exercise for chronic fatigue syndrome.

I know from experience as a Senior Consulting Editor at the Journal of Health Psychology. Under Editor-in-Chief David Marks, the journal accepted an editorial commentary critical of the PACE trial and invited responses from a variety of perspectives.  The journal has endured repeated assaults on its editorial independence and integrity since.

Here is my updated account of the PACE investigators’ pressures on Journal of Health Psychology to retract portions of the published commentary and to issue a correction acknowledging the author had an undeclared conflict of interest. Ah, the pot calling the kettle black.

But before we delve into the details of the current incident, let’s discuss the practice of allowing authors to nominate reviewers.

Background: Authors being allowed to nominate reviewers

Authors can be instructed not to nominate reviewers with obvious conflicts of interest or indications of being unlikely to provide an unbiased review. Even better, editors can ask authors to indicate explicitly that the reviewers they have nominated are free of conflicts of interest. Authors declaring there is no conflict when there is an obvious conflict of interest can be seen as tantamount to scientific misconduct.

Until recently, authors were routinely asked to nominate reviewers for their manuscripts. What had been a common practice became controversial and was outright stopped at many journals when a major scandal in peer review occurred in 2013–2015.

In what turned out to be only the first wave of continually uncovered problems with a number of journals, Springer and BMC (which Springer now owns) undertook an investigation and retracted 50+ papers for peer review manipulation and other issues. Commenting to Retraction Watch, one publisher said:

Alongside investigation into the identified papers, we have taken action to ensure that no further compromised papers can continue through to publication. BioMed Central has changed our policy on suggesting peer reviewers so that authors may do so in a cover letter with evidence of the peer reviewers’ authenticity.

Once the crisis went public, at least one journal continued to ask authors to recommend reviewers, but then made sure that papers didn’t go out to any reviewers whom the authors had nominated. Ouch!

The crisis had seemed to ease, but a new record was recently set with a major publisher retracting more than 100 studies from a cancer journal over fake peer reviews. 

Not surprisingly, there is evidence that reviewers who are recommended by authors are more likely to give positive reviews.

Our results agree with those from other studies that editor-suggested reviewers rated manuscripts between 30% and 42% less favorably than author-suggested reviewers. Against this backdrop journal editors should consider either doing without the use of author-suggested reviewers or, if they are used, bringing in more than one editor-suggested reviewer for the review process (so that the review by author-suggested reviewers can be put in perspective).

Obviously, editors need to be diligent and skeptically probe author-nominated reviewers. But relying on some of these nominations can be helpful to an editor, particularly if nominations are screened and provide only a portion of the reviewers for a manuscript. Flagging that a particular reviewer was nominated by an author also helps the editor interpret discrepancies in a set of reviewer recommendations for publication.

Presumably, reviewers recommended by authors will be familiar enough with the topic of a manuscript to accept a request to review and to provide a review in a timely fashion.

Authors may actually be best qualified to identify reviewers for their particular areas of research, even if editors have to evaluate the potential for bias and conflict of interest.

Though I’m most comfortable handling manuscripts in my areas of research, I am serving as academic editor for a megajournal with a broad interdisciplinary focus. I can’t expect to be on top of every nook and cranny of every subfield.

I also may not be as familiar as authors are with the up and coming researchers in a particular area of research who do not yet have a large number of publications, but who have an impressive brand new or “early view” paper or two out.

Relying on some reviewers nominated by authors can also help balance the Anglocentric (and, in some fields, older male) bias of editorial boards.

Sure, there are problems with relying too heavily on reviewers that authors suggest, but let’s not get too misty-eyed and nostalgic about peer review in the old days, when editors tended to rely only on people they personally knew.

There were a lot of old-boy cliques looking after each other, and they could be formidable obstacles if you wanted to challenge dominant theories. Some specialized journals were dominated by senior investigators who kept out threats to their theories or particular findings.

That is certainly still a problem in particular areas of research, but with more democratization of peer review and transparency, it gets a bit easier to uncover and challenge old boy cliques.

But unless publishers provide user-friendly tools to evaluate suggestions or can hire private investigators, editors have to rely to a considerable extent on simple searches of published papers and on trusting authors’ reports to evaluate whether reviewers are appropriate and free of conflicts of interest. Instructing authors ahead of time that nominations have to be free of conflict of interest is important.

I once was suspicious of an author nomination in an appeal of a rejected manuscript in a highly contested area of research. Google Scholar revealed no shared publications, but a broader internet search quickly yielded wedding pictures with the reviewer as the author’s best man.

This brings us to a recent problem we faced at the Journal of Health Psychology. The Senior Editor, David Marks, rejected a manuscript based on a rather thorough negative review. The author appealed this decision and was offered the opportunity to nominate additional reviewers. However, because the manuscript concerned the contentious issue of the PACE trial, the author was given the responsibility to pre-screen his nominated reviewers and to indicate in writing that they had no relevant conflicts of interest.

The letter to the author rejecting his nominations because he misrepresented the conflicts of interest of reviewers

Dear Dr. Brurberg:

I write you in regards to manuscript # JHP-17-0254 entitled “A PACE-gate or an editorial without perspectives” which you submitted to the Journal of Health Psychology within an appeal procedure. Your manuscript is rejected due to your misrepresentation of conflicts of interests.

I have sought advice from my Associate Editors and this email is therefore copied to two of them.

I was going to wait until the end of the month before letting you have a decision, but new information came to light about Dr. Y that makes further delay unnecessary. The three reviewers that you recommended were supposed to be neutral, independent experts with no known conflicts of interest.   Unfortunately, however, one declined the invitation to review (X) and the other two (Y and Z) have objectively proven conflicts of interest.

In light of your appeal, you were given the generous opportunity to have your manuscript re-reviewed by one or more impartial experts chosen by you. It is highly disappointing and curiously naive that you have attempted to subvert the appeal process by recommending people who are strongly conflicted, in one case (Dr. X) by his own admission is an associate of the PACE investigators. You stated that Prof X is: “Interested in medically unexplained diseases. Holds the needed distance to the ongoing PACE debate.” The latter could not be further from the truth. In his email on 26 April 2017 Prof X stated:

“Good evening.

Bit of a curve ball this one I suspect!

Having been trolled briefly by [ the author on whose paper a comment was submitted] a while ago I might have a personal axe to grind, and having been supervised by Michael Sharpe (who may or may not have anything to do with this manuscript) between 5 and 10 years ago I would probably be regarded as irrevocably conflicted by the anti-PACE-ists. Mind you, I’ve also sat next to Peter White at a couple of (enjoyable) conference dinners – even that probably renders me tainted to some eyes.

Anyway, I’m perfectly prepared to be grown-up, reflectively self-aware, and as neutral as possible, in carrying out a review for you. But if you do open peer review you will probably be trolled for asking me.

If you still wish me to review, then please just let me know. But I thought I would give you a more than usually full COI summary first!

-X

Hardly, a description of a reviewer who is likely to be independent and unconflicted.

In the second case, Dr. Y, has a clear COI by association with pro-PACE researchers through joint work and publication. You stated that Y has: “Expertise in exercise therapy and CBT. No COI with regard to commentary authors.” Yet I discovered on Google Scholar that Y was actually a visiting staff member of the School of Psychology at the University of Southampton. The working visit was made possible by a grant from the Dutch MS Research Foundation (Stichting MS Research). The original RCT was funded by a grant awarded to R. Moss-Morris by the New Zealand Neurological Foundation.” I am sure you don’t need me to tell you that Prof Moss-Morris is closely connected to the so-called ‘Biopsychosocial Model’/’Dysfunctional Beliefs Model’ of ME/CFS advocated by the PACE Trial team. This fact is evidenced by:

[A specific publication co-authored by someone central to the PACE controversy removed because it identifies the proposed reviewer.]

It is important to consider, in addition, your own conflicts of interest as a person who worked for the Cochrane Collaboration in the analysis of individual data on exercise therapy for CFS including data from the PACE Trial and from studies by Moss-Morris (2005) (already mentioned above). The relevant reference is:

Larun L, Brurberg KG, Odgaard-Jensen J, Price JR. Exercise therapy for chronic fatigue syndrome. Cochrane Database of Systematic Reviews 2015, Issue 2. Art. No.: CD003200. DOI: 10.1002/14651858.CD003200.pub3.

It is not often in my experience that an author misrepresents the facts about his/her recommended reviewers in such an audacious and palpable manner. If you lie these days, exposure is only a few clicks away. Norway is a small country, a country that I dearly love, not a place I normally associate with cheats and rogues. That impression just took a nose-dive. You have wasted a lot of my time and you won’t be given a 3rd or 4th chance. You have already blown it.

I recommend that you reflect on the ethics and professionalism of your actions and the potentially serious consequences for your professional career. Better luck next time! But please don’t try it on again with this journal.

Regards,

David F Marks PhD

Editor

Journal of Health Psychology

editorjhp@gmail.com

The author’s response

On 26 June 2017 at 09:02, Brurberg, Kjetil Gundro <KjetilGundro.Brurberg@fhi.no> wrote:

Thank you,

You are right, this is completely my fault. It is obvious that I should have known that your ‘journal’ only accept reviewers who disregard the PACE-trial as well as the PACE-researchers. It would have helped me, though, if you had stated this policy when you first invited me to write this comment.

Best regards

Kjetil G. Brurberg

Postscript

 Some past posts relevant to complaints about the bias of the Cochrane review and my efforts to obtain the data from it for independent re-analysis. Unfortunately, despite advocating that others share data for independent reanalysis, Cochrane refuses to share.

Why I am formally requesting the data set from a Cochrane review (April 13, 2017)

Conflicts of interest in Cochrane reports on psychological interventions (January 15, 2017)

Probing an untrustworthy Cochrane review of exercise for “chronic fatigue syndrome” (April 23, 2016)

My response to an invitation to improve the Cochrane Collaboration by challenging its policies (April 21, 2016)

An open letter to the Cochrane Collaboration: Bill Silverman lies a-moldering in his grave (March 6, 2016)

 

 

Asserting privilege: PACE investigators’ request that their manuscript not be peer reviewed or receive replies

[Cartoon: rabbits and knights]

From the Art of Anna-Maria Jung http://www.annamariajung.com/

After demanding parts of an article published in the Journal of Health Psychology be retracted, the PACE investigators requested their response be published without peer review and with no comments allowed.

This episode is part of a continuing saga of the PACE investigators’ attempts to exert extraordinary control over what is said about their work.

The predicament of the scientific community with respect to the PACE trial fits well with what John Ioannidis has termed scientific inbreeding, where an interconnected group is able, temporarily at least, to dominate a scientific area and to control and contain criticism of the flaws that consistently characterize its work. We may well be witnessing a break in that control and the beginning of a decline effect, where independent critique and re-analysis of data make those flaws more inescapably obvious.

The emails reproduced below came after the PACE investigators lobbied some members of the editorial board, asking them to demand that the article be retracted and to threaten resignation if it were not.

As I previously described, when I reviewed their submission, the PACE investigators refused to revise their manuscript. Instead, they threatened to complain to the Committee on Publication Ethics, arguing that my public criticism of them should have established that I had a conflict of interest in reviewing the paper.

Before that, the PACE investigators requested partial retraction of a commentary by Keith Geraghty in the Journal of Health Psychology. They said he must remove claims and language that offended them. The PACE investigators further demanded that the journal issue a correction to the article acknowledging that Geraghty had not revealed that he suffered from chronic fatigue syndrome. This, the PACE investigators argued, constituted an undeclared conflict of interest.

The article to which the PACE investigators were objecting is:

Geraghty KJ. ‘PACE-Gate’: When clinical trial evidence meets open data access. Journal of Health Psychology. DOI: https://doi.org/10.1177/1359105316675213

The email to the Editor of Journal of Health Psychology

Dear Dr Marks,

Thank you for your quick response. We appreciate your offer of a published response, but suggest that a response to a published article would normally be reviewed by an editor, rather than going to blind review. This would also speed up publication, and quick publication of our response is essential. We would also appreciate your reassurance that our response will be electronically linked to the Geraghty article.

You say that Dr Geraghty used moderate language, but we would respectfully disagree. Saying something appears to be so implies that one believes it is so. Dr Geraghty questioned the integrity of the PACE trial team, and did so without any evidence. For example, where is the evidence that we neglected or bypassed  “accepted scientific procedures and standards”, when the trial was peer reviewed for funding, ethically approved, independently overseen, and published after peer review in a high impact journal? Whilst we are less concerned by the number of simple errors in this piece – although surprised that they passed peer review – we suggest that the personal and arguably defamatory (how else would one describe “bypassing accepted scientific procedures and standards”?) comments made in this article should have no place in a scientific journal. We therefore again ask you to revise the article to remove all these comments.

Yours sincerely

Professors White, Chalder and Sharpe

On behalf of the PACE trial team

Then, the Principal Investigator objected that others would be allowed to publish commentaries on their paper. Actually, in communications with the journal, authors can elect whether to allow commentaries. Three times (at submission, resubmission, and final approval of the proofs), White and the PACE investigators had endorsed commentaries, but now White was asserting that they had done so in error.

From: PD White <p.d.white@qmul.ac.uk>

Date: 21 December 2016 at 12:04

Subject: RE: Journal of Health Psychology – Decision on Manuscript ID JHP-16-0873.R1

To: David F Marks <editorjhp@gmail.com>

Dear Dr Marks,

 

I am surprised that you have asked for commentaries on our editorial response to Dr Geraghty’s editorial, and now ask you to reconsider this.

I pressed the “open peer commentary” button in error, thinking our response was a “commentary” on the original editorial. As I wrote yesterday, it should be considered an editorial, consistent with your promise to publish our response alongside Dr Geraghty’s editorial, and linked to it. If you publish commentaries about our editorial, these would be comments on our commentary on a commentary on our original work. When would this iterative process end? If the new commentaries mention new criticisms, we would want a right of reply to those criticisms.

Thank you for reconsidering this.

Yours sincerely,

Professor White

Postscript

The PACE investigators were offered a chance to reply to the responses that their article elicited. Despite requesting that option in the email above, they have now indicated that they will not respond. But of course, they may change their minds, as they have done in the past.

There are more posts to follow about how demanding and threatening the PACE investigators have been. They are obviously used to getting their way. We can’t readily determine the true extent to which journals have caved to them or when critics have been silenced, beyond what gets reported in social media.

In discussing inbred scientific groups, John Ioannidis has described an obliged replication, whereby proponents of “a particular approach are so strong in shaping the literature and controlling the publication venues that they can largely select and mold the results, wording, interpretation of studies eventually published.” The occasion of Ioannidis’ comments was the publication of both a non-replication and a critique of Type D personality by my colleagues and myself. Type D personality had been a dominant perspective in psychosomatic medicine. However, we can now see that our two papers marked the beginning of the rapid decline that followed. Perhaps the publication of the string of commentaries in Journal of Health Psychology will have the same effect on the biopsychosocial model and on cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome.

Related posts

My peer review of a PACE investigators’ article that the authors refused to heed

Should authors declare a conflict of interest because they suffer from the illness they are writing about? 

You can see more of Anna Maria Jung’s art at http://www.annamariajung.com/ where it is available as prints, shirts, mugs and much more. She is also offering “The Pepper Chronicles,” a kick-ass, 200-page graphic novel full of adventure, action and filthy jokes that was released in German this June.

Should authors declare a conflict of interest because they suffer from the illness they are writing about?

Some researchers issued a novel demand: correction of an undeclared conflict of interest, stemming from the fact that the author of a criticism of their work suffers from the illness targeted by their intervention.

Should the concept of conflict of interest be expanded?

Recently, a hostile reviewer demanded that authors of a manuscript submitted to The BMJ provide proof that they had a confirmed diagnosis of an illness from which they claimed they had suffered for decades. Should patient-authors get notes from their physicians to accompany their conflict of interest statements?

The critique that upset the PACE investigators is available here:

Keith J Geraghty. ‘PACE-Gate’: When clinical trial evidence meets open data access. Journal of Health Psychology. DOI: https://doi.org/10.1177/1359105316675213

The email conveying the demand is reproduced below. Basically, the investigators from the PACE trial of cognitive behavior therapy and graded exercise therapy for chronic fatigue syndrome demanded:

 

  • Partial retraction of an article critical of their work.
  • Issuing of a correction declaring a conflict of interest, because the author of the critique suffered from the illness targeted by the trial.
  • That the corrected article not appear until it could be posted alongside a full response from the PACE investigators, so that readers could compare the two.

Note that the author did not mention in the article that he suffered from any illness. Should he have?

How far should we extend this requirement? Should principal investigators be required to declare on their grant applications whether they suffer from any relevant chronic illness? Should NIMH require a formal psychiatric evaluation of applicants for depression grants?

Should authors of HIV/AIDS articles declare their viral status?

How about blanket declarations: “The authors have all had recent physical examinations and declare they have no relevant health conditions”?

How about reviewers? Reviewer conflict of interest can be important.

Finally, was it an invasion of the author’s privacy for the PACE investigators to seek out evidence of any illness and write to the journal editor about it?

Dear Dr Marks,

We were surprised and alarmed to read the on-line editorial by Dr Geraghty, published on Monday in the Journal of Health Psychology. http://m.hpq.sagepub.com/content/early/2016/10/27/1359105316675213.full.pdf

While we would support robust criticisms of science and believe people are entitled to their opinions, we were more than surprised by the personal criticisms made in the piece, which were often unsubstantiated. We do not believe that fellow scientists should indulge in ad hominem attacks and innuendos. For instance, Geraghty wrote ” However, there are accepted scientific procedures and standards that appear to have been neglected, or bypassed, by the PACE Trial team. Their actions have arguably caused distress to patients, added a million pounds of additional costs to a publically funded trial and have left us with two versions of ‘truth’ concerning the trial’s findings – the published analysis versus the recent re-analysis.” Where is the evidence for these statements?

Therefore we ask you:

  1. To revise the piece in order to remove all the personal attacks and innuendos.
  2. To include in a revision the author’s potential conflict of interest as a sufferer of the illness he writes about. See: http://iacfsme.org/PDFS/2016MayNesletter/Attachment-08-Dr-Keith-Geraghty-Doing-CFS-research.aspx
  3. To enable us then to respond with equal prominence to the remaining criticisms as a whole, in the same online first and print versions so that readers can see both articles side by side and then make up their own minds. At present this is not possible because of the selective, one sided nature of the editorial as it stands.

We look forward to your early reply.

Yours sincerely,

Professors White, Chalder and Sharpe

Co-principal investigators of the PACE trial

My peer review of a PACE investigators’ article that the authors refused to heed

 

UPDATED 7/18/2017: OK, Michael Sharpe, I should not make fun of a serious matter. Being an investigator on the PACE trial has brought you a lot of ridicule and cruel jokes.

I get it that 400 peer-reviewed publications don’t qualify me as a reviewer of your paper; I am just not seasoned enough. But could you maybe show me what you look for in a reviewer worthy of evaluating your manuscript?


“I am sure you could find many reviewers who are more qualified and who would do this. Maybe, you should advertise in social media.”

 

 

Chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME for short) is one of these.  The particular issue is the role of psychiatric or psychological approaches in the treatment of such patients.  Protest against this form of enquiry has been present for decades.  However, the increasing use of social media and blogs have co-ordinated and expanded the protest to an international one.  

-Professor Michael Sharpe

Yes, Professor Sharpe, Viva los blogs! Viva los internationales! Your group has controlled peer review for too long and you are losing your grip.

Authors do themselves a disservice by refusing to make changes suggested by reviewers, even when they have the power to do so.

I offered a tough-minded review of Peter White, Trudie Chalder, and Michael Sharpe’s response to the editorial by Dr Geraghty. Their response recently appeared in Journal of Health Psychology with almost no responsiveness to my critique. The authors offered a partial correction of their misrepresentation of CONSORT as a guideline for conducting a clinical trial, rather than a checklist for reporting.

I don’t think that there was any ambiguity as to my identity, or any doubt that I would be less than enthusiastic about what seemed to be the authors’ hastily written response to Keith Geraghty.

Presumably Editor David Marks was of the same opinion as me about this general principle: reviewers who have publicly disagreed with authors can often nonetheless offer valuable feedback, especially when editors interpret their reviews in light of the reviewers’ pre-existing opinions and expertise.

I present my review below. I invite comparison to the published paper. I also invite the authors to share their bullying response, in which they threatened to get COPE involved. It was as empty and silly a threat as what we have had to become accustomed to from Donald Trump. Make my day, Peter, Trudie, and Michael: proceed with a complaint to the Committee on Publication Ethics (COPE).

The authors deserve an opportunity to respond to criticism of their work. However, this should not become an occasion for repeating themselves to the point of self-plagiarism, for invoking dubious sources, or for ignoring past criticisms.

I think there is a broader issue here – the effect of their trial, and of the way they have conducted themselves, on the overall untrustworthiness of psychology, and of health psychology interventions in particular. They flagrantly disregard basic standards for conducting and interpreting a randomized clinical trial, switch outcomes in a dramatic demonstration of p-hacking, attempt to block publication of criticism, and refuse to share their data, even when they have published under the requirement that the data be made available. Particularly in this journal, it is important that these matters not be ignored. Publishing the authors’ comment in its present form would invite associating the journal with an endorsement of untrustworthy psychological science.

For instance, the authors make a spirited defense of their right to switch outcomes. But they don’t address the good reasons why so many psychologists, and for that matter anyone concerned with the integrity of clinical trials, are fighting against such p-hacking. They are essentially arguing that they should be granted an exception, without acknowledging the important reasons why such exceptions are typically not granted. Should they be given a free pass to ignore efforts to reform both psychology and clinical trials?

The authors’ opening point about adherence to CONSORT displays an embarrassing ignorance of CONSORT. Surely they jest. The checklist is concerned not with the adequacy with which a trial is conducted, but with the adequacy of the reporting of it. A randomized trial can be abysmal in its conduct but still score close to perfect on the CONSORT checklist, if its flaws are transparently reported. The authors have made their point in numerous places and have been corrected. That they persist in making it reflects on the seriousness with which they approached responding to the paper that criticized them.

The authors point to patient involvement on the Trial Steering Committee and the Data Monitoring and Ethics Committee. Anyone knowledgeable about international standards for these kinds of committees would find it astonishing that the authors/investigators themselves were involved on them, particularly the Data Monitoring and Ethics Committee. Having been involved in numerous such committees, I think their presence would create the perception of compromising the requisite independent judgment of the committee. Patient advocates could legitimately question whether patients on the committee were representative or free from coercion by the investigators.

The short time in which one of the key papers was under review at The Lancet raises serious questions about the adequacy of its peer review. Moreover, the refusal by the editor of Psychological Medicine to consider a letter to the editor based on re-analyses using their originally selected primary outcomes raises issues about the integrity of peer review, both prepublication and post publication.

It’s extraordinary that 16 papers would come out of a single psychotherapy trial. The authors seem to be pointing to that as an accomplishment, but for others who are concerned with broader issues, it raises issues about duplicate publication and the integrity of the peer review under which those 16 papers were published.

I of course assume the Journal of Health Psychology is intended to reach an international audience and be responsive to international standards. I don’t think the authors should be allowed to ignore the US committees, which operated under orderly and transparent rules in rejecting their diagnostic criteria and their assertions about the effectiveness of the treatment.

Their citing of Action for ME (2011) is inappropriate for a number of reasons. It’s an unrepresentative survey that was not subject to peer review. The claims that patients believe their treatments lead to improvement are contradicted by the extraordinary petitions signed by thousands.

I’m not sure that citing Action for ME (2011) is appropriate in this context, but in any case the authors make a self-serving lumping of response categories and inaccurately portray the survey. Given that graded exercise therapy was also assessed, the figures for it should also be given: 31% had received GET and 48% thought that it should be made available.

These are the kinds of issues that would be missed by a casual reader, even a very intelligent and methodologically sophisticated reader who is simply not familiar with this literature and with the authors’ role as partisans for a particular perspective on it that is not shared by all.

Advocates for the improvement of the trustworthiness of the psychological literature should be particularly offended by the distorted view offered in their point (3). They are dodging the very important issue of not only investigator allegiance, but investigator conflict of interest. Raising this issue about other interventions in other contexts has led to dozens of corrections and errata. (I think it’s important that if this reply is published in any form, it should be accompanied by an editor-approved declaration of conflict of interest.)

I’m quite familiar with the authors’ 2015 Lancet Psychiatry paper, but it is unrecognizable in the way it is described in this comment. Numerous reasons why the authors cannot interpret the follow-up data in the way they do here and elsewhere are presented in letters to the editor, which the authors ignore. This is not the place to elaborate, but basically the authors abandoned their protocol, and a substantial portion of the patients available for the follow-up were no longer receiving the treatment to which they were assigned.

The authors substantially misrepresent the use of Freedom of Information Act requests and their response to them. They exaggerate the number of requests that were made by counting multiple times any request that involved multiple variables. Furthermore, they misrepresent their responsiveness. They have released their data only when they were involved as authors and had control over subsequent publications. They also misrepresent the multiple times they have invoked the excuse that the people requesting the data were vexatious.

I find it odd that they retreat to a blog post by Simon Wessely as a defense of their many methodological problems. The blog post was not peer reviewed and received a huge amount of substantive criticism. At best, Simon Wessely invokes his authority in place of evidence. Surely the authors can do better than an authority-based argument. It is my opinion that they should not embarrass themselves by bringing the blog post in here. If nothing else, I think they should respect the journal as a more formal forum.

Their excuse that they do not release their data because the consent forms do not allow it was argued in proceedings that cost them over 250,000 pounds. The final decision of the lower tribunal soundly rejected this excuse after reviewing it in explicit detail, including direct testimony from the author group. Here, as elsewhere in their reply, they are pleading for an exceptionalism whose basis I cannot understand.

It is debatable whether improvement rates of 20% and 21%, compared with 10% for the SMC-alone group, justify a claim that the therapies “moderately improve” outcomes. But the authors do not expose readers to this issue; they just gloss over it.

I could continue with some serious substantive methodological, statistical, and interpretive issues. However, I think I have sufficiently established that the authors have not made effective use of their opportunity to reply to the editorial. Any effort to continue to exercise that option would have to be with a thoroughly revised manuscript requiring another peer review. What we see in the present version, however, is a thorough rejection of international standards, as well as the principled reasons behind efforts to improve the conduct and reporting of psychosocial interventions in clinical and health psychology.

There is another possibility available to the journal, however. Simply publish the authors’ response as is, but allow reviewers to respond in print and to point out that the authors have repeated themselves, citing instances of this, and that the authors resisted encouragement to revise the manuscript from its present form. Rather than leaving all this behind the curtain, early career investigators could get an interesting look into the process of challenging bad science and the efforts to resist these challenges.

 

Is your manuscript ready for uberized readers and radically changed journal websites?

Publishers are spending millions revamping journal websites. What were formerly simple portals where you accessed articles are being radically redesigned as online content delivery platforms. As the website for the JAMA Network of journals announced late last year:

We have aimed to make the platform more usable, discoverable, and faster on any device.

In case you haven’t noticed, many journals no longer publish articles on paper, which are bound in journals and stored on library shelves.

Many journals still preserve the appearance of publishing articles organized in “issues” scheduled on particular dates. But articles no longer have to wait to be organized into an issue. They now typically appear as quickly as possible as “early views.”

[There is a sneaky trick here. Journal impact factors are supposedly calculated based on the number of citations articles receive within two years of publication. But Web of Science starts the two-year window from what is an increasingly artificial date of publication. Journals can exploit early views to start accruing citations before that official publication date. Journal impact factors can thus be manipulated by having lots of early-view articles available months before they are assigned an issue and page numbers.]
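To make the arithmetic concrete, here is a minimal sketch. It uses a hypothetical citation curve and a simplified two-year window, not Web of Science’s actual algorithm, to show how keying the window to the official publication date while citations accrue from the online date can nudge the number upward:

```python
# Toy illustration (not the real Web of Science calculation) of how early views
# can inflate a journal impact factor. Assumption: the impact factor for a
# census year counts citations made that year to articles officially published
# in the two preceding years, divided by the number of such articles.

# Hypothetical citation curve: citations per year as a function of how many
# years the article has been available online.
cites_by_online_age = {0: 1, 1: 4, 2: 6, 3: 5}

def citations_in_year(year, online_year):
    return cites_by_online_age.get(year - online_year, 0)

def toy_impact_factor(census_year, articles):
    """articles: list of (official_publication_year, year_first_online)."""
    window = {census_year - 1, census_year - 2}
    pool = [a for a in articles if a[0] in window]
    cites = sum(citations_in_year(census_year, online_year) for _, online_year in pool)
    return cites / len(pool)

# Journal A assigns articles to an issue the same year they go online.
journal_a = [(2019, 2019), (2020, 2020)]
# Journal B posts the same articles as early views a year before the official
# issue date, so the two-year clock starts later in each article's citation life.
journal_b = [(2019, 2018), (2020, 2019)]

print(toy_impact_factor(2021, journal_a))  # (6 + 4) / 2 = 5.0
print(toy_impact_factor(2021, journal_b))  # (5 + 6) / 2 = 5.5
```

The numbers are invented; the point is only that when the counting window starts later than the article actually became available, it can land on years when the article is being cited more heavily.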

You can still download old-fashioned PDFs resembling the pages of paper journals, but PDFs no longer need to be papercentric, modeled after what was printed on dead trees and bound in volumes. Online content delivery platforms increasingly offer enhanced PDFs. As described by Wiley:

Whilst keeping the clear layout and simple design of the standard PDF, PDFs opened in the ReadCube Enhanced PDF format, feature hyperlinked in-line citations and clickable author details, allowing quick look up and cross reference. Supplementary information, figures and other valuable article data, are always just a click away, making it easier for researchers to discover, access and interact with our scientific literature. Integrated social sharing and social metrics data are also available from Altmetric, connecting our articles in new, innovative ways.

Wiley, like other for-profit publishers, capitalizes on enhanced PDFs to make money in new ways, by providing:

More flexible options for “Pay-Per-View” journal article access, replacing our existing PPV service for individual users with affordable rental, cloud and PDF download options, ensuring we give users a range of options in how they view and access that content.

Particularly with publishers of medical journals like the American Medical Association, you may have noticed that email alerts have for a while had links for temporary free access to articles. However, when the links take you to the online platform, you find that you can’t download the PDFs without paying a fee, but must read them in the ReadCube or other enhanced PDF platform.

Scientific publishing is also digitalized, with journals no longer accepting submissions by snail mail, but requiring that submissions be uploaded through portals like ScholarOne.

Digitalization also means that access to articles can be monitored, with readers, individually and collectively, put under surveillance concerning their viewing habits. Once harvested, these data become extremely valuable in guiding decision-making.

Big data also provide instant feedback about what topics and what specific articles are getting attention, and can even be used to make judgments about how successful particular authors are in drawing traffic to the journal’s website.

In the process of converting to online content delivery platforms, publishers have followed the lead of Amazon, Netflix and Uber. Sometimes hiring programmers from these organizations, publishers have created algorithms to collect and process big data to personalize what is being offered to readers accessing their websites.

You may have noticed that when you access an article, it is accompanied by recommendations about other articles that may interest you.
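For readers curious about the mechanics, here is a minimal sketch of one way such “you may also like” lists can be generated, using item-to-item co-viewing counts. The reading sessions and article IDs are invented, and publishers’ actual recommendation algorithms are proprietary; this is only an illustration of the general idea:

```python
# A toy "readers who viewed this article also viewed..." recommender based on
# co-viewing counts. Hypothetical data; real platforms use richer signals.
from collections import Counter
from itertools import combinations

# Hypothetical reading sessions: each set lists the article IDs one visitor viewed.
sessions = [
    {"pace-gate", "consort-checklist", "open-data"},
    {"pace-gate", "open-data"},
    {"pace-gate", "type-d-critique"},
    {"consort-checklist", "open-data"},
]

# Count how often each ordered pair of articles was viewed in the same session.
co_views = Counter()
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_views[(a, b)] += 1
        co_views[(b, a)] += 1

def recommend(article_id, k=2):
    """Return up to k articles most often viewed alongside article_id."""
    scores = Counter({b: n for (a, b), n in co_views.items() if a == article_id})
    return [article for article, _ in scores.most_common(k)]

print(recommend("pace-gate"))  # e.g. ['open-data', 'consort-checklist']
```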

Yup, journals are evaluating topics and authors in terms of their ability to draw traffic to their websites, keep visitors on those websites, and bring them back.

In authors’ cover letters accompanying submissions, it has become strategic to inform the editor specifically how acceptance of your manuscript would serve this aim of the journal. Authors might do well to cite the altmetrics of their last paper published in the journal:

Dear Editor,

We are submitting our manuscript [title] to your journal because of the extraordinary altmetrics achieved by our last paper published there.

Depending on the value editors attach to your submission, journals may expect you to collaborate with them in increasing traffic to their website, once your manuscript is accepted.

For papers promising to be particularly successful, the journals may expect you to provide press releases; write additional shorter, more engaging abstracts; and even prepare audio and video presentations and especially podcast interviews.

Over the coming months, I will be writing a series of blog posts about these huge changes in scientific publishing.

I’ll also be making available free videos about the opportunities and challenges these transformations pose for you as an author. This summer I will be releasing a web-based course of five one-hour videos entitled How to Write High Impact Papers: A Strategic Approach. I’ll still be offering live workshops, such as at the European Health Psychology Conference in Padova at the end of August, but I am seeking to reach a broader audience by going to the web. The videos will be in English, with subtitles in English and a variety of other languages.


You can sign up now for email alerts about my blogs, web-based courses, and e-books at @CoyneoftheRealm.com.

But for now, I will simply point to the temptations that the aims of these online content delivery platforms pose for authors.


Editor reading another cover letter saying a manuscript is paradigm-changing.

 

Editors seek manuscripts with newsworthy, attention-grabbing story lines, not just another brick in the wall. They want claims of (God save us) paradigm-changing findings. They are less concerned with what is robust and enduring than with what draws traffic to their platforms. Yet so much well-executed and transparently reported science does not fit this picture. Are you tempted to make your manuscript attractive in these terms, even if you have to get flexible in what data you report and how you analyze and interpret those data?

Basically, publishers are transforming their websites based on big data suggesting that if they are going to maximize success and profit, they must adapt to new readers.

These readers are being profoundly changed by mobile devices and the internet. Their behavior has been shaped by their experience with social media and digital devices. The new wave of readers is accustomed to ordering pizzas and Uber cars easily and to being guided by the ratings of other users of these services. They expect similar experiences when accessing scientific articles.

I don’t think we can ignore all this and still get the most out of reading and writing scientific papers.

The old ways of doing things are working less and less well.

Are you gearing your manuscripts and cover letters to the new wave of readers that publishers want to attract to their platforms and keep coming back?