Spokesperson clarifies that the American Psychological Association does not endorse routine data sharing

I provide an update of the recent blog post about a researcher requesting a small set of variables to verify claims appearing in an article published in an APA journal. My update involves a further exchange of emails between this researcher and the APA publications office.

In the new exchange:

  1.  APA confirms that it has only a weak policy concerning authors having to share data and that it does not really endorse routine data sharing.
  2.  APA indicates that any reanalysis of data requested from an article published in an APA journal must stick strictly to reproducing the analyses in the published article, and not go further without the express permission of the original authors.
  3.  Basically, data provided in response to a request can be used only to check specific statistical calculations, not to conduct further analyses that might shed light on the appropriateness of the original authors’ conclusions.

Merely rechecking the original calculations is usually not where any clarification of controversies will be found.

I present further evidence of contradiction and even hypocrisy in what APA says about data sharing.

I make three nominations to a Wall of Shame of those who resist correction of potential untrustworthiness in the psychological literature by penalizing and threatening those who request data for checking by reanalysis.

In an editorial entitled Data Sharing, the editors of the New England Journal of Medicine condemned as “research parasites” researchers who seek to test alternatives to authors’ hypotheses, or new exploratory hypotheses, with data shared from published articles. The APA seems to have much the same attitude.

Improving the trustworthiness of psychology depends crucially on the work of such research parasites.

Why? For many areas of psychology, the scope and expense of research projects make replication initiatives impractical. Efforts to improve the trustworthiness of psychology involve insisting on completeness and transparency in what is published, but also on routine sharing of data from published papers. Because we are well aware of the limitations of pre-publication review of manuscripts, we need independent, post-publication review with access to the data. APA’s position makes any such scrutiny more difficult.

This incident again demonstrates the institutional resistance to data sharing and the institutional support available to authors who want to protect their claims from independent scrutiny.

I wonder what these authors were hiding by presenting obstacles to accessing their data. We won’t get to see.

After all, Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results.

What was reported so far

In the recent guest blog post, Professor Christopher Ferguson of Stetson University described the extraordinary resistance he experienced when he attempted to obtain a few variables from an author to decide for himself whether her claims about the effects of exposure to media violence on youth violence held up. To summarize:

  1.  Chris first asked for data sets from two articles, which were needed to check the claims in one of the articles.
  2.  The spokesperson for the Flourishing Families Project, from which the data came, objected. She considered what Chris was proposing to be new analyses rather than merely fact-checking the original paper.
  3.  Chris revised his request to a more modest set of variables across 3 time periods, plus some covariate/control variables, maybe a total of 14-15 variables.
  4.  In response, he received an invoice for $450 and a contract that he was required to sign to obtain the data. The contract stipulated that Chris could only reproduce the exact analyses presented in the article and would have to obtain permission of the Flourishing Families Project to publish any results. The contract carried the penalty of ethics charges if these terms were not met.
  5.  Chris launched a GoFundMe drive and raised the $450, but also contacted the APA Ethics Committee.
  6.  While Chris was waiting for a response from APA, he received a new contract from the spokesperson for the Flourishing Families Project. The revised contract seemed to relax the requirement that Chris get permission from the project to publish results, but it retained the restriction that he could only reproduce analyses contained in the article. It introduced a threat of a letter to Chris’s Dean if he did not accept this restriction.
  7.  Chris then received a letter from the APA Ethics Committee explaining that it is not appropriate for authors to restrict independent re-analyses needed to confirm their conclusions.
  8.  A victory? No, not really. The spokesperson for the Flourishing Families Project then wrote to Chris, stating that her previous letter had been written in consultation with the APA Journals Office and General Counsel. Wow, brought in the lawyers. That should scare Chris.

The big picture issues

An earlier post at Stat, “How researchers lock up their study data with sharing fees,” had nailed the key issues in the author’s response to Chris:

The story highlights a potentially uncomfortable aspect of data-sharing: Although science is unquestionably a public good, data often are proprietary. Drug companies, for example, spend millions upon millions of dollars on clinical trials for their experimental products. While the public certainly has a right to know the results of those trials — favorable or not — researchers who want access to the data to conduct their own studies can’t reasonably expect the original investigators not to recoup the costs of sharing it.

Okay, but:

 Trying to recoup costs is fine in the abstract, but if it’s used as just another way to avoid sharing data, then it’s deeply objectionable.

And:

One thing about the Ferguson-BYU example is clear: We need explicit policies. Winging it will work about as well in this arena as it does in, say, presidential debates. Existing rules about data sharing, if they even exist, are vague and institution-specific, and permit researchers to erect obstacles, financial or otherwise, that give them effective veto power over the use of their data.

Now, a further exchange between Chris and APA

The next chapter in this story started with an email from Rosemarie Sokol-Chang, who identifies herself as Publisher, APA Journals (Acting). The full email is reproduced below, but I want to highlight:

“Certainly there is a group of scholars and organizations that advocate for open sharing of data – free and without restriction. While these movements exist, the extent to which to share data is up to the author, and in this case, the author chose not to freely share it.”

I guess we can count APA out of a “group of scholars and organizations…”

The actual emails

Subject: APA Journals – data share request

Dear Chris,

Jesse and I met to discuss your request to reuse data, in light of the letter you received from the Ethics Committee. From our read of the letter, we (Journals and General Counsel) are not interpreting the code differently from the Ethics Committee – that is, data should be released to those who “seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose…”

We understand the claim you are making regarding data sharing, but that extends beyond the ethical code APA set and follows. Certainly there is a group of scholars and organizations that advocate for open sharing of data – free and without restriction. While these movements exist, the extent to which to share data is up to the author, and in this case, the author chose not to freely share it. Per Standard 8.14:

“Psychologists who request data from other psychologists to verify the substantive claims through reanalysis may use shared data only for the declared purpose. Requesting psychologists obtain prior written agreement for all other uses of the data.”

I understand this is not the outcome that you want, but the author is complying with the current APA Ethics Code and the APA Journals policy of sharing data for verification.

All best,

Rose

Rosemarie Sokol-Chang, PhD
Publisher, APA Journals (Acting)
American Psychological Association
750 First Street NE
Washington, DC 20002

Decide for yourself, but I think this email indicates that APA is contradicting its earlier communication while denying that it is doing so.

Chris replied:

From: Chris Ferguson (Psychology Professor) [mailto:cjfergus@stetson.edu]
Sent: Wednesday, November 30, 2016 6:01 PM
To: Raben, Jesse <jraben@apa.org>; Sokol-Chang, Rose <RSokol-Chang@apa.org>
Subject: Re: APA Journals – data share request

hi Jesse (and Rose):

Thanx for being willing to continue to dialogue with me on this.  Unfortunately I don’t think we’re at “apples to apples” yet.  To be clear (I thought I had been, but if not then I apologize) my intent is, consistent with 8.14, to verify the substantive claims of the original paper not to do “anything I want”…I don’t think I *could* do anything else with this dataset as it has so few variables in it.

However, my interpretation of both 8.14 as well as the letter I received from the Ethics Committee is that, while I must test the same hypotheses as the original paper, I am not restricted to the original analyses.  The Ethics Committee letter appears rather clear on this in fact as they say “Thus, the Committee feels that Standard 8.14(a) promotes the sharing of data for reanalysis, whether this be a straight replication of the published analysis or not.”  This makes a great deal of sense because, of course, the initial analyses may be wrong…and it makes no sense for a verification effort not to check this.

Let me ask you a few pointed questions.

1.) If I were to discover that a variable had been miscalculated in the original dataset, would I be able to recalculate it and rerun the analyses with the corrected variable?

2.) If I discovered that the original analyses were misspecified…would I be able to rerun a corrected analyses with proper specifications?

3.) If I were to learn that the analytic approach itself were inappropriate for the data, would I be able to test the substantive claims using alternate, more appropriate analyses.

4.) If I discovered other, unforeseen, errors in the data or analyses, would I be able to report these?

As I read the current contract, the answer to each of these questions would be “no.”  If I am incorrect in my interpretation (and I may be) I think it would be important for the contract to make clear what I *can* do, particularly in light of the unpleasant language in it threatening ethical complaints and calls to my dean.

Thank you for your consideration.

Chris

And then Rosemarie Sokol-Chang responded:

Hi Chris,

The intent of the 8.14 as we apply it to authors is to offer a check of the validity of what was reported. If you were to receive the data, and run the same analyses, and get different results – we would want the scientific literature cleaned up in light, so that the article didn’t persist with inaccurate results. If you were to find that the data looked “fishy” – which happens rarely but there are some big-name cases of numerous retractions by the same author – this is something APA Journals would also want to know to be able to take measures to clean up the record. This is the “verification” step.

Replication is duplicating an entire experiment – you’d be collecting new data following the same method. Reanalysis is using the same data set – and whether or not a requestor can use a data set to run any particular analyses not reported in the manuscript is ultimately up to the author.

All best,

Rose

Rosemarie Sokol-Chang, PhD
Publisher, APA Journals (Acting)
American Psychological Association
750 First Street NE
Washington, DC 20002
rchang@apa.org | 202-336-5667
www.apa.org/pubs/journals

Déjà vu all over again

APA has been here before. See:

The APA and Open Data: one step forward, two steps back?

And what APA really meant in:

Access to Archives of Scientific Psychology Data

One of my earliest blog posts ever was about a study from this department: Did a Study Really Show that Abstinence Before Marriage Makes for Better Sex Afterwards? Requesting data was not an option in 2011. I don’t think getting a look at the data was needed to establish the patent absurdity of this study’s methodology and conclusions.

Three Nominations for the Routine Data Sharing Wall of Shame


Sarah Coyne blocked efforts to independently verify her claims about effects of media violence on children.


Laura Padilla-Walker got APA General Counsel involved and raised the threat of going to Chris’s Dean.


Rosemarie Sokol-Chang withdrew what had been apparent APA support for Chris’s request and clarified that APA does not support routine data sharing.

Postscript

Is it really wrong to shame researchers who are not willing to share data or do so with unnecessary obligations?

If you can earn badges for uploading new data, why shouldn’t we also give badges for sharing old data? – Uli Schimmack on FB. November 29, 2016

A Quixotic Quest to Obtain a Dataset on Media Violence With an Unexpected Price Tag

A guest blog post from Christopher Ferguson, Stetson University, with a commentary from Coyne of the Realm

Evaluating the trustworthiness of psychological research cannot depend on independent replication. Many studies that claim clinical and public policy relevance are too large, require too many resources, or have too long a follow-up for replication to be feasible. Improving the trustworthiness of published psychological research depends heavily on the routine availability of data sets for independent scrutiny. Many journals, including those of the American Psychological Association, require that authors make data available upon request. But as this invited blog post demonstrates, enforcement of data sharing is spotty. Those unwise enough to request data invite a lot of frustration.

The situation will only improve if researchers running into difficulty obtaining data publicize their frustrations, so that opinion can be mobilized to correct a pervasive and unfortunate situation.

  • The story starts with a request to the Flourishing Families group for a small amount of data that would allow re-examination of their analyses and conclusions concerning effects of media violence.
  • The request made use of a little-known requirement for publication in APA journals that authors make their data available to other competent investigators to verify their substantive conclusions.
  • The response from Flourishing Families was an invoice for $450 and a contract that would restrict any analyses of the data to those narrowly required to verify the exact analyses of the original article, with no opportunity to explore the robustness of the paper’s claims through alternate analyses.
  • The APA ethics committee acknowledged that restricting analyses was probably not permitted, but failed to provide clarity or guidance on other issues, such as the invoice or requiring Flourishing Families’ permission to publish any reanalysis.
  • Under the terms of the contract, publication of any analyses beyond those that were stipulated, even correcting computational errors in the original article, could be subject to ethical sanctions.
  • The $450 was raised in a GoFundMe campaign but the restrictive contract remained.
  • Ultimately, current APA ethics policy appears to fail to prevent substantial roadblocks to data sharing.

Media violence research was once the darling of psychological science. Back in 2000, the American Academy of Pediatrics claimed there were 3500 studies of media violence with only 18 not finding effects on youth aggression. Since then, confidence in media violence research has crumbled, with numerous null studies, failed replications, and controversies about the poor methods and possible questionable researcher practices used in many studies. The AAP claim about 3500 studies proved to be apocryphal, with the number pulled not from a scientific source but from a pop psychology book. Yet the debate goes on, with new studies adding fuel to the fire.

In early 2016, one longitudinal study, authored by Dr. Sarah Coyne at Brigham Young University, argued that watching violence or relational aggression on TV would lead to the same behaviors in youth (henceforth, S.Coyne, 2016).

Coyne SM. Effects of viewing relational aggression on television on aggressive behavior in adolescents: A three-year longitudinal study. Developmental Psychology. 2016 Feb;52(2):284-95.

The evidence for such claims from the study appeared weak. The authors used a path analysis/SEM approach. Effect sizes were near zero (standardized coefficients between .02 and .06) and, in the case of physical aggression, dependent on a relaxed (p < .10) criterion for statistical significance. The data came from a larger Flourishing Families dataset, which included many variables not used in S.Coyne (2016).

Eventually I got to thinking…maybe these results aren’t robust. SEM-based approaches to data can be squishy. And the authors hadn’t included many potential control variables (indeed, the relationships tested were basically bivariate). So I wrote to the Flourishing Families group in July 2016 to request the S.Coyne (2016) data. Authors publishing in APA journals sign documents acknowledging they’ll provide data on request for verification of substantive claims. Thus, a data request seemed straightforward. But it would begin a strange journey into the land of APA ethics policy, which is heavy on loopholes and low on clear guidance.

Since it’s at the crux of the matter, I’ll post the relevant APA ethics policy here (8.14):

After research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose, provided that the confidentiality of the participants can be protected and unless legal rights concerning proprietary data preclude their release. This does not preclude psychologists from requiring that such individuals or groups be responsible for costs associated with the provision of such information.  Psychologists who request data from other psychologists to verify the substantive claims through reanalysis may use shared data only for the declared purpose. Requesting psychologists obtain prior written agreement for all other uses of the data.

My request eventually made its way to Dr. Laura Padilla-Walker who, though not an author on S.Coyne (2016), served as contact person for Flourishing Families. Since other data, including some potential control variables on family environment from Flourishing Families, had been published in another APA journal, I originally requested both datasets to see if the substantive claims of S.Coyne (2016) would be robust once control variables were included. Dr. Padilla-Walker objected to this request, noting that this would constitute new analyses rather than merely fact-checking the original paper. Fair enough, I figured, what about just the data from S.Coyne (2016)? That dataset seemed pretty small…just four variables (TV relational and physical aggression, real-life relational and physical aggression) across 3 time periods, plus presumably age and gender. Maybe 14-15 variables.

In response to this request I was sent an invoice for $450 and a contract.

The wording of the contract appeared to restrict me to conducting only the exact analyses (even if they were wrong) that S.Coyne (2016) had done, and to require permission of the Flourishing Families group before publishing any findings. The contract explicitly threatened the filing of ethics charges were I to deviate from this.

Regarding the $450 charge, APA ethics policy allows for “costs associated with the provision of such information.” But what a reasonable cost is isn’t clarified. Generally, I’d assume this would be anything material…such as if I requested the data on a USB drive, or if I made some unusual and time-consuming requests of the data (combining multiple unrelated datasets into one file, say). But I’d imagine having a data file and code book ready is simply expected of scholars, not a recoupable cost. And I’d guess transferring roughly 15 variables from one file to another should take a graduate student ten minutes. But I was charged $300/hour for 1.5 hours’ work. Clearly, I’m in the wrong business (my salary as a full professor is an embarrassing fraction of $300/hour). I called the APA Ethics Office about this, and the investigator I spoke to confirmed the APA had no particular guidelines on what constituted a reasonable charge. Brigham Young University’s Office of Research and Creative Activities likewise confirmed they had no policy against such a financial proviso.

Stipulation of fees can represent a non-trivial barrier to data requests. Being at a small liberal arts school and without grant funding, I imagine many scholars like myself would find (arguably spurious) financial invoices a burdensome block to data requests. I came up with a creative solution: I launched a GoFundMe campaign! Within a short time, generous donations poured in and I was able to pay the invoice. But the problem of the contract remained.

Regarding the contract itself, the stipulation about only being able to reproduce the *exact* analyses of the original S.Coyne (2016) article seemed to prevent any substantive reanalysis that could verify or question the claims of the original article. What if different SEM models produced different results, or regression analyses raised questions about the article’s conclusions? Might Bayesian analyses reach a different conclusion? Or what if a variable had simply been miscalculated? As I read the language of the contract, I understood it to mean I couldn’t even fix a miscalculated variable if there was one (I was not corrected by Flourishing Families when I pointed this out to them). Is this level of specificity allowed under APA policy?

I reached out to the APA Ethics Committee in writing. While I waited for a reply, I sent Flourishing Families a revised contract suggesting some loosening of the restrictions on what I could do to verify their substantive claims. Ultimately, after having sought input from the APA Journals Office and General Counsel, Flourishing Families wrote back with a new contract of their own. Dr. Padilla-Walker informed me that the APA General Counsel had helped her with some of the specific wording. Some of the language requiring permission from Flourishing Families to publish any results appeared to have been softened, but the stipulation regarding exact analyses had not been. Additionally, the new wording added a new level of negativity with a threat to call my Dean if I did not comply!

After a month or so, the APA Ethics Committee wrote back to confirm that it is not, in fact, permissible to include analysis restrictions on data requests for verification:

In their discussion the Committee focused on the key terms bolded in the above citation.  Namely that data from published research should be made available for purposes of ‘verify[ing] substantive claims through reanalysis…”  The Committee noted that there is a difference between a reanalysis and a replication study.  Thus, the Committee feels that Standard 8.14(a) promotes the sharing of data for reanalysis, whether this be a straight replication of the published analysis or not.  However this does not necessarily require the release of variables/data that were not included in the original published study.

However, as to the issue of whether the contract could include a demand that I obtain Flourishing Families’ permission to publish the results of a reanalysis (as in the original contract), the APA Ethics Committee whiffed, suggesting I contact the Journals Office or open science groups like the Open Science Framework.

When I contacted Dr. Padilla-Walker at Flourishing Families with the information from the Ethics Committee, she reminded me she’d gotten endorsement for her revised contract from the APA Journals Office and General Counsel. She forwarded an email to her from Dr. Rosemarie Sokol-Chang at the Journals Office endorsing Flourishing Families’ revised contract and even suggesting some of the wording. I reached out to Dr. Sokol-Chang about this and provided the letter from the APA Ethics Office. As of this writing, she has promised to confer with APA General Counsel and get back to me.

In the meantime, as the APA Ethics Committee suggested, I reached out to Dr. Brian Nosek at the Center for Open Science.  Dr. Nosek stated to me that the stipulations of the Flourishing Families contract were not consistent with transparency and openness in science.

And there things grind, nearly five months after my original request.  I am being charged $450 for a tiny dataset I can’t do anything with.  This would be a bad deal, even ignoring the ethics issues.  And, aside from possibly the proviso regarding whether I can be restricted only to exact analyses from the original study, there appears to be nothing in APA ethics policy to prevent original study authors from throwing up multiple financial and contractual roadblocks to a data request vetting their work.

As the charming conclusion of the Flourishing Families contract notes, were I to find evidence of a mistake in S.Coyne (2016) and decided the scientific community needed to know about it, it could be me who faced an ethics complaint.

Most eye-opening to me is the inherent vagueness of APA ethics policy, which does little to prevent shenanigans from authors seeking to protect their work from independent and open verification. APA policy remains vague, apparently purposefully, on key issues. Perhaps this is due to an inherent conflict of interest…arguably it’s not to the advantage of APA journals to have their articles fact-checked by independent scholars…fact-checking that could reveal errors and perhaps lead to retractions.

The funny thing about S.Coyne (2016) is to consider what, really, is the worst that could come from my independent verification. Let’s say I disagreed with the Flourishing Families analyses and came to a conclusion of null effects of viewing violence on TV on adolescent behavior. Assuming I ever got my reanalysis published, they’d write a rebuttal…we all get pubs and, given how intractable the media violence debate is, the opinions of scholars in the field would likely change barely at all. I’ve yet to see a career shattered by such an exchange.

Instead we’ve exposed the limits of APA ethics policy. Policy which, at present, appears – whether intentionally or not – to discourage data sharing with independent scholars who are associated with neither the original authors nor the APA itself. Until the APA adopts policies that truly promote open science, articles such as S.Coyne (2016) should be considered apocryphal, the evidentiary value they provide for science limited.

Chris Ferguson is a professor of psychology at Stetson University.  He studies various media influences, including video game violence, thin-ideal media and body dissatisfaction and “sexy” media.  He is a fellow of the American Psychological Association, and received an early career scientist award through APA’s Division 46 (Media Psychology and Technology).  Aside from this professional work, he also writes speculative fiction which can be found at his website ChristopherJFerguson.com.  He lives in Orlando with his wife and son. 

Storing the mentally ill and suicidal persons in the new asylums

I recently blogged about anti-suicide smocks as an effective, but unacceptable way of reducing suicides in incarcerated inmates, whether prisoners in US jails or detainees in Gitmo.

There is evidence that anti-suicide smocks reduce suicides, but they are often used to justify keeping suicidal persons in inhumane settings. Because the smocks prevent suicide, they become an excuse for not providing more appropriate, less restrictive care. Anti-suicide smocks can become a form of cruel and unusual punishment.

I referred to two prison settings in Massachusetts. Billerica House of Correction reduced suicides by requiring that at-risk persons wear anti-suicide smocks. Hampden County Correctional Center had adopted enlightened measures that made anti-suicide smocks unnecessary.

Then I came across a brilliant longread article, one of The Desperate and the Dead series on the treatment of the severely mentally ill from the Boston Globe’s Pulitzer Prize-winning Spotlight investigative team.

There may be no worse place for mentally ill people to receive treatment than prison, yet a growing number end up in the ‘new asylums’

The article to which the link above takes you has an excellent 4-minute video giving a prisoner’s first-person account of not being able to get appropriate treatment.

I strongly recommend the Boston Globe article, as well as the longer series of which it is part.*

The article documents that, of the more than 15,000 prisoners discharged from Massachusetts jails and prisons, more than a third suffer from mental illness. Over a third of inmates with mental illness released from state prisons will be locked up again, a higher percentage than among those without mental health problems.

There is a lack of mental health treatment in Massachusetts prisons for persons known to have mental health problems:

Altogether, specialized mental health treatment units in the 15 Massachusetts prisons have space for 285 inmates — 10 percent of the 2,900 with diagnosed mental illness, and less than half of the 725 whose illnesses are designated as serious by prison officials.

What treatment is provided occurs under bizarre circumstances that are unlikely to allow the therapeutic relationship needed for therapy to be effective.

Even in the units with the best available care, treatment is distorted by security demands. Inside the 19-bed Secure Treatment Program at maximum-security Souza-Baranowski, group therapy sessions take place in a room dominated by a semicircle of six imposing metal cages painted periwinkle blue, known as “therapeutic modules.” Each inmate is locked into his or her own cage for group talks — thick plastic spit shields between them — to eliminate the risk of violence. Even during outdoor recreation time, each inmate is locked into his own large metal cage.

The situation in Massachusetts prisons has attracted the attention of the federal government.

Seven of 15 prisons in Massachusetts are federally designated “health professional shortage areas” in the realm of mental health, meaning they exhibit “extreme need” for more clinicians and employ fewer than one psychiatrist per 2,000 inmates.

The Globe also documents that prisoners with mental health problems are discharged under circumstances likely to exacerbate those problems:

The Harvard-led Boston Reentry Study found in 2014 that inmates with a mix of mental illness and addiction are significantly less likely than others to find stable housing, work income, and family support in the critical initial period after leaving prison, leaving them insecure, isolated, and at risk of falling into “diminished mental health, drug use and relapse.”

The released prisoners get little assistance in obtaining the vital follow-up to whatever treatment they got within prison, including renewing prescriptions for medication:

Last year, 90 percent of the estimated 6,000 inmates with mental illness who were released from jails and prisons got little or no help from DMH [Department of Mental Health] as they tried to find treatment in the community, according to numbers provided by the state.

The Globe article provides first-person accounts from prisoners and prison officials that convey an utter helplessness in coping with the harsh experiences facing the mentally ill, while imprisoned and when they are released. I highly recommend the article.

[Not everybody agrees with the Spotlight series or my assessment of it. For a critical response to the series, see Response to The Boston Globe Spotlight Series ]

*But here are links to three brief videos for the rest of the series:

The mental health care system in Massachusetts is broken

When despair meets deadly force

Crisis in the woods

Means restriction to prevent suicide in the LA County Jail: the safety smock

Last Sunday night, CNN re-ran an episode of This is Life with Lisa Ling, Inside the Largest Jail System in the Country. My interest was that the LA County Jail is also by far the largest mental health facility in the United States. I think most of my readers will be shocked by what I discuss in this blog post.

Here is a 1:27 video that is a tantalizing teaser that makes some important points.

Here is a link to the just-under-2-hour full video of the episode. You can fast-forward through some of it to come to more unsettling material about the lives of the mentally disordered persons who make up more than 20% of the LA County Jail population.

I noticed that some of the suicidal patients had on prison clothes that resembled something out of Star Wars.

Here is what Wikipedia says about these anti-suicide smocks:

An anti-suicide smock, Ferguson, turtle suit, or suicide gown is a tear-resistant single-piece outer garment that is generally used to prevent a hospitalized, incarcerated, or otherwise detained individual from forming a noose with the garment to commit suicide. The smock is typically a simple, sturdily quilted, collarless, sleeveless gown with adjustable openings at the shoulders and down the front that are closed with nylon hook-and-loop or similar fasteners. The thickness of the garment makes it impossible to roll or fold the garment so it can be used as a noose. It is not a restraint and provides modesty and warmth while not impeding the mobility of the wearer.

These items are formally known as Safety Smocks and were designed and developed by Lonna Speer in 1989 while she was a nurse working in the Santa Cruz, California, county jail. Safety Smocks are now standard issue throughout jails and prisons in the United States. The same material is used for the anti-suicide blanket. Prior to use of the Safety Smock, many jails and prisons stripped inmates naked and held them in a stripped-down padded cell with no furniture or protrusions of any kind. Some facilities opted to use paper gowns to provide modesty. However, inmates are able to fashion a noose from a paper gown in less than 15 seconds, so most institutions no longer use them. The American Correctional Association (ACA) has established use of appropriate Safety Smocks and Safety Blankets as one of the standards used to judge jails and prisons for accreditation.

To learn more, I went to the website of what is probably the leading manufacturer of anti-suicide smocks, Ferguson, with an aptly named link, Preventsuicide.com.


Along with ads asking, “Have a smelly situation?” and offering a product Reverse-It as a solution, you can find links to a full line of anti-suicide products.



Ferguson also makes a sanitary belt for suicidal females that “Cannot be twisted into a cord. Produces a gagging reflex if an inmate attempts to choke herself.”


Anti-suicide smocks come in colors, but not yellow or gold in Gitmo

I learned from another site about use of anti-suicide smocks for detainees at Gitmo.

Any Guantanamo detainee thought to be a suicide risk is also clothed in green. What’s up with that? Green and navy blue are both popular colors because they’re considered “soothing.” Manufacturers try to avoid colors like yellow or gold because the brightness could agitate the inmates and perhaps keep them up at night.

Suicide prevention in the “Hole” Segregation Units

The CNN video revealed the mistreatment of mentally ill patients under horrible conditions in the LA County Jail. The patients are placed in a special Observation Unit, but in numerous other American prisons, mentally ill patients are put in the Hole, which

is formally called “Segregation” or “Seg”—is off-limits to visitors. It’s where prisoners are separated from the general population and held in a small, barren cell with a cot and a toilet, 23 out of 24 hours, with one hour of recreation a day.

Seg units have become controversial as solitary confinement is starting to get a hard look. Solitary confinement was recently called “cruel and unusual punishment” by Sen. James Eldridge (D-Acton), who along with Rep. Elizabeth Malia (D-Boston), filed a bill this month to put more restrictions on the use of solitary confinement in Massachusetts prisons.

Here’s a little-known fact about prison Segregation Units: Besides housing those in protective custody (such as sex offenders who need protection from the general population) or those locked up for disciplinary problems, some prisoners are isolated because they are suicidal.

A recent article in the New York Review of Books gave a chilling description of such solitary confinement at New York’s Sing Sing Prison:

If you look inside a solitary confinement cell such as the ones I have visited in New York’s Sing Sing prison, you’ll see a gray-walled, eight-by-eight-foot room with a concrete slab bed; it’s underground, more like a tomb than a cell. The light is always on. Usually there aren’t any windows, but there is a toilet (no toilet seat or paper) and a shower.

The solitary cell is home to a single prisoner, twenty-three or twenty-four hours a day; the extreme isolation and sensory deprivation imposed by the cell can last for days, months, years, or decades on end. Someone who visits a solitary cell might not notice the feces or the urine that leaks from the cells above, down the walls into a puddle on the floor. He or she would not be shown prisoners mutilating themselves or fighting guards or one another to the death, or men in their underwear, or naked, shackled by their hands to the bottom of bunks, deprived of books, paper, radio, pens, or pencils. I have represented a range of defendants in constitutional and criminal cases during the last fifty years, and my clients who have spent time in solitary consistently testify to having witnessed, or been subjected to, these abuses.

Anti-suicide smocks are effective, sort of…

An article that I found in Boston Magazine discussed how a rash of suicides in the Massachusetts prisons had led to adoption of anti-suicide smocks.

Billerica House of Correction bought smocks after a rash of eight prison suicides in Massachusetts in 2010. According to the Middlesex Sheriff’s Office, Billerica had 11 suicide attempts that year, and after The Boston Globe reported that, in 2010, Massachusetts prison suicides were four times the national average, re-evaluation and re-training occurred in facilities throughout the state. Also adding fuel to the fire was a study in 2007 by the Justice Department finding that “64 percent of inmates across the country reported mental health problems within the past year.” That meant more and more facilities were housing troubled—and not just dangerous—people. In 2013, two years since Billerica instituted the smocks, the suicide rate dropped to zero.

Suicide prevention without anti-suicide smocks

The Boston Magazine article also discussed how suicides had been reduced without anti-suicide smocks:

If there is another way, maybe it’s along the lines of what’s being done at the Hampden County Correctional Center (HCCC), a House of Correction that serves Springfield, Chicopee, and Holyoke. HCCC has had zero suicides in the past 20 years. Spokesperson Rich McCarthy says HCCC has gone in another direction in their Seg Unit. Their website discusses how the facility aims to “counter the mental deterioration that can take place in lockdown” by a number of behavioral “carrots” including offering “in-cell programming for one hour, twice a week, through the use of an MP3 Headset System.” Similar to electronic books, the Seg Unit offers a variety of meditation, classical and contemporary music, “how to” and instructional-type material on their headphones. All prisoners stay in their regular orange or green jumpsuits.

A better solution to preventing suicide than anti-suicide smocks or MP3 headset systems

Serious structural dysfunction in the American approach to suicidality and the severely mentally ill leads to many vulnerable persons being placed in jails, rather than having access to beds in units specifically designed for their needs – or having services organized so that inpatient stays are not needed for many patients and are more readily available for those who need them. The real solution to suicidality in American prisons is to allow ready access to appropriate mental health care in the community, especially the now-scarce inpatient beds.

Paradoxical tip for revising a manuscript with which you are dissatisfied.


Revisiting my written work is sometimes painful but usually valuable. This is particularly true when I am dissatisfied with what I produced but can’t figure out how to change it. Ever been there?

I’m quite dyslexic, and I often don’t see problems when looking at what I have written on the screen. So I write early in the morning on my PC and save it in Dropbox. I then download it to my iPad and take it to a comfortable nearby café with excellent cappuccino. I have a fine espresso machine at home, but I make a point of not learning how to make cappuccino so that I am forced to go to the café.

I know that people can edit manuscripts on iPads but I refuse to learn that also. I think it’s a very important part of my creative process to take what I read into another environment and confront it without the opportunity to change it. I have routinized the struggle and actually look forward to it.

Paradoxical writing tip: Sometimes it is easier to identify and correct what is wrong with your writing if you are freed from the responsibility of having to respond to it immediately. I’m sure someone could provide a cognitive behavioral analysis of this proposition.

Evidence-based Skeptic’s Challenge: this may not work for you. There is no randomized trial of which I am aware. But conduct an N = 1 trial and decide for yourself. I don’t think you are required to be dyslexic for this strategy to work, but who knows whether it works at all.

Try an ABABA design, alternating condition (A), simply sitting at your PC and struggling, with condition (B), the café routine. Make sure you pick a café with good cappuccino. It will at least be a consolation.

But maybe just be pragmatic and wait until you’re stuck at your PC and then try this. Maybe this would not be a good experimental design, but so what, necessity is the mother…

I’d be interested in anybody’s experience trying this technique.

Note: I was the youngest member of the Palo Alto MRI (Mental Research Institute) Brief Therapy Center for six years and received live supervision from Watzlawick, Weakland, and Fisch. I abandoned doing workshops when I was not confident that the format allowed me to communicate strategies to people who were sensitive to context and willing to observe whether the strategies worked. And I didn’t have time or resources to develop evidence that what we did worked. I did publish a radical behavioral analysis of a strategic family case in the Journal of Behavior Therapy and Experimental Psychiatry with Tony Biglan. I also presented workshops to some people who went on to develop Acceptance and Commitment Therapy (ACT) and become ACT gurus. I see many of our MRI strategic interventions incorporated into ACT, but I’m not convinced its promoters have done a good job of developing an evidence base for its sometimes extravagant claims.

Mindfulness-based stress reduction to improve the mental health of breast cancer patients

A systematic review and meta-analysis claimed a moderate-to-large effect of mindfulness-based stress reduction (MBSR) among breast cancer patients for perceived stress, depression, and anxiety.

  • The article recommended MBSR be considered an “option as part of their rehabilitation to help maintain a better quality of life in the longer term.”
  • I screened the article and concluded that its conclusions were biased and its estimates of the efficacy of MBSR were likely inflated. This quick exercise demonstrates tools that readers can readily apply for themselves to other meta-analyses, particularly meta-analyses of mindfulness-based treatments, which are prone to low quality. You can read more about tips for screening out bad meta-analyses from further consideration here.
  • This exercise adds to the weight of concerns that we cannot trust the “scientific” mindfulness literature. We need the literature to be scrutinized by researchers who are not making money from, or otherwise invested in, the promotion of mindfulness-based treatments.

The article appears in a paywalled journal but is available through a repository identified on Google Scholar. According to Google Scholar, the 2013 meta-analysis has already been cited an impressive 90 times.

Zainal NZ, Booth S, Huppert FA. The efficacy of mindfulness‐based stress reduction on mental health of breast cancer patients: a meta‐analysis. Psycho‐Oncology. 2013 Jul 1;22(7):1457-65.

The abstract announces its conclusions:

On the basis of these findings, MBSR shows a moderate to large positive effect size on the mental health of breast cancer patients and warrants further systematic investigation because it has a potential to make a significant improvement on mental health for women in this group.

But the abstract also disclosed a paucity of data on which this conclusion was based:

Nine published studies (two randomised controlled trials, one quasi-experimental case–control study and six one-group, pre-intervention and post-intervention studies) up to November 2011 that fulfilled the inclusion criteria were analysed. The pooled effect size (95% CI) for MBSR on stress was 0.710 (0.511–0.909), on depression was 0.575 (0.429–0.722) and on anxiety was 0.733 (0.450–1.017).
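One quick tool for reading such abstracts: a reported 95% confidence interval implies a standard error and z-value, which show just how much certainty is being claimed. A minimal sketch in Python, applied to the figures quoted above (the calculation is standard; the interpretation is mine):

```python
# Recover the standard error and z-value implied by a reported
# effect size and its 95% confidence interval.
def se_and_z(est, lo, hi):
    se = (hi - lo) / (2 * 1.96)   # CI half-width divided by 1.96
    return se, est / se

for label, est, lo, hi in [("stress", 0.710, 0.511, 0.909),
                           ("depression", 0.575, 0.429, 0.722),
                           ("anxiety", 0.733, 0.450, 1.017)]:
    se, z = se_and_z(est, lo, hi)
    print(f"{label}: SE = {se:.3f}, z = {z:.1f}")
```

The implied z-values run from about 5 to nearly 8 — a striking degree of certainty to claim from nine mostly uncontrolled studies.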

I was skeptical already. We know from the comprehensive US Agency for Healthcare Research and Quality (AHRQ) report on mindfulness, Meditation Programs for Psychological Stress and Well-Being, that there are thousands of studies of mindfulness-based treatments, but few that are of adequate sample size and methodological quality. The exhaustive search produced 18,753 citations, but only 47 randomized controlled trials (RCTs; 3%) that included an active control treatment.

  1. Is the meta-analysis limited to RCTs? The answer should be “Yes, of course,” but the answer is “No.” Only two of the nine studies are RCTs.

RCTs are preferred over other designs for evaluating psychological interventions.

Moreover, efforts to combine effect sizes from RCTs with those from non-RCTs are generally problematic and produce inflated estimates.

The problem with effect sizes obtained from non-RCTs is that they are likely to be exaggerated by a host of nonspecific factors. But to understand that, let’s first consider what an effect size from RCT provides.

The important principle is that treatments do not have effect sizes; comparisons between active treatments and control conditions occurring in RCTs do. Appropriate effect sizes obtained from an RCT are between-group differences in outcomes. A comparison control group allows some control for nonspecific factors and for any natural improvement in outcomes that would occur with the passage of time. These are particularly important issues for studies of cancer patients, because a robust literature indicates that initial levels of psychological distress decline in the absence of treatment.

So, the within-group effect sizes available from non-RCTs cannot readily be adjusted for these factors and will be exaggerated estimates of the efficacy of the treatment, particularly when combined with effect sizes from RCTs.
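To make the inflation concrete, here is a minimal simulation sketch in Python, using made-up numbers rather than data from any of the trials discussed here. It contrasts the between-group effect size an RCT estimates with the within-group pre-post effect size a one-group study reports, when distress improves substantially with time alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
natural_recovery = 3.0    # improvement that occurs with time alone
treatment_effect = 1.0    # true added benefit of the treatment

# Hypothetical distress scores (mean 20, SD 5 at baseline)
pre_treated = rng.normal(20, 5, n)
post_treated = rng.normal(20 - natural_recovery - treatment_effect, 5, n)
post_control = rng.normal(20 - natural_recovery, 5, n)

def cohens_d(a, b):
    """Standardized mean difference using a pooled SD."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Between-group effect size, as in an RCT: isolates the treatment (d ~ 0.2)
print("between-group d:", cohens_d(post_control, post_treated))
# Within-group pre-post effect size, as in a one-group study:
# treatment effect plus natural recovery combined (d ~ 0.8)
print("within-group d:", cohens_d(pre_treated, post_treated))
```

The one-group design credits the treatment with the natural recovery; pooling such estimates with RCT estimates can only inflate the result.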

We already know that evaluations of mindfulness-based treatments have a serious problem: control groups are typically inadequate and lead to exaggerated estimates of the efficacy of these treatments. Now these authors have compounded the problem by combining estimates of efficacy from RCTs that are likely exaggerated with those from studies that don’t even have the benefit of between-group comparisons. The credibility of this meta-analysis is in serious jeopardy.

If I were simply searching the literature for an understanding of how effective mindfulness-based treatments are for cancer patients, I would simply move on and find another source.

A broad search yielded few suitable studies.

The authors reported systematically searching nine electronic databases using the search terms ‘mindfulness’ or ‘mindfulness-based stress reduction’ and ‘breast cancer’, and their efforts yielded 625 studies. That’s a lot, but they were able to quickly screen out most of them (n=592) based on examining titles and abstracts. Reasons for exclusion were:

  • Not MBSR intervention (n=107)
  • MBSR mixed with other intervention (n=14)
  • Non cancer populations (n=310)
  • Commentaries or review or systematic review or meta analyses (n=133)
  • Psychometric measurement (n=28).

That left 33 articles, of which they were able to exclude 24:

  • Mixed cancer populations (n=19)
  • Not studying effect on mental health (n=2)
  • Multiple publications (n=2)

So now we’re down to 9 studies. Personally, I would have excluded all but the two RCTs.
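As a quick sanity check, the screening arithmetic can be verified in a few lines of Python (the counts are those reported above). Note that, as transcribed here, the itemized full-text exclusions sum to 23 rather than the stated 24, so one exclusion reason goes unaccounted for:

```python
# Check the screening flow reported in the review.
identified = 625
title_abstract = [107, 14, 310, 133, 28]   # exclusions at title/abstract stage
remaining = identified - sum(title_abstract)
print(remaining)                           # 33 full-text articles, as stated

full_text = [19, 2, 2]                     # itemized full-text exclusions (sum = 23)
print(remaining - sum(full_text))          # 10, versus the 9 studies analysed
print(remaining - 24)                      # 9, matching the stated total of 24 excluded
```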

Lengacher CA, Johnson‐Mallard V, Post‐White J, Moscoso MS, Jacobsen PB, Klein TW, Widen RH, Fitzgerald SG, Shelton MM, Barta M, Goodman M. Randomized controlled trial of mindfulness‐based stress reduction (MBSR) for survivors of breast cancer. Psycho‐Oncology. 2009 Dec 1;18(12):1261-72.

This was a study comparing 40 survivors of breast cancer assigned to MBSR to 42 survivors remaining in usual care.

Henderson VP, Clemow L, Massion AO, Hurley TG, Druker S, Hébert JR. The effects of mindfulness-based stress reduction on psychosocial outcomes and quality of life in early-stage breast cancer patients: a randomized trial. Breast cancer research and treatment. 2012 Jan 1;131(1):99-109.

The second study compared three groups: 53 early-stage breast cancer patients assigned to MBSR, 52 to a nutritional education program and 58 assigned to usual care.

These two RCTs at least met my usual criterion of having 35 patients per group, which means they had better than a 50-50 chance of detecting a moderate effect if it were present.
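That 50-50 figure is easy to check with a standard power calculation. A quick sketch using statsmodels, assuming a two-sided two-sample t-test at α = .05 and taking d = 0.5 as the “moderate” effect:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sided two-sample t-test with 35 patients per group
power = analysis.power(effect_size=0.5, nobs1=35, alpha=0.05, ratio=1.0)
print(f"power with 35 per group: {power:.2f}")        # roughly 0.54

# Patients per group needed for the conventional 80% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")   # roughly 64
```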

  2. How was the methodological quality of the studies taken into account?

It was ignored.

It is important to consider methodological quality when conducting a meta-analysis. Methodologically poor studies produce higher estimates of efficacy. We know from the AHRQ report that most studies of mindfulness are of poor quality. We should be particularly concerned about whether investigators were appropriately blinded to randomization procedures so that they did not influence patient assignment. We should also be concerned about whether data for all patients entering the trial were available at follow-up, or whether there was appropriate compensation for any loss; that would allow the gold-standard intention-to-treat analyses. Particularly when conducted with cancer patients, studies often lose substantial numbers of patients to follow-up and so lose any benefits of randomization.

The authors were already in trouble by including mostly nonrandomized trials, which have their own risk of bias. But the authors simply ignored any consideration of risk of bias, further damaging the credibility of their analyses.

Figure 3 of the article presents effect sizes for all nine studies included in the meta-analysis. We can see that the Lengacher et al. (2009) study did not have a significant effect on depression or anxiety, only on perceived stress. The Henderson et al. (2012) study did not measure perceived stress or anxiety, only depression, and the effect size was not significant.

Below, I have excerpted the display of effect sizes for perceived stress. As can be seen, the significant overall effect is driven by two small, nonrandomized trials. It’s not surprising that nonrandomized trials would appear to have larger effect sizes, given the manner in which their effect sizes are calculated.

[Forest plot excerpted from the article: effect sizes for perceived stress]

So, we have a meta-analysis of nine studies, only two of which are RCTs. There are no ratings of the methodological quality of the studies. Considering past mindfulness research, the methodological quality can be expected to be poor and needs to be taken into account. Neither of the two RCTs had significant effects on the mental health outcomes, and both were of at least the minimally required sample size. The overall effect sizes are driven by small, underpowered, nonrandomized trials. A different conclusion would be reached by limiting consideration to the two randomized trials, but only two trials would not be a good basis for a meta-analysis.
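A simple sensitivity analysis, which the review never ran, makes the point vivid: pool the effect sizes by inverse-variance weighting with and without the non-randomized studies. The sketch below uses hypothetical placeholder effect sizes and standard errors, not the actual values from Figure 3:

```python
import numpy as np

def pool_fixed(effects, ses):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2
    est = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return round(est, 3), (round(est - 1.96 * se, 3), round(est + 1.96 * se, 3))

# (effect size, standard error, is_rct) -- hypothetical placeholder values
studies = [(0.15, 0.22, True), (0.20, 0.20, True),
           (0.90, 0.30, False), (1.10, 0.35, False), (0.60, 0.25, False)]

print("all studies:", pool_fixed([d for d, s, r in studies],
                                 [s for d, s, r in studies]))
print("RCTs only:  ", pool_fixed([d for d, s, r in studies if r],
                                 [s for d, s, r in studies if r]))
# With the small non-RCTs included, the pooled CI excludes zero;
# restricted to the two RCTs, the estimate is near-null and non-significant.
```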

So, I’m inclined to dismiss the claims of this meta-analysis as extravagant and to exclude it from further consideration. Case closed.

  3. Who are the authors? Might they have undeclared conflicts of interest?

The senior author, Felicia A. Huppert, is a Founder and Director of the Well-being Institute and Emeritus Professor of Psychology at the University of Cambridge, as well as a member of the academic staff of the Institute for Positive Psychology and Education of the Australian Catholic University. She is also an author of the study of mindfulness training for schoolchildren that was featured in my last blog post on the UK Mindful Nation report [http://blogs.plos.org/mindthebrain/2016/11/16/unintended-consequences-of-universal-mindfulness-training-for-schoolchildren/]. Recall that Mindful Nation cited Professor Huppert’s study, along with another one, as sole support for the efficacy of mindfulness training for students with “the lowest levels of executive control and emotional stability.” Yet a critical review revealed that the pair of studies were methodologically poor and had absolutely null results.

I’m frustrated by repeatedly going to the literature and finding methodologically inferior mindfulness studies, which are then evaluated by merchants of mindfulness in flawed meta-analyses that conclude that mindfulness is highly effective and ready for dissemination. Schoolchildren, and in this case cancer patients, are being misled. Clinicians and policymakers are being misled.

A high level of skepticism is warranted in approaching the mindfulness literature, and glowing conclusions about its effectiveness, particularly from those having a financial and professional interest in promoting mindfulness, should be dismissed.

What can I (and you) do about this flawed review?

Psycho-Oncology is the official journal of the International Psycho-Oncology Society (IPOS). It is strongly biased toward presenting a positive view of what mental health professionals can provide cancer patients, ignoring weaknesses in the evidence. I previously reported my unsuccessful complaints about a biased review, claiming that psychotherapy promotes survival of cancer patients, that was published without any peer review. I also reported my failed efforts to publish a letter to the editor concerning a flawed meta-analysis of couples interventions for cancer patients. As indicated in the title of that blog post, I got shushed. The letter was initially invited and accepted, and then withdrawn because of the complaints of the author. The Editor then promoted the article I was complaining about by offering free access to what was otherwise behind a paywall. Finally, the journal does not accept letters to the editor or corrective actions undertaken more than six weeks after an article has been published.

But there’s still something I can do: I can post a comment at PubMed Commons detailing the shortcomings and unreliability of this meta-analysis.

I have done so, and you can see the comment here. Now, when someone doing a literature search comes across the entry for the study on PubMed, they will be alerted that a comment has been made and can read it. And you can comment yourself.

Viva post-publication peer-review that is not controlled by editors!

What should be done about the MEGA (ME/CFS Epidemiology and Genetics Alliance) project? Concerns and response

Update October 21, 2016: Professor Jonathan Edwards is now urging signing the petition opposing MEGA. Find the petition here.

Early this morning some thoughtful comments were approved at my PLOS Mind the Brain blog concerning the MEGA (ME/CFS Epidemiology and Genetics Alliance) project. After careful consideration, I felt these comments should not be left simply buried there, but put into a larger conversation. Below I have posted them, along with a directly relevant statement from Dr. Charles Shepherd.

I am not an ME/CFS patient or the parent of a patient; I am not even a resident of the UK. But I have been drawn into a long and complex struggle, starting with a comment that I made on Twitter, a rejection of Simon Wessely’s direct message to me that I should not get involved in the controversy over the PACE trial, and my request for data that the PACE investigators had promised would be available as a condition of publishing in PLOS One. The PACE investigators publicly labeled my legitimate request as “vexatious” and, almost a year later, have not turned the data over.

At the outset, I should note that Professor George Davey Smith has key responsibility for the genomics section of this complex project. I have the greatest respect for his intellect and intellectual integrity. I have learned immensely from him.

However, I have serious concerns about other personnel involved in this project in terms of their recent conduct as physicians and scientists. Among other issues, the nature of their role in the project needs to be clarified. Conditions need to be in place to ensure that they will not use their role to inflict further abuse and bad science on the patient and scientific communities. Other personnel must step in and demonstrate that patients have an appropriate role in the design, implementation, and interpretation of the data published in peer-reviewed journals in a timely fashion. Patients should be heard and welcomed into high-level participation in research, not just used.

Concerns, criticisms and questions about the MEGA study are being expressed by the ME/CFS [Myalgic Encephalomyelitis/Chronic Fatigue Syndrome] patient community on internet discussion forums.

Some clear inaccuracies are circulating, but there are some big issues yet to be settled.

If we are going to make progress in trying to sort out the different clinical and pathological sub-groups/phenotypes that currently come under the very messy umbrella of ME/CFS, as well as those with unexplained chronic fatigue, AND in the process develop diagnostic biomarkers that could then be used as objective diagnostic tests to identify specific sub-groups of patients that come under this ME/CFS umbrella, ALONG WITH helping to identify specific forms of treatment that are aimed at these specific sub-groups, we are going to have to look at the whole spectrum of patients who are currently being diagnosed with ME, CFS or ME/CFS, and possibly unexplained chronic fatigue as well. –Dr Charles Shepherd

And

“I think the project must be welcome but I am surprised by this sort of canvassing for support. So far no details are available of who would do what. Surely patients are entitled to judge a project on the basis of a written application, just as scientists do” – Professor Jonathan Edwards

A comment left at the PLOS Mind the Brain blog

There are now also concerns about Esther Crawley’s involvement in the MEGA project, which is presented as an omics and big data approach to stratifying ME/CFS patients.

Esther Crawley and Peter White are involved in this project as ME/CFS experts, despite significant patient opposition. That they are involved at all calls into question the integrity of the rest of the team. White has engaged in fraud in the PACE trial. Crawley’s unethical behaviour is well described in this blog’s article.

That aside, the concern is that they (or other BPS model proponents) will introduce flawed definitions of the illness and its symptoms into the project. Crawley and White certainly have a history of downplaying, ignoring, or psychologizing physical symptoms of this illness, or simply conflating this life destroying illness with the not uncommon and often transient symptom of tiredness. On the project’s petition page, the important symptom “post-exertional malaise”, which is an objectively measurable, typically delayed, decline in function with an increase in symptom severity, is referred to as “post-exertional stress”. Redefining words and concepts in a misleading manner is something the PACE authors have done repeatedly, so one wonders if we’re already seeing the redefinition of an important physical symptom to make it fit into a narrative preferred by PACE authors and their colleagues. A vague term such as “post-exertional stress” certainly fits well into a “health anxiety” narrative of patients supposedly worrying excessively about ordinary muscle soreness after exercise, and mistaking this for symptoms of an illness.

Such a narrative would be particularly easy to construct if permissive case definitions were used. The project plans to recruit patients from NHS referral centres, which operate according to the BPS/PACE paradigm, using NICE criteria, which are permissive. NICE instructs doctors to refer patients to these centres only when they are mildly or moderately affected and CBT and GET are believed to be appropriate. For this and other reasons, it is likely that this recruitment strategy will exclude or underrepresent the more severely ill.

In a large and important project, a solid foundation of knowledge and methodology is more important than ever.

There are also concerns about the patient advisory groups. Patient involvement is important according to the MEGA study authors, and two patient advisory groups will be created. No details have been given on how patient representatives will be chosen. It would be problematic if the authors chose patient representatives who are not considered trustworthy by the larger patient community.

We suspect they will be drawn from the AFME and AYME charities. Many patients don’t trust these organizations. Crawley is medical officer of AYME, and AFME has a history of collaborating with PACE authors and generally being lenient and ignoring problems with the BPS approach and the PACE trial. AFME approved of the removal of actometers from the PACE trial with dubious justifications. It was repeatedly mentioned that patient advisors in the MEGA project will be able to prevent certain data from being collected and certain tests being performed. Will we see important questions not being asked, important data not being collected, important tests not being done because these undemocratic patient advisor groups with ties to the PACE authors believe that doing so is in the best interest of patients?

In general it is a problem that communications go through the untrusted intermediary AFME.

ME/CFS research in the UK needs to divorce itself completely from the failing BPS model of the illness. Patients hate it, it is scientifically flawed, and it has produced no results when reasonable standards of evidence are applied. Consider that over 12,000 patients signed a petition to the Lancet against the PACE trial, while the MEGA study has collected only 2,130 signatures (with the flow of new signatures having essentially stopped). The distrust of the BPS model is so great that any project touched by its influence becomes tainted. The MEGA study team should reconsider its current approach and whom it collaborates with.

Give this MEGA project a chance to fly – don’t try to strangle it at birth, says Dr Charles Shepherd | 3 October 2016

The MEGA (ME/CFS Epidemiology and Genetics Alliance) ‘big data’ research study – some comments from Dr Charles Shepherd following last week’s third annual scientific meeting of the UK CFS/ME Research Collaborative.

I can understand all the concerns, criticisms and questions about the MEGA study that are being expressed by the ME/CFS patient community on internet discussion forums.

I can also assure people that these concerns will be passed back to those at the CMRC (CFS/ME Research Collaborative) who are involved in preparing what is probably going to be the largest ever research grant application relating to ME/CFS here in the UK.

There are clearly a number of key decisions still to be made. And if anyone followed the proceedings at the CMRC conference in Newcastle last week, they will know that I raised the crucial issue of patient selection criteria (narrow or broad) with Professor George Davey Smith and Dr Esther Crawley during the discussion section.

The key point I want to make at this stage is that the MEGA study is an important and complex new item of ME/CFS research that is going to make use of a wide range of relatively new and exciting technologies – metabolomics, proteomics, genomics, epigenetics etc.

The MEGA study will also involve some very high profile BIOMEDICAL scientists of international repute – several of whom are completely new to ME/CFS.

Researchers who will be involved include:

* Genomics – Prof George Davey Smith (Bristol), Prof Chris Ponting (Edinburgh), Prof Colin Smith (Brighton)
* Epigenetics – Prof Caroline Relton (Bristol)
* Proteomics – Mr Tony Bartlett (Somalogic)
* Metabolomics – Dr Rick Dunn (Birmingham)
* Routinely collected data – Prof Andrew Morris (Edinburgh) and Prof David Ford (Swansea)
* Infection – Prof Paul Moss (Birmingham)
* Sleep – Prof Jim Horne (Loughborough)
* Pain – Prof Maria Fitzgerald (UCL)
* Prof Julia Newton (Newcastle)

The MEGA study has also attracted the very positive attention of the Wellcome Trust, the largest provider of non-governmental funding for biomedical research here in the UK and the largest research-funding charity in the world.

Wellcome Trust: https://wellcome.ac.uk

And the number of patients involved is going to be huge: around 10,000 adults and 2,000 children.

However, when it comes to the aims and objectives of the research, there are some serious misunderstandings and inaccuracies being circulated on the internet as to how this ‘big data’ is going to be collected, analysed and used. This is NOT a treatment trial in any sense of the word and it has nothing to do with PACE, CBT or GET.

If we are going to make progress in trying to sort out the different clinical and pathological sub-groups/phenotypes that currently come under the very messy umbrella of ME/CFS, as well as those with unexplained chronic fatigue, AND in the process develop diagnostic biomarkers that could then be used as objective diagnostic tests to identify specific sub-groups of patients that come under this ME/CFS umbrella, ALONG WITH helping to identify specific forms of treatment that are aimed at these specific sub-groups, we are going to have to look at the whole spectrum of patients who are currently being diagnosed with ME, CFS or ME/CFS, and possibly unexplained chronic fatigue as well.

So the numbers need to be huge and a study of this nature may also need to include people with chronic fatigue states whom we will then want to exclude for both our benefit and for their benefit.

In my opinion, getting this right will clearly depend on having very detailed clinical information accompanying the biological samples, as is the case with the ME/CFS Biobank, where we can check what diagnostic criteria (and symptoms) accompany each individual blood sample that has been collected and stored.

I am not yet clear how this will be done in this study, which is why I asked the question on patient selection at the conference. The nearest information we have is the reply from Dr Esther Crawley, in which she stated that patients will meet NHS diagnostic criteria for ME/CFS and will be recruited from the NHS hospital-based referral centres for people with ME/CFS.

So I would ask the ME/CFS patient community to wait and see how the protocol develops and what information and inclusion criteria are going to be used.

If you are happy with the final research proposal, then there will obviously be ways of expressing public support.

If not, there will be ways of saying so as well!

As Professor Jonathan Edwards has said on the Phoenix Rising forum:

“I think the project must be welcome but I am surprised by this sort of canvassing for support. So far no details are available of who would do what. Surely patients are entitled to judge a project on the basis of a written application, just as scientists do”

So I hope that those people who want simply to strangle this proposal before it has even been properly finalised will think very carefully about what they are doing – especially if this is mainly because they disagree with the inclusion of certain specific researchers.

It is difficult enough getting new and distinguished scientists and researchers, and major research funders such as the Wellcome Trust, interested in this subject without trying to scare them off almost as soon as they express a serious desire to get stuck into a huge multidisciplinary project such as this, while the protocol is still being developed.

If people want to express concerns or criticisms, or have questions to ask, then I suggest that this should be done in the form of an open letter to the Board of the CMRC, which could be signed by anyone expressing such concerns, rather than a petition.

Dr Charles Shepherd
Hon Medical Adviser
The ME Association

So far, Dr. Shepherd’s statement has attracted numerous comments, which you can see by going to the website. However, I would like to reproduce one comment by a noted patient citizen scientist.

Simon McGrath October 3, 2016 at 5:35 pm

Thanks, Charles.

I agree this study has huge potential and it’s great to see new biomedical talent come into the mecfs field.

“the reply from Dr Esther Crawley in which she stated that patients will meet NHS diagnostic criteria for ME/CFS and will be recruited from the NHS hospital-based referral centres for people with ME/CFS” I didn’t hear that (looking forward to the conference videos being posted) but it reassures me too.

Equally, I don’t think MEGA have made a great job of communicating the study to patients, and I understand why many feel aggrieved at being asked to back a study. I like the idea of an open letter.

Maximize your productivity by controlling your breathing?

Updated October 11, 2016 to include a discussion at the end of whether the self-help industry is largely women selling their products to other women.

Nirvana only a breath away?

Garbage claims about office Zen by Gerbarg

I say it over and over again: So much bad advice being sold to consumers with a pitch that what is being offered is more sciencey than the rest, but so little time to blog about it.

We need to recognize more quickly the click bait of wannabe self-help gurus and move on.

We need to practice quick dismissal of nonsense that intrudes into our social media news feeds. Simple as that, mindlessly delete it and get back to your life.

The common pitch of wannabe self-help gurus

With only minor variation, you are being repeatedly told:

• You are not being as productive, likable, successful, and loved as you could be, and you probably won’t live as long as you could, either.
  • [Implicit message: you are a loser, even if you don’t think so, and if you don’t think so, you are more of a loser.]
  • Fortunately, I have a novel, simple, effective, and easily implementable solution.
  • [That usually turns out to be not new, not simple, not effective, but intrusive to the point you will want to abandon it anyway.]
  • If you do not succeed at first with my advice, of course I have an app that can make it work better.

Fortunately, this time one of my trusted go-to’s had the time to dismiss some claims quickly. But first, here are the claims:

Escape being a loser with coherent breathing [Or: Breathing, the part of your life that you can control ]

“For maximum productivity, you want to breathe in a way that will keep you in the parasympathetic zone so you are calm and stress-free, but not too far into it to the point where your mind is mush,” Gerbarg said.

And

To achieve office zen, Gerbarg suggests a breathing practice called Coherent Breathing, which features equal-length inhalations and exhalations at a very slow pace, without holding your breath. For most adults, the ideal breathing rate is four and a half to six full breaths per minute.

But

Of course, it can be difficult to get used to such slow breathing. Gerbarg suggests practicing with a breath-pacing app (popular options include Breathing Zone for iOS and Paced Breathing for Android). If starting at five full breaths per minute proves difficult, start with six before bringing the pace down. New hardware devices such as the Spire clip-on breath and fitness tracker also offer real-time feedback on breathing patterns, which could make it easier to reach goals.

Fortunately–

The best part is that unlike some breathing exercises, which are evident to anybody in a room, this technique is relatively discreet after a little practice. Try it any time you are looking for a brain boost or to keep your cool, whether you’re in the middle of a meeting or being peppered with questions during a big presentation.

For those over six feet tall, the rate drops to about three and a half to four breaths per minute.
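Before handing things over to my go-to, one small observation: the arithmetic behind such pacing is trivial. Here is a minimal sketch of a breath pacer (my own illustration, with a made-up function name and defaults, not the code of any app mentioned above):

```python
import time

def paced_breathing(breaths_per_minute=5.0, cycles=3):
    """Print equal-length inhale/exhale cues at the requested pace.

    At 5 breaths per minute, each full breath lasts 12 seconds:
    6 seconds in, 6 seconds out, with no breath-holding.
    """
    phase = 60.0 / breaths_per_minute / 2.0  # seconds per inhale or exhale
    for _ in range(cycles):
        print(f"breathe in  ({phase:.1f} s)")
        time.sleep(phase)
        print(f"breathe out ({phase:.1f} s)")
        time.sleep(phase)

paced_breathing()  # the quoted "coherent breathing" range is 4.5-6 breaths/min
```

At the quoted five breaths per minute, each inhale and exhale lasts six seconds. Nothing proprietary is going on here.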

James Heathers: A go-to to the rescue

I’m drawing on the expertise of James Heathers, who played a pivotal role in our thoroughgoing critique of Barbara Fredrickson’s claims that loving-kindness meditation extended life through its effects on cardiac vagal tone. James went on to write an excellent technical guide for anyone attempting to use heart rate variability as a biological outcome. And no, heart rate variability and cardiac vagal tone are not biomarkers, only biological variables that vary greatly with the circumstances under which they are assessed.


Although James and I are in regular contact on social media, I have only met James once, at a meeting I arranged at the infamous Boston dive bar,  Bukowski’s Tavern. James looked much like the picture on his Facebook fan page.


I’m inclined to dismiss rumors that he does not always look this way, especially when he works at his day job in a stylish suit, allegedly as a high-end junk bond day trader.

Anyway, I highly recommend going to his Facebook fan page and liking it, so you can keep up on all his devastating critiques. Warning: as you can see in the critique below of the claims I just summarized, he is rather indelicate in his approach and does not mince words. That’s why I like him so much.

From James Heathers’ Facebook fan page


It’s weird to see this in print – the crapulent, lazy, swollen assumptions that I see in academic work. For some reason, part of my brain thinks that bad science being communicated at bad angles is solely for people who don’t do physiology.

You can’t sell out my stuff! My crew deal in facts! They aren’t purveyors of empty sensationalist garbage!

Well, some of them are, I guess.

And you know the funny thing?

There’s no discovery here. This book is four years old. Research on breathing like this was well described in the early 90s. And god only knows how old meditative traditions with slow breathing are.

For some reason, the awful newspaper which needs to obtain/syndicate/buy content about random health bollocks just happened to be in the market for some garbage about breathing. The author has written half a dozen books about various curlicues of lies and silliness.

I hate everything about this, and everyone involved should walk quietly out to sea and stay there.

So there. ‘Nuff said.

Updated October 11, 2016: Is the self-help industry largely women selling self-help to other women?

I came across an interesting interview with Ruth Whippman, author of “America the Anxious: How Our Pursuit of Happiness is Creating a Nation of Nervous Wrecks.” Ruth was asked, “Are specific groups of Americans particularly preoccupied with achieving happiness?” Her reply had an interesting observation:

The self-help industry in America is female-dominated; approximately 80 percent of all self-improvement books are bought by women. There is a natural inclination among women to try to improve themselves and their lives, which isn’t a bad thing. But embedded in that is the belief that women need to be improved. Consider the titles of the “Women who __ too much” series: “Women Who Love Too Much,” “Women Who Think Too Much,” “Women Who Do Too Much” and so on. The culture tends to blame women for life’s problems. It is that women apologize too much or don’t “lean in.” The implication is that women should try harder. If you’re not getting paid enough, you should be better at negotiating your salary and not worry about the system. The culture’s tendency is to downplay the structural and systemic realities.

Overcoming perverse disincentives to honest, transparent reporting of science

I was impressed by a student blog post by Ulrike Träger in PLOS’ Early Career Research Community.

Reforming scientific publishing to value integrity

I personally recommend that my readers follow that blog. It consistently has exceedingly fresh and interesting material, and the early career bloggers need to be recognized and encouraged for what they are doing. They are also often more in touch with recent developments that are being missed by more established investigators busy doing things the old way. See for instance:

http://blogs.plos.org/thestudentblog/2016/09/30/some-thoughts-on-methodological-terrorism/

http://blogs.plos.org/thestudentblog/2016/08/09/social-media-for-ecrs-serious-scientists-can-and-do-use-twitter/

Investigators in my cohort have handed early career investigators a demoralizing mess. They need our support in cleaning it up and fixing how science is produced and disseminated and corrected.

I’ve taken the liberty of singling out a couple of passages and provided minimal commentary. If you like the sampling, go to the original blog post  for more.

We need to deal with honest, transparently reported science being considered boring, even if it proves more durable and dependable. Solid science is built with what might be seen as just a lot of bricks in the wall.

A study published in PLOS Biology investigated the factors on which scientists’ reputations are judged:

When comparing a scientist that produces boring, but reproducible studies with a scientist that publishes exciting, but not reproducible studies, the public perceived the boring scientist as “smarter, more ethical, a better scientist, more typical, and more likely to get and keep a job.” Scientists given the same choices agreed that the boring, but certain scientist was smarter, more ethical and the better scientist. But in a departure from the public’s opinion, scientists found the exciting but unreliable scientist to be more likely to get a job and be more celebrated by peers. This is a stark contrast to the public’s view of science, which seems to favor well-done science over flawed science. Worryingly, when scientists were asked which of the two model scientists they would rather be, more said they wanted to be a scientist that produces exciting results, even though the majority knew that publishing reproducible research is better overall. While one survey of 313 researchers does not represent the whole science community, these results paint a surprising picture of scientists’ priorities.

We need to stop worshiping the journal impact factor and wean ourselves from pay-walled journals. We need-

Valuing science based on scrutiny from an open access community. A lot of scientific journals only let paying subscribers view their publications, which limits exposure to research published in these journals. Open access policies allow anyone interested in a study to access the research, without barriers. More importantly, not just the access should be free but also the peer review process.

We need to give post-publication peer review greater incentive and link it inseparably and directly to the already-published papers being reviewed.

Peer review before publication is a key step in checking the quality of science, however the current peer review system is imperfect. I believe that post-publication peer review should be a key process to improve science integrity. Ideally both pre- and post-publication peer review would be made available alongside the published manuscript for increased transparency in the scientific process. A few publications have introduced open reviews including EMBO, BMJ Open and F1000Research. Alternatively, you can find online journal clubs like PubPeer where articles are discussed post-publishing, or leave comments on articles post-publication.

Or organize your own PubMed Commons journal club.

We need to do a better job of making negative and inconclusive data widely available.

Currently, a lot of sound science remains unpublished, as negative or inconclusive data are less likely to be published due to reporting bias. A 2010 study in PLOS ONE showed that 82% of papers published between 2000 and 2007 in the United States included positive results only, in spite of the value of negative data. By publishing negative or null results the scientific literature captures a more complete picture of a particular field, and includes more balanced information. I feel a well-done study with negative results deserves the same recognition as a positive one, as it still expands human knowledge and saves resources for other researchers. For example, publishing what isn’t the cause of a given disease will prevent other scientists from spending time and money looking into the same thing. The PLOS Missing Pieces Collection includes negative, null or inconclusive results, and is a great platform for scientists who conduct an experiment and yield a result of this type. In addition, PLOS ONE is a journal that does publish negative, null, or inconclusive results. Replication studies also receive limited recognition in spite of their importance to advancing the scientific field. They are key in validating scientific findings, but few scientists risk doing them as it is hard to publish them for their “lack of innovation,” – a notion we should start to forget.

See also

What is open access?

What is open peer review?

 

Contemplating American hype: Donald Trump and Barbara Fredrickson

Over the top versus effective self-promotion, where is the line? An author’s dilemma

When I’m teaching science writing workshops, one of the first points I emphasize is that good science doesn’t sell itself. Many decent manuscripts receive desk rejections because the authors fail to sufficiently promote themselves and the value of what’s in the manuscript. Authors cannot assume that editors even read their manuscripts when deciding not to send them out for review. I then go on to discuss strategies for grabbing the editor’s attention with the title, abstract, and cover letter.

See my recent blog post:

Coyne of the Realm on the Daily Show and teaching scientific writing

For many, self-promotion is initially a daunting task. A lot of academics are attracted to what they do because their job doesn’t seem to involve any salesmanship: they just have to do good science and report it transparently. If that is their attitude, they may be uncomfortable giving a pitch for their work and may worry about the ethics of this kind of thing.

I think that authors effectively pitching their manuscripts to editors is a necessity if they are going to get the attention they deserve. Where I draw an ethical line is when authors make statements in the pitch simply because those statements will attract the kind of attention that is needed. Having to pitch a manuscript does not free authors from having to believe that what they are saying is true.

When I’m teaching abroad, I often get confronted with the retort that self-promotion is not a strength but an American vice that some participants don’t want to acquire. I don’t deny that there are two sides to self-promotion, but I suggest people from outside the culture find the limits of their comfort zone and test those limits. They can safely act like an American without becoming one.

Americans do seem to have a knack for spinning hype and hokum. Maybe that is reflected in the latter term being a distinct American contribution to English.

In this blog post I contrast some distinctly American hype. We will see that the boundary line between satire and reality gets quite blurred in American self-promotion. But there are rewards to be had.

The first specimen is a spoof scientific paper written by a psychologist in the style of American presidential candidate Donald Trump. It can almost pass for the real thing.

The second specimen is the abstract of a successful grant application by the American psychologist Barbara Fredrickson. If you didn’t know its source, it could easily be taken for a spoof. But whether you like it or not, it succeeded admirably: the grant got funded, along with a number of similar grants by Fredrickson. I don’t like the rhetoric of the abstract, and it helps calibrate my comfort zone.

Matt Crawford, the author of the first specimen, disavows much thought having gone into a quickly written piece. Yet it is making the rounds and even showed up on DemocraticUnderground.com.

This piece got circulated on the Internet without the name of the real author, social psychologist Matt Crawford, attached. He is from the American Midwest and got his PhD at Indiana University. He has spent much of his career at Victoria University of Wellington, New Zealand, and apparently he has contracted some modesty, perhaps from the raw oysters.

The piece is meant to be enjoyed, not carefully analyzed, but it does capture some features of the bombastic self-promotion of Donald Trump, as well as some key features of a hyped article or grant application. You can see the overenthusiastic introduction documented with a very selective review of the available evidence. Then there is a vague method section, followed by a thoroughly obfuscating results section. It is all very gushy, but it leaves readers at the mercy of the author, without the details needed to make an independent judgment.

A title for a really good piece of research, just the best, really

[Image: Matt Crawford’s Trump-style spoof paper]

An affective intervention to reverse the biological residue of low childhood SES: a grant funded by the National Institute on Aging

The Fredrickson abstract was obtained from RePORT, a freely accessible database of the current and past NIH research portfolio.

It takes a little practice to learn to navigate RePORT, especially the drop-down menus, but it can be an invaluable resource for so many purposes. In this case, I was seeking an example of how a successful American researcher pitched her projects to funding agencies.
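If the drop-down menus get tedious, RePORTER also offers a web API for the same searches. The sketch below illustrates the idea; I am writing the endpoint, criteria keys, and field names from memory, so treat all of them as assumptions to be checked against the current RePORTER API documentation:

```python
import json
import urllib.request

# Endpoint, criteria keys, and field names below are my best recollection
# of the NIH RePORTER v2 API and should be verified against the docs.
URL = "https://api.reporter.nih.gov/v2/projects/search"

payload = {
    "criteria": {"pi_names": [{"any_name": "Fredrickson"}]},
    "include_fields": ["ProjectTitle", "FiscalYear", "AbstractText"],
    "offset": 0,
    "limit": 10,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    for project in json.load(response).get("results", []):
        print(project.get("project_title"))
```

Either route, web form or API, gets you the funded abstracts; the API just makes it easier to pull a batch of them at once.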

Of course, we don’t get to see the unsuccessful proposals, but I think it is apparent that what succeeds in the United States might be beyond the comfort zone of grant applicants from other countries and cultures.

The abstract is intended to shock and awe reviewers. Maybe, like the spoof Trump piece, the abstract shouldn’t be subjected to careful analysis, because it doesn’t stand up to scrutiny. But it is a fine example of something worth considering. Maybe it has some elements worth emulating by some authors, but it also marks a boundary around self-promotion that some won’t want to cross.

In textbook fashion, the abstract creates tension by defining a serious threat to life and well-being. It then identifies the proposed research as offering a solution: loving-kindness meditation for the poor. The abstract claims a miraculous effect of this meditation on trendy biological parameters. It wraps up with a resounding promise, surely intended to bring reviewers to their feet in a standing ovation.

This research stands to identify evidence-based interventions to drastically reduce the disease burden that disproportionately affects Americans raised in low SES households.

Wow! We must fund the study because we won’t want to pass on this opportunity to help the poor.

 DESCRIPTION (provided by applicant): Individuals raised in low socioeconomic (SES) households have been found to bear 20%-40% increased risk of costly chronic and infectious diseases and all-cause mortality, even after accounting for adulthood SES. Illuminating the biological mechanisms of these health risks, recent research has determined that severe and chronic stress endured early in life can embed a decades-long “biological residue” within the immune system, as reflected in leukocyte basal gene expression profiles, leukocyte telomere length, and levels of chronic inflammation indexed by C-reactive protein. These biological risk factors are further exacerbated by behavioral proclivities, namely, impulsivity (indexed by delay discounting) and mistrust, which are also more probable among those reared in low SES households. The overarching goal of the proposed research is to investigate whether and how this identified biological residue can be reversed in midlife. An innovative upward spiral theory of lifestyle change positions warm and empathic emotional states as key pathways to unlocking the body’s inherent plasticity to reverse entrenched biological risk factors. The PI’s team has identified an affective intervention – the ancient practice of loving-kindness meditation (LKM) – that produces salubrious biological effects in healthy midlife adults. The innovation of the present study lies in testing this affective intervention in a sample of midlife adults on poor health trajectories by virtue of having low childhood SES plus present-day pathogenic behavioral tendencies (i.e., impulsivity and mistrust). A dual-blind placebo-controlled randomized controlled trial (RCT) is designed to provide proof of principle that early-established biological risks factors are mutable, not permanent. It targets three Specific Aims: (1) To test whether LKM, through its effects on positive emotions, can reverse the biological residue of low childhood SES as reflected in (a) leukocyte basal gene expression (up-regulation of pro-inflammatory genes and down-regulation of antiviral and antibody genes), (b) leukocyte telomere length, and (c) C-reactive protein; (2) to identify plausible behavioral and biological moderators of the hypothesized benefits of LKM in this at-risk sample, with candidate moderators being (a) time spent meditating and (b) metabolic profile; and (3) to identify plausible biological, behavioral, and psychological mediators of the hypothesized biological benefits of LKM-induced positive emotions in this at-risk sample, with candidate mediators being improvements in (a) cardiac vagal tone, (b) delay discounting, and (c) mistrust. This research stands to identify evidence-based interventions to drastically reduce the disease burden that disproportionately affects Americans raised in low SES households.

Just what is the basis in pilot work for these astounding claims? Barbara Fredrickson and colleagues conducted an RCT of the kind proposed in this abstract.

You may be familiar with one high-profile report of the study. You probably don’t know that it was an RCT, and certainly not that it was an RCT published twice, without either publication acknowledging the other. Only recently did a brief corrigendum in Psychological Science acknowledge the duplicate publication of the same data, and it came without an apology.

[Image: the corrigendum in Psychological Science]

Two publications that are not acknowledged to be from the same data set scramble efforts to integrate studies to obtain any estimate of the overall effectiveness of loving-kindness meditation. Meta-analyses assume that the effect sizes being entered come from independent studies.
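A toy calculation (invented numbers, mine) shows why duplicate publication matters: under inverse-variance weighting, entering the same data set twice doubles its weight and spuriously shrinks the pooled standard error:

```python
import math

# One hypothetical study: effect size d with variance v (invented numbers).
d, v = 0.40, 0.02

def pooled_se(variances):
    """Standard error of a fixed-effect pooled estimate."""
    return math.sqrt(1.0 / sum(1.0 / var for var in variances))

print(f"entered once:  SE = {pooled_se([v]):.3f}")     # 0.141
print(f"entered twice: SE = {pooled_se([v, v]):.3f}")  # 0.100 -- spurious precision
```

The point estimate itself does not move, but its apparent precision does, and with it any significance test built on the pooled result.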

But what Barbara Fredrickson did was worse. You have to read the original Psychological Science article to discover that it is an RCT. The reporting does not conform to the universally accepted CONSORT reporting standards. The first item of the CONSORT checklist concerns whether the title or abstract acknowledges that a report comes from a clinical trial; that requirement exists to facilitate retrieval and systematic searches. Here, disclosure that the study is a clinical trial is left to a supplement.

Furthermore, the Psychological Science article is a mediational analysis of how loving-kindness meditation influences cardiac vagal tone, which is misrepresented as a biomarker. If you carefully examine the analyses, as we did in a published commentary, you will find that practicing loving-kindness meditation did not affect cardiac vagal tone.

Then there is a second paper from the same data set, published in Biological Psychology. The findings reported in that article contradict what is declared in the Psychological Science article, which is not cited. Again, there is no evidence that practicing loving-kindness meditation is beneficial to health.

Curiously, Bethany Kok, the first author on both of these articles, doesn’t bother to cite them in her subsequent publications concerning loving-kindness meditation.

Sometimes magicians conjuring up successful grants and papers in prestigious journals don’t want you to notice what they’re doing and certainly don’t want to explain their magic.

I nonetheless suggest that authors who are inhibited in their self-promotion consider the strategies that were employed here and sort out what is within their comfort zone to emulate. Choose carefully!