No Dissing! NHS Choices Behind the Headlines needs to repair relationship with its readers

A knight in tarnished armour

Although by no means perfectly dependable, NHS Choices Behind the Headlines is generally a useful resource for lay and professional consumers bombarded by distorted coverage of science and health information in the media.

I cheered the headline and NHS Choices Behind the Headlines’ coverage of a PLOS Medicine article:

Half of medical reporting ‘is subject to spin’

According to its website, NHS Choices Behind the Headlines is intended for both the public and health professionals, and endeavours to:

  • explain the facts behind the headlines and give a better understanding of the science that makes the news,

  • provide an authoritative resource for GPs that they can rely on when talking to patients, and

  • become a trusted resource for journalists and others involved in the dissemination of health news.

The website explains its origins:

Behind the Headlines is the brainchild of Sir Muir Gray, who set up the service in 2007.

“Scientists hate disease and want to see it conquered,” said Sir Muir. “But this can lead to them taking an overly optimistic view of their discoveries which is often reflected in newspaper headlines.”

And then a quote that I will surely use in the future with appropriate credit:

‘In the 21st century, knowledge is the key element to improving health. In the same way that people need clean, clear water, they have a right to clean, clear knowledge’ – Sir Muir Gray.

A while ago I covered exaggerated claims by a UK scientist that he was on the verge of providing an inexpensive blood test for pregnant women to determine if they were at risk for depression. The peer-reviewed article was seriously flawed in itself, but then journalists and media – including notably The Guardian – uncritically broadcast the scientist’s hyped, crassly self-promotional account of the importance of his work. I was pleased that Behind the Headlines offered a much more sober and balanced perspective.

But in this blog about a story in Behind the Headlines, I’m going to chastise NHS Choices for its disrespect of patients. I will suggest an apology to its readership is in order.

Apparently Behind the Headlines has already received a number of complaints about the story and has summarily dismissed them.

Being ‘a trusted resource’ is a relationship with a readership, one that can be damaged by a single bad story that violates readers’ expectations that they are dealing with an unbiased source.

NHS Choices Behind the Headlines’ recent coverage of the PACE follow-up study fell short of its usual performance. Instead of offering an independent assessment, it uncritically accepted the authors’ self-promotion. True to its Behind the Headlines branding, it suggested that the Daily Mail’s coverage of the PACE study was exaggerated, but then NHS Choices and its Bazian consultants failed to apply a basic understanding of clinical trials, parroting the PACE investigators’ hyping of their results.

Compare the title of the Behind the Headlines article – “Exercise and therapy ‘useful for chronic fatigue syndrome’” – to my recent blog post, where I explain why I think the PACE follow-up was uninterpretable:

By the time of follow-up, many patients were receiving treatment, but not the one to which they had been randomized.

The effects of the initial assignment could no longer be evaluated.

At follow-up, the most straightforward interpretation of the clinical trial was that any differences in outcome between groups had disappeared.

A later blog post will compare Behind the Headlines’ account to my critique of the PACE study. But in this one, I’m going to cover an offensive “box” in the Behind the Headlines article that was amplified in the gloating comments of the journalist on Twitter and in the comments section of the Mental Elf blog. I then note that in 2014 he had made offensive comments that should have been actionable by Behind the Headlines.

The mark of ethical and responsible coverage of medical articles in the media is that journalists do not rely on investigators’ own account of what they found and what it means. Such single sources have a well-established bias and are known to be vulnerable to investigators’ self-promotion and exaggerated claims.

Rather, media coverage should introduce named independent sources who can reasonably be expected to offer their interpretations of a study without apparent conflict of interest or other bias. The named independent source should be recognizable as an expert.

The unnamed author/editor of the text in the box

Although NHS Choices is supposedly committed to providing tools that empower patients and professionals to get behind the headlines and decide for themselves about news coverage, Behind the Headlines is not transparent. The identity of the specific journalist-editor with responsibility for what is said is shrouded in secrecy. So much for being able to evaluate source credibility.

But in this case, the journalist unmasked himself with a tweet.


[Screenshot: the journalist’s 2015 tweet]

And then

[Screenshot: a further 2015 tweet]





Blair became traceable to a website bio

[Gerard Blair’s bio from his blog]

He can be further associated with a comment left at Mental Elf on the blog post by Simon Wessely:

[Screenshot: the comment at Mental Elf]

But he is a repeat offender. He ridiculed patients last year when he was celebrating another article that he edited for Behind the Headlines.


An analysis of the text in the box

To view in context the text that I will be analyzing, click here.

An ambiguous opening move

It’s fair to say research into PACE has attracted a great deal of controversy. This body of work has been criticised by one of the leading patient associations for people with CFS/ME, the ME Association.

This latest study prompted a press release from the ME Association, claiming its methodology and that of previous studies was flawed and lacked “objective measures of outcome”.

Keep in mind that what is contained in the box is supposed to serve as the sole independent evaluation of the PACE investigators’ well-practiced promotion of their study. What is going on here? Is this a bold move toward patient empowerment and recognition of patient citizen-scientists as credible sources of critique? After all, many people with CFS/ME had considerable educational and professional achievements before they were laid low by their illness. And many of them continue to contribute to the scientific literature, as in letters to BMJ.

Here is a well-crafted, well documented example.

But is this nomination of a patient voice as the outside expert a setup? Is this a more devious effort to inoculate the arguments of the PACE investigators against criticism by the mysterious Behind the Headlines author, now unmasked as someone who holds patients in contempt?

The readership of Behind the Headlines is a mixed audience, including GPs and other physicians. Are they going to buy into patient empowerment and the patient as citizen-scientist, and accept the ME Association’s expertise? Or are readers being primed to dismiss it, perhaps with the suggestion that no credible critics could be found?

We aren’t given the details of why the ME Association considers the PACE trial flawed. We don’t know what previous studies the Association criticized as also flawed. We really don’t have a means to understand “objective measures of outcome” unless we do some digging into the literature. But Behind the Headlines is supposed to free readers from the necessity of literature reviews, because it is a trusted source that we can depend on to have done the digging for us.

Then Behind the Headlines undermines any appearance of credibility for the ME Association:

In turn, the ME Association has been criticised by some as pursuing a specific agenda that aims to shut down any research that suggests CFS/ME may have psychiatric, as well as physical, factors.

Among the cheap tricks: who is the mysterious source of this criticism, and how do readers evaluate its credibility? We are nonetheless encouraged to dismiss the ME Association as having a “specific agenda.” Can Behind the Headlines find no critics of the PACE trial who have not been criticized or do not have a specific agenda? If critics get only one shot, then as a matter of fairness Behind the Headlines should present a critic whose credibility has not been mysteriously undermined, making an argument that can be independently evaluated.

And let’s keep in mind the context: the PACE investigators are accumulating plenty of criticism for aggressively pursuing an agenda, reporting their study in ways greatly at odds with the evidence they have produced, and for defying international standards by withholding the data that would allow evaluation of their claims.

Washed down the drain with the ME Association’s credibility is the notion that research should be scrutinized when it emphasizes an alleged, but unsubstantiated, strong psychiatric component to CFS/ME. What evidence is there for this emphasis, and what is lost when it excludes other considerations?

Apparently it is okay for a journalist identifying himself with a Behind the Headlines article to disparage a patient group on social media.

Behind the Headlines provides a means for readers to complain. Some readers used it to complain about this story and the journalist’s depiction of it on social media. Behind the Headlines replied by indicating it had reminded the journalist of a policy prohibiting the use of offensive or inappropriate language. The respondents for Behind the Headlines deemed the article otherwise appropriate.

In the UK, it is still okay to disparage patients with chronic fatigue syndrome (CFS) and myalgic encephalomyelitis (ME), because their humanity and commonality with us are not appreciated. Ridicule and disrespect are treated as acceptable speech in polite society, not hate speech.

I lived through a period in the United States when some people were awakened to the fact that they could not similarly speak openly with disrespect for black people. Sure, recent events clearly show that Americans have a long way to go in solving the problem of racism. But most white people are much more careful about how they talk about black people.

I recall, as a white high school student living on welfare in a public housing project, how younger white people learned to speak differently of black people than some of their elders did. We were provoked by black people trying to assert their rights to use the same public facilities as white people in the South, and by the ugly response they received. For many of us, 1963 was when we learned to speak of black people differently.

Acceptable language from a fictional 1963 review of lunch at Woolworths, Birmingham, Alabama, that soon became unacceptable:

 The Woolworths lunch counter gets consistently positive reviews, especially for its pleasant waitresses. However, a handful of Negroes complain about segregated seating and that they would be arrested if they sat in the white section. But why would they want to sit with white people anyway?

The locals point out that these complaints come mainly from outside agitators. Birmingham Negroes view the large colored section of the lunch counter as just as clean as the one reserved for white people. The same goes for the toilet for colored peoples except maybe for when the Negroes don’t clean up after themselves. Furthermore, outside agitators like the Southern Christian Leadership Conference have failed to attract many mature Negroes to their protests and are relying on high school, college, and even elementary school students, often recruited without their parents’ permission.

Why should we care about disrespect in the media directed towards people with chronic fatigue syndrome (CFS) and myalgic encephalomyelitis (ME)? It’s too easy to think of many of these people (and healthy, able-bodied persons get to talk of them as THESE people) as incurably sick people who just cost the UK government money. Why, they are just a burden to themselves and to others. And the UK government has big problems ahead deciding how to deal with people with long-term medical conditions such as diabetes and dementia threatening to overwhelm the NHS. Hmm…

Pastor Martin Niemöller (1892–1984) warned us that we should all be concerned about a casual attitude toward hate and disrespect directed at one vulnerable group eroding the respect accorded to others whom we think are immune to it.

First they disparage people with CFS and ME.

[Image: Trump mocking a journalist with a disability]

Then journalists with visible disabilities

Then homosexuals

[Screenshot: homosexuals cursed with cancer]

And then they went after the old, frail, and demented




A “Moral equivalent of war” and the PACE chronic fatigue trial

My hastily arranged Skeptic in the Pub talk received wide distribution, first through SlideShare and then with the first of a number of videos posted on YouTube. Many thanks to all those who made the pub talk happen and then disseminated it, especially Barbara Collier.

Some of the slides became a source of comments in social media. I especially liked the blog post from #ME Action that was headlined:

I declare moral equivalent of war


I have received permission to re-post it here in its entirety. I introduce it with some comments about just what a moral equivalent of war (MEOW) means in this context. I realize that MEOW is more commonly used in the United States and might be subject to misunderstanding elsewhere.

What is a moral equivalent of war?

The term originates with the American philosopher and psychologist William James. It comes from the title of his 1910 talk, in which he called for a “war against war” that was “going to be no holiday excursion or camping party.”

A more recent reference is a 1977 speech by US President Jimmy Carter in which he declared the energy crisis the “moral equivalent of war”. Echoing William James, his speech was intended as a “rallying cry for service in the interests of the individual and the nation.” For better or worse, Carter’s speech and energy recommendations became known as MEOW.

Something becomes the moral equivalent of war in the sense that peacetime activities and priorities are harmonized and focused in pursuit of a goal as they would be in a war.

My war, of course, is against practices and the assumptions that guide them, not against people.

Here is the #ME Action blog post in its entirety.

James Coyne gives a public talk on PACE Trial

In a public talk in Edinburgh on Monday, psychologist Professor James Coyne declared the “moral equivalent of war” on the practices and assumptions that, he said, have allowed the “bad science” of the PACE trial to go unchallenged by scientists and the media.

The authors of the UK’s £5 million PACE trial have claimed that it showed that cognitive behavioural therapy and graded exercise therapy were beneficial for patients with chronic fatigue syndrome. Patients have criticised the trial’s methodology since its publication but criticisms have been dismissed by the study authors as reflecting “the apparent campaign to bring the robust findings of the trial into question.”

Professor Coyne’s attention was drawn to PACE by the authors’ latest claims, made in a recent Lancet Psychiatry paper, that long-term follow-up of patients confirmed these benefits. Coyne published a detailed blog post condemning the paper as “uninterpretable” and as having used “voodoo statistics” in a failed attempt to correct for “fatal flaws.”

The problems, Professor Coyne said, are “obvious to anyone who looks carefully. That virtually no one else picked them up reflects badly on the editing and peer review at Lancet Psychiatry… and media portrayals of this trial.”

PACE’s results were, Professor Coyne said, “being badly misrepresented by the investigators,” “going unchallenged” and being “uncritically passed on by journalists and the media, with clear harm to patients.” There were, he said, “murky politics about who can speak and who is silenced.”

In a move that will delight many patients, Professor Coyne stated that he was now refocusing his existing goals and activities on exposing more of the “questionable research practices” of PACE; establishing the culpability of journal editors and reviewers; and educating the media and journalists on “responsibilities they have not exercised” in reporting the trial.

He would also, he said, expand his focus to include questionable research and publication practices that “have maintained [the] illusion that there is validity to [the] psychosomatic model for [the] treatment of ME, CFS, and [post-viral syndrome]”. He added that he would “validate and legitimize what patients have been saying all along and bring them into [the] conversation as credible citizen-scientists” and would “identify and dismantle [the] structure by which PACE investigators bullied and neutralized critics.”

Professor Coyne, of Pennsylvania University, is one of the world’s most cited psychologists and is well known for his work in debunking false scientific claims, including that having a positive attitude can help cancer survival. He said in his talk that “the story of PACE will be rewritten to underscore [the] necessity of [a] strong patient voice in [the] design and conduct of clinical trials” and that it would mark a “turning point in [the] use of language indicating greater respect for patient activism, healthy assertiveness, and self-determination.”

Slides from Professor Coyne’s talk have been posted online and received over 4,500 views in less than two days. A video recording of the first part of his talk is now on YouTube.



Data sharing policies: Do the Dutch do it better?


As noted in my recent post at PLOS Mind the Brain, time is running out for Queen Mary University of London and the PACE investigators to appeal an important decision:

The UK’s Information Commissioner’s Office (ICO) ordered Queen Mary University of London (QMUL) on October 27, 2015 to release anonymized data from the PACE chronic fatigue syndrome trial to an unnamed complainant. QMUL has 28 days to appeal.

As of this writing, the University and the investigator group have less than a week to appeal. Hopefully by the time you’re reading this blog post, we will know that there is no appeal and the decision stands.

The whole matter puts to an important test just what teeth there are to the increasing unanimity of commitments by governments and by funding and regulatory agencies to the idea that investigators must share data.

But as I detail in my blog post at Mind the Brain, my personal experience is that investigators can flagrantly withhold their data and garner support from institutions for doing so.


Below is a guest blog post by Dutch research biologist Klaas van Dijk.


It came about after Klaas made a detailed comment on an excellent blog post from Dorothy Bishop, Who’s afraid of Open Data. I asked Klaas if I could share his comment as a guest blog post and he kindly agreed.


He describes a situation in the Netherlands where data sharing policies are explicit and taken seriously. That’s quite laudable. He also describes his frustrations in dealing with Oxford University in the UK. Overall, it highlights the necessity, for the scientific community, that Queen Mary University of London and the PACE investigators comply with the order to share their data. Otherwise, they will provide yet another instance of the meaninglessness of data sharing policies in the UK, and bring embarrassment to the UK, Queen Mary University, and their own investigator team.

Disclosure: Klaas discusses an interesting request at the University of Groningen for the data from a student’s PhD thesis and how it was resolved. This reflects well on the University, where I have an appointment in the Division of Health Psychology, UMCG. I, not any institutional affiliation, have responsibility for the content of my blog posts.

Klaas’s comment:

The “European Code of Conduct for Research Integrity” (ESF/ALLEA, 2011) states: “All primary and secondary data should be stored in secure and accessible form, documented and archived for a substantial period. It should be placed at the disposal of colleagues.”
Such a statement is also part of Principle 3 (“Verifiability”) of the VSNU “The Netherlands Code of Conduct for Academic Practice”. The Dutch Code states:

“Raw research data are stored for at least ten years. These data are made available to other academic practitioners upon request, unless legal provisions dictate otherwise.”

All researchers at any of the 14 Dutch research universities must always work fully in line with this VSNU Code, and have had to since 1 January 2005. Complaints can be filed when there are indications that a researcher is violating any of the rules of the Code. [See here for English versions of the current guidelines at RUG (University of Groningen).]
Frank van Kolfschooten, a science journalist, reported on 1 July 2015 in the Dutch newspaper NRC on a recent case at RUG of researchers who refused to share the raw research data of a PhD thesis with colleagues. RUG concluded that these researchers, Dr Anouk van Eerden and Dr Mik van Es, had violated the rules of research integrity, because they refused to share the raw research data with others (Dr Peter-Arno Coppen [Radboud University Nijmegen], Dr Carel Jansen [RUG] and Dr Marc van Oostendorp [Leiden University]). These three researchers had filed a complaint at RUG when the two RUG researchers were unwilling to provide them access to the raw research data.
At RUG, all PhD candidates are even obliged to promise, in public and during the PhD graduation ceremony, that they will always work according to the VSNU Code of Conduct. This has been the case for around two years.
I am currently confronted with a very persistent refusal by a researcher at Oxford University, Dr Adrian Pont, to give me access to the raw research data of a questionable paper. Details are listed here. Dr Pont is the Associate Editor of the journal in question, but is persistently refusing to start a scientific dialogue with me about this case. There have also been multiple contacts from my side with (officials at) Oxford University. I was turned down, already a few times. I fail to understand how the current actions of Oxford University and the current behaviour of Dr Pont are in line with current policies.

In a subsequent email, Klaas added:

All researchers at any of the Dutch universities must make good arrangements in advance with non-university parties on the topic of data sharing when starting to collaborate with such a party. The funding body NWO, for example, takes it for granted that all raw research data of published work (i.e., papers in scientific journals) are available to other scientists, for example to check the findings. That’s simply how research is organized.

Let’s hear it for the Dutch – and public funding sources and regulatory bodies in the UK and United States, please take note.

Talking Headlines: with Professor Jim Coyne

James C Coyne:

My interview with Silvia Paracchini

Originally posted on:

James Coyne is Emeritus Professor of Psychology in Psychiatry at the University of Pennsylvania. He is also Director of Behavioral Oncology at the Abramson Cancer Centre and Senior Fellow at the Leonard Davis Institute of Health Economics. His main area of interest is health psychology and depression. Professor Coyne’s work in psychology and psychiatry is consistently cited for its impact. Jim is also an active blogger, confronting editorial practices and media science reports that favour sensationalism at the expense of scientific content. Jim is currently visiting Scotland as a 2015 Carnegie Centenary Visiting Professor at the University of Stirling.


Guest blog post: Fact checking dubious therapy workshop adverts and presenters

This guest post is an excellent follow-up to my debunking of the neurononsense [1,2] used to promote psychotherapy trainings as being better and more closely tied to brain science than competitors. My good friend and neuroscientist Deborah Apthorp* wrote it, prompted by an email advertisement she received by way of her University staff email list. She takes a skeptical look at advertisements for workshops making wild claims from presenters whose credentials prove dubious when checked against information available on the internet. She presents evidence for skepticism about the workshop offerings and demonstrates a set of fact-checking strategies useful to anyone who might be looking for continuing education credit or simply seeking to enhance their ability to serve their clients/patients.

It is disconcerting that these workshops are marketed in university settings and seem to offer continuing education credit. I have trouble believing that academics will be attracted by the outrageous claims, and at least some of the credentials claimed by presenters should trigger academics’ skepticism.

But some of the target clinician audience are masters-level practitioners or holders of practitioner-oriented PsyDs and PhDs. Their training did not include the critical skills needed to evaluate claims about research. And an unknown, but large, part of the audience are not formally trained, licensed, or regulated therapists. They get by in a shadowy area of practice as coaches or counselors, or go by other terms that sound as if they are licensed, but they aren’t. These practitioners don’t have to answer to regulatory bodies that set minimal levels of training and expertise or even ethical constraints on what they can do. Instead, they acquire dubious certifications from official-sounding boards set up and controlled by those profiting from the workshops. The certification does not really mean much outside the world of the workshops, but is intended to convey status and confidence to unsuspecting people seeking services. And those who attend these workshops can find themselves in a Ponzi scheme in which they have to attend more workshops to maintain their level of credentialing, and cannot get credit by other means.

Here’s Deborah…

So this lobbed into my inbox today, via the official channel of our staff email list:

Mindfulness, Neuroscience and Attachment Theory: A Powerful Approach for Changing the Brain, Transforming Negative Emotions and Improving Client Outcomes

The course costs $335.00 and is available in all of Australia’s major capital cities. It’s being held at mostly conference-centre type venues, so presumably they’re expecting pretty big numbers. There are some pretty big promises here, and as a neuroscientist my alarm bells immediately started ringing.

“….advances in neuroscience and attachment theory have led to revolutionary work in the application of mindfulness in the treatment of anxiety, depression emotional dysregulation, anger and stress.”

“…In this seminar, we will explore an integrated approach — incorporating advances in neuroscience, new insights about attachment theory and The Five Core Skills of Mindfulness — that accelerates healthy change and improves client outcomes.”

“…Take home cutting-edge information on the interface between neuroscience, mindfulness and therapy. “

Is this workshop endorsed by the APS? Apparently not, though the organisers are somewhat evasive about it: …”APS: Activities do not need to be endorsed by APS. Members can accrue 7 CPD hours by participating in this activity”

So who is this Terry Fralich (LCPC)? (And what does that stand for? Licensed Clinical Professional Counsellor, apparently, although it’s not clear which body did the licensing.) According to the official website, “Terry Fralich is an adjunct faculty member of the University of Southern Maine Graduate School and a Co-Founder of the Mindfulness Centre of Southern Maine.” However, although there is a Ms. Julie Fralich on the official University of Southern Maine faculty list, there is no Terry Fralich listed. The only other mention at all on the website is of his wife, Rebecca Wing (a co-presenter at the workshops and co-founder of the Mindfulness Center – see below), who is an alumnus of their School of Music (class of ’84).

He does show up on a lot of sites about mindfulness, the top hit being his “Mindfulness Retreat Center of Maine”, which showcases its lovely views and comfortable accommodation (prices are available on application). They also sell “Books and CDs”, although the only actual book listed is Mr. Fralich’s book “Cultivating Lasting Happiness – a 7-step guide to mindfulness”. According to Amazon, this seems to have been the only book he has written (reviews are generally positive, though one reader found it did not cover any new ground). It seems to be a pretty standard practical guide to mindfulness meditation – nothing wrong with that in itself, I guess.

So where are this guy’s credentials in neuroscience and attachment theory? A search on Google Scholar turned up only the aforementioned book, but no academic papers. His only relevant qualification seems to be a Masters Degree in Clinical Counselling (although I could not find out where this qualification was obtained – if anyone knows, mention it in the comments). Apparently he has studied with the Dalai Lama for more than 25 years; according to his website, “Prior to becoming a mindfulness therapist, academic and counsellor, Terry was an attorney who practiced law in New York City, Los Angeles and Portland, Maine.” I guess this experience should make him careful about making claims which can’t be verified.

Here’s a YouTube teaser for one of his lectures.

I also found a link to a PDF for the program.

It incorporates sciencey-sounding things like “The triune brain” (huh?), “Fight-or-flight-or-freeze and stress responses”, and of course today’s essential buzzword, “neuroplasticity”. A particularly scary phrase is “Reconsolidation of negative memories: transforming unhealthy patterns and messages.” How are they going to teach therapists to do this – these people who have no training at all in neuroscience, attachment theory, memory or indeed, it seems, even CBT?

Delving a little deeper, I had a look at the list of trainers on Tatra Training’s website. It seems that a number of them are associated with an organization called the Dialectical Behaviour Therapy National Certification and Accreditation Association (DBTNCAA), allegedly “the first active organisation to certify DBT providers and accredit DBT programs” – notably, the appropriately named Dr. Cathy Moonshine (alcohol and chemical dependency treatment counselor) and Lane Pederson (PsyD, President/CEO). However, this organization is not in any way endorsed by the founder of Dialectical Behaviour Therapy herself, Marsha Linehan. In fact, there is a disclaimer on Cathy Moonshine’s site to this effect:

“All trainings, clinical support, and products sold by Dr. Moonshine are of her own creation without collaboration with Dr. Linehan, or Dr. Linehan’s affiliated company, Behavioral Tech, LLC. Dr. Moonshine’s products are not sanctioned by, sponsored, licensed, or affiliated with Dr. Linehan and/or Behavioral Tech, LLC.”

Thus, it seems Linehan herself has a competing company, but she does at least have an impressive CV with many research articles to back her up. I attempted to contact her for comment on the DBTNCAA and Tatra Training, but she has not yet replied.

Let’s have a look at some of the other “trainers” and their biographies. Dr. Daniel Short is listed as a faculty member at Argosy University, a for-profit college in Minnesota that has changed its name and is now being sued by former students for fraud.

Dr Gregory Lester’s biography claims that he has published papers in “The Journal of the American Medical Association, The Western Journal of Medicine, The Journal of Marriage and Family Therapy, The Journal of Behaviour Therapy, Emergency Medicine News, The Yearbook of Family Practice, The Transactional Analysis Journal, and The Sceptical Inquirer”. But a PubMed search reveals none of these publications. An online list from his own website reveals very few relevant publications, and does not even include all of the outlets listed above; instead, there are things like “Dealing with the Difficult Diner”, in Restaurant Hospitality, and “Dealing with personality disorders” in The Priest Magazine. In addition, there are several books, which you can presumably buy at his workshops.

Interestingly, his bio also states that “… he has specialised in Personality Disorders for over 25 years, and has been a participant in multiple studies that form the basis for the DSM V revision of the section on Personality Disorders.” He has participated in these studies? Was he a control, or does he have a personality disorder himself? Because he certainly wasn’t an author on any of these studies.

Dr Brett Deacon seems to check out OK. Surprising to find him associated with this bunch.

Dr Daniel Fox is said to be the author of “numerous articles on personality, ethics, and neurofeedback”, but only one on neurofeedback (in the Journal of Applied Psychophysiology and Biofeedback) turned up in PubMed, and it appears to be a review rather than an original research piece. An author search on “Fox, DJ [AU] and Ethics” returned no hits, and neither did “personality”. He also seems to have a book, which comes with optional seminar bundles!

Jerold J Kreisman is another of the Tatra stars and seems to feature as the Borderline Personality expert. The site claims breathlessly that he ‘…has appeared on many media programs, including The Oprah Winfrey and Sally Jesse Raphael Shows. He has been listed in “Top Doctors,” “Best Doctors in America,” “Patients’ Choice Doctors,” and “Who’s Who.”’. It also, more seriously, claims he has published “over twenty articles and book chapters”; however, PubMed turns up only four publications, one of which is from 1975 and so is probably not by the same JJ Kreisman. Of the three remaining publications, only one 1996 paper (on which he is third author) is related to BPD, and this seems to have been subject to an erratum (though the erratum itself seems impossible to find; the article also appears to have drawn a Letter to the Editor, which is similarly hard to find). The only relevant publications seem to be, again, pop-psych-style books with titles like I Hate You–Don’t Leave Me: Understanding the Borderline Personality.

What about Ronald Potter-Efron, the facilitator of Healing the Angry Brain: Changing the Brain & Behaviours of Angry, Aggressive, Raging & Domestically Violent Clients? Again, he is a prolific author of self-help books. Google Scholar and Scopus do turn up about five academic publications, all from the late 1980s and early 1990s. Since then, he seems to have turned to the more lucrative self-help industry.

So all in all, it seems the Tatra Training people work via a fairly aggressive marketing campaign to clinical psychology academic departments and, presumably, clinicians themselves, as well as other “Corporate and Allied Health” practitioners. Their main address is in Adelaide, South Australia; Google Street View shows an anonymous-looking office block. Tatra was founded by Hanna Nowicki (LLB, BA Psych., Postgrad. Soc. Admin, Cert IV Training & Workplace Assessment), who seems to have no qualification in psychology other than her B.A. Psych, although this hasn’t stopped her developing and presenting “…multiple workshops on personality disorders, self injury, suicide risk assessment, depression, engagement techniques and introduction to mental health.”

Disturbingly, the list of clients includes many government organisations such as Centrelink, Correctional Services, Housing SA, Worklink Queensland, and more nebulously-named organisations such as “Residential Care Services”, “Brain Injury and Disability Services”, “Public Mental Health Services”, “Hospital Social Work Departments”, and so on.

How much money are these people making per workshop? Well, if the Sydney venue is anything to go by, the Wesley Conference Centre in Sydney seats 875 people, so if that sells out at $335 a head that’s $293,125. (The Wesley Centre does have smaller venues, so perhaps the organisers aren’t expecting such a large crowd. Their general preference for booking conference centres and Leagues Clubs, though, suggests that they are.) If the workshop is held in all five major cities (assume most are smaller than Sydney, so let’s be conservative and assume gross takings of $200,000 per workshop), then that’s a tidy sum ($1m per workshop, so $2m per year if only 2 workshops are held, as in 2014 and 2015). Of course one must subtract venue hire, advertising costs, speaker fees, catering, etc. etc., but all the same this seems quite a promising business model, particularly when combined with the in-house training offered.
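To keep the back-of-envelope arithmetic above honest, here it is laid out explicitly. The seat count, ticket price, and per-city gross are the figures quoted above; everything else (sell-outs, identical pricing in every city) is my own simplifying assumption, not anything from Tatra’s actual accounts.

```python
# Back-of-envelope gross revenue estimate, using the figures quoted above.
# Rough guesswork only -- these are not Tatra's real numbers.

seats_sydney = 875          # capacity of the Wesley Conference Centre
price_per_head = 335        # AUD ticket price
sydney_gross = seats_sydney * price_per_head

cities = 5                  # workshop toured through five major cities
gross_per_city = 200_000    # conservative assumption for the smaller venues
per_workshop = cities * gross_per_city

workshops_per_year = 2      # as in 2014 and 2015
per_year = workshops_per_year * per_workshop

print(f"Sold-out Sydney gross: ${sydney_gross:,}")   # $293,125
print(f"Five-city workshop:    ${per_workshop:,}")   # $1,000,000
print(f"Two workshops a year:  ${per_year:,}")       # $2,000,000
```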

I am concerned that these people are pushing a product that is not what is advertised, and claiming to be experts when they are not, sometimes supported by what seem to border on fraudulent claims. I am concerned that naïve young mental health professionals, looking for accreditation hours, are being fed misleading information that is not based on scientific evidence. If anyone has direct experience of these workshops, I would be very interested to know about it.

*Deborah Apthorp is a neuroscience researcher working at the Australian National University in Canberra, Australia. She holds an NHMRC Early Career Fellowship, and is interested in EEG, visual neuroscience, visual attention and the dynamics of postural control. In addition to this, she is a keen sailor, cyclist and windsurfer, and a passionate supporter of the Open Science movement.

Here’s her Google Scholar page and her own WordPress blog.

Talking back to the authors of the Northwestern “Blood test for depression” study

[Update 9/25/2014] This post critiques the press coverage of a recent article in Translational Psychiatry concerning whether a blood test for depression would soon be available. A critique of the bad science of the article itself is now available at PLOS Mind the Brain.

Judging from the extraordinary number of articles in the media, as well as the flurry of activity on Twitter, a recent study coming out of Northwestern University is truly a breakthrough in providing a blood test for depression.

Unfortunately, the many articles in the media have a considerable, almost copy/paste redundancy. Just compare them to the Translational Psychiatry article’s press release. In many instances, there is more than churnalism going on; there is outright plagiarism. Media coverage offers very few demurs or dampening qualifications of what the authors claim. How do journalists put their names to so little work?

Similarly, the tweets appear to be retweets of just a couple of messages, although few are labeled as retweets.

I had my usual doubts as to whether the journalists or tweeters had actually read the article. Journalists could have sought second opinions by searching Google Scholar for similarly themed articles and then contacting their authors for comment. Journalists could also have loaded the abstract of the Translational Psychiatry article into EtBlast and gotten dozens of recommendations for relevant experts based on text similarity. I see no evidence that any of this was done.

There must be something intimidating about an article that claims to be testing not for genes, but for gene transcripts associated with depression. Shuts down the critical faculties. Lacking relevant expertise, journalists and tweeters may be inclined to simply defer to the claims of the authors and not further scrutinize the text or tables of the article with whatever relevant knowledge they do have. If this had been done, they might have found things that they could understand that would be very relevant to evaluating the credibility of this article.

Almost all of the hype that has been written about this Translational Psychiatry article originates with its authors, either in the article itself, the press release, or the well-crafted soundbites provided to the media. Yet some of the latter are simply excerpted from the press release and made to look like the quotes arose in an interview. I promised a full and thorough demolition of the article, and that will be forthcoming. Here, however, I will analyze some of the statements attributed to two of the authors in the press. There is a fascinating logic, an ideology even, to these statements that is of interest in itself. But you can also take this blog post as a teaser for a post coming to PLOS Mind the Brain in the next week or two.

Keep in mind, as we scrutinize what the authors say about their study, just how modest it is. The study started by comparing 32 primary care patients participating in a clinical trial to 32 control persons matched for age, ethnicity/race, and gender. Five of the primary care patients were lost to follow-up, and another five were lost at the 18-month blood draw. Of the 22 remaining patients, nine were classified as in remission of their depression and 13 as not in remission.

So basically we are talking about some exceedingly small samples and comparisons of subsamples. These shrink to a comparison of 9 patients in remission and 13 not in remission for any statements about prediction of treatment outcome. In any other context, how could anyone who knows anything about clinical research accept the results of such analyses?
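The attrition described above can be tallied in a few lines. The numbers are those reported in the study; the tally simply makes the shrinkage explicit.

```python
# Sample attrition in the Translational Psychiatry study, as reported.
patients = 32                       # primary care patients (plus 32 matched controls)
patients -= 5                       # five lost to follow-up
patients -= 5                       # five more missing the 18-month blood draw
assert patients == 22               # patients remaining at follow-up

remitted, not_remitted = 9, 13      # clinical status among the remaining 22
assert remitted + not_remitted == patients

# This 9-vs-13 comparison is the entire basis for any claim about
# predicting treatment outcome.
print(remitted, not_remitted)
```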

Furthermore, any differences observed at baseline versus follow-up could well be attributable to simple selective loss to follow-up. This is just one of many alternative explanations of the results reported for these data that cannot be adequately tested because of the small sample sizes. The article talks about utilizing multivariate statistical controls, but in a sample this size that is statistical malpractice, highly likely to produce spurious findings.

The authors make a number of statements about predicting remission from cognitive behavior therapy, but from the beginning of the study and into follow-up, all of the patients were getting cognitive behavior therapy, and a considerable proportion were getting antidepressants as well. That is no small complication. It is generally assumed that predictors of response to antidepressants should differ from predictors of response to psychotherapy, but there is really no opportunity to examine this within this confounded small sample.

The two authors quoted by name in media coverage are

Eva Redei, PhD, Professor in Psychiatry and Behavioral Sciences and Physiology at Northwestern’s Feinberg School of Medicine in Chicago.

David C. Mohr, PhD, Professor of Preventive Medicine and Director of the Center for Behavioral Intervention Technologies at the Feinberg School of Medicine at Northwestern University.

From an article in Medscape, Blood Test Flags Depression, Predicts Treatment Response:

“We were pleased with these findings, including finding biomarkers that continued to be present after people were effectively treated,” co–lead author David C. Mohr, PhD, professor of preventive medicine and director of the Center for Behavioral Intervention Technologies at the Feinberg School of Medicine at Northwestern University in Chicago, Illinois, told Medscape Medical News.

Dr. Mohr noted that essentially, these are markers of traits ― and may show that certain people have a predisposition to the disorder and can be followed more carefully.

Maybe, maybe not, Dr. Mohr. Aside from your modest sample size and voodoo statistics, it is unclear how clinically useful a trait marker would be. After all, we already have a trait marker in neuroticism, and while it is statistically predictive, it does not do all that well in terms of clinical applications. And the alternative, of course, is simply to have a discussion with patients about the particular symptoms they have and whether alternative explanations can be ruled out.

Recall, Dr. Mohr, that this “trait marker,” as you assume it to be, is occurring in a mildly to moderately depressed sample. Clinical depression is a recurring episodic condition, and this “trait” is not going to be expressing itself in a full-blown episode much of the time.

“Abundance of the DGKA, KIAA1539, and RAPH1 transcripts remained significantly different between subjects with MDD and…controls even after post-CBT remission,” report the investigators.

Well, maybe, but it seems a stretch to make such claims from such limited evidence. The three transcripts remaining significant after remission are based on the nine patients who remitted. Three is fewer than the 9 of 20 transcripts that differed at baseline, but we do not know whether that shrinkage reflects loss to follow-up or remission. And even this reduced number of significant differences is statistically fragile, given the small sample size, even assuming an effect is present. The authors have no business interpreting their data to the press in this fashion.

In addition, these transcripts “demonstrated high discriminative ability” between the 2 groups, regardless of their current clinical status, thus appearing to indicate a vulnerability to depression.

The authors have no business claiming to have demonstrated “high discriminative ability” with such a small sample. Notoriously, such findings do not replicate: there is always a drop in the performance statistics from a small sample when replication is attempted in an independent sample. Comparison with an earlier paper reveals that the authors have not even replicated the findings from their earlier study of early-onset depression in the present one, and that does not bode well.

“This clearly indicates that you can have a blood-based laboratory test for depression, providing a scientific diagnosis in the same way someone is diagnosed with high blood pressure or high cholesterol,” said Dr. Redei.

Maybe someday we will have a blood-based laboratory test for depression, but by themselves, these data do not increase the probability.

“Clinically, simplicity is important. The primary care setting is already completely overburdened. The more we can do to simplify the tasks of these caregivers, the more we’re going to be able to have them implement it,” said Dr. Mohr.

Of all the crass, premature and inaccurate statements I find in this article, this one tops the list. Basically, Dr. Mohr is making a pitch that the blood test he is promoting will free primary care clinicians from having to talk to their patients. All they need to do is give the blood test and prescribe antidepressants.

From A Blood Test for Depression Shows the Illness is not a Matter of Will

“Being aware of people who are more susceptible to recurring depression allows us to monitor them more closely,” said David Mohr, Ph.D., co-lead author of the study in a press release. “They can consider a maintenance dose of antidepressants or continued psychotherapy to diminish the severity of a future episode or prolong the intervals between episodes.”

This advice is not only premature, it is inappropriate for a mildly to moderately depressed sample treated in primary care, where monitoring and follow-up are either nonexistent or grossly inadequate. Dr. Mohr’s suggestion, if it were taken seriously, would lead to overdiagnosis and overtreatment, or to prolonged treatment without follow-up and re-evaluation.

In general, these authors seem cavalier in ignoring the problems of overdiagnosis. Elsewhere, Dr. Redei is asked about it and gives a flippant response:

There’s a lot of concern about overdiagnosis for psychiatric illnesses already. How do you think your findings might affect that issue?

[Dr. Redei] People who worry about overdiagnosis — they are probably right, and they are probably wrong. Because there is potentially a problem with underdiagnosis, too. In the elderly, for example – we say, “Oh, you’re just old. You don’t have any energy, and you don’t want to do anything — you’re just old.”

From Blood Test Spots Adult Depression: Study

The blood test’s accuracy in diagnosing depression is similar to those of standard psychiatric diagnostic interviews, which are about 72 percent to 80 percent effective, she said.

It is irresponsible rubbish to claim that the study showed these measures of gene expression to be as accurate as current interview methods. The study involved comparing 20 different measures of gene expression to an interview conducted by a bachelor’s-level interviewer using a less-than-optimal interview schedule that did not allow for explaining questions or probing the patient’s responses. It certainly would not have been allowed in a study for which the data were to be submitted to the US Food and Drug Administration (FDA). And there was no gold standard beyond that.

Additionally, if the levels of five specific RNA markers line up together, that suggests that the patient will probably respond well to cognitive behavioral therapy, Redei said. “This is the first time that we can predict a response to psychotherapy,” she added.

Again, Dr. Redei, you are talking trash that is not justified by the results of your study. The sample is quite small and most of the patients who receive cognitive behavior therapy also received medication.

The delay between the start of symptoms and diagnosis can range from two months to 40 months, the study authors pointed out.

“The longer this delay is, the harder it is on the patient, their family and environment,” said lead researcher Eva Redei, a professor in psychiatry and behavioral sciences and physiology at Northwestern’s Feinberg School of Medicine in Chicago.

“Additionally, if a patient is not able or willing to communicate with the doctor, the diagnosis is difficult to make,” she said. “If the blood test is positive, that would alert the doctor.”

Perhaps, Dr. Redei, you need to be reminded that you are studying mildly to moderately depressed primary care patients, not an inpatient or suicidal sample. What is the hurry to treat them? Current guidelines in much of the world have become conservative about initiating treatment too quickly. In both the United Kingdom and the Netherlands, there is a recommendation for first trying watchful waiting, simple behavioral activation homework, or Internet-based therapy before starting something more intensive like antidepressant therapy or psychotherapy. Certainly if a patient has multiple recurrent episodes or a history of sudden suicidality, a different strategy would be recommended.

And in what clinical situations does Dr. Redei imagine having to initiate treatment when a patient is not able or willing to communicate with the doctor? Would treatment be ethical under those circumstances and how would it receive the necessary monitoring?

From: First ‘Blood Test for Depression’ Holds Promise of Objective Diagnosis

“Currently we know drug therapy is effective but not for everybody and psychotherapy is effective but not for everybody, ” Mohr said. “We know combined therapies are more effective than either alone but maybe by combining therapies we are using a scattershot approach. Having a blood test would allow us to better target treatment to individuals.

Again, Dr. Mohr, this is a widely shared hope by some, but your current study in no way advances us further to achieve this hope of a clinical tool.

In all of the many media stories available about the study, there was little dissent or skepticism. One important exception was

Newsweek’s First ‘Blood Test for Depression’ Holds Promise of Objective Diagnosis

Outside experts caution, however, that the results are preliminary and not close to ready for use in the doctor’s office. Meanwhile, diagnosing depression the “old-fashioned way,” through an interview, works quite well and should take only 10 to 15 minutes, says Todd Essig, a clinical psychologist in New York. But many doctors are increasingly overburdened and often not reimbursed for taking the time to talk to their patients, he says.

Essig says it’s “a nice little study” but has no clinical usefulness at this point. That’s because it involved such a small sample of people and because the researchers excluded many patients that real clinicians would see on a daily basis, he says.

“It’s moving basic knowledge incrementally forward, but it’s way too soon to say it’s a ‘blood test for depression,’” Essig says.

“Depression is not hard to diagnose, even in a primary care setting,” he adds. “If physicians were allowed by health-care delivery systems to spend more time talking with their patients there would be less need for such a blood test.”

Amen, Dr. Essig, well put.

Northwestern Researchers Develop RT-qPCR Assay for Depression Biomarkers, Seek Industry Partners

One has to ask why these mental health professionals would disseminate such misleading, premature, and potentially harmful claims. In part, because it is fashionable and newsworthy to claim progress toward an objective blood test for depression. Indeed, Thomas Insel, the director of NIMH, is now insisting that even grant applications for psychotherapy research include examination of potential biomarkers. Even in the absence of much in the way of promising, clinically useful biomarker candidates, there are points to be scored in grant applications that cite pilot work moving in that direction, regardless of how unjustified the claims are. As John Ioannidis has pointed out, fashionable areas of research are often characterized by more hype and false discoveries than actual progress.

However, comments in one article clearly show that these authors are interested in the commercial potential of their wild claims.

Now, the group is looking to develop this test into a commercial product, and seeking investment and partners, Redei said.

“The goal is to partner to move this as far as possible into the clinic,” Redei said. “There are [other assays] coming behind it, so I would like to focus on [those] … but then this one can move on. For that, I absolutely need partners [and] money, that’s the bottom line,” she said.

Redei envisions developing this assay into a US Food and Drug Administration-approved diagnostic, rather than a laboratory-developed test. “If it’s FDA approved, then any laboratory can do it,” she said.

“I hope it is going to result in licensing, investing, or any other way that moves it forward,” she said. “If it only exists as a paper in my drawer, what good does it do?”