NJ Psychological Association challenges APA Clinical Practice Guideline for the Treatment of PTSD

 

quick takes

The APA guidelines can be found here.

From: Charity Wilkinson <wilkinson.charity@gmail.com>

Subject: [abct-members] APA PTSD Clinical Practice Guideline Being Questioned by NJPA

Date: December 22, 2017 at 7:44:44 PM CST

To: ABCT Member List <abct-members@lists.abct.org>

Reply-To: ABCT Member List <abct-members@lists.abct.org>

Dear Colleagues,

I’m writing to bring to your attention that the NJ Psychological Association issued a statement today indicating that they sent a message to the APA expressing concern about the Clinical Practice Guideline for the Treatment of PTSD. This action was taken after a group of over 75 psychologists in NJ signed a letter opposing the Guideline. Though many of us sent statements to the NJPA supporting the Guideline, our statements were ignored.

The NJPA’s statement advocates for psychologists practicing from psychodynamic and other orientations who believe that their work has been wrongfully excluded. They have indicated that they fear the loss of their livelihood, insurance companies declining to fund their work, and the loss of the opportunity for clients to receive psychodynamic and other treatments that were not included. The statement also suggests that all treatments yield results and that RCTs should not have been as strongly considered in the development of the Guideline.

I would ask that ABCT members and perhaps leadership create a statement in support of the APA PTSD Guideline.

Thank you for your consideration.

Sincerely,

Charity Wilkinson-Truong

This is why APA has been so reluctant to take a stand and set guidelines about what is evidence-based psychotherapy and what is not.

See my post of a while ago (2012)

Troubles in the Branding of Psychotherapies as “Evidence Supported”

 

A meta analysis of interventions training non-mental health professionals to deal with mental health problems

quick takes

When garbage in isn’t necessarily garbage out.

I often see statements that meta-analyses can be no better than the literature on which they draw. The point is often underscored with something like “garbage in, garbage out” (GIGO).

I certainly agree that we should not conduct synthetic meta-analyses aimed at isolating a single effect size when the available literature consists of small, similarly flawed studies. Yet that is what is done entirely too often.

Many Cochrane Collaboration reviews depend on only a small handful of randomized controlled trials. The reviews typically acknowledge the high risk of bias, but policymakers, clinicians, and the advocates for the treatments that are reviewed seize on the encouraging effect sizes, ignoring the limited quantity and quality of evidence.

In part that’s a failure of Cochrane, but in part it also reflects how hungry consumers are for confident reviews of the literature, even when such reassurance is just not possible.

I think a meta-analysis of a literature characterized by mediocre studies can be valuable if it doesn’t attempt to provide an overall effect size, but only to identify the ways in which the current literature is bad and how it should be corrected. These are analytic or diagnostic meta-analyses, with the diagnostic assessment applied to the adequacy of the existing literature and how it can be improved.

That’s why I think a recent review of anti-stigma and other mental health training programs for non-mental health professionals is so valuable.

Booth A, Scantlebury A, Hughes-Morley A, Mitchell N, Wright K, Scott W, McDaid C. Mental health training programmes for non-mental health trained professionals coming into contact with people with mental ill health: a systematic review of effectiveness. BMC Psychiatry. 2017 May 25;17(1):196.

The review is explicit about the low quality of the literature, pointing out that most studies don’t even evaluate the key question of whether people with mental health problems who come into contact with the trained professionals benefit from what are often expensive intervention programs.

The review also points out that many studies use idiosyncratic and inadequately validated outcome measures to assess whether interventions work. That is inexcusable, because the existing literature, along with validated measures of effects on the target population, is readily available to those designing such studies.

At best, these intervention programs only have short-term benefits for the attitudes of the professionals receiving them, with little assessment of long-term benefits or of impact on the target population. This is hardly a recommendation for large-scale programs without better evidence that they work.

Agencies funding expensive intervention programs often require evaluation components. Too bad that those conducting such programs don’t fulfill their responsibility to provide an adequate demonstration that what they are being paid to provide actually works.

We should be quite skeptical of the claims that are made for anti-stigma and other educational programs targeting non-mental health professionals. The burden of proof is on those who market such programs, and the conflict of interest in making extravagant claims should be recognized.

We should get real about the unrealistic assumptions behind such programs. Namely, the programs are predicated on the assumption that we can select a few professionals, expose them to brief interventions with unvalidated content, and expect the professionals to react expertly when suddenly thrown into situations involving persons with mental health problems. The intervention programs are typically too weak and unfocused. The programs don’t prepare professionals very well to respond effectively to unexpected, but fortunately infrequent, encounters in which how they perform is so critically important.

I was once asked to apply for an NIMH grant that would prepare primary care physicians to respond more effectively to older patients who were suicidal, but not expressing their intent directly. I declined to submit an application after I calculated that physicians would encounter such a situation only about once every 18 months. It would take a huge randomized trial to demonstrate any effect. But NIMH nonetheless funded a trial doomed to being uninformative from before it even enrolled the first physician.
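The back-of-the-envelope arithmetic here is easy to reproduce. A minimal power-calculation sketch, using the standard normal approximation for comparing two proportions; the event rates below are illustrative assumptions, not the actual numbers from that proposal:

```python
from statistics import NormalDist
import math

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a difference between
    two proportions (normal approximation, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: a rare outcome occurring in 2% of control-arm patients,
# halved to 1% by the intervention.
print(n_per_arm(0.02, 0.01))
```

With outcomes this rare, even halving the event rate requires a few thousand physicians or patients per arm; that is the arithmetic behind declining to apply.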

What a systematic search yielded and what could be concluded

From 8578 search results, 19 studies met the inclusion criteria: one systematic review, 12 RCTs, three prospective non-RCTs, and three non-comparative studies.

The training interventions identified included broad mental health awareness training and packages addressing a variety of specific mental health issues or conditions. Trainees included police officers, teachers and other public sector workers.

Some short term positive changes in behaviour were identified for trainees, but for the people the trainees came into contact with there was little or no evidence of benefit.

Conclusions

A variety of training programmes exist for non-mental health professionals who come into contact with people who have mental health issues. There may be some short term change in behaviour for the trainees, but longer term follow up is needed. Research evaluating training for UK police officers is needed in which a number of methodological issues need to be addressed.

The studies included in the systematic review were all conducted in the USA. Eight of the 19 primary studies included took place in the USA, three in Sweden, three in England, two in Australia, and one each in Canada, Scotland and Northern Ireland. Participants included teachers, public health professionals, university resident advisors, community practitioners, public sector staff, and case workers. Law enforcement participants were trainee, probationary, university campus, and front line police officers.

The review noted that there isn’t an excuse for poor assessment of outcomes in these programs:

A recent systematic review of the measurement properties of tools measuring mental health knowledge recommends using tools with an evidence base which reach the threshold for positive ratings according to the COSMIN checklist [42].

No, the study didn’t show that ‘mindfulness training doesn’t foster empathy or makes narcissists worse’

quick takes

Many of us have become accustomed to extravagant claims about the benefits of mindfulness that turn out to be based on poor quality studies with inadequate control groups. We become skeptical about what we are told about the benefits of mindfulness. We’ve come to expect a lot of confirmation bias.

Reactions to the study that I am going to be discussing, though, suggest that overexposure to these kinds of studies may create a bias of a different kind. Namely, we may be more accepting of claims of negative effects of mindfulness, even when they come from a poor quality study.

A clickbait headline linking to an article in the British Psychological Society (BPS) Research Digest kept showing up in my Twitter feed.


I might be inclined to believe it without examining the evidence. Why not? It’s not at all clear that there are any specific effects of mindfulness, that is, an active ingredient, beyond the nonspecific (placebo) conditions with which it is administered.

I suspect a lot of people who were retweeting it probably didn’t bother to check whether the article actually made sense.

When I obtained the open access study that had inspired the story in the BPS Research Digest, I could quickly see that the claims were not warranted.

Ridderinkhof A, de Bruin EI, Brummelman E, Bögels SM. Does mindfulness meditation increase empathy? An experiment. Self and Identity. 2017 Jan 3:1-9.

The abstract

Cultivating empathy is a presumed benefit of mindfulness, but this possibility has rarely been investigated experimentally. We examined whether a five-minute mindfulness exercise would cultivate empathy relative to two equally brief control exercises: relaxation and mind-wandering. We further examined whether mindfulness would be especially beneficial for people with autistic or narcissistic traits. Results showed no effect of mindfulness relative to both control conditions on mind reading, empathic responding, or prosocial behavior. Mindfulness effects were independent of autistic traits. Unexpectedly, people higher in autistic traits did show increased prosocial behavior across conditions. Intriguingly, mindfulness improved mind reading in non-narcissistic people, but reduced it in narcissistic people. These findings question whether a brief mindfulness exercise is sufficient for building empathy.

The study found no overall effects of mindfulness on empathy as it was measured in the study. The clickbait headline was based on post hoc subgroup analyses.

When I drilled down into the article itself, I saw that it was not actually conventional mindfulness training that was provided to participants assigned to that condition, but a five-minute analog exercise.

Apparently the five-minute exercise was not very convincing to participants, because those who received it rated it as leaving them less mindful than those receiving a relaxation control manipulation rated themselves after it.

The authors nonetheless provided subgroup analyses organized around two personality variables, which they termed autistic traits and narcissism. Most of these analyses did not produce a significant effect, and some were counterintuitive, but the abstract and the authors’ discussion centered on the few that did.

The authors fell into the trap of being swayed by the mere name of the measure. When administered to a general population sample, the autism spectrum measure does not distinguish people who are more or less likely to exhibit autism spectrum characteristics; similarly for the measure of narcissism. In each case, the authors were interpreting small differences at the low end of the scale as if they were occurring at the high end.

So, the authors tackled a reasonable question about whether mindfulness fosters empathy, but they did so with very weak methods. When they didn’t get positive results, they performed lots of subgroup analyses and cherry-picked a few to overinterpret.
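The statistical hazard of this practice is easy to quantify. With many subgroup tests each run at α = 0.05, the chance of at least one spurious “significant” finding grows quickly; a quick sketch, assuming the tests are independent and all null:

```python
def familywise_error(k, alpha=0.05):
    """Probability of at least one false-positive result across k
    independent tests when the null hypothesis is true for all of them."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(k, round(familywise_error(k), 2))
# With 20 such tests, the chance of a spurious hit is about 0.64.
```

Correlated tests inflate error less than fully independent ones, but the qualitative point stands: run enough subgroup analyses and something will come up “significant.”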

I follow some Twitter accounts because I expect them not only to alert me to findings to which I should pay attention, but also to filter out things I should simply ignore. In this respect, the BPS Research Digest failed me. The clickbait headline was simply misleading, but it did succeed in getting me and others to go to the website. The article acknowledged some of the problems of the study, but seemed to dismiss them. Worse, the BPS article offered causal interpretations of what were undoubtedly cherry-picked, spurious effects.

We’re all suckers for believing effects that can be explained, even when the effects are not there.

Much of the discussion of the “conceptual penis” paper is flaccid

quick takes

The comments are a Quick Take on an article in The [London] Times: Peer review of science is a deeply tainted system– Matt Ridley

A lot of what is being written about “The conceptual penis as a social construct” paper is right-wing, opportunistic, anti-intellectual nonsense. The paper was published in a shoddy journal. While peer review has serious problems, they are not exposed by this paper getting into a low-ranked journal.

Publication of this paper supposedly demonstrates the leftist bias of academia. I don’t get that.

I suppose I would be considered ‘leftist’ for some purposes in some contexts, but it is not a very useful category. To put me in this category lumps me with people with whom I have little in common and separates me from people with whom I have much in common. To suggest that a significant aspect of my behavior is explained by this categorization requires a lot more evidence than simply categorizing me.

The author of the conceptual penis paper is nonetheless quoted in The Times:

“Neither paper would have been published if it had not fitted the prejudices of much of academia: leftist, postmodern, relativist, feminist and moralising. “The academy is overrun by left-wing zealots preaching dangerous nonsense,” says Boghossian. “They’ve taught students to turn off their rational minds and become moral crusaders.”

I think the statement says more about his warped view of academia than about academia.

The Times article nonetheless makes some excellent, quotable statements about the flaws of peer review.

As a system of ensuring quality in research, peer review is in deep trouble. It allows established academics to defend their pet ideas and reward their chums. Its one-sided anonymity, in which the referee retains his anonymity but the author does not, could hardly be better designed to ensure cronyism.

Worse, as a recent report by Donna Laframboise, a Canadian investigative journalist, concluded: “A journal’s decision to publish a paper provides no assurance that its conclusions are sound . . . Fraudulent research makes it past gatekeepers at even the most prestigious journals. While science is supposed to be self-correcting, the process by which this occurs is haphazard and byzantine.”

True, so true. I have spent a lot of my time frustrated by an out-of-control peer review system that resists correction, no matter how egregious the junk is that gets published, and no matter how much hype and misrepresentation of data is exposed.

But these wise observations are misplaced in a discussion of the conceptual penis paper. I would very much welcome these comments expanded in another context.

Please leave Peter Boghossian (aka Peter Boyle, Ed.D) and James Lindsay (aka, Jamie Lindsay, Ph.D.) to work their conceptual penis any way they like. Just look away if you have something better to do. We all do.

Electroconvulsive therapy: a crude, controversial out-of-favor treatment?

 

quick takes

Recently there was a debate on Twitter about the safety and efficacy of an important treatment, electroconvulsive therapy (ECT). Maybe on Twitter everybody is entitled to their opinion, but I think it is our right and responsibility, when medically important information is being discussed, to challenge opinions expressed without substantiation. We do it all the time with quack medical advice and self-help gurus; we should do it more often with advice offered to vulnerable persons facing difficult medical decisions.


This is the first of a series of quick digests of the literature concerning electroconvulsive therapy. It’s intended to aid mental health professionals having discussions with seriously depressed persons and their families concerning treatment options.

In the Twitter discussion, we saw far too many clinicians and persons posing as clinicians offering unsubstantiated opinions. Some were simply well-meaning, but ill informed. Some were quacks. Others had an ideological commitment that they wanted to promote, but not expose to debate. We should call them out.

A lot of what I will be offering over coming posts about ECT will be current and classic literature. However, I will try to include more accessible sources as well. Where possible, I will link the two.

In this blog post I start with an excerpt from an article in The Conversation that is available for reposting and excerpting under its Creative Commons license. The article’s author, George Kirov, is a Clinical Professor at Cardiff University who supervises the delivery of ECT in Cardiff.

Electroconvulsive therapy does work – and it can be miraculous

The use of electroconvulsive therapy (ECT) to treat psychiatric disorders is on the rise in England, according to a new report in the Guardian. There was an 11% rise in the number of procedures performed on the NHS between 2012-13 and 2015-16.

ECT involves passing an electric current through the head of an anaesthetised patient. The aim is to produce an epileptic fit. It is used mostly to treat severe or treatment-resistant depression, but it can also have beneficial effects in some cases of mania and schizophrenia.

Its therapeutic effect was discovered in 1938. Today, it remains the most effective treatment for severe depression. Yet, for some reason, it is always presented in a negative light. Not least in The Guardian’s latest report where it is described as “a crude, controversial treatment, which fell sharply out of favour around the turn of the millennium”. Cue the inevitable debate about the treatment.

Although presented as “exclusive data” in the Guardian, the authors largely reiterate the data collected by the body that monitors ECT in the UK: the ECT Accreditation Service (ECTAS). The data is freely available on the Royal College of Psychiatrists website and counted 2,148 courses of ECT given during 2014-2015.

A quick glance through the ECTAS document can tell us a lot about the nature of the illnesses treated with ECT and the remarkable outcomes: 51.7% of people were rated as “severely ill” and another 18.7% as “among the most severely ill” prior to ECT. At the end of their treatment, however, 74.4% were “much improved/very much improved”, while only 1.7% had deteriorated. This is a treatment reserved for the most severely depressed patients, and it produces unrivalled improvements. Despite this, it is still a treatment that has its passionate opponents.

What does the evidence show?

Let us consider some of the arguments of the opponents. Speaking to The Guardian, Richard Bentall, a professor of clinical psychology at the University of Liverpool, said he “doesn’t believe that there are adequate clinical trials of ECT to establish its effectiveness” and that the design of trials had not been “up to scratch”. In other words, we are not sure that ECT works.

But there have been plenty of trials. A review in The Lancet listed the various ways ECT had been tested over the years. ECT has been compared with simulated ECT (six trials, all favouring real ECT). ECT has been compared with drugs in 13 trials (11 favoured ECT). Bilateral ECT was more effective than unilateral (that is, treatments given to the whole brain are more effective than those given to half of the brain). And, finally, six trials that compared higher electric charges with lower electric charges found that higher charges produced greater improvements.

Still, every few years the opponents of ECT demand more evidence. In response to such demands, a large study was conducted in the US (the CORE report on 253 patients) and the results were published in 2004. The study set the bar for improvement very high: it required depressed patients to have almost no symptoms on two consecutive measurements at the end of the treatment period. Three-quarters of patients reached those remission criteria. No other treatment in psychiatry has come even close to such effects.

I suspect that the opponents of ECT will still reject the evidence from new trials – after all, one can find something “not up to scratch” with anything, if one has already formed a strong belief. Perhaps such people might be persuaded if they go to an ECT clinic and witness one of the miraculous changes that can occur there. I do this with medical students who come to observe one session of ECT, as part of their education.

Every few weeks, we have a patient who enters the treatment room mumbling incoherently, or telling us that they are a sinner deserving to be punished, or complaining that they have no intestines or some other vital body part or function. And, after a single bout of ECT, while still in the recovery room, some of these patients start talking coherently and change the topic away from their tormenting delusions. The students come back, after exchanging a few words with the patients, with their jaws dropped and a sense of disbelief in their eyes. This does not happen every day, and usually takes more than one session, but you only need to see it once to remember forever that ECT does work.

A now classic article cited in The Conversation

UK ECT Review Group. Efficacy and safety of electroconvulsive therapy in depressive disorders: a systematic review and meta-analysis. The Lancet. 2003 Mar 8;361(9360):799-808.

A free summary of it in Database of Abstracts of Reviews of Effects (DARE): Quality-assessed Reviews [Internet].

Results of the review

Seventy-three RCTs were included in the review. In addition, four cohort studies and three observational studies were also identified.

The authors noted that a meta-analysis of the data on short-term efficacy from RCTs was possible. Real ECT was significantly more effective than simulated ECT in reducing depressive symptoms (6 RCTs, n=256); the standardised effect size (SES) was -0.91 (random-effects) (95% confidence interval, CI: -1.27, -0.54). At 6 months, no significant difference was noted. There was no significant difference between the ECT and simulated ECT for premature discontinuation (3 trials). No deaths were reported.

Treatment with ECT was significantly more effective than pharmacotherapy in reducing depressive symptoms (n=1,144; SES -0.80, 95% CI: -1.29, -0.29). Discontinuation was common in both groups, but significantly lower in the ECT arm (risk difference -0.06, 95% CI: -0.09, -0.03). Four trials in this group had discontinuations in the pharmacotherapy arm only. One trial reported a death in each group.

Bilateral ECT was more effective than unilateral ECT (n=1,408; SES -0.32, 95% CI: -0.46, -0.19). Six trials reported that times to orientation were longer for patients treated with bilateral ECT than for those treated with unilateral ECT. Four trials reported the results from testing retrograde memory within a week of the end of the course of ECT.

Observational studies: four non-randomised cohort studies were found, of which three reported lower overall mortality in patients treated with ECT and one showed no difference. Funnel plots did not reveal any publication bias.

Authors’ conclusions

ECT was shown to be an effective short-term treatment for depression and is probably more effective than drug therapy. Bilateral ECT is moderately more effective than unilateral ECT, while high-dose ECT is more effective than low-dose ECT.

The DARE summary includes evaluation of the quality of the evidence that is reviewed and the quality with which it is reviewed.
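The pooled standardised effect sizes quoted above come from random-effects meta-analysis; a minimal DerSimonian-Laird sketch shows the mechanics. The per-trial effects and standard errors below are invented for illustration, not the Lancet review’s data:

```python
import math
from statistics import NormalDist

def dersimonian_laird(effects, std_errs):
    """Random-effects pooled estimate and 95% CI (DerSimonian-Laird)."""
    w = [1 / se ** 2 for se in std_errs]                # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-trial variance
    wr = [1 / (se ** 2 + tau2) for se in std_errs]      # random-effects weights
    pooled = sum(wi * y for wi, y in zip(wr, effects)) / sum(wr)
    se_pooled = math.sqrt(1 / sum(wr))
    z = NormalDist().inv_cdf(0.975)
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

# Six hypothetical trials favouring real over simulated ECT
# (negative SES = real ECT better, as in the review's convention).
effects = [-1.2, -0.8, -0.5, -1.0, -0.7, -1.3]
std_errs = [0.30, 0.25, 0.35, 0.40, 0.30, 0.45]
est, ci = dersimonian_laird(effects, std_errs)
print(round(est, 2), [round(x, 2) for x in ci])
```

The point worth noticing is that the pooled estimate and its confidence interval are only as trustworthy as the per-trial inputs; six small trials with correlated flaws yield a precise-looking summary of a weak literature.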

A classic article by Max Fink, whose international reputation is grounded in evidence-based appraisals of ECT research, including his own citation classics.

Fink M, Taylor MA. Electroconvulsive therapy: evidence and challenges. JAMA. 2007 Jul 18;298(3):330-2.

Remission Efficacy for Depressive Illness. Many studies documenting the efficacy of ECT for depressive illness have been published,3 finding ECT superior to “sham” ECT and to medications in the treatment of patients with severe depressive illness. Two multisite collaborations—the Consortium for Research in ECT (CORE)4 and Columbia University Consortium (CUC)2—studies are illustrative. Both were designed to examine relapse prevention after successful ECT involving patients with major unipolar depression. The 2 patient groups were similar in mean age (55 and 59 years), sex ratio (70% female), and pretreatment severity (mean Hamilton Depression Scale scores, about 34). Index episode duration was 24 to 31 weeks (CUC study) and 45 to 49 weeks (CORE study). At remission, the mean Hamilton scores were 5 to 6 (±3). Remission rates were 55% (159 of 290 patients completing the CUC study) and 86% (341 of 394 patients completing the CORE study). These results compare favorably to the initial 30% remission rate with citalopram and the remission rates of about 23% with bupropion, 21% with sertraline, and 25% with venlafaxine for patients who did not respond to citalopram in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial of outpatients with nonpsychotic major unipolar depression.5

A recent large scale study of inpatient mortality

Liang CS, Chung CH, Tsai CK, Chien WC. In-hospital mortality among electroconvulsive therapy recipients: A 17-year nationwide population-based retrospective study. European Psychiatry. 2017 May 31;42:29-35.

ECT recipients had lower odds of in-hospital mortality than did those who did not receive ECT.

METHODS:

Using data from the Taiwan National Health Insurance Research Database from 1997 to 2013, we identified 828,899 inpatients with psychiatric conditions, among whom 0.19% (n=1571) were treated with ECT.

RESULTS:

We found that ECT recipients were more frequently women, were younger and physically healthier, lived in more urbanized areas, were treated in medical centers, and had longer hospital stays. ECT recipients had lower odds of in-hospital mortality than did those who did not receive ECT. Moreover, no factor was identified as being able to predict mortality in patients who underwent ECT. Among all patients, ECT was not associated with in-hospital mortality after controlling for potential confounders.

CONCLUSION:

ECT was indicated to be safe and did not increase the odds of in-hospital mortality. However, ECT appeared to be administered only on physically healthy but psychiatrically compromised patients, a pattern that is in opposition with the scientific evidence supporting its safety. Moreover, our data suggest that ECT is still used as a treatment of last resort in the era of modern psychiatry.

I will soon be offering e-books providing skeptical looks at mindfulness and positive psychology, as well as scientific writing courses on the web, as I have been doing face-to-face for almost a decade.

Sign up at my new website to get advance notice of the forthcoming e-books and web courses, as well as upcoming blog posts at this and other blog sites. Lots to see at CoyneoftheRealm.com.

Is something rotten in brain stimulation research?

quick takes

What other areas of neuroscience are vulnerable to these possible problems?

Or areas of psychology dominated by underpowered studies with considerable investigator flexibility in the conduct of the study and the reporting of results?

 

This post is a Quick Take on a fascinating blog post by Neurocopiae that I just read. I was motivated to pursue some of the pseudonymous blogger’s links into the open access articles that were referenced. You may be too. I also highly recommend subscribing to this blogger, as I do.

The original blog post cites some articles about transcranial direct current stimulation (tDCS). The results of these articles can only be viewed as suggestive, not definitive, in identifying what could be pervasive, serious problems in the tDCS literature. Taken together, though, these articles could be indicative of something rotten going on in this literature that is also going on in other neuroscience literatures. But why stop there? I think there is food for thought here worthy of further discussion and broader testing in other areas of research.

Who or what is Neurocopiae?

Neurocopiae is about the science and art of today’s fMRI research. Content is captured through the lens of a junior faculty scientist before it is heavily preprocessed so that everyone can enjoy the beauty of the blobs. Opinions expressed @neurocopiae tend to be at the upper end of the reproducibility scale.

The post.

Amping up control? Bad research practices and poor reliability raise concerns about brain stimulation

It hasn’t been a very good week for proponents of the popular brain stimulation method called transcranial direct current stimulation (tDCS). tDCS is a non-invasive technique that uses electrodes to deliver weak current to a person’s forehead. Numerous papers have claimed that tDCS can enhance mood, alleviate pain, or improve cognitive function. Such reports have sparked interest in tDCS at a broader scale. When you enter tDCS in the youtube search, you will find DIY tutorials on how to assemble a device so that you can amp up your brain at home. Including enthusiastic reports of the resulting changes in brain function. To put it in Richard Dawkins’ words: Science? It works, bitches. In particular, it works when you know what the outcome should be.

Great quote from a linked Science news piece in the blog.

Traced back to Science, it is even pithier in context:

The tDCS field is “a sea of bullshit and bad science—and I say that as someone who has contributed some of the papers that have put gas in the tDCS tank,” says neuroscientist Vincent Walsh of University College London. “It really needs to be put under scrutiny like this.”

The PLOS ONE article that is cited.

Questionable science and reproducibility in electrical brain stimulation research

We invited 976 researchers to complete an online survey. We also audited 100 randomly-selected published EBS papers. A total of 154 researchers completed the survey. Survey respondents had a median of 3 [1 to 6, IQR] published EBS papers (1180 total) and 2 [1 to 3] unpublished ones (380 total). With anodal and cathodal EBS, the two most widely used techniques, 45–50% of researchers reported being able to routinely reproduce published results. When asked about how study sample size was determined, 69% of respondents reported using the sample size of published studies, while 61% had used power calculations, and 32% had based their decision on pilot data. In contrast, our audit found only 6 papers where power calculations were used and a single paper in which pilot data was used. When asked about questionable research practices, survey respondents were aware of other researchers who selectively reported study outcomes (41%) and experimental conditions (36%), adjusted statistical analysis to optimise results (43%), and engaged in other shady practices (20%). Fewer respondents admitted to engaging in these practices themselves, although 25% admitted to adjusting statistical analysis to optimize results. There was strong agreement that such practices should be reported in research papers; however, our audit found only two such admissions. The present survey confirms that questionable research practices and poor reproducibility are present in EBS studies. The belief that EBS is effective needs to be replaced by a more rigorous approach so that reproducible brain stimulation methods can be devised and applied.

The NeuroImage study.

Test-retest reliability of prefrontal transcranial Direct Current Stimulation (tDCS) effects on functional MRI connectivity in healthy subjects

This is the first study investigating the test-retest reliability of prefrontal tDCS-induced resting-state functional-connectivity (RS fcMRI) modulations.

Analyses of individual RS-fcMRI responses to active tDCS across three single sessions revealed no to low reliability, whereas reliability of RS-fcMRI baselines and RS-fcMRI responses to sham tDCS was low to moderate.

Back to the Neurocopiae blog post.

Amping up control? Bad research practices and poor reliability raise concerns about brain stimulation

But how often do we see comprehensive assessments of the reliability of new techniques in human neuroscience at all? These studies are commonly seen as less interesting and they won’t help you get a paper in a fancy journal unless you trash the whole field. If you only trash the reliability of your newly developed paradigm or method, it might be difficult to claim that the effects somewhere around the edge of the significance threshold truly warrant any enthusiastic conclusions that attract future citations.

Still, in differential psychology, everyone would expect you to conduct and report such basic assessments first before you make any strong claim about individual differences. My take-home from the tDCS story is that basic checks of the reliability of every method in neuroscience should always come first and that they should be reported as well. They are important. Now we have kids building tDCS devices based on youtube tutorials and we know very little about what it does and what it does not do reliably. The call for more focus on reliability of outcomes might seem trivial, but think of your favorite fMRI studies and how often you have seen a discussion of the reliability of the method itself. It’s not a lot.
