NJ Psychological Association challenges APA Clinical Practice Guideline for the Treatment of PTSD

 


The APA guidelines can be found here.

From: Charity Wilkinson <wilkinson.charity@gmail.com>

Subject: [abct-members] APA PTSD Clinical Practice Guideline Being Questioned by NJPA

Date: December 22, 2017 at 7:44:44 PM CST

To: ABCT Member List <abct-members@lists.abct.org>

Reply-To: ABCT Member List <abct-members@lists.abct.org>

Dear Colleagues,

I’m writing to bring to your attention that the NJ Psychological Association issued a statement today indicating that it sent a message to the APA expressing concern about the Clinical Practice Guideline for the Treatment of PTSD. This action was taken after a group of over 75 psychologists in NJ signed a letter opposing the Guideline. Though many of us sent statements to the NJPA supporting the Guideline, our statements were ignored.

The NJPA’s statement advocates for psychologists practicing from psychodynamic and other orientations who believe that their work has been wrongfully excluded. They have indicated that they fear the loss of their livelihoods, insurance companies declining to fund their work, and the loss of opportunities for clients to receive psychodynamic and other treatments that were not included. The statement also suggests that all treatments yield results and that RCTs should not have been as strongly considered in the development of the Guideline.

I would ask that ABCT members and perhaps leadership create a statement in support of the APA PTSD Guideline.

Thank you for your consideration.

Sincerely,

Charity Wilkinson-Truong

This is why APA has been so reluctant to take a stand and set guidelines about what is evidence-based psychotherapy and what is not.

See my post from a while ago (2012):

Troubles in the Branding of Psychotherapies as “Evidence Supported”

 

How APA’s rating of acceptance and commitment therapy for psychosis got downgraded from “strong” to “modest” efficacy

A few years ago my blogging caused a downgrading of ACT for psychosis that stuck. This shows the meaninglessness of APA ratings of psychotherapies as evidence-supported.

Steve Hayes came into my Twitter feed urging me to take a fresh look at the evidence for the efficacy of acceptance and commitment therapy (ACT).

I clicked on the link he provided and I was underwhelmed.

I was particularly struck by the ratings of ACT by the American Psychological Association Division 12.

I also noticed that ACT for psychosis was still rated only modestly supported.

A few years ago ACT for psychosis was rated “strongly supported.” That rating was immediately downgraded to “modestly supported” after I exposed a single study as having been p-hacked, in a series of blog posts and in discussions on Facebook.

That incident sheds light on the invalidity of ratings by the American Psychological Association Division 12 of the evidence-supported status of therapies.

Steve Hayes’ Tweet


Clicking on the link Hayes provided took me to

State of the ACT Evidence, a page maintained by the ACBS (Association for Contextual Behavioral Science).

The APA ratings were prominently displayed above a continuously updated list of reviews and studies.

American Psychological Association, Society of Clinical Psychology (Div. 12), Research Supported Psychological Treatments:

Chronic Pain – Strong Research Support
Depression – Modest Research Support
Mixed anxiety – Modest Research Support
Obsessive-Compulsive Disorder – Modest Research Support
Psychosis – Modest Research Support
For more information on what the “modest” and “strong” labels mean, click here

Only ACT for Chronic Pain was rated as having strong support. But that rating seemed to be contradicted by the newest systematic review that was listed:

Simpson PA, Mars T, Esteves JE. A systematic review of randomised controlled trials using Acceptance and commitment therapy as an intervention in the management of non-malignant, chronic pain in adults. International Journal of Osteopathic Medicine. 2017 Jun 30;24:18-31.

That review was unable to provide a meta-analysis because of the poor quality of the 10 available studies and their heterogeneity.

My previous complaints about how the evidence for treatments is evaluated by APA

Professional groups such as the American Psychological Association Division 12 and governmental organizations such as the US Substance Abuse and Mental Health Services Administration (SAMHSA) apply low thresholds when declaring treatments to be “evidence-supported.” Seldom are any treatments deemed ineffective or harmful by these groups.

Professional groups have conflicts of interest in wanting their members to be able to claim the treatments they practice are evidence-supported, while not wanting to restrict practitioner choice by labeling treatments as ineffective. Other sources of evaluation, like SAMHSA, depend heavily and uncritically on what promoters of particular psychotherapies submit in applications for “evidence-supported” status.

My account of how my blogging precipitated a downgrading of ACT for psychosis

Now you see it, now you don’t: “Strong evidence” for the efficacy of acceptance and commitment therapy for psychosis

On September 3, 2012, the APA Division 12 website announced a rating of “strong evidence” for the efficacy of acceptance and commitment therapy for psychosis. I was quite skeptical. I posted links on Facebook and Twitter to a series of blog posts (1, 2, 3) in which I had previously debunked the study claiming to demonstrate that a few sessions of ACT significantly reduced rehospitalization of psychotic patients.

David Klonsky, a friend on FB who maintains the Division 12 treatment website, quickly contacted me and indicated that he would reevaluate the listing after reading my blog posts and that he had already contacted the section editor to get her evaluation. Within a day, the labeling was changed to “designation under re-review as of 9/3/12,” and it is now (10/16/12) “modest research support.”

My exposure of a small but classic study of ACT for psychosis as having been p-hacked

The initial designation of ACT as having “strong evidence” for psychosis was based mainly on a single, well-promoted study, claims for which made it all the way to Time magazine when it was first published.

Bach, P., & Hayes, S.C. (2002). The use of acceptance and commitment therapy to prevent the rehospitalization of psychotic patients: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 70, 1129-1139.

Of course, the designation of strong evidence requires support from two randomized trials, but the second trial was a modest attempt at replication of this study and was explicitly labeled as a pilot study.

The Bach and Hayes article has been cited 175 times as of 10/21/12 according to ISI Web of Science, mainly for claims that appear in its abstract: patients receiving up to four sessions of an ACT intervention had “a rate of rehospitalization half that of TAU [treatment as usual] participants over a four-month follow-up [italics added].” This would truly be a powerful intervention, if these claims were true. And my check of the literature suggests that these claims are almost universally accepted. I’ve never seen any skepticism expressed in peer-reviewed journals about the extraordinary claim of cutting rehospitalization in half.

  • It is not clear that rehospitalization was originally set as the primary outcome, so there is a possible issue of a shifting primary outcome, a common tactic in repackaging a null trial as positive. Many biomedical journals require that investigators publish their protocols with a designated primary outcome before they enter the first patient into a trial; that is a strictly enforced requirement for later publication of the trial’s results. But this is not yet usually done for RCTs testing psychotherapies. The article is based on a dissertation. I retrieved a copy and found that its title seemed to suggest that symptoms, not rehospitalization, were the primary outcome: Acceptance and Commitment Therapy in the Treatment of Symptoms of Psychosis.
  • Although 40 patients were assigned to each group, the analyses involved only 35 per group. The investigators simply dropped from the analyses patients with negative outcomes that are arguably at least as serious as rehospitalization: committing suicide or going to jail. Think about it: what should we make of a therapy that prevented rehospitalization but led to the jailing and suicide of mental patients? This is not only a departure from intention-to-treat analysis; the loss of patients is nonrandom and potentially quite relevant to the evaluation of the trial. Excluding these patients has a substantial impact on the interpretation of results: the 5 patients missing from the ACT group represented 71% of the reported rehospitalizations in that group, and the 5 patients missing from the TAU group represented 36% of the reported rehospitalizations in that group (see the back-of-envelope sketch after this list).
  • Rehospitalization is not a typical primary outcome for a psychotherapy study. But if we suspend judgment for a moment as to whether it was the primary outcome for this study, ignore the lack of intention-to-treat analysis, and accept 35 patients per group, there is still not a simple, significant difference between groups for rehospitalization. The claim of “half” is based on voodoo statistics.
  • The trial did assess the frequency of psychotic symptoms, an outcome closer to what one would rely on to compare this trial with the results of other interventions. Yet oddly, patients receiving the ACT intervention actually reported symptoms at twice the frequency of patients in TAU. The study also assessed how distressing hallucinations or delusions were to patients, which would be considered a patient-oriented outcome, but there were no differences on this variable. One would think that these outcomes would be very important to clinical and policy decision-making, and these results are not encouraging.
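To make the arithmetic in the bullets above concrete, here is a minimal back-of-envelope sketch in Python. The counts of 7 and 14 reported rehospitalizations are my own inference from the 71% and 36% figures quoted above, and the use of Fisher’s exact test is likewise my assumption; neither is taken directly from the Bach and Hayes report.

```python
# Back-of-envelope check of the rehospitalization numbers discussed above.
# The counts of 7 (ACT) and 14 (TAU) reported rehospitalizations are inferred
# from the post's "71%" and "36%" figures (5/7 ~ 71%, 5/14 ~ 36%); they are
# assumptions, not figures quoted from Bach & Hayes (2002).
from scipy.stats import fisher_exact

n_randomized = 40                      # patients randomized per arm (per the post)
n_analyzed = 35                        # patients actually analyzed per arm (per the post)
dropped = n_randomized - n_analyzed    # 5 per arm dropped (suicide or jail)

rehosp_act, rehosp_tau = 7, 14         # inferred reported rehospitalizations

# As-reported rates: roughly 20% vs 40%, the basis of the "half" claim.
print(rehosp_act / n_analyzed, rehosp_tau / n_analyzed)

# A simple 2x2 test (rehospitalized vs not) on the reported counts.
table = [[rehosp_act, n_analyzed - rehosp_act],
         [rehosp_tau, n_analyzed - rehosp_tau]]
_, p_value = fisher_exact(table)
print(p_value)  # exceeds .05 with these assumed counts

# Crude sensitivity check: count the dropped patients as equally bad outcomes.
# The gap narrows to roughly 30% vs 47.5% of all randomized patients.
print((rehosp_act + dropped) / n_randomized, (rehosp_tau + dropped) / n_randomized)
```

This does not reproduce the trial’s actual analysis; it only illustrates how sensitive the “half” claim is to which patients get counted.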

Another study, which has been cited 64 times [at the time] according to ISI Web of Science, rounded out the pair needed for a designation of strong support:

Gaudiano, B.A., & Herbert, J.D. (2006). Acute treatment of inpatients with psychotic symptoms using acceptance and commitment therapy: Pilot results. Behaviour Research and Therapy, 44, 415-437.

Appropriately framed as a pilot study, this study started with 40 patients and delivered only three sessions of ACT. The comparison condition was enhanced treatment as usual consisting of psychopharmacology, case management, and psychotherapy, as well as milieu therapy. Follow-up data were available for all but 2 patients. But this study is hardly the basis for rounding out a judgment of ACT as efficacious for psychosis.

  • There were assessments with multiple conventional psychotic symptom and functioning measures, as well as ACT-specific measures. The only conventional measure to achieve significance was distress related to hallucinations, and there were no differences on the ACT-specific measures. There were no significant differences in rehospitalization.

  • The abstract puts a positive spin on these findings: “At discharge from the hospital, results suggested short-term advantages in affective symptoms, overall improvement, social impairment, and distress associated with hallucinations. In addition, more participants in the ACT condition reached clinically significant symptom improvement at discharge. Although four-month rehospitalization rates were lower in the ACT group, these differences did not reach statistical significance.”

I noted at the time:

The provisional designation of ACT as having strong evidence of efficacy for psychosis could have had important consequences. Clinicians and policymakers could decide that merely providing three sessions of ACT is a sufficient and empirically validated approach to keep chronic mental patients from returning to the hospital and maybe even make discharge decisions based on whether patients had received ACT. But the evidence just isn’t there that ACT prevents rehospitalization, and when the claim is evaluated against what is known about the efficacy of psychotherapy for psychotics, it appears to be an unreasonable claim bordering on the absurd.

 

 

Can mental health professionals distinguish between washed and unwashed brains? The case of Patty Hearst revisited

Can mental health professionals serve as expert witnesses in determining whether those accused of serious crimes have washed or unwashed brains?

Can mental health professionals ethically deprogram young adults whose parents kidnap them away from political or religious groups they have joined?

The answers we would now give to these questions are probably very different from the resounding “yes” I heard when I was an assistant professor in the Department of Psychology at UC Berkeley with a charismatic colleague, Margaret Singer.

The questions are revived in a book about Patty Hearst by Jeffrey Toobin, American Heiress. Very uncontroversial “no” and “no” answers are provided from the perspective of the 21st century.

Yup, Toobin is the lawyer delivering well-reasoned opinions on CNN about the constitutionality of actions of US President Donald Trump.

Two years before I joined the faculty, Patty Hearst was a Berkeley sophomore living with Steven Weed, a teaching assistant whom she had first met when he was her math teacher at a Catholic boarding school and she was a 15-year-old student. Hearst was kidnapped from the apartment she shared with Weed by members of the Symbionese Liberation Army, a radical group with an incoherent ideology.

In the 19 months following her abduction, Hearst was seen participating in bank robberies in which innocent persons died.

As Toobin was quoted in a great NPR interview:

“If you look at her actions … over the following year, you see the actions of a revolutionary, not a victim…There was some glamour to what she was doing, the swagger of wearing berets, of carrying machine guns — the romance of revolution was an undeniable part of the appeal of the SLA.”

Hearst was eventually captured by the FBI, convicted of bank robbery and sentenced to seven years in federal prison. She served 22 months before President Jimmy Carter commuted her sentence. Later, President Bill Clinton pardoned her.

Toobin calls the presidential actions on Hearst’s behalf an example of “wealth and privilege in action.”

“The fact that she got these two presidential gestures of forgiveness is the purest example of privilege on display that frankly I have ever seen in the criminal justice system,” Toobin says.

Deprogramming, according to Wikipedia:

Refers to coercive measures to force[1] a person in a controversial belief system to change those beliefs and abandon allegiance to the religious, political, economic, or social group associated with the belief system.[2][3] Methods and practices of self-identified “deprogrammers” have involved kidnapping, false imprisonment, and coercion,[4] and sometimes resulted in criminal convictions of the deprogrammers.[5][6] Classic deprogramming regimens are designed for individuals taken against their will, which has led to controversies over freedom of religion, kidnapping, and civil rights, as well as the violence which is sometimes involved.

As a technique, the deprogramming that has been practiced over the last half century has been typically commissioned by relatives, often parents of adult offspring, who objected to the subject’s membership in an organization or group. It has been compared to exorcisms in both methodology and manifestation,[8] and the process sometimes has been performed with tacit support of law enforcement and judicial officials.[9][10] In response to a burgeoning number of new religious movements in the 1970s in the United States, the “father of deprogramming”, Ted Patrick, introduced many of these techniques to a wider audience as a means to combat cults.[11][12] Since then, deprogrammings have been carried out “by the thousands”.[10] For example, various atrocity stories served as justification for deprogramming of Unification Church members in the USA.[13]

As a technique for encouraging people to disassociate with groups with whom they have as consenting adults chosen to associate, deprogramming is a controversial practice. Even some cult critics have denounced it on legal and ethical grounds.[14] Similar actions, when done without force, have been referred to as “exit counseling“. Sometimes the word deprogramming is used in a wider (and/or ironic or humorous sense), to mean the freeing of someone (often oneself) from any previously uncritically assimilated idea. According to Carol Giambalvo, “exit counsellors are usually former cult members themselves”.[15]

Various academics have commented on the practice. For example, as defined by James T. Richardson, UNLV Professor of Sociology and Judicial Studies and Director of the Grant Sawyer Center for Justice Studies, deprogramming is a “private, self-help process whereby participants in unpopular new religious movements (NRMs) were forcibly removed from the group, incarcerated, and put through radical resocialization processes that were supposed to result in their agreeing to leave the group.”[16] Law professor Douglas Laycock, author of Religious Liberty: The free exercise clause, wrote:

Beginning in the 1970s, many parents responded to the initial conversion with “deprogramming.” The essence of deprogramming was to physically abduct the convert, isolate him and physically restrain him, and barrage him with continuous arguments and attacks against his new religion, threatening to hold him forever until he agreed to leave it.[17]

Lawyer John LeMoult, writing in a law review journal, described such practices as the person subject to deprogramming being “seized, held against his will, subjected to mental, emotional, and even physical pressures until he renounces his beliefs”, and compared this power to that of Nazis over their prisoners.[18] Legal scholar Dean M. Kelley called deprogramming “protracted spiritual gang-rape”.[19]

After a fascinating discussion of the procedures involved in deprogramming, Wikipedia continues:

Sociologist Eileen Barker wrote in Watching for Violence:

“Although deprogramming has become less violent in the course of time … Numerous testimonies by those who were subjected to a deprogramming describe how they were threatened with a gun, beaten, denied sleep and food and/or sexually assaulted. But one does not have to rely on the victims for stories of violence: Ted Patrick, one of the most notorious deprogrammers used by CAGs (who has spent several terms in prison for his exploits) openly boasts about some of the violence he employed; in November 1987, Cyril Vosper, a Committee member of the British cult-awareness group, FAIR, was convicted in Munich of “causing bodily harm” in the course of one of his many deprogramming attempts; and a number of similar convictions are on record for prominent members of CAGs elsewhere.”

In Colombrito vs. Kelly, the Court accepted the definition of deprogramming by J. Le Moult published in 1978 in the Fordham Law Review:

“Deprogrammers are people who, at the request of a parent or other close relative, will have a member of a religious sect seized, then hold him against his will and subject him to mental, emotional, and even physical pressures until he renounces his religious beliefs. Deprogrammers usually work for a fee, which may easily run as high as $25,000. The deprogramming process begins with abduction. Often strong men muscle the subject into a car and take him to a place where he is cut off from everyone but his captors. He may be held against his will for upward of three weeks. Frequently, however, the initial deprogramming only last a few days. The subject’s sleep is limited and he is told that he will not be released until his beliefs meet his captors’ approval. Members of the deprogramming group, as well as members of the family, come into the room where the victim is held and barrage him with questions and denunciations until he recants his newly found religion ”

Deprogrammer Carol Giambalvo writes in From Deprogramming to Thought Reform Consultation:

“It was believed that the hold of the brainwashing over the cognitive processes of a cult member needed to be broken – or “snapped” as some termed it – by means that would shock or frighten the cultist into thinking again. For that reason in some cases cult leader’s pictures were burned or there were highly confrontational interactions between deprogrammers and cultist. What was often sought was an emotional response to the information, the shock, the fear, and the confrontation. There are horror stories – promoted most vehemently by the cults themselves – about restraint, beatings, and even rape. And we have to admit that we have met former members who have related to us their deprogramming experience – several of handcuffs, weapons wielded and sexual abuse. But thankfully, these are in the minority – and in our minds, never justified. Nevertheless, deprogramming helped to free many individuals held captive to destructive cults at a time when other alternatives did not seem viable.”

Alan W. Gomes (chairman of the department of theology at Talbot School of Theology, Biola University) in his 2009 book Unmasking the Cults reports:

While advocates of the deprogramming position have claimed high rates of success, studies show that natural attrition rates actually are higher than the success rate achieved through deprogramming.[22]

The Dialog Center International (DCI), a major Christian counter-cult organization founded in 1973 by a Danish professor of missiology and ecumenical theology, Dr. Johannes Aagaard,[23] rejects deprogramming, believing that it is counterproductive, ineffective, and can harm the relationship between a cult member and concerned family members.[24]

Professor of Psychiatry Saul Levine suggests that it is doubtful that deprogramming helps many people and goes on to say that it actually causes harm to the victim by the very nature of deprogramming. For deprogramming to work, the victim must be convinced that they joined a religious group against their will. They then must renounce responsibility and accept that in some mysterious way their minds were controlled.[25] It is Levine’s professional opinion that once deprogrammed, a person would never be certain that they were really doing what they want. He states that deprogramming destroys a person’s identity and is likely to create permanent anxiety about freedom of choice and leave the deprogrammed subject dependent upon the guidance and advice of others. “Fundamentally deprogramming denies choice and creates dependency. It robs people of their sense of responsibility. Instead of encouraging people to accept that they made a mistake, it encourages people to deny their actions and blame others.”[25][26]

David Bromley and Anson Shupe wrote:

Deprogrammers are like the American colonials who persecuted “witches”: a confession, drawn up before the suspect was brought in for torturing and based on the judges’ fantasies about witchcraft, was signed under duress and then treated as justification for the torture.[28]

A number of factors contributed to the cessation of deprogramming:

Some of the deprogrammed adults sued the deprogrammers or the relatives who had hired them. Also in 1987, psychologist Margaret Singer became unusable as an expert witness after the American Psychological Association (APA) rejected her Deceptive and Indirect Methods of Persuasion and Control (DIMPAC) report.[29]

The APA Task Force on Deceptive and Indirect Methods of Persuasion and Control

The task force was formed at the request of the American Psychological Association (APA) in 1983. The APA asked Margaret Singer, one of the leading proponents of theories of coercive persuasion, to chair a task force to:

  1. Describe the deceptive and indirect techniques of persuasion and control that may limit freedom and adversely affect individuals, families, and society.
  2. Review the data base in the field.
  3. Define the implications of deceptive and indirect techniques of persuasion and control for consumers of psychological services.
  4. Examine the ethical, educational, and social implications of this problem.[2]

Before the task force had submitted its final report, the APA submitted an amicus curiae brief (10 February 1987) in a case pending before the California Supreme Court. The case involved issues of brainwashing and coercive persuasion. The brief stated that Singer’s hypotheses “were uninformed speculations based on skewed data.” The APA subsequently withdrew from the brief, portraying its participation as premature in that DIMPAC had not yet submitted its report. (Scholars who had co-signed the brief[3] did not withdraw.)

The task force completed its final report in November 1986. In May 1987 the APA Board of Social and Ethical Responsibility for Psychology (BSERP) rejected the DIMPAC final report; stating that the report “lack[ed] the scientific rigor and evenhanded critical approach necessary for APA imprimatur“, and also stating that the BSERP did “not believe that we have sufficient information available to guide us in taking a position on this issue”.[4] The BSERP board requested that the task-force members not distribute or publicize the report without indicating that the Board found the report unacceptable, and cautioned the members of the task force against using their past appointment to it “to imply BSERP or APA support or approval of the positions advocated in the report”.[5]

Singer and her professional associate, sociologist Richard Ofshe, sued the APA in 1992 for “defamation, frauds, aiding and abetting and conspiracy” and lost in 1994. Subsequently, judges did not accept Singer as an expert witness in cases alleging brainwashing and mind control.

So, fast forward to 2017. What does this interesting but old story have to teach us about accepting mental health professionals as “experts” and suspending our demands for evidence for their assertions? About the ability of professional organizations like APA to articulate and adjudicate the ethics of such situations? Does the APA’s stance foreshadow its complicity in torture?

Patty Hearst’s multimillionaire parents (her grandfather, William Randolph Hearst, served as the model for Citizen Kane) got her the most persuasive expert witnesses that money could buy at the time.

We can now recognize that these “experts” were trafficking in junk science. But what if her parents were shopping in 2017? I am sure some neuroscientist would testify that fMRI can distinguish between washed and unwashed brains. For evidence, the neuroscientist would produce a small-N study with a large effect size that “proved” it. Ugh!

I blog at a number of different sites including Quick Thoughts, PLOS blog Mind the Brain, and occasionally Science-Based Medicine. To keep up on my writing and speaking engagements and to get advance notice of e-books and web-based courses, please sign up at CoyneoftheRealm.com