Saturday 24 May 2014

Notes on the Sociology of Evidence in Clinical Psychology

When I was an assistant psychologist in an inpatient mental health service I delivered group CBT for psychosis (CBTp) interventions. My colleagues and I were encouraged to feel like part of a groundswell: clinical psychology slowly but surely upsetting the apple cart with promising new treatments for the distress associated with psychosis. I was part of various email listservs and used to receive updates about this or that latest study on CBT for psychosis. As a rather self-doubting sort, I felt it important to see for myself what the evidence was for our intervention, so I would follow the link and scour the report for its results section. What does this effectiveness look like in numbers? I was always disappointed: I could never see the change clearly laid out in the tables, and yet there was the headline or email subject line proclaiming CBTp's efficacy. Unfortunately, and again as a rather self-doubting sort, I would suspect that my failure to see the change arose from my own inability to read and understand the figures properly. Our confidence in ourselves can be dangerously undermined when we think someone else knows better.

Now I am in clinical training. Over the last few months I have been part of a seminar in which we discuss and critique research that reviews the evidence for various psychotherapeutic interventions. Learning in greater detail how to read RCTs and meta-analyses has been a pleasingly rigorous experience. When my turn to present came, I reviewed an effectiveness RCT in which a treatment was compared to a "Treatment as Usual" (TAU) condition. Reading the details of what this involved, I saw that the TAU was a rather paltry control: brief monthly check-in appointments "as needed", as opposed to the structured and regular meetings of the treatment arm. When I pointed this out as one of the things to consider in assessing the evidence, the leader of the class (an advocate of a competitor treatment) gave a knowing smile. "They were crafty, weren't they? It makes me wish we'd thought to do that."

"What a man believes upon grossly insufficient evidence is an index into his desires -- desires of which he himself is often unconscious." -- Bertrand Russell

This sense of allegiance has its merits. Research needs to be critiqued, and who better to do it than researchers who are passionately invested? People who care very deeply about how something is presented will do their damnedest to mount as strong a defence or as strong an attack as the situation requires. It can lead to the best sort of forensic examination. When you have researchers in different groups paying very close attention to the work of other groups, you can be sure they will spot any unfairness arising from discrepancies in therapy-adherence ratings, dropout rates in different arms of an RCT, or anything appended to the actual treatments which might boost effect sizes. Sometimes "opponents" have even been privy to useful information on the very studies they are critiquing.

Ultimately though, the facts about how effective an intervention is all need to tumble out somewhere, and practitioners need to come to some plausible consensus. Allegiance is great for critique but becomes embarrassing when it amounts to rejectionism. I don't know how light-hearted our seminar leader was being, but I do know that the remark was indicative of a real phenomenon among therapists: the belief that what they are offering works, and the desire to "prove" it. If they can't, then the fault must lie with the research methodology rather than the treatment.

Quietly, without anyone drawing too much attention to it, therapists draw themselves into teams defined by orientation and specialization. It seems always to have been this way: from Freud's expulsion of dissenters, through the "Controversial Discussions" in the British Psychoanalytical Society, to Hans Eysenck's attacks on psychotherapy, and on to the current controversy over CBTp. Say something critical of a therapist's approach and you can be sure to raise hackles, as the extraordinary vehemence of the CBTp debate has shown. This is part of why I chose to be a clinical psychologist rather than a different form of psychotherapist: the thing which surely sets us apart from psychoanalysts or counsellors is our training in research and our pragmatic openness to following the data wherever it leads, rather than getting caught up in modality cliques. If this gets compromised, what do we really have to offer?