Sunday, 19 June 2016

What do we talk about when we talk about schizophrenia?

I have been gleefully reading Kieran McNally's book on the history of schizophrenia, which turns out to be a compendium of great detail and fascination. As someone who has spent a few years now trying to seriously orient myself in the history of this weird and sprawling concept (I was lucky enough to be allowed to devise and teach an undergraduate course on the history of schizophrenia), I am staggered by the scale of McNally's erudition on the subject. It makes the book enormously valuable both as a treasure trove (in addition to a reference section of almost 30 pages, there are a further 10 pages of recommended reading) and as a contribution to our understanding of this unwieldy but influential idea.

The topics of madness and psychiatry have long had their groups of dedicated historians, but the history of schizophrenia itself can get sidelined. Often it is told as part of a broader narrative by people with an axe to grind (witness Jeff Lieberman's casually Whiggish history "Shrinks" from last year), or with other, bigger fish to fry (Richard Bentall's Madness Explained contains a nice conceptual history of schizophrenia, but it is not the main focus of the book). Such histories are, in any case, often predominantly externalist, meaning they focus on the social and economic context of madness (or on the personalities of famous psychiatrists) and not on the development of the ideas. McNally's book is avowedly internalist about schizophrenia. This means you won't find many colourful anecdotes about wacky doctors and their extraordinary patients, but the story of the concept's development (filling a space that has been peculiarly vacant) is no less entertaining. The book is built partly out of papers which McNally has published on specific historical questions, but it still comes together into a satisfying and revealing narrative.

Things Are More Complex Than They Seem:


This is a "critical" history in the best sense of that term; that is, McNally introduces layers of complexity and nuance to a narrative we already think we know. The rough outline of schizophrenia's past is well rehearsed: at the turn of the 19th/20th centuries, Kraepelin separates Dementia Praecox from Manic Depression, and Bleuler re-names it "schizophrenia", partly to avoid the degenerating quality implied by "dementia". Psychiatrists disagree wildly about how to define it, until a series of refinements (Schneider's first rank symptoms, the Feighner criteria) lead into a universally accepted definition in DSM-III. There are two major waves of disruption (Poland's "socio-political" critics in the 1960s, and "scientific" critics from the 1980s to the present), and a future rendered uncertain by the rise of NIMH's RDoC initiative.

Several major strands in this story are unwound by McNally, revealing how official psychiatric knowledge transmission warps the field's history. To begin with, it is convincingly demonstrated that the notion of schizophrenia as "split personality" (which psychiatrists have spent decades defining schizophrenia against) is not some popular misconception perpetuated by an unwitting public but was, for many years, built firmly into the professional understanding of the category. Thus psychiatric textbooks spent roughly the first third of schizophrenia's lifespan describing it in terms of psychic splitting, and the next two-thirds repudiating that conception.

Officially, Kahlbaum's idea of Catatonia (which was incorporated into schizophrenia) has been "disappearing" from the diagnostic scene, possibly because of improved medication. In fact, argues McNally, it may never have been very prevalent, nor very conceptually coherent ("Taxonomy, consequently, made visible to science, in a ceremonial space, categories of people who were not in fact there." - p.95), and it was only reluctantly accepted as part of the broader schizophrenia classification in the first place. In another vein, the popular Bleulerian mnemonic, the "four As" (disorders of association, affect, ambivalence and autism), is at best an over-simplification of Bleuler's writings, and at worst a distortion. Some texts have five As, and others disagree over what the As actually are. In any case, Bleuler did not write in such glib snippets, and the acronym only appears some fifty years after his text, probably for the benefit of trainee psychiatrists who felt bad that they couldn't find time to read the original.

These are just headline findings. It is not possible to do justice to the richness of the text, which brings out much-needed detail from schizophrenia's murkiest period: the space between the appearance of Bleuler's 1911 book and the emergence of the first DSM. During those forty-something years, psychiatrists were particularly divided over what schizophrenia meant, and how it stood in relation to the idea of dementia praecox (which actually survived in some dusty corners into the late 1960s). Importantly, McNally can read German and French, and can thus go back to original source material in a way that is rarely done. So much of the self-recounted history of psychiatry (Lieberman's book is a prime example) hews closely to the living memory of the teller, so that anything much before the 1950s has been increasingly excluded from the profession's autobiography.

Ahistorical Psychiatry:


One theme that runs throughout is what McNally describes as "the ahistorical nature of psychiatric thought" (p.126). Psychiatry, he points out, has persistently neglected the development of its own concepts, leading to simplification and dilution of its ideas (some psychiatrists have also lamented this tendency). This is how ideas pertaining to catatonia, split personality, and Bleuler's "four As" could go so awry.

It's tempting to hope this doesn't matter. As Thomas Kuhn pointed out, all successful scientific research is in the habit of forgetting its history ("Why dignify what science's best efforts have made it possible to discard?" - The Structure of Scientific Revolutions, p.138). But it does matter deeply. There is serious doubt about whether psychiatry is a scientific enterprise (a psychiatrist once told me that he had chosen his profession because it was the only branch of medicine prepared to admit it was not a science), and no good can come from simplistic reification of ideas at the expense of describing real experiences. Recent research by Nev Jones has highlighted the peculiar and disquieting effect when people doubt the validity of their experience because it fails to match canonical DSM descriptions. To accurately describe people's subjectivities, psychiatry needs depth, and for all its flaws, the detail one can find in Bleuler's clinical writing conveys a sense of people, and of what ails them, that checklist diagnoses sorely lack.

Contra Metzl?


It is peculiar that McNally devotes a whole chapter to the issue of how schizophrenia fed into social discrimination, and a section therein to its specific racial biases, but nowhere mentions Jonathan Metzl's The Protest Psychosis. Metzl's thesis is that schizophrenia became a "black disease" during the late 1960s, when DSM-II removed the suffix "reaction" from the diagnosis, and psychiatrists implicitly came to associate paranoid projections (an important concept in understanding psychosis at the time) with representations of black political activists. Possibly McNally does not concur with Metzl. By his reading, schizophrenia was already a black disease long before DSM-II or even DSM-I, being over-diagnosed in black populations in studies in 1925 and 1931.


Beyond the Horizon


There is sometimes a sense that McNally over-does the ludicrous quality of schizophrenia research (though, I would hasten to add, not by very much). For instance, in an entertaining early chapter he reviews the extraordinary litany of long-forgotten sub-divisions and related concepts. Speaking of a schizaxon, schizothymia, schizomania, schizonoia, schizobulia, schizophasia, schizoparagraphia, or of a schizovirus all seems rather absurd now (especially when you put these schizo- prefixes together). McNally groups Meehl's (1962) schizotaxia in with these redundant concepts, painting a picture of one more junk idea in the scientific dustbin. But although it's fair to point out that no-one now speaks of schizotaxia, it is misleading to suggest that Meehl's idea fell by some historical wayside just because the term didn't catch on. By teasing apart a conceptual referent for "schizotypy" (a sub-clinical, at-risk phenotype) as opposed to schizophrenia (a clinical disorder) and schizotaxia (a heritable disposition), the framework presented in Meehl's paper has provided a powerful organising principle for schizophrenia research ever since. Whether they know it or not, contemporary investigators are indebted to the idea of schizotypy (which is actually very popular right now). Schizotaxia (even if undesirably named) is perfectly conceptually coherent. Nobody now talks in terms of Albert Ellis' "musturbation" (the anxiety-provoking feeling that one must achieve some unreasonable thing), but that doesn't mean Albert Ellis didn't play an important role in re-conceiving the function of psychotherapy.

As I mention above, McNally is not interested in pushing an agenda for researchers, though one suspects he thinks they should be more historically literate. However, it's impossible to read this book without wondering about the problem of schizophrenia's conceptual unwieldiness. McNally is, at the very least, skeptical, and wonders in his conclusion whether the side effects of medication are too high a price to pay for treatment, given that the idea of schizophrenia has "often failed to justify itself" (p.210). The validation of schizophrenia is frequently postponed to the future: some shining technological breakthrough will one day anchor psychiatry's concepts once and for all. Once again, the idea of abandoning schizophrenia is in the air; should we stop talking about it? Should we call for a paradigm shift? If only it were so simple.

I have argued before that schizophrenia's flaws are undeniable, but we lack a compelling alternative. Paradigm shifts (at least in the Kuhnian sense) take place when a theoretical framework arrives that makes it untenable to speak in terms of its predecessor. Schizophrenia is just over 100 years old, which isn't that long in the tooth for a productive but strictly false programme of research. Phlogiston theory organised research in chemistry from 1667 to 1780, though researchers probably had a sense it was flawed for a while before they could figure out a better way of thinking. Unlike oxygen theory, none of the competitors currently being mooted in the psychiatric domain (a focus on specific symptoms or complaints, or on individual formulations) is formally incommensurable with a theoretical disorder called "schizophrenia". Until a theory arrives that makes tighter predictive claims, we are stuck with a hot mess.

Friday, 3 June 2016

Medication, Phenomenology and the Nocebo Effect

A great recent paper by Gibson and colleagues undertook a thematic analysis of people's responses to being asked about taking antidepressants. Some of what they described was very negative. This is not a surprise (it is well known that many antidepressants have a significant side effect profile), though it is important that it has been documented. Here, for example, is a striking description:
Each one has had a worse effect than the previous…. I can’t remember them all. It started with memory loss then progressed to me becoming borderline catatonic staring at the wall for hours unable to stand up. Within a few weeks and genuinely terrified. It was a relief to go back to the misery of depression after these experiences
In addition to descriptions of what we can designate as direct negative physiological effects, another negative theme that emerged was "loss of authenticity/emotional numbing". This is a more slippery experience: a sort of phenomenological unease arising from taking medications. Authenticity is an important part of our sense of who we are. To interfere with it may be less physically dangerous than a side effect like weight gain, but it feels somehow more metaphysically perilous. Take my body, but leave my self alone!

The authors of the study point out "This research points to the inadequacy of asking the simple question: ‘Do antidepressants work?’ Instead, the value or otherwise of antidepressants needs to be understood in the context of the diversity of experience and the particular meaning they hold in people’s lives." I agree, but I think even this form of the question can be complicated further.

We have become accustomed to thinking about the effects of antidepressant medications in terms of the placebo effect. Since (at least) a famous meta-analysis by Irving Kirsch (and a subsequent book), many have suggested that the benefits of antidepressants are not the result of a positive, active drug effect, but of the mysterious workings of the various expectancy effects we call "placebo".

It's a popular idea, albeit one that has become fraught with controversy. I am not going to wade into the question of how effective antidepressants really are (if you want to think about that, you are in for a long, puzzling road; you could do worse than to start with James Coyne's provocative critique of Kirsch here), but I do want to suggest that, when it comes to expectancy effects and antidepressants, there may be a kind of asymmetry in how we customarily think.

Drugs also have nocebo effects: harmful outcomes that arise from the expectations of the people taking them. The nocebo effect (placebo's evil twin) is not something to be fooled around with. For a vivid account, read this case study of a young man who needed hospitalisation after he overdosed on the inert pills he was given during an antidepressant trial. If expectations about a sugar pill can do that, then, without doubting the flat reality of antidepressants' severe physical effects, we might wonder whether some negative effects, including feelings of phenomenological unease, could also result from such a phenomenon.

There is a veritable culture of suspicion about the phenomenology of antidepressants, and a strand of cultural commentary on psychiatric medications that sounds a shrill moralising note. Taking medications for depression is regarded by some as an inherently suspect thing to do. Two notorious skeptical pieces in The Guardian (by Will Self and Giles Fraser), around the time of the publication of DSM-5, both hinted at the idea that taking antidepressants was the result of false consciousness:
At worst, they pathologise deviations from normalcy, thus helping to police the established values of consumer capitalism, and reinforcing the very unhappiness that they purport to cure.
It is hard to imagine that none of this would loop back round and influence people's experiences of what it is like to take medication. Indeed, psychiatrist Linda Gask writes beautifully about the internal struggle over self-authenticity that can result from these ideas:
There are times still when I wonder whether the medicated me I’ve been for so long is the ‘real’ me, or are these tablets simply suppressing the person I truly am?
Contrast the Gibson study with the miraculous-seeming accounts of the experience of taking SSRIs when they were brand new. Peter Kramer's 1993 book "Listening to Prozac" included the now famous (and oft-derided) claim from one patient that the drug made them feel "better than well".

Could it be that when drugs first appear, they not only benefit from a sort of placebo boost (in virtue of their novelty value), but also from the absence of culturally inherited nocebo baggage? Research on this question seems just as important as teasing out the beneficial effects that arise from inert substances. If there are such effects, what are the moral obligations that arise for how we talk about treatments and shape the expectations of those who take them?

Saturday, 21 May 2016

Scattered Thoughts on a Hard Subject

Spurred by Masuma Rahim's thoughtful piece about the issue, I have been thinking about psychiatric assisted suicide. She points out that this is an issue it is "very difficult to have a settled opinion about". I don't yet have one, but it seems important to understand some of what is at stake. This is a list of thoughts. The topic may be triggering, and this post should be approached with that in mind.

1. Any debate about psychiatric assisted suicide concerns the question of whether there are ever circumstances in which people with mental health problems should be allowed to receive help to die.
2. If healthcare professionals are to have any role in this process, part of it must be in trying to provide the best possible assessment of whether a person can reasonably expect their life to improve.
3. Unless we think assisted suicide is always unconscionable, we have to accept that there exist reasons that bear on those cases in which it is not. Clarifying those reasons will help us think more clearly about the issue in general.
4. Whether or not mental health problems are "illnesses" is of no relevance to the question of whether psychiatric assisted suicide is morally palatable. The desire to die seems driven by the intensity, and particular quality, of individual suffering. It is not clear that this suffering is more real in cases where an illness exists. Whether or not a person wants to die is likely to be a function of whether they think their misery will persist.
5. A domain specific prohibition on assisted dying in psychiatry would appear to suggest that it is not possible for mental health service users to make reasoned choices about whether they can end their lives.
6. Psychiatry has a long history of "great and desperate cures", driven by a desire to avoid feelings of hopelessness on the part of the doctor. How would we know psychiatric assisted dying isn't just the latest chapter of this ignoble tradition?
7. I have, in the past, walked on to a psychiatric ward and felt a chill at the idea that I could end up locked in a place that is organised almost entirely around the idea that I should be denied, at any cost, the freedom of killing myself.
8. To characterise this debate in terms of one group of people declaring another group "better off dead" is to fail to engage with the experiences of those who have advocated for their own right to psychiatric assisted suicide, or pursued it for themselves.
9. There is a particularly difficult balance to be struck between emotion and reason in this debate. We need to think calmly and clearly about psychiatric assisted suicide, but it is hopeless to try and avoid appeals to emotion. No-one can hope to understand what is at stake unless they take time to imagine what it is like to spend many years very seriously wanting to die. Equally no-one can hope to understand what is at stake unless they take time to imagine what it is like to lose someone to suicide.
10. We might wonder whether a policy like this would have a positive impact on the suicide rate. If people are aware that it is possible for them to die under medical supervision, that may reduce the intensity of some people's despair and desperation, making them less likely to kill themselves. Hope, even the paradoxical hope for death, might help people feel better.
11. Alternatively, a policy like this might increase the social visibility of suicide and diminish the taboo that surrounds it. This might lead to an increase in thoughts of death and more completed suicides, possibly through a sort of contagion as the idea occurs to more people. Legal protection would put suicide into the "pool" of acceptable solutions.

Thursday, 5 May 2016

Genetic Disavowalism is the Denial of Privilege

Here are two recent strands of thinking about genetics in clinical psychology:

1. Oliver James's (and others') bold position that genetics plays little or even no role in human psychology. Marcus Munafo has called this "genetic denialism".
2. The diffuse suggestion (one recent example here) that to pursue genetic research into mental health problems is related in some way to a eugenic agenda; to wit, that (for example) a genome-wide association study looking at the diagnosis of schizophrenia may encourage us to think in quasi-fascistic ways.

There are some good responses to the first of these strands, in Munafo's article (linked above), and in this piece by Kevin Mitchell at Wiring The Brain. Here, I want to address the second strand, which I will call genetic disavowalism.

The purpose of genetic disavowalism is pretty clear: to encourage us to think of genetic research and theories of genetic risk as carrying an inherently negative moral valence. This argument (to the extent that there is an argument; it is seldom made explicitly) is a little under-cooked, to say the least. It is of course perfectly possible to acknowledge a genetic contribution to human behaviours and mental states without commencing some inexorable slide toward Nazism. Does the genetic aetiology of Down Syndrome commit society to a re-run of the Nazi Aktion T4 programme? Clearly not. For one thing, a eugenic policy is a choice a government makes, rather than a necessary consequence of a given set of scientific knowledge. For another, there is nothing to stop any government undertaking such a programme by targeting people on the basis of behavioural or cognitive traits it doesn't like, but which are not genetically determined. Even if genetic theories about human behaviours and tendencies do incline some sorts of people towards ideas about eradicating those behaviours and tendencies (by "breeding them out" or what have you), there is no logical entailment, and we carry on with genetically inclined research because we wonder if there might be benefits to be derived from the knowledge.

Apart from all that, I think that genetic disavowalism has a moral problem of its own to contend with: the denial of genetic privilege.

We are accustomed to thinking about privilege in terms of race, gender or social class. As a white man, for example, I have the privilege of not being looked on with suspicion in certain neighbourhoods, and I have the privilege of not feeling tense when groups of NYPD officers walk past me. It has come to be seen as crass and offensive to fail to acknowledge our privilege, especially when discussing race (see Peggy McIntosh's essay on the invisible knapsack here), but the notion of privilege has been linked to mental health as well, by Martin Robbins here, and by me here.

When I first blogged about sane privilege, I was thinking in terms of the social position people occupy when they are viewed as less rational in virtue of their psychiatric status. When a person is considered deluded, their utterances become generally more suspect in the eyes of people around them. They lose certain testimonial privileges (some of their statements about reality are taken less seriously). But privileges are also conferred on us by our genetic predispositions. This is most obviously the case in the way that skin colour or primary and secondary sexual characteristics are genetically determined facts about our appearance, but it presumably has cognitive implications too.

To the extent that IQ is genetically influenced, my course mates or colleagues with IQs two standard deviations above the mean have an advantage relative to me (with my quite middling IQ) in performance on exams or the production of research and logically sound clinical arguments. Equally, to the extent that my genetics plays a role in my tendency not to have debilitating emotional "highs" or feel my relationship with reality become terrifyingly fragmented, I have a sort of privilege conferred on me relative to people who are prone to such experiences. It is no good arguing that a tendency toward certain mental states is actually perfectly desirable, and should itself be considered a privilege. That may be so for some people, but unless we want to deny that mental health problems are frequently extremely difficult to live with (and unless we want to throw out even the apparently politically neutral term "distress" to refer to such experiences), we have to acknowledge that it is not the case for all.

Acknowledging cognitive genetic privilege need not entail acceptance of an illness account of mental health problems. Peter Kinderman has written movingly about his risk of a psychotic experience, given a possibly high personal genetic loading for such an occurrence. At the same time, he resists the implication that this means he has a disorder or "attenuated syndrome". Even if you feel more inclined than Kinderman to describe such a genetic loading as a predisposition toward illness, his is a perfectly consistent intellectual position.

Genetic influences on psychology have always been a controversial topic, and there is an easy tendency to accuse genetic researchers or thinkers of secretly holding eugenic aspirations. Perhaps some strains of genetic reasoning are infused with a negative moral valence (think of the pub bore who argues that women are genetically inferior), but to the best of our knowledge, genes make certain aspects of our lives more or less easy for us. They confer varying degrees of privilege. To ignore this is not only unrealistic, it is insensitive.

Tuesday, 29 March 2016

"Difference Makers" and "Background Conditions"

A group of clinical psychologists has made the case that the UK's Medical Research Council should spend more money funding research into the social rather than biological causes of mental health problems. Note the headline of the article reporting the story: "Mental illness mostly caused by life events not genetics, argue psychologists". The argument is clear: mental health problems are set off by life events, not by some underlying biological vulnerability.

This sort of claim about causality has consistently proved controversial. Oliver James recently ignited firm criticism from behaviour geneticists when he baldly denied the role of genetics in mental health problems. I am with the behaviour geneticists in that dispute; James's dogmatic environmentalism rests on a wilful misunderstanding of scientific findings, and on some very shaky arguments.

Environmentally inclined clinical psychologists often want to push back against a view that says most of the cause of mental health problems lies in our genes. There is a fact of the matter about this, and it does suggest a powerful role for pre-disposition. If we want to find someone who meets criteria for schizophrenia, our best bet is to find someone who has an identical twin with the disorder. Nothing else raises the risk so far (from its baseline of around 1% to 28%*). Because of this, many researchers now hold that bio-genetic vulnerabilities do the bulk of the causal work in psychosis (leading some psychologists to complain that environmental factors are marginalised by being reduced to the status of "trigger").

But even so, the claim that we underplay the environment's role as a cause may be warranted. Causality is complex, and we assign different weights to different causal stories depending on what we intend to use them for. A criminal court, for example, may apply a "but for" test, asking whether the events under examination would have happened but for the actions of a defendant. This doesn't necessarily show us the full causal picture, as it doesn't answer questions about why the defendant behaved as they did (indeed, liberally inclined thinkers tend to feel that the criminal justice system focuses too much on individual responsibility and not enough on societal causal factors when punishing people), but it works tolerably well for assigning a certain sort of criminal responsibility.

Bringing environmental factors further into the foreground may serve a valuable purpose in the mental health debate. Consider this passage from Peter Zachar's book A Metaphysics of Psychopathology:



Zachar brings out the element of choice we have in identifying causes. Exactly what we choose to call a cause depends in part on what aspects of the whole situation we consider "background conditions". He does not imply that the choice is limitless (he is not a relativist about causes), but he does suggest that where you turn your investigative attention may legitimately be a function of your interests; a function of what aspects of the total situation you feel to be most relevant. 

Most relevant to what? To the interventions we can make to help people. Perhaps the enormous bulk of research that investigates the genetic and biological underpinnings of mental health problems takes a particular view about what can be seen as "background" and what can be seen as a "difference maker". If your aim is to develop medicines and genetic tests, then it makes sense to focus on neurotransmitters and SNPs, as these are the things you hope to change. They start to loom into focus as "difference makers". But it is also possible (especially in most mental health settings, where it feels like gene therapies or radically improved medications are a very long way off) to see these ingredients as part of the background. This makes sense in the light of a burgeoning "neurodiversity" movement, which re-frames genetic variation as normal, and thus undermines the notion that this or that genetic predisposition (to schizophrenia say) is itself a relevant pathological "difference maker".

What motivates psychologists who see trauma and "life events" as significant in causing mental distress is a refusal to see various forms of adversity as a "background condition". Sure, genetics plays an important role, these researchers suggest, but the public health implications of that fact are not immediately clear. Meanwhile, the public health implications of an aetiological role for traumatic life events are obvious; we should aim to stop people being exposed to them. As Peter Kinderman says in the article I linked to at the top, "when unemployment rates go up in a particular locality you get a measurable number of suicides".

If asked, I am sure Kinderman would deny that a change in economic circumstances is the whole causal story in any given suicide. More likely, a host of factors (personality variables, social support networks and so forth) combine to create something like more or less "resilience" in people. But unless you can intervene to improve that resilience, it makes sense to push it some way into the background and focus on things you feel you can change. If you do this, life circumstances and political events start to look more like "difference makers", even if we can still have a debate about what constitutes a cause.

______________________________

* UPDATE: I originally cited the figure 48% here, reflecting the commonly quoted probandwise concordance rate for schizophrenia in identical twins. 28% reflects a lower estimate of concordance, based on a pairwise concordance rate. There is some controversy over which rate to cite, and as this post was an argument for greater focus on environmental factors, I did not want to lay myself open to the charge of minimizing the genetic contribution. However, it was suggested to me that the probandwise rate is an inflation of the true concordance rate, and for the time being I'm inclined to agree. Nonetheless, there are good arguments for using the probandwise concordance rate, and when I have better understood the issue, I will try to write a post outlining them.
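
For readers who want the arithmetic behind those two figures, here is a minimal sketch in Python. The pair counts below are hypothetical, chosen only to illustrate how the same set of twin pairs yields two different rates; they are not drawn from any actual study.

```python
# Pairwise vs probandwise concordance: a toy illustration.
# c = pairs in which both identical twins meet criteria (concordant)
# d = pairs in which only one twin does (discordant)

def pairwise_concordance(c: int, d: int) -> float:
    """Proportion of affected pairs in which both twins are affected."""
    return c / (c + d)

def probandwise_concordance(c: int, d: int) -> float:
    """Counts affected individuals (probands) rather than pairs: a
    concordant pair is weighted twice, assuming both twins were
    independently ascertained."""
    return (2 * c) / (2 * c + d)

c, d = 28, 72  # hypothetical counts, for illustration only
print(f"pairwise:    {pairwise_concordance(c, d):.0%}")    # 28%
print(f"probandwise: {probandwise_concordance(c, d):.0%}")  # 44%
```

The probandwise figure is always at least as high as the pairwise one, which is why the choice between them matters for arguments about the size of the genetic contribution.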

Friday, 25 March 2016

Mental Health Conferences and Service User Inclusion

I'm just back from a fantastic conference laid on by the History and Philosophy section of the BPS, and the Critical Psychiatry Network (thanks to Alison Torn at Leeds Trinity University for putting together such a great programme).

Beyond the content of the papers, I was struck by the way that the event recapitulated an ongoing tension around the inclusion of "experts by experience" in academic spaces. Conferences like this are increasingly attended by people who have experience of using mental health services (a development which seems essential if "critical" aspirations are ever going to bear serious fruit), but are they always included effectively?

One attendee noticed a bunching together of service user talks into a single session:

[The attendee's comment appeared as an embedded tweet in the original post.]

Did this encourage the use of kid gloves with service-user researchers? Or set up an implicit distinction between more and less "professional" research? Rather than dividing presenters up by identity (into service users and professionals, or experts by training and experts by experience), a useful distinction might be between people who are attending a conference with the purpose of presenting research and those who are giving testimony. 

There is nothing about research produced by service users that makes me inclined to judge it differently from that produced by non-service users. It will be a very good thing for everyone if more research is conducted by people on whom it has a direct bearing, but it is subject to the same scrutiny as research conducted by anyone else.

Service users who deliver testimonials, however, are doing something very different. Their words are personal, and a degree of emotional risk is involved when you disclose intense experiences and give voice to anger. We don't subject this sort of testimony to the same degree of challenge, nor pore over it in quite the same "academic" manner as we do a theoretical exposition or literature review.

Making a research/testimonial distinction might create greater clarity about what we want service user inclusion to do for conferences (and for service users), because at least two distinct goals seem to be in play. One is that service users be included in mental health research in a way that expands our epistemological horizons and rejects a hierarchy that privileges some researchers over others. The other is for people to be able to speak at such conferences when they may not have the means or the interest to develop research per se, but nonetheless have something important to say.

Thursday, 21 January 2016

Something's Missing: What Psychotherapy Research Leaves Out

It is common to hear, in discussions about the value of psychotherapy research, that nomothetic outcome measures leave out some indiscernible inter-personal human "magic"; that there is something missing from psychotherapy research which renders its findings essentially moot. So often do I hear this point that I want to take it on and suggest that it presents less of a problem than is generally supposed.

When the "something missing" argument is wielded in a debate about research, two consequences usually seem to be implied:

1. That psychotherapy research cannot tell you very much about what psychotherapy is really like, and so should not be trusted in appraising whether it is helpful.

2. That psychotherapy research does a sort of crass violence to the psychotherapy relationship itself, and that psychotherapy researchers are naive to think they can capture something so delicate.

The "something's missing" argument is often stated as though it were a knock-out blow to the value of outcomes research. It isn't. That something should be left out whenever we attempt to measure or represent something else is a banal truism. It simply presents no problem to the project of learning about reality. I am sure I have quoted Paul Meehl on this question before. In his book on statistical prediction, he refutes those who claim that any aspect of human behaviour is too complex to be in principle predictable from regression models, because humans are "more than" the models in question:
"A cannon ball falling through the air is “more than” the equation S=½g, but this has not prevented the development of a rather satisfactory science of mechanics".
The same goes for all of science (the full reality of the Large Hadron Collider is more than the sum of the research produced by the physicists who work with it, but the research they produce does not lack veracity or utility in virtue of that fact) and for the humanities too (no quantity of historical books on the American Civil War will ever completely reconstruct the experience of someone who fought in it). Leaving something out is inherent to representing a state of affairs in any form other than the original.

So yes, psychotherapy research has "something missing", but that is trivial, and we have to either accept the limitation or offer solutions to it (which is to say, become methodologists rather than critics). The choice we have is not between trite, uninformative quantitative research and rich, full-blooded qualitative information; it is between some combination of those two approaches and sheer guesswork.

Quantitative research does not just forget the magic of the interpersonal encounter, it factors it out in a bid to discover a separate numerical truth: how many people show some sort of measurable improvement (and how much of one)? This can look clumsy, but it is necessary if we are to escape the persuasive pull of interpersonal charm and the therapeutic relationship. Think of a doctor like John Bodkin Adams, who appears to have been very successful interpersonally. He was sufficiently charming that he received money from many of his patients in their wills and became an extremely successful GP. Only something as crass as a body-count (Bodkin Adams may have killed as many as 160 of his patients) revealed that something untoward was going on.

We should think of psychotherapy outcome research as analogous to the body count. Without it, we are too apt to be misled by the charisma and good intentions of the therapy industry.