Friday, 9 December 2016

Illness, Authority, and the Burden of Proof

Psychiatry is beset by a chronic controversy around the word “illness”. Critics and opponents of the specialty have long been engaged in what the philosopher Ian Hacking called an “unveiling” project, questioning the authority of psychiatrists to label people as ill. This has generally worked by highlighting the lack of an unambiguous marker or validator to confirm the presence of illness. Arch-critic Thomas Szasz noted the lack of a physical lesion to prove the existence of pathology, and declared mental illness a “myth”. In a famous early-1970s experiment, Rosenhan and colleagues demonstrated that a single malingered psychiatric symptom could, without confirmatory tests, lead to hospitalization.

Herein lay the unveiling Hacking describes: revealing that something supposedly real was in fact constructed. The broad narrative of these critics was that “psychiatric illness” was a term applied to behaviours and experiences that were not pathological. Unlike other medical specialists, who are experts in demonstrably physical disease processes, psychiatrists lack an externally validating terrain. They thus lack the authority to say of their patients “this person is ill”.

Notice the form of the argument: without validating evidence that a physically demonstrable disease process is present, we ought not to use the word “illness”. Thus, in psychiatry, we seem to treat illness as being similar to criminal guilt. A person is healthy until proven otherwise; sane until proven insane. Maybe this is the right way for psychiatry to be organized. We all naturally seek to be recognized as autonomous, reasonable individuals, capable of deciding for ourselves. We recoil from the possibility that our vision of reality may be compromised. Putting the burden of proof on the psychiatrist protects us pre-emptively from the possibility that our autonomy is compromised and our slant on the world is “off”.

But it is worth contrasting this state of affairs with other forms of medical complaint. In a recent book about psychosomatic illness, the neurologist Suzanne O'Sullivan reports on a series of patients with serious physical symptoms. In each case, she concludes that the most important causal factor lies in emotional conflict or disavowed stress. Although she does not deny these patients are ill, she does deny that the illness is straightforwardly physical in the way they believe. It is interesting to observe that, contrary to the situation faced by many psychiatrists, O'Sullivan has to work hard to persuade her patients that they don't have a disease.

Here the burden of proof is somehow reversed. The patients approach the doctor claiming to have a physical disease. Only a complex diagnostic process is able to rule out its presence and allow the symptoms to be accounted for as “psychosomatic”. Even then, there is no disease, but the patient is still ill. How can this be? Doesn’t O'Sullivan lack the same medical authority as psychiatrists in these instances? Isn’t she, like them, forced to conclude that absence of a disease process entails absence of an illness?

Apparently O'Sullivan cannot observe her patients’ situations and fail to conclude that they are ill. This seems partly to be sympathetic, and partly intuitive. Sympathetic because O'Sullivan is clearly at pains to communicate to her patients that she takes their experiences seriously. They feel themselves to be ill. Intuitive because, to all intents and purposes, these patients simply seem to be ill. When an individual is unable to walk, or is subject to persistent and debilitating seizures, we readily concur that we are in the terrain of medicine. We have strong intuitions about where illness is present and where it is not. How otherwise could the arguments about disorders like ME/CFS get so heated and confused?

Somehow our intuitions about physical and psychiatric debilitation are different. We question psychiatrists’ authority to label their patients “ill” in a way we don’t bother to for other specialties. This is not just a question of objective markers; it is written into how we interpret different experiences. This may amount to a systematic bias against seeing illness in the psychiatric realm. Insofar as psychiatry can suffer from “mission creep” (into the medical labelling of political dissenters, for example) this bias may have its uses. However, despite a relatively vocal movement for the demedicalisation of mental health services, there is reason to suppose that a complete movement away from an “illness model” will not serve everyone.

“Illness” is not only a word imposed top down on people’s subjectivity by patrician doctors, it is also a way of giving form to experiences that are painful, disorienting, dangerous or overwhelming. The idea of experience as illness can make sense of it no less than a narrative about its social, traumatic or affective origins, and is in any case not necessarily in conflict with such a narrative.

It is worth wondering then, why we light so easily on the idea of illness in the case of some forms of overwhelm, and so reluctantly in the case of others. In either direction we run the risk of going wrong. We might reasonably ask who has the authority to say that someone else is ill, but we might also wonder who has the authority to decide who is not.

Thursday, 17 November 2016

What's our problem with genetics?

In the field of mental health there is a distinct allergy to the question of genetic influences on cognition. At its most extreme, this allergy leads to a sort of anaphylactic reaction that has been called genetic denialism. A recent example of this (an example that prompted the epithet) is Oliver James’ book “Not in Your Genes”. However, a milder form of the allergy exists, and I call it genetic disavowalism. Genetic disavowalism is far more pervasive than genetic denialism, and finds its way into clinical psychological writing. 

Genetic Denialism has changed little in 30 years

Genetic disavowalism has valid historical roots. There are good reasons for horror when we examine the way that genetic theories have been distorted to promote policies of eugenics and even mass extermination. But as with other allergies, the responses that are mobilized against “gene talk” are out of proportion to the current level of threat. It is misguided to conclude that it is somehow desirable to steer clear of genetics, or “genetic explanations” altogether, or that the field as a whole is tainted by Nazism. As is the case with other allergies, these responses may be counterproductive, or even damaging.

When I talk about disavowalism, I am not referring to certain widely agreed upon facts about the limits of genetics in psychiatry. That is, I am not calling into question the idea that genes are only one part of the aetiological process giving rise to various mental health problems. I am not contesting that the heritability quotient is a limited form of information about the role of genetic influences, and leaves (for the time being) a “heritability gap” between rates of within-family concordance and the variance accounted for by molecular approaches to genetics. Neither am I questioning the limited practical application of genetics in many areas of psychiatry. What falls under the scope of genetic disavowalism (what I take to be intellectually unhealthy) is a detectable aversion to even considering psychiatric genetics as a reasonable field of enquiry for aetiology or therapeutics.
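To make that gap concrete: the classical twin-study estimate of heritability comes from Falconer's formula, which compares trait correlations between identical (MZ) and fraternal (DZ) twins. The numbers below are purely illustrative, not drawn from any particular study:

\[ h^2 \approx 2\,(r_{MZ} - r_{DZ}) \]

If MZ twins correlate at \(r_{MZ} = 0.50\) on some trait and DZ twins at \(r_{DZ} = 0.30\), the formula gives \(h^2 \approx 0.40\). If molecular (SNP-based) methods can only account for, say, 10% of the variance in the same trait, the remaining 30 percentage points are the “heritability gap” referred to above.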

Genetic disavowalism can be associated with its own wrongs. I lose count of the number of times I have seen a psychologist decry genetic explanations on the grounds that they imply a person is “permanently flawed” or similar. Flawed? Let’s follow the reasoning. If a genetic explanation implies that someone is permanently flawed, then presumably a theory which allows for a more transitory kind of experience (an environmentally determined, understandable “reaction”) only implies that they are temporarily flawed. The time frame changes, but the “flaw” remains.

This is not a problem with genetics; it’s a problem with how some people (including, apparently, critics of genetics) think about other people. The use of the word “flawed” is suggestive of a value judgment that has nothing to do with causes, and everything to do with how you regard a behaviour or trait. If you come to the aetiological debate with the view that people with mental health problems are “flawed”, then your inclination to disavow genetics may be a defence against your own internal prejudice.

It can be helpful, in the face of genetic disavowalism, to probe intuitions about the morality of psychiatric genetic research and intervention. If genetic psychiatric research is inherently negatively morally valenced (or if it amounts to a “tedious obsession” as I have seen it said), we might ask whether it should be discontinued.

Here is a case study that bears on the question. In Steve Silberman’s Neurotribes, there is a wonderful description of the origins of our genetic understanding of Phenylketonuria (or PKU), a disorder which can result in profound cognitive and intellectual disabilities. The Norwegian researcher Ivar Følling had noticed a strong smell in the urine of a child who had developed intellectual disabilities. Further testing revealed this to be the result of high phenylpyruvic acid levels, indicative of a difficulty metabolising phenylalanine. Phenylalanine build-up in the brain is what leads to the pervasive cognitive difficulties often seen in PKU. Følling showed that the underlying metabolic problem was determined by an inherited abnormality on a specific gene.

This was a serendipitous discovery that came out of a line of investigation that can easily seem hopeless: the search for genes for generalized cognitive disability. Nonetheless, its implications are extraordinary. Children with the gene can now be identified at birth and, with a simple dietary regimen, be prevented from developing the sort of pervasive cognitive difficulties that can make it impossible to live independently. Was this unexpected discovery a bad thing? Are we worryingly close to a program of eugenics for knowing that there is a form of cognitive disability that is caused entirely by an inherited abnormality on the PAH gene? Is it morally wrong to stave off that disability by resorting to a preventative diet?

The story of PKU will not easily generalize to all areas of mental health, but it is foolish (and a form of ahistorical thinking) to suppose that unexpected shifts in our knowledge about the genetics of mental health cannot now lead to other breakthroughs in treatment. In 2016, do we know all there is to know? Psychiatric genetics has been parodied by analogies to a “gene for homelessness or debt”, with the implication that it is conceptually wrong-headed to conduct genetic research in the domains of schizophrenia or depression. Maybe a gene for homelessness or schizophrenia will never be found, but genes or genetic mutations for some specific subtypes of cognitive problems are not out of the question. How to approach genetic research most fruitfully is one issue. Whether that research should take place at all is another one entirely.


Friday, 28 October 2016

Is mental illness denialism a death sentence?

A decision by Pakistan’s highest court to define schizophrenia as “not a mental disorder” has caused some consternation for legal advocates and psychiatrists. As a result of this ruling, Imdad Ali, a 50-year-old Pakistani man, can now be executed for a crime he committed in 2001. Ali has a diagnosis of schizophrenia, which his lawyers hoped would mean that (although he has been convicted) he could not be executed as punishment. This hope seems to have been grounded in the fact that Pakistan is signatory to a UN agreement that individuals like Ali will not be executed if they are “not capable of understanding the crime and the punishment”.

Psychiatrist Joe Pierre has argued that antipsychiatric thinking may play a role in this case. It certainly seems that way superficially. Here’s his reasoning:

1. Imdad Ali is going to be executed because the court has cast doubt on the reality of schizophrenia
2. Mental illness denialism casts doubt on the reality of schizophrenia
3. Mental illness denialism may have fed into this judgment, and may lead to similar results elsewhere.

I think there might be grounds for placing more distance than this between mental illness denialism (something I have pushed back against) and the Pakistani court’s decision. Although the court does seem to have made an outrageous ruling (its judgment explicitly states that schizophrenia is not a mental disorder, as defined by the country’s own 2001 mental health ordinance), the grounds for that decision may be closer to the mainstream of psychiatric thought than journalists are acknowledging. This is a judgement that (despite headlines) seems to have to do not with schizophrenia’s existence, but rather with its periodicity. It’s true (and widely accepted) to say that schizophrenia is not a condition which always affects you in the same way at different times. People with a diagnosis of schizophrenia experience a waxing and waning in their degree of mental capacity.

Pierre mentions the insanity defense twice, but that does not appear to be relevant to this case. Ali has already been found guilty, which presupposes that he was a) declared fit to stand trial, and b) not considered “insane” at the time of the crime. There is no incompatibility here. In schizophrenia, the psychosis and disorientation come and go. We think nothing of keeping separate the insanity defense, competency to stand trial and diagnosis per se. Thus, it is possible (under US law at least) to have schizophrenia, be declared fit to stand trial and be found guilty of the crime, provided you were mentally competent at the relevant stages.

This case seems to hinge on whether Ali is competent to be executed. It is a repugnant question. If, like me, you abhor the death penalty, it doesn’t seem coherent to imagine people could be competent in the relevant way. Nonetheless, it is a well-established legal framework in the US. It has been unconstitutional since 1986 to execute an "insane" prisoner, which means someone has to decide whether an individual remains "insane" at the time of execution. The key detail is that the evaluation of competency to be executed is a decision distinct from the simple presence of a mental disorder.

This idea is apparently what is being explored now in Pakistan. Here is what the judges seem to have decided (from what is admittedly a fairly obscure judgement): schizophrenia shows a fluctuating course; therefore it is not a permanent mental disability; therefore it does not fit Pakistan's 2001 mental health ordinance definition of mental disorder; therefore Ali is competent to be executed. This chain of inference doesn’t seem right to me, and yet it is not that far from how things work in many US states. If Ali were on an American death row there would be no question about the legal status of his schizophrenia, but his competence to be executed would be an open question. You can make part of your living as a forensic psychologist, at least in certain US states, by conducting assessments that bear upon this question. Entire books are written on the topic.

Pierre has a question in his subtitle: “could mental illness denialism result in the same thing happening in the U.S.?” Before we become concerned about mental illness denialism (an undoubted problem), we might wonder whether the court in Pakistan, rather than denying the reality of schizophrenia (it says explicitly in its judgment that it isn't), is clumsily contesting its nature. Then, rather than worry about this thinking spreading to the US, we should glumly acknowledge that the situation here is not much better.

Thursday, 20 October 2016

A strange sad walk through asylum history

It is late afternoon on a warm fall day. New England is quiet and crisp under a pale blue sky and waning orange light. I am alone in a clearing, surrounded by radiant autumn leaves and the cold uniform stones of a cemetery.

In the heart of rural Connecticut, just under a kilometre from a river that wends through the state down to the Long Island Sound, is the graveyard for residents of what was once the state asylum. It is a lonely place, set back from a tarmacked road so that you have to drive along a wide gravel path with the dust kicking up behind you.


The cemetery sits in the lee of a valley with the huge crumbling redbrick buildings of a 19th century hospital just up the hill. Many of those buildings are still in operation, but now constitute a network of modern mental health services: inpatient units, community outreach teams and assorted therapeutic services. Although only a stone's throw away, they seem a million miles from this quiet corner.

An air of calm prevails, with nothing but birdsong and the rustle of dry leaves to break the silence; a quiet that has a grim resonance to it. Although cemeteries are usually places of memory and melancholy reflection, this is a peculiarly sad memorial. Other such places I have visited in America are brimming with the stories of the people they house. Whole patches of Queens are devoted to huge granite slabs bearing Irish or Italian names. Walking through them one is struck by the vigour of transatlantic migration, the keenness of struggle, and the strength of kin.

These stones have no names; instead each is inscribed with a number, a simple designation made by the state hospital at a time when the people who died in such institutions were overwhelmingly anonymous and alone. At first glance it appears a relatively small plot of land, but look at the slabs and you see them creeping up to mark the resting places of over 1,600 people.



It is almost unbearable to think about the individual chains of events that must have led to so many forgotten people being buried here. At a time when all varieties of psychological and physical suffering were grouped into a morass of stigmatized hopelessness, individual stories were routinely submerged beneath an ocean of pessimism and neglect. 

I find myself hoping that at least each burial was attended by a ceremony of some sort, that the staff on the wards had fond memories of the people they were saying goodbye to. At a conference earlier this year I saw historians of madness discuss the turn toward investigating their subject from the perspective of the patients. The anonymity of this secluded graveyard feels like a vivid testament to the need for that scholarship.


Happily there has been an injection of much needed attention here. Between 2001 and 2015, a coalition of people, including retired pastors and the Connecticut Alliance for the Mentally Ill, oversaw the construction of a separate memorial linking the numbers on each of those headstones to the names of those buried beneath. Now three larger slabs stand at the front of the cemetery, with a dignified reminder of the fact that so many humans are buried here.

Two news articles covered the process of creating the new memorial. This long piece in the New York Times came out just months after the attacks on the World Trade Center, and compared the sheer quantity of numbered dead with the losses experienced in that disaster. This article in the local Hartford Courant marked the successful completion of the project. Both contain moving stories about families finally filling in the gaps in the stories of lost relatives. Multiply them by 1,686 and you can acquire some sense of the human scale.


How could it happen? What made it seem like an anonymous resting place was dignity enough for the people of this graveyard? The Times piece quotes the hospital's chief executive as saying "The reason the patients' names are not on the stones is not to protect their confidentiality, but so it wouldn't bring shame on their families", but that is only partly explanatory. Why was it possible for shame to outweigh the basic human expedient of recognition in death? The question produces a vertiginous feeling. How obvious it seems now that serried ranks of numbered headstones resemble more closely an anonymous mass grave than a respectful final resting place. Which of our current institutional practices will one day look so patently wrong?

Monday, 10 October 2016

What do you mean "invalid"?

Here is a common statement made in debates about psychiatric diagnosis:

"[psychiatric disorder x] is an invalid construct"

This "validity critique" of psychiatric diagnosis arose after the technical overhaul of DSM-III, when the psychometric properties of such classifications became a powerful way of understanding their flaws. Originally, it came from seminal work by Richard Bentall and Mary Boyle, both of whom queried the construct validity and predictive validity of DSM-defined schizophrenia. This was an original and useful way to raise problems with schizophrenia-talk, and it has happily found its way into the mainstream of psychiatric discourse. Unfortunately these contributions get kind of watered down in the endless repetition of a "invalid construct" claim as quoted above. What gets left behind is the veneer of apparent truth, without any substantive meaning.

To say "valid" or "invalid" is not very helpful in itself. Those are words that have both a specific technical psychological meaning and a broader lay meaning. Validity in psychometrics is a term used to describe the extent to which a test or checklist measures something that is actually there or successfully predicts some other event. To understand what is meant by "valid" (or "invalid") we need to ask valid in relation to what? Thus a given questionnaire could be an invalid predictor of suicide, but still be a valid indicator of severely depressed mood.

Psychometric validity is difficult to nail down in the realm of diagnosis because it is not clear what might count as a validating criterion. Bentall and Boyle point out several facts about the diagnosis of schizophrenia, including the fact that symptoms do not cluster together and that functional outcome is not uniform. Those are good things to know, and they undermined pervasive myths about the schizophrenia diagnosis.

But in the broader, lay sense of the terms, it is all but impossible to meaningfully say whether diagnosis [x] is really "valid" or "invalid". What, after all, do those words really mean in this context? The dictionary definition simply suggests "grounded in fact", or "reasonable or cogent". On this definition, surely it is valid to say "I feel ill" (if your first-person subjective experience tells you as much), or "I have schizophrenia" (if the technical-language-using community agrees as much). If I have a set of experiences that feel subjectively like illness, cause me to meet DSM criteria for a disorder, and that DSM diagnosis provides a constructive narrative for helping me to live the sort of life I want for myself, is it still an "invalid" diagnosis? In the technical sense of "invalid", it is arguable. In the broader lay sense of "invalid", the answer seems a resounding no!

Another way that psychologists increasingly talk of validation is in terms of recognizing the reality of people's experiences. Thus we validate people's sadness or anger by joining with their perspective and agreeing that anger or sadness was a reasonable thing to feel under the circumstances. Failure to do this is called "invalidating", meaning that it seems to undermine the reasonableness of an emotion. Invalidate someone's emotions and you show them they are wrong to have felt them. Invalidate someone's experience of reality and you hint to them that they are crazy.

Talk of lack-of-validity has been a valuable addition to understanding psychiatric diagnosis. But when it gets isolated from its underpinning arguments, validity talk in this context can be invalidating.

Thursday, 15 September 2016

(Ab)normal psychology

In a presidential blogpost at the BPS last month, Peter Kinderman reiterated an argument from his book that there is no such thing as abnormal psychology. I have spoken to this debate once before, when I reviewed his book on this blog. Here is what I said then:
A superficially appealing argument raised here is that "abnormal psychology" is an unreasonable field of study; after all, we don't speak of "abnormal physics". There is an important idea here with which I find myself aligned. Using the word "abnormal" is indeed a needlessly unpleasant way of speaking about people, but the physics analogy doesn't fly. All physical phenomena are subject to the same basic laws (as far as we know), but that hasn't prevented the fruitful subdivision of their study into solid state physics, condensed matter physics, and so forth. When people have experiences of psychological distress, these tend to manifest in a propensity toward particular states of mind. Is it really so unreasonable to study these states in their specificity, cautiously categorising them until some better framework is offered?
I still stand by that more or less, but when I re-read Kinderman's argument this time around I felt more disposed to agree with something in the point he makes. What is he driving at here? It's a fun idea to probe.

Psychiatric diagnoses like schizophrenia can be said to be hypothetical constructs. That is to say, they are theories about the nature of entities (what type of entities is controversial) that are held to exist. Because it is still hard to find solid external criteria by which to independently validate their existence, they are sometimes said to fail as valid constructs. This is not a fringe argument. It is acknowledged far and wide within academic psychiatry. That is why the validity/utility debate has such traction within the discipline. I have pointed out before that psychiatric diagnoses survive because they come to act as a sort of stand-in term for quite real-seeming experiences. Where they aspire (and fail) as hypothetical constructs, they succeed as intervening variables.

What does that mean? In the 1948 paper that introduced and distinguished intervening variables and hypothetical constructs (MacCorquodale and Meehl), the former are simply a convenient shorthand for some collection of already observed (but potentially unexplained) empirical facts. The latter are supposed to be things that have some "explanatory surplus"; if you can propose a successful hypothetical construct, you will be able to make new (and accurate) predictions about reality.

That term explanatory surplus is key. Although there is a way of reading Kinderman's argument that is unfavourable to him (namely that one can of course plausibly divide the study of psychology into common and relatively uncommon processes), he is certainly on to something. Here are two reasons why:

1. In any given case in which an individual has a psychiatric diagnosis, I can make some rough empirical predictions based on aggregated statistical facts about that diagnosis. But because knowledge about psychiatric entities is generally obscured by how poorly defined they are (for now), I am largely at a loss to make tightly-specified predictions about individuals based on a decent theory. "Abnormal psychology" is a collection of useful observations about how certain people behave and what processes are present in particular groups. 

2. Even if I did have a well specified set of facts about such and such a psychiatric entity, the majority of any given person's behaviour will still be best explained by facts that are common to all people. Thus I can usually understand instances of aggression in terms of things like a person's likely fears and wishes, combined with the situational context they found themselves in. I can then add on some nice-sounding post-hockery to the effect that they have "poor impulse control" (a variation on a capacity we all have more or less of) or something similar. In the absence of a well set out aetiological theory of any disorder (giving it "explanatory surplus"), I don't really have an explanation yet. Most people's behaviour (even that of psychiatric patients) can be mainly explained by principles that have been derived from general psychology. Only a little is added by factoring in the useful observations of psychopathological research.

I have to be careful. I am not downplaying the value of clinical psychological research. Nor am I one of those people who wants to deny that there could be something like illness processes present in many cases of DSM diagnosis. I think it is unambiguously clear that abnormal psychology exists in the context of neuropsychology and neurology. But I suppose I agree with Kinderman insofar as I think that most of the behaviour of most people can (and should, as far as possible) be understood in terms of the things that are common to everyone.

Monday, 29 August 2016

Delusions and Verisimilitude

What's the one thing everyone knows about delusions? That they're false beliefs. Not so fast. Already we have two problems. First, there is much debate among philosophers about whether they are really beliefs (recently the linguist Dariusz Galasinski has written a fascinating post about whether delusional utterances even always have to be propositional statements).

Additionally, it's not clear that delusions always have to be false. An oft-repeated sentiment in psychiatry is that even a true belief ("my wife is cheating on me") could be delusional if held with the right (or, I suppose, wrong) sort of conviction. That idea is usually attributed to Karl Jaspers, but not having read him yet I can't confirm. I have also seen it attributed to Lacan, but I don't recall coming across it when I read his weird, poetic Seminar on the Psychoses. Perhaps someone could point me to the source.

I am intrigued by the possibility that a true statement could be a delusion. It seems superficially rather a contradiction in terms; after all, "delusional" is a rhetorical way of describing something as patently false. Nonetheless it makes some sense. It seems possible to have a pathological conviction about something true. Imagine correctly insisting that it was raining outside when you had no means of knowing it was so. Healthy assertion about most things contains within itself the germ of the possibility that the person asserting could be wrong.

It seems then that we can be delusionally correct in certain circumstances. Is there also another way delusions could be true? Could they be, as I think some therapists would like to suggest, a form of communication about reality? Could it be that delusions, even quite wild ones ("I am having my mind read by the president"), have some element of truth to them?

Some people think so. For example, one therapeutic approach suggests finding the relatable component of any delusional utterance and focusing on that. A supervisor recently told me that she was once confronted by a patient yelling "we're at war!" and responded by saying "you must be very frightened". I can't resist the detail that the patient turned out to be flatly correct (it was 2003 and she was referring to the outbreak of the Iraq war), but my supervisor's approach was a good one, I think. When someone says something to you that is on-the-face-of-it at odds with your understanding of reality, it seems more communicatively cooperative to find the part that both of you can make sense of. "You think other people can read your mind? Well that must feel terrifying and very exposing."

Such an approach sometimes gets packaged up as a form of relativism or pluralism, the idea being that there is no such thing as one truth. That might feel quite comfortable for people of a certain philosophical persuasion; if you are a pluralist or post-modernist about truth, then you needn't be troubled by the idea that any given statement is false.

Do we have to have different truths?

Unfortunately I like the therapeutic stance but not the philosophical posture. I have come to believe in truth. By this I just mean that I think that some states of affairs are the case, while others are not. What is more, I think most people secretly agree. If you jump off a bridge (all other things being equal), you'll end up moving downwards. If nothing were true, it would be supremely weird that we managed as a species to agree about so many things.

So what do we do? Can we still say, following sympathetic therapists, that delusions have some truth to them? I think we can. Beyond the idea that some statements are true and others false, there is the idea in philosophy of science that any given statement can be more or less true; that is, have more or less verisimilitude. Take these two statements: 1. "There is no such thing as schizophrenia" 2. "Schizophrenia is a real illness". They seem mutually contradictory, as though they couldn't both be true. They certainly cause a lot of argument. Regular readers of this blog might already have a view about which is right and which is wrong.

I think such arguments usually arise because the people apt to make one of those statements often think they are saying something that could be simply true or false. This is a mistake. One of the reasons such contradictory statements can exist (and it is surely one of the reasons that relativism about truth is such a respectable position in some circles) is that so many of the claims made in this domain are impossibly under-specified as truth-assertions. That is to say, my example statements 1 and 2 use such loosely understood words ("schizophrenia", "real") that we cannot gauge their truth value without interrogating some hypothetical speaker to get further qualification.  For what it's worth I think both statements have some verisimilitude. I go back and forth about which one I think has more truth than the other, but they are both getting at something basically correct.

How does all this help us with delusions? Delusions are like the statements studied by philosophers of science. They are often (though not always) statements about how the world is. If this were not so we might not bother calling them delusions to begin with. People falsely claim they are being watched; that they are of unusually superior ability; that they are infected with some fatal disease. It seems right to be aware that such beliefs are often untrue. At least the headline assertion is often false. However delusions are usually complex and under-specified statements. Minimally, a person who makes an outlandish claim about the world is also making a less ridiculous one about what it is like to be them at that moment. Broadening our view somewhat, they might be making a quasi-metaphorical statement about some aspect of their environment. I will not be saying anything radically new if I suggest that sometimes, delusions are informative in surprising ways.

Therapeutically this is nothing new. Sympathetic listeners have long held that delusions contain something true. Confronted with an uncomfortable contradiction between a patient's beliefs and their own, many people's instinct seems to be to assert the possibility of a plurality of truths. People of some philosophical persuasions (self included) find this too wishy-washy. Perhaps verisimilitude can help us square the circle. 

Monday, 15 August 2016

Trump: A psychological fiction

Nobody predicted it. A chronic narcissist they said. Mentally unstable. Not fit for office. But 2016 was that sort of year. The unthinkable had happened time and again. In retrospect the rise of Donald Trump to the presidency seems inevitable. Already the succession of events seems pre-destined: a global economic downturn, combined with the shift of manufacturing jobs overseas, guts the white American working class financially, at the same time as the rise of a triumphant cultural liberalism alienates them socially. Trump was able to ride to power on a double wave of anger. The story seems designed for school history books.

It took no time at all for Trump to look seriously out of his depth. What had looked like confident bluster for most of the previous year (and had so pleased that section of the population that had voted for him after years of feeling sick at being condescended to by the "liberal elite") started to lose its sheen even for the Donald's most ardent fans. It was one thing for Trump to swagger onto one of his golf courses in Scotland during the UK's EU referendum. It was quite another to watch him garble his way through his first joint press conference with the proficient Theresa May. For the first time in living memory a US president played second fiddle to a UK Prime Minister. Worse, for former Trump supporters, this was a woman!

Again and again Trump looked foolish. His gaffes piled up: mixing up North and South Korea during his inauguration address, appearing to think Francois Hollande was the Canadian premier, and of course the unforgettable backtracking on the great Mexico-US border wall as it transpired almost immediately that such a project was utterly unfeasible. Never in history had a president looked so hopeless so quickly after taking office.

But the really unpredictable part came next in mid 2017. Rumours began to circulate that the joint chiefs of staff were plotting to find some way of dealing with Trump. Not unseating him (a straight coup would have been too de-stabilising for America), but subtly moving to de facto rule by military until the 2020 general election rolled around. Despite America's historical love of democracy, there was a quiet sense that most of the population would have supported such a move. Americans may have been sick of being governed by politics-as-usual career politicians, but they had no wish to see the country driven to complete destruction by someone as nasty and stupid as the president.

Trump's bluster began to falter. For a man with a historical lack of any apparent humility (or capacity for self-reflection) he started to seem far quieter. Interviewers noticed a calmer quality. He was famously photographed leaving a briefing in the Oval Office with tears in his eyes. Suddenly Trump's mental health was in question again; tabloids ran crass stories about him losing it, buckling under pressure.

And then the game changing press conference on the White House lawn in September, reading tearfully, but with unprecedented dignity from notes on a lectern. "Fellow Americans, I have a burden I wish to share with you today; the burden of a man who has battled all his life with crippling shame and self disgust." The journalists were aghast. Was this a bizarre trick? A resignation? Had Trump finally gone mad?

He continued:
During my campaign a lot of people threw a lot of diagnoses at me, a lot of hateful terms. That hurt, but I did what I have learned to always do, to shrug it off and roll on. I knew I could ignore the haters, even feed off them. I had never known failure before, not real failure, so I rolled on, thinking I could just keep my head above water. But in my months as president I have learned something profound; something which has changed me more than I can hope to convey to you. Those wannabe doctors throwing diagnoses at me? Well, painful though it was to admit it, I have come to see they were right. Here's what the doctors say they mean by Narcissistic Personality Disorder:
Trump pulled out a piece of paper and read out the DSM-5 criteria for NPD. He laughed at each item on the list, with the Washington press pack (nervously at first) joining in too, sharing with him this unprecedented self-disclosure:

  • Grandiosity with expectations of superior treatment from others
  • Fixated on fantasies of power, success, intelligence, attractiveness, etc.
  • Self-perception of being unique, superior and associated with high-status people and institutions
  • Needing constant admiration from others
  • Sense of entitlement to special treatment and to obedience from others
  • Exploitative of others to achieve personal gain
  • Unwilling to empathize with others' feelings, wishes, or needs
  • Intensely jealous of others and the belief that others are equally jealous of them
  • Pompous and arrogant demeanor
Sounds like me right? Well, at least it sounds like the me of last year, and the me of my entire life up to now. I've been that guy everyone calls 'arrogant'. I've been the pompous entitled guy who bullies and intimidates to get what he wants. But I have to tell you, there is another side to all this that the psychiatry textbooks don't play up; the feeling of vulnerability, shame and goddam self hatred underneath it all!
He was getting tearful again, and across America, so were millions of others. Blue-collar workers who had voted Trump to stick it to the liberal elite; New York intellectuals who had hated Trump and everything he stood for. Blacks. Whites. Latinos. All across the country, people united in shared emotion at the disclosure suddenly being made by the most powerful man on the planet. Trump went on to describe the intense feelings of loneliness and shame he had experienced all his life, and which he had protected himself from using a defensive shield of confidence and grandiosity.

What Trump did that day changed America's understanding of mental health, and of Narcissistic Personality, forever. Trump made a radical shift toward collective governmental decision making, openly acknowledging his own limitations: "now I have been open about how I used narcissism to defend myself, I don't have to hide my own lack of knowledge or experience; I can learn! It's liberating, really." Bullying bosses across America rethought their behaviour as the president role-modelled a strong but fallible leader. Books appeared describing that hidden underbelly of narcissism: the fear and insecurity it hides. The American Psychiatric Association revised the DSM to more strongly emphasise that "true self" core underneath the defence. And slowly the term came to have a less insulting ring as the population at large stopped associating it with brashness and arrogance, and held in mind instead that fragile, frightened person underneath to whom we can all relate.

As I write this, in 2018, Trump's approval ratings are middling, but there is an unprecedented sense of warmth and respect for someone who, having brought the country to the brink of crisis, managed to weather his own psychic storm so rapidly. Americans have weathered that storm with him, and there is a feeling that somehow leadership has been changed forever.

Saturday, 6 August 2016

"None of that was real": Folk metaphysics and psychopathology

A brief selection from Irvin Yalom's latest book of psychotherapy vignettes: A newly qualified clinical psychologist (Helena) seeks psychotherapy with Yalom after realising that a recently deceased friend and travelling partner would have met criteria for bipolar disorder. Reflecting on their exhilarating travels together, Yalom’s patient expresses an unsettling worry:

What I used to consider the peak of my life, the glowing exciting center, the time when I, and he, were most thrillingly alive—none of that was real. (Yalom, 2015. Italics in original).

There is a peculiar sort of folk metaphysics on display in this complaint. Helena has just qualified as a clinician and now reinterprets the behaviour of a gloriously energetic friend in terms of illness. Perhaps aspects of the friend’s life do make sense in terms of his having a mood disorder, but the idea of such an illness seems to rob some of his experiences of perceived authenticity. Although it is less tangible a harm than stigma, detention or forcible medication, this sense of lost reality seems to be a profound and damaging alteration in conscious experience.

Here in microcosm we see a hint of how people (even clinicians) think about psychiatric disorder categories. Not real. Not mine. Not me. I feel bad for the clinical psychologist's patients. What an impoverished and concrete way she has of thinking about their experiences.

Tuesday, 12 July 2016

Reviewing the Literature: Fiction's Clinical Psychologists

This post is a callback to a talk I saw in March, in which John McGowan from Salomons explored the way psychiatrists are represented in film. With so many villainous psychiatric professionals, he asked, is Hollywood ready for a hero psychiatrist? This is a good question to explore. How psychiatrists are represented in film and literature is an index of how they are held in our collective imagination. His survey of recent films suggested something like ambivalence toward the mind doctors. However, psychiatrists have been heroes in the past. The final scene in Hitchcock's Psycho, when Dr. Fred Richman (or Richmond? IMDB is ambiguous here) delivers a confident breakdown of Norman Bates' mental process, is very telling. That was the very early 1960s US, when psychiatrists ruled the cultural interpretation of the mind with their baroque psychoanalytic theories and formulations (until the 1980s they dominated US psychoanalysis, as other professionals were excluded from analytic training institutes). Hitchcock's doctor is dashing, fluent and definitive. He is also almost omniscient. What a good thing that we don't hero-worship our mind doctors like that anymore!

By now psychiatrists have quite a long and complex place in film and literary history. What about clinical psychologists specifically? You don't find them very much in fiction. Pat Barker once wrote a novel, Border Crossing, about a child psychologist, and the Sixth Sense's Malcolm Crowe is a psychologist too. They may not be heroes, but they are good guys: sensitive, intuitive and basically virtuous. Often though, clinical psychologists are lumped in under the vague category of mind doctor. You don't really know whether they are psychiatrists, psychoanalysts, psychotherapists, or what. Authors and directors don't really care about the boundaries between psy-professionals that we practitioners like to police so carefully.

There are two exceptions to this trend that I am aware of. They are revealing because they both tell a similar story. The first is from Norman Rush's incredible novel Mating, which is an epic story of love and the life of the mind, centred on the figure of Nelson Denoon and an unnamed female narrator. The narrator loves Nelson with a sort of respect and passion that can only be conveyed in a reading of the book. She becomes dismayed when, after she crosses the Kalahari desert to join him at a utopian community he has been developing, he develops a sort of passive indifference to ideas and projects he once found important. She enlists the help of a clinical psychologist to try to get him back, but has to grapple with the contempt in which Denoon holds them:



"About as respectable as colonic irrigation"! I found that part smarted a bit. Rush has spent a lot of time getting us to trust Denoon's opinion, and it is impossible to read the novel and not love him yourself. 

Another example is in Will Self's How The Dead Live. Self is very interested in (and skeptical about) the psy professions. His character Mr. Khan is specifically a clinical psychologist (in contrast with one of Self's recurring characters Dr. Zach Busner; a psychiatrist), and seems to have been included as a form of deliberate professional satire. Here he is meeting the novel's main protagonist Lilly Bloom, who is about to die of breast cancer (a fuller excerpt is here, and is worth reading):


Yuck! What an unattractive portrait, albeit one I recognise in real-life descriptions from people who have received help from clinical psychologists. At the risk of denigrating my colleagues, I wonder if there is a grain of truth in Self's unflattering depiction. To some extent he is wringing a laugh, using a self-important professional for comic relief. But satire is a serious business, and it should make us think about how we are with people, and in what ways we fail to accurately see ourselves in those interactions. I'd be interested to know about any other clinical psychologist appearances in the fictional domain. How else are we perceived and imagined in the culture?

Sunday, 19 June 2016

What do we talk about when we talk about schizophrenia?

I have been gleefully reading Kieran McNally's book on the history of schizophrenia, which turns out to be a compendium of great detail and fascination. As someone who has spent a few years now trying to seriously orient myself in the history of this weird and sprawling concept (I was lucky enough to be allowed to devise and teach an undergraduate course on the history of schizophrenia), I am staggered by the scale of McNally's erudition on the subject. It makes the book enormously valuable both as a treasure trove (in addition to an almost 30 page long reference section, there is a further 10 pages of recommended reading) and as a contribution to our understanding of this unwieldy but influential idea.

The topics of madness and psychiatry have long had their groups of dedicated historians, but the history of schizophrenia itself can get sidelined. Often it is told as part of a broader narrative by people with an axe to grind (witness Jeff Lieberman's casually Whiggish history "Shrinks" from last year), or with other, bigger fish to fry (Richard Bentall's Madness Explained contains a nice conceptual history of schizophrenia, but it is not the main focus of the book). Such histories are, in any case, often predominantly externalist, meaning they focus on the social and economic context of madness (or on the personalities of famous psychiatrists), and not on the development of the ideas. McNally's book is avowedly internalist about schizophrenia. This means you won't find many colourful anecdotes about wacky doctors and their extraordinary patients, but the story of the concept's development (filling a space that has been peculiarly vacant) is no less entertaining. The book is built partly out of papers which McNally has published on specific historical questions, but it still comes together into a satisfying and revealing narrative.

Things Are More Complex Than They Seem:


This is a "critical" history in the best sense of that term; that is, McNally introduces layers of complexity and nuance to a narrative we already think we know. The rough outline of schizophrenia's past is well rehearsed: at the turn of the 19th/20th centuries, Kraepelin separates Dementia Praecox from Manic Depression, and Bleuler re-names it "schizophrenia", partly to avoid the degenerating quality implied by "dementia". Psychiatrists disagree wildly about how to define it, until a series of refinements (Schneider's first rank symptoms, the Feighner criteria) lead into a universally accepted definition in DSM-III. There are two major waves of disruption (Poland's "socio-political" critics in the 1960s, and "scientific" critics from the 1980s to the present), and a future rendered uncertain by the rise of NIMH's RDoC initiative.

Several major strands in this story are unwound by McNally, revealing how official psychiatric knowledge transmission warps the field's history. To begin with, it is convincingly demonstrated that the notion of schizophrenia as "split personality" (which psychiatrists have spent decades defining schizophrenia against) is not some popular misconception perpetrated by an unwitting public but was, for many years, built firmly into the professional understanding of the category. Thus psychiatric textbooks spent roughly the first third of schizophrenia's lifespan describing it in terms of psychic splitting, and the next two thirds repudiating that conception.

Officially, Kahlbaum's idea of Catatonia (which was incorporated into schizophrenia) has been "disappearing" from the diagnostic scene, possibly because of improved medication. In fact, argues McNally, it may never have been very prevalent, nor very conceptually coherent ("Taxonomy, consequently, made visible to science, in a ceremonial space, categories of people who were not in fact there." - p.95), and was only reluctantly accepted as part of the broader schizophrenia classification in the first place. In another vein meanwhile, the popular Bleulerian mnemonic, the "four As" (disorders of association, affect, ambivalence and autism), is at best an over-simplification of Bleuler's writings, and at worst a distortion. Some texts have five As, and others disagree over what the As actually are. In any case, Bleuler did not write in such glib snippets, and the mnemonic only appears some fifty years after his text, probably for the benefit of trainee psychiatrists who felt bad that they couldn't find time to read the original.

These are just headline findings. It is not possible to do justice to the richness of the text, which brings out much needed detail from schizophrenia's murkiest period: the space between the appearance of Bleuler's 1911 book and the emergence of the first DSM. During those forty-something years, psychiatrists were particularly divided over what schizophrenia meant, and how it stood in relation to the idea of dementia praecox (which actually survived in some dusty corners until the late 1960s). Importantly, McNally can read German and French, and can thus go back to original source material in a way that is rarely done. So much of the self-recounted history of psychiatry (Lieberman's book is a prime example) hews closely to the living memory of the teller. Thus anything much before the 1950s has been increasingly excluded from the profession's autobiography.

Ahistorical Psychiatry:


One theme that runs throughout is what McNally describes as "the ahistorical nature of psychiatric thought" (p.126). Psychiatry, he points out, has persistently neglected the development of its own concepts, leading to simplification and dilution of its ideas (some psychiatrists have also lamented this tendency). This is how ideas pertaining to catatonia, split personality, and Bleuler's "four As" can end up so awry.

It's tempting to hope this doesn't matter. As Thomas Kuhn pointed out, all successful scientific research is in the habit of forgetting its history ("Why dignify what science's best efforts have made it possible to discard?" - The Structure of Scientific Revolutions, p.138). But it does matter deeply. There is serious doubt about whether psychiatry is a scientific enterprise (a psychiatrist once told me that he had chosen his profession because it was the only branch of medicine prepared to admit it was not a science), and no good can come from simplistic reification of ideas at the expense of describing real experiences. Recent research by Nev Jones has highlighted the peculiar and disquieting effect when people doubt the validity of their experience because it fails to match canonical DSM descriptions. To accurately describe people's subjectivities, psychiatry needs depth, and for all its flaws, the detail one can find in Bleuler's clinical writing conveys a sense of people, and what ails them, that checklist diagnoses sorely lack.

Contra Metzl?


It is peculiar that McNally devotes a whole chapter to the issue of how schizophrenia fed into social discrimination, and a section therein to its specific racial biases, but nowhere mentions Jonathan Metzl's The Protest Psychosis. Metzl's thesis is that schizophrenia became a "black disease" during the late 1960s, when DSM-II took away the suffix "reaction" from the diagnosis, and psychiatrists implicitly came to associate paranoid projections (an important concept in understanding psychosis at the time) with the representations of black political activists. Possibly he does not concur with Metzl. By McNally's reading, schizophrenia was already a black disease long before DSM-II or even DSM-I, being over-diagnosed in black populations in studies in 1925 and 1931.


Beyond the Horizon


There is sometimes a sense that McNally over-does the ludicrous quality of schizophrenia research (though, I would hasten to add, not by very much). For instance, in an entertaining early chapter he reviews the extraordinary litany of long forgotten sub-divisions and related concepts. Speaking of a schizaxon, schizothymia, schizomania, schizonoia, schizobulia, schizophasia, schizoparagraphia, or of a schizovirus all seem rather absurd now (especially when you put these schizo- prefixes together). McNally groups Meehl's (1962) schizotaxia in with these redundant concepts, painting a picture of one more junk idea in the scientific dustbin. But although it's fair to point out that no-one now speaks of schizotaxia, it is misleading to suggest that Meehl's idea fell by some historical wayside just because the term didn't catch on. In teasing apart a conceptual referent for "schizotypy" (a sub-clinical, at-risk phenotype) as opposed to schizophrenia (a clinical disorder) and schizotaxia (a heritable disposition), the framework presented in Meehl's paper has provided a powerful organising principle for schizophrenia research ever since. Whether they know it or not, contemporary investigators are indebted to the idea of schizotypy (which is actually very popular right now). Schizotaxia (even if undesirably named) is perfectly conceptually coherent. Nobody now talks in terms of Albert Ellis' "musturbation" (to mean the anxiety-provoking feeling that one should achieve some unreasonable thing), but that doesn't mean Albert Ellis didn't play an important role in re-conceiving the function of psychotherapy.

As I mention above, McNally is not interested in pushing an agenda for researchers, though one suspects he thinks they should be more historically literate. However, it's impossible to read this book without wondering about the problem of schizophrenia's conceptual unwieldiness. McNally is, at the very least, skeptical, and wonders in his conclusion whether the side effects of medication are too high a price to pay for treatment, given that the idea of schizophrenia has "often failed to justify itself" (p.210). The validation of schizophrenia is frequently postponed to the future, to some shining technological breakthrough in which psychiatry anchors its concepts once and for all. Once again, the idea of abandoning schizophrenia is in the air; should we stop talking about it? Should we call for a paradigm shift? If only it were so simple.

I have argued before that schizophrenia's flaws are undeniable, but we lack a compelling alternative. Paradigm shifts (at least in the Kuhnian sense) take place when a theoretical framework arrives that makes it untenable to speak in terms of its predecessor. Schizophrenia is just over 100 years old, which isn't that long in the tooth for a productive but strictly false programme of research. Phlogiston theory organized research in chemistry from 1667 to 1780, though researchers probably had a sense it was flawed for a while before they could figure out a better way of thinking. Unlike oxygen theory, none of the competitors currently being mooted in the psychiatric domain (a focus on specific symptoms or complaints, or on individual formulations) is formally incommensurable with a theoretical disorder called "schizophrenia". Until a theory arrives that makes tighter predictive claims, we are stuck with a hot mess.

Friday, 3 June 2016

Medication, Phenomenology and the Nocebo Effect

A great recent paper by Gibson and colleagues undertook a thematic analysis of people's responses to being asked about taking antidepressants. Some of what they described was very negative. This is not a surprise (it is well known that many antidepressants have a significant side effect profile), though it is important that it has been documented. Here, for example, is a striking description:
Each one has had a worse effect than the previous…. I can’t remember them all. It started with memory loss then progressed to me becoming borderline catatonic staring at the wall for hours unable to stand up. Within a few weeks and genuinely terrified. It was a relief to go back to the misery of depression after these experiences
In addition to descriptions of what we can designate as direct negative physiological effects, another negative theme that emerged was "loss of authenticity/emotional numbing". This is a more slippery experience; a sort of phenomenological unease arising from taking medications. Authenticity is an important part of our sense of who we are. To interfere with it may be less physically dangerous than a side effect like weight gain, but it feels somehow more metaphysically perilous. Take my body, but leave my self alone!

The authors of the study point out "This research points to the inadequacy of asking the simple question: ‘Do antidepressants work?’ Instead, the value or otherwise of antidepressants needs to be understood in the context of the diversity of experience and the particular meaning they hold in people’s lives." I agree, but I think even this form of the question can be complicated further.

We have become accustomed to thinking about the effects of antidepressant medications in terms of the placebo effect. Since (at least) a famous meta-analysis by Irving Kirsch (and a subsequent book), many have suggested that the benefits of antidepressants are not the result of a positive, active drug effect, but the mysterious workings of the various expectancy effects we call "placebo".

It's a popular idea, albeit one that has become fraught with controversy. I am not going to wade into the question of how effective antidepressants really are (if you want to think about that then you are in for a long, puzzling road; you could do worse than to start with James Coyne's provocative critique of Kirsch here), but I do want to suggest that, when it comes to expectancy effects and antidepressants, there may be a kind of asymmetry in how we customarily think.

Drugs also have nocebo effects; harmful outcomes that arise from the expectations of the people taking them. The nocebo effect (placebo's evil twin) is not something to be fooled around with. For a vivid account, read this case study of a young man who needed hospitalisation after he overdosed on the inert pills he was given during an antidepressant trial. If expectations about a sugar pill can do that, then, without doubting the flat reality of antidepressants' severe physical effects, we might wonder whether some negative effects, including feelings of phenomenological unease, could also result from such a phenomenon.

There is a veritable culture of suspicion about the phenomenology of antidepressants, and a strand of cultural commentary on psychiatric medications that sounds a shrill moralising note. Taking medications for depression is regarded by some as an inherently suspect thing to do. Two notorious skeptical pieces in The Guardian (by Will Self and Giles Fraser), around the time of the publication of DSM-5, both hinted at the idea that taking antidepressants was the result of false consciousness:
At worst, they pathologise deviations from normalcy, thus helping to police the established values of consumer capitalism, and reinforcing the very unhappiness that they purport to cure.
It is hard to imagine that none of this would loop back round and influence people's experiences of what it is like to take medication. Indeed, psychiatrist Linda Gask writes beautifully about the internal struggle over self-authenticity that can result from these ideas:
There are times still when I wonder whether the medicated me I’ve been for so long is the ‘real’ me, or are these tablets simply suppressing the person I truly am?
Contrast the Gibson study with the miraculous-seeming accounts of the experience of taking SSRIs when they were brand new. Peter Kramer's 1993 "Listening to Prozac" included the now famous (and oft-derided) claim from one patient that the drug made them feel "better than well".

Could it be that when drugs first appear, they not only benefit from a sort of placebo boost (in virtue of their novelty value), but also from the absence of culturally inherited nocebo baggage? Research on this question seems just as important as teasing out the beneficial effects that arise from inert substances. If there are such effects, what are the moral obligations that arise for how we talk about treatments and shape the expectations of those who take them?

Saturday, 21 May 2016

Scattered Thoughts on a Hard Subject

Spurred by Masuma Rahim's thoughtful piece about the issue, I have been thinking about psychiatric assisted suicide. She points out that this is an issue it is "very difficult to have a settled opinion about". I don't yet have one, but it seems important to understand some of what is at stake. This is a list of thoughts. The topic may be triggering, and this post should be approached with that in mind.

1. Any debate about psychiatric assisted suicide concerns the question of whether there are ever circumstances in which people with mental health problems should be allowed to receive help to die.
2. If healthcare professionals are to have any role in this process, part of it must be in trying to provide the best possible assessment of whether a person can reasonably expect their life to improve.
3. Unless we think assisted suicide is always unconscionable, we have to accept that there exist reasons that bear on those cases in which it is not. Clarifying those reasons will help us think more clearly about the issue in general.
4. Whether or not mental health problems are "illnesses" is of no relevance to the question of whether psychiatric assisted suicide is morally palatable. The desire to die seems driven by the intensity, and particular quality, of individual suffering. It is not clear that this suffering is more real in cases where an illness exists. Whether or not a person wants to die is likely to be a function of whether they think their misery will persist.
5. A domain-specific prohibition on assisted dying in psychiatry would appear to suggest that it is not possible for mental health service users to make reasoned choices about ending their lives.
6. Psychiatry has a long history of "great and desperate cures", driven by a desire to avoid feelings of hopelessness on the part of the doctor. How would we know psychiatric assisted dying isn't just the latest chapter in this ignoble tradition?
7. I have, in the past, walked on to a psychiatric ward and felt a chill at the idea that I could end up locked in a place that is organised almost entirely around the idea that I should be denied, at any cost, the freedom of killing myself.
8. To characterise this debate in terms of one group of people declaring another group "better off dead" is to fail to engage with the experiences of those who have advocated for their own right to psychiatric assisted suicide, or pursued it for themselves.
9. There is a particularly difficult balance to be struck between emotion and reason in this debate. We need to think calmly and clearly about psychiatric assisted suicide, but it is hopeless to try and avoid appeals to emotion. No-one can hope to understand what is at stake unless they take time to imagine what it is like to spend many years very seriously wanting to die. Equally no-one can hope to understand what is at stake unless they take time to imagine what it is like to lose someone to suicide.
10. We might wonder whether a policy like this would have a positive impact on the suicide rate. If people are aware that it is possible for them to die under medical supervision, that may reduce the intensity of some people's despair and desperation, making them less likely to kill themselves. Hope, even the paradoxical hope for death, might help people feel better.
11. Alternatively, a policy like this might increase the social visibility of suicide and diminish the taboo that surrounds it. This might lead to an increase in thoughts of death and more completed suicides, perhaps even a sort of contagion as the idea occurs to more people. Legal protection would put suicide into the "pool" of acceptable solutions.

Thursday, 5 May 2016

Genetic Disavowalism is the Denial of Privilege

Here are two recent strands of thinking about genetics in clinical psychology:

1. Oliver James's (and others') bold position that genetics play little or even no role in human psychology. Marcus Munafo has called this "genetic denialism".
2. The diffuse suggestion (one recent example here) that to pursue genetic research into mental health problems is related in some way to a eugenic agenda; that a genome-wide association study looking at the diagnosis of schizophrenia may, for instance, encourage us to think in quasi-fascistic ways.

There are some good responses to the first of these strands, in Munafo's article (linked above), and in this piece by Kevin Mitchell at Wiring The Brain. Here, I want to address the second strand, which I will call genetic disavowalism.

The purpose of genetic disavowalism is pretty clear; to encourage us to think of genetic research and theories of genetic risk as inherently negatively morally valenced. This argument (to the extent that there is an argument; it is seldom made explicitly) is a little under-cooked, to say the least. It is of course perfectly possible to acknowledge a genetic contribution to human behaviours and mental states without commencing some inexorable slide toward Nazism. Does the genetic aetiology of Down Syndrome commit society to a re-run of the Nazi Aktion T4 programme? Clearly not. For one thing, a eugenic policy is a choice a government makes, rather than a necessary consequence of a given set of scientific knowledge. For another, there is nothing to stop any government undertaking such a programme by targeting people on the basis of behavioural or cognitive traits it doesn't like but which are not genetically determined. Even if genetic theories about human behaviours and tendencies do incline some sorts of person towards ideas about eradicating those behaviours and tendencies (by "breeding them out" or what have you), there is no logical entailment, and we carry on with genetically inclined research because we wonder if there might be benefits to be derived from the knowledge.

Apart from all that, I think that genetic disavowalism has a moral problem of its own to contend with; the denial of genetic privilege.

We are accustomed to thinking about privilege in terms of race, gender or social class. As a white man, for example, I have the privilege of not being looked on with suspicion in certain neighbourhoods, and I have the privilege of not feeling tense when groups of NYPD officers walk past me. It has come to be seen as crass and offensive to fail to acknowledge our privilege, especially when discussing race (see Peggy McIntosh's essay on the invisible knapsack here), but the notion of privilege has been linked to mental health as well, by Martin Robbins here, and by me here.

When I first blogged about sane privilege, I was thinking in terms of the social position people occupy when they are viewed as less rational in virtue of their psychiatric status. When a person is considered deluded, their utterances become generally more suspect in the eyes of people around them. They lose certain testimonial privileges (some of their statements about reality are taken less seriously). But privileges are also conferred on us by our genetic predispositions. This is most obviously the case in the way that skin colour or primary and secondary sexual characteristics are genetically determined facts about our appearance, but it presumably has cognitive implications too.

To the extent that IQ is genetically influenced, my course mates or colleagues with IQs two standard deviations above the mean have an advantage relative to me (with my quite middling IQ) in performance on exams or the production of research and logically sound clinical arguments. Equally, to the extent that my genetics plays a role in my tendency not to have debilitating emotional "highs", or to feel my relationship with reality become terrifyingly fragmented, I have a sort of privilege conferred on me relative to people who are prone to such experiences. It is no good arguing that a tendency toward certain mental states is actually perfectly desirable, and should itself be considered a privilege. That may be so for some people, but unless we want to deny that mental health problems are frequently extremely difficult to live with (and unless we want to throw out even the apparently politically neutral term "distress" to refer to such experiences), we have to acknowledge that it is not the case for all.

Acknowledging cognitive genetic privilege need not entail acceptance of an illness account of mental health problems. Peter Kinderman has written movingly about his risk of a psychotic experience, given a possibly high personal genetic loading for such an occurrence. At the same time, he resists the implication that this means he has a disorder or "attenuated syndrome". Even if you feel more inclined than Kinderman to describe such a genetic loading as a predisposition toward illness, his is a perfectly consistent intellectual position.

Genetic influences on psychology have always been a controversial topic, and there is an easy tendency to accuse genetic researchers or thinkers of secretly holding eugenic aspirations. Perhaps some strains of genetic reasoning are infused with a negative moral valence (think of the pub bore who argues that women are genetically inferior), but to the best of our knowledge, genes make certain aspects of our lives more or less easy for us. They confer varying degrees of privilege. To ignore this is not only unrealistic, it is insensitive.

Tuesday, 29 March 2016

"Difference Makers" and "Background Conditions"

A group of clinical psychologists has made the case that the UK's Medical Research Council should spend more money funding research into the social rather than biological causes of mental health problems. Note the headline of the article reporting the story: "Mental illness mostly caused by life events not genetics, argue psychologists". The argument is clear; mental health problems are set off by life events, not by some underlying biological vulnerability.

This sort of claim about causality has consistently proved controversial. Oliver James recently ignited firm criticism from behaviour geneticists when he baldly denied the role of genetics in mental health problems. I am with the behaviour geneticists in that dispute; James's dogmatic environmentalism rests on a wilful misunderstanding of scientific findings, and on some very shaky arguments.

Environmentally inclined clinical psychologists often want to push back against a view that says most of the cause of mental health problems lies in our genes. There is a fact of the matter about this, and it does suggest a powerful role for predisposition. If we want to find someone who meets criteria for schizophrenia, our best bet is to find someone who has an identical twin with the disorder. Nothing else raises the risk so far (from its baseline of around 1% to 28%*). Because of this, many researchers now hold that bio-genetic vulnerabilities do the bulk of the causal work in psychosis (leading some psychologists to complain that environmental factors are marginalised by being reduced to the status of "trigger").

But even so, the claim that we underplay the environment's role as a cause may be warranted. Causality is complex and we assign different weights to different causal stories depending on what we intend to use them for. A criminal court, for example, may apply a "but for" test, asking whether the events under examination would have happened but for the actions of a defendant. This doesn't necessarily show us the full causal picture as it doesn't answer questions about why the defendant behaved as they did (indeed liberally inclined thinkers tend to feel that the criminal justice system focuses too much on individual responsibility and not enough on societal causal factors when punishing people), but it works tolerably well for assigning a certain sort of criminal responsibility.

Bringing environmental factors further into the foreground may serve a valuable purpose in the mental health debate. Consider an argument from Peter Zachar's book A Metaphysics of Psychopathology. Zachar brings out the element of choice we have in identifying causes. Exactly what we choose to call a cause depends in part on what aspects of the whole situation we consider "background conditions". He does not imply that the choice is limitless (he is not a relativist about causes), but he does suggest that where you turn your investigative attention may legitimately be a function of your interests; a function of what aspects of the total situation you feel to be most relevant.

Most relevant to what? To the interventions we can make to help people. Perhaps the enormous bulk of research that investigates the genetic and biological underpinnings of mental health problems takes a particular view about what can be seen as "background" and what can be seen as a "difference maker". If your aim is to develop medicines and genetic tests, then it makes sense to focus on neurotransmitters and SNPs, as these are the things you hope to change. They start to loom into focus as "difference makers". But it is also possible (especially in most mental health settings, where it feels like gene therapies or radically improved medications are a very long way off) to see these ingredients as part of the background. This makes sense in the light of a burgeoning "neurodiversity" movement, which re-frames genetic variation as normal, and thus undermines the notion that this or that genetic predisposition (to schizophrenia say) is itself a relevant pathological "difference maker".

What motivates psychologists who see trauma and "life events" as significant in causing mental distress is a refusal to see various forms of adversity as a "background condition". Sure, genetics plays an important role, these researchers suggest, but the public health implications of that fact are not immediately clear. Meanwhile, the public health implications of an aetiological role for traumatic life events are obvious; we should aim to stop people being exposed to them. As Peter Kinderman says in the article I linked to at the top, "when unemployment rates go up in a particular locality you get a measurable number of suicides".

If asked, I am sure Kinderman would deny that a change in economic circumstances is the whole causal story in any given suicide. Likely a host of factors (personality variables, social support network and so forth) combine to create something like more or less "resilience" in people. But unless you can intervene to improve that resilience, it makes sense to push it some way into the background and focus on things you feel you can change. If you do this, life circumstances and political events start to look more like "difference makers", even if we can still have a debate about what constitutes a cause.

______________________________

* UPDATE: I originally cited the figure 48% here, reflecting the commonly quoted probandwise concordance rate for schizophrenia in identical twins. 28% reflects a lower estimate of concordance, based on a pairwise concordance rate. There is some controversy over which rate to cite, and as this post was an argument for greater focus on environmental factors, I did not want to lay myself open to the charge of minimizing the genetic contribution. However, it was suggested to me that the probandwise rate is an inflation of the true concordance rate, and for the time being I'm inclined to agree. Nonetheless, there are good arguments for using the probandwise concordance rate, and when I have better understood the issue, I will try to write a post outlining them.
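For readers unfamiliar with the two rates mentioned in that update, here is a minimal sketch of the arithmetic, using invented pair counts rather than figures from any actual twin study. With C twin pairs in which both twins meet criteria and D pairs in which only one does, the pairwise rate is C/(C+D), while the probandwise rate treats each affected twin in a concordant pair as a separately ascertained case, giving 2C/(2C+D).

```python
# Hypothetical illustration of pairwise vs probandwise concordance.
# The pair counts below are invented for the example, not taken from
# any actual twin study of schizophrenia.

concordant_pairs = 14   # both identical twins meet criteria
discordant_pairs = 36   # only one twin meets criteria

# Pairwise: what proportion of pairs are concordant?
pairwise = concordant_pairs / (concordant_pairs + discordant_pairs)

# Probandwise: each affected twin in a concordant pair counts as a
# separately ascertained proband, so concordant pairs are counted twice.
probandwise = (2 * concordant_pairs) / (2 * concordant_pairs + discordant_pairs)

print(f"pairwise:    {pairwise:.0%}")     # 28%
print(f"probandwise: {probandwise:.0%}")  # 44%
```

Because concordant pairs are counted twice in the probandwise calculation, that rate is always at least as high as the pairwise rate, which is why the choice between them matters for an argument like the one above.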

Friday, 25 March 2016

Mental Health Conferences and Service User Inclusion

I'm just back from a fantastic conference laid on by the History and Philosophy section of the BPS, and the Critical Psychiatry Network (thanks to Alison Torn at Leeds Trinity University for putting together such a great programme).

Beyond the content of the papers, I was struck by the way the event recapitulated an ongoing tension around the inclusion of "experts by experience" in academic spaces. Conferences like this are increasingly attended by people who have experience of using mental health services (a fact which seems essential if "critical" aspirations are ever going to bear serious fruit), but are they always included effectively?

One attendee noticed a bunching together of service user talks into a single session.

Did this encourage the use of kid gloves with service-user researchers? Or set up an implicit distinction between more and less "professional" research? Rather than dividing presenters up by identity (into service users and professionals, or experts by training and experts by experience), a useful distinction might be between people who are attending a conference with the purpose of presenting research and those who are giving testimony. 

There is nothing about service user produced research that makes me feel inclined to judge it differently than that produced by non-service users. It will be a very good thing for everyone if more research is conducted by people on whom it has a direct bearing, but it is subject to the same scrutiny as research conducted by anyone else.

Service users who deliver testimonials, however, are doing something very different. Their words are personal, and a degree of emotional risk is involved when you disclose intense experiences and give voice to anger. We don't subject this sort of testimony to the same degree of challenge, nor pore over it in quite the same "academic" manner as we do a theoretical exposition or literature review.

Making a research/testimonial distinction might create greater clarity about what we want service user inclusion to do for conferences (and for service users), because at least two distinct goals seem to be in play. One is that service users be included in mental health research in a way that expands our epistemological horizons and rejects a hierarchy that privileges some researchers over others. The other is for people to be able to speak at such conferences when they may not have the means or the interest to develop research per se, but nonetheless have something important to say.