Wednesday, 22 June 2022

Some barriers to sense making about ECT

The longstanding debate about ECT has been in the news again. A report about the treatment was discussed in an article in The Independent. There was also a strikingly rancorous debate on Woman's Hour (starting here at about 34.20). The context for all this is the finding that women receive ECT at a higher rate than men. The Independent article seems to present this as if it were a bad thing. That is only the case if ECT's benefits are outweighed by its disbenefits. Some people claim as much, but if they are wrong, then it might be good that more women than men receive ECT. Women are more likely than men to experience depression.

I have never got to grips with the details of ECT's efficacy. As a psychologist I feel I should have an opinion. This post is an attempt to engage honestly with the topic and form a view. Unfortunately there are several barriers to doing this. Here are three that I see:

1. Polarisation:

There exists one group of mental health professionals who administer ECT and profess that they would be happy to have it performed on them, and another group who oppose it to the extent that they call for a moratorium on its use. The way these groups interact on Twitter, in blogs, and in the scholarly literature suggests that there is a degree of ill feeling between these camps. 

To me, the most striking difference between these two groups is their characterisation of the nature of depression. For psychologists, depression perhaps evokes a milder picture: someone who feels low or sad, with self-denigrating thoughts and perhaps feelings of hopelessness, but ultimately ambulatory and potentially responsive to psychological therapy. For psychiatrists, depression also includes those individuals so severely impaired that they are at risk of severe neglect or death by suicide.

The temptation to "join" one side or the other is strong. Such "joining" is likely to increase bias and rationalisation. 

2. Florid rhetoric:

Rhetoric is a significant barrier to sense making in the ECT debate. For sure it is an intervention that makes me think of “great and desperate cures.” ECT is – there’s no escaping it – a pretty extraordinary way of proceeding. Its origin story is startling: its founder was inspired by witnessing the sedating electrocution of pigs at a slaughterhouse. But a startling origin story can be rhetorically inflamed and overdone. Stated baldly, many of the facts of medical practice would whip up frenzy in a newspaper headline (try redescribing a bilateral mastectomy without medical euphemism). Those who oppose ECT tend to be guilty of describing it in terms that make it seem prima facie abhorrent. Those who defend it tend towards excessively sanitising language that elides its negative effects.

3. Incomplete consideration of the stakes

ECT's opponents tend to discount the risks of no treatment. In the context of our conception of medical responsibility, failure to treat is also an intervention.

When people make the case against ECT, cognitive side effects are a central part of the argument. Sure enough, the only person I have ever spoken to about their ECT suffered autobiographical memory loss that they regretted bitterly. These effects are often cited as evidence that the treatment causes brain damage (incidentally, those who see ECT's cognitive side effects as a sign of brain damage also tend to argue against the framing of psychiatric disorders as brain diseases, even though it is well established that depression, schizophrenia and bipolar disorder are all associated with sometimes very severe cognitive impairment). Considered in isolation, these cognitive effects seem so obviously undesirable that only a barbarian would consider risking them.

However, I work in a major trauma centre, primarily with people with acquired brain injury, and the physical and cognitive aftermath of uncompleted suicide is a salient part of my professional life. At any time we usually have one or two people known to our service who were admitted for this reason. When psychiatrists make decisions about treatment of depression, this kind of aftermath is part of the stakes they have to weigh. It is in the nature of risk that the worst case scenario will not always be actualised. This doesn't mean it can be entirely discounted. We often choose a course of action with a more likely but less severe payoff so that we can avoid a less likely but more severe payoff. This is the structure of buying insurance. Considered this way, I think ECT becomes a far more reasonable proposition. Given a choice between the cognitive side effects of ECT, and the cognitive effects of a severe traumatic brain injury sustained by walking under a lorry or jumping off a bridge, I would most certainly choose the former.
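The insurance logic can be put in simple expected-value terms. Here is a toy sketch; every probability and harm score in it is invented purely for illustration, not a clinical estimate:

```python
# Toy expected-harm comparison, mirroring the insurance analogy above.
# Option A: likely but less severe harm (e.g. treatment side effects).
# Option B: unlikely but far more severe harm (e.g. catastrophic injury).
# All numbers are hypothetical, chosen only to show the structure.
p_a, harm_a = 0.60, 10     # probable, moderate
p_b, harm_b = 0.05, 500    # improbable, catastrophic

expected_harm_a = p_a * harm_a
expected_harm_b = p_b * harm_b

# Even though option B's bad outcome is far less likely, its severity
# dominates the calculation, so accepting option A is the rational choice.
print(expected_harm_a, expected_harm_b)  # 6.0 25.0
```

The point is structural: a small probability multiplied by a catastrophic outcome can outweigh a large probability of a moderate one, which is exactly why people buy insurance.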

Extant evidence: 

Forming a view comes down to making sense of the efficacy literature. This is hampered by many of the barriers discussed here. 

A high profile 2010 review by Read and Bentall concluded: "placebo controlled studies show minimal support for effectiveness with either depression or ‘schizophrenia’ during the course of treatment (i.e. only for some patients, on some measures, sometimes perceived only by psychiatrists but not by other raters), and no evidence, for either diagnostic group, of any benefits beyond the treatment period." They discount these minimal benefits as outweighed by "strong evidence ... of persistent and, for some, permanent brain dysfunction." As discussed above, this discounting may be unwarranted.
 
Meechan and colleagues recently published a literature summary that meta-analysed the ECT vs placebo trials that Read and Bentall had reviewed. Their analysis favoured ECT.

A response to that analysis by John Read (who opposes ECT) counters some criticisms made by Meechan and colleagues, but does not (as far as I can see) give grounds to discount the results themselves.

What are some things that seem reasonable to say about ECT?
  • ECT has cognitive side effects that can be long lasting, primarily a loss of autobiographical memory
  • The state of the evidence is fairly poor: there are only 11 placebo controlled RCTs that examine the issue. These RCTs tend to support ECT's efficacy in improving depression - the effect size is greater than that observed for medications and psychotherapy.
  • Any positive effects of ECT are likely to be short term: i.e. long enough to get someone out of a very severe depression, but not sufficient to maintain them in that state. 
  • More ECT efficacy research is needed.

Postscript:

In a blog for the "Council for Evidence Based Psychiatry" (CEP) Richard Bentall (a psychologist skeptical about ECT) articulated a position that I found surprisingly sympathetic:

"I have been challenged to explain what I would do if faced with a patient suffering from life threatening depression (to which the answer is: try other therapies but, if there really was no alternative and death was imminent, I would probably try ECT in desperation despite the questionable evidence of its effectiveness)."

I wonder how many psychiatrists - themselves worried about the limited research on ECT, but aware of the possibility it can be helpful - proceed in precisely this way? 

Thursday, 15 July 2021

Challenging behaviour

Psychologists are accustomed to thinking about "challenging behaviour" in terms of the mechanisms of reinforcement that contribute to sustaining it. I associate this view with the work of Eric Emerson - who wrote an excellent book on the topic. 




The Emersonian view focuses on the contingencies that reinforce behaviour. It gives rise to mechanistic interventions that adjust those contingencies: removal of the rewards that sustain a behaviour, easier routes to the outcome being sought, differential reinforcement of other behaviour (DRO). This sort of approach has been especially effective in Learning Disability settings, where people typically spend long periods of time living, and where staff are primarily concerned with supporting behaviour. There are also case reports of successful interventions in long term psychiatric hospitals. These are the sorts of settings where the consistency and commitment required to adjust environmental contingencies are in more plentiful supply.

Recently I have been involved in delivering "challenging behaviour" training in the context of an acute general hospital. I use scare quotes because the whole concept of challenging behaviour (a concept clinical psychologists are fond of) begins to fray a bit around the edges when examined more closely. For one thing, some have pointed out that the terminology itself is likely to stigmatise and mislead, framing behaviours as inherent to the people who exhibit them. This kind of critique seems right to me: behaviours are of course a function of the person and the situation they are in. Yet too much terminological pedantry can seem to miss the point. I have heard it suggested that "challenging behaviour" should be replaced by "behaviour that challenges," as though the two terms weren't in fact extremely similar. The sort of flexibility that would help us think productively about behaviour is likely best served by a similar flexibility in our approach to terminology.

More significantly, professional approaches to challenging behaviour are substantially impacted by the role the professional occupies, and the sorts of permissions they feel they have. Even before you get into adjusting reinforcement contingencies and so on, there is a significant difficulty about helping people feel permitted to do the basics. That sounds a bit strange - what am I talking about? 

When a person in a hospital becomes upset and behaves in a way that is experienced as challenging, the optimal response is for the staff around them to make sense of the behaviour (as a communication of distress and so on) and to respond in a way that mitigates the distress. So a person shouts and punches in fear. They aren't restrained, but instead are met with reassurance. They feel calmer and they shout and hit less. This sequence seems so obvious that I sometimes feel embarrassed to deliver the slides about it. People usually nod along enthusiastically. But its obviousness or otherwise isn't really at issue. For the most part - I increasingly feel - it's not that staff don't know how to behave, it's that they lack a subjective sense of permission to behave in this way.

Consider this story:

A young girl sits in the middle of a room in a psychiatric ward. She has removed all of her clothes and is rocking back and forth.

The hospital staff has invited a visiting doctor to look in on the patient. They talk about the girl, who was diagnosed with schizophrenia: for a long time now, she has done nothing but rock, and she has not spoken since being admitted many months ago. They ask the visitor’s opinion.

His response is wordless: he takes off all his clothes and steps into her room.

He sits down next to the girl and begins to rock in sync with her, two naked figures side by side.

This goes on for a while. This goes on for 5, 10, 15, 20 minutes.

And then she speaks to him. Her first words in nearly 200 days.

Later, the visitor will ask the staff, “Did it never occur to you to do that?”

The visiting doctor in that story was RD Laing, and it's hard to read it without entertaining the hope that you too would have the clinical nous to do something similarly dramatically empathic. Laing comes off as heroically insightful, drawing his colleagues' attention to their failure to do the obvious.

But such an approach is far from obvious. Assuming this really happened as reported, Laing's colleagues must have had doubts about what he was doing. The same is true - I want to suggest - even for responses with much lower stakes. Who wants to risk spending time listening when you don't feel qualified to deal with a person's tears? Who wants to try and reason with an angry shouting patient when you worry it might be done better by a more senior member of staff?

To connect with someone emotionally in hospital - even in a less dramatic way than Laing - involves a degree of risk. There is the risk of becoming emotionally impacted, the risk of feeling engulfed, and there is the risk of doing something wrong or foolish. Laing was free of this latter risk. He was Laing - a great psychiatric guru. Even if the encounter hadn't gone so well (the patient might not have talked - might even have baulked at the idea of being approached by a naked doctor) he would still have been lauded as a psychiatric genius taking an empathic risk. In other words, Laing was permitted by the whole set up to do something that was actually very difficult, even if he said it was obvious.

When we teach about responding to "challenging behaviour" in terms of a technical approach to reinforcement or distress reduction, we risk overlooking systemic factors that have to do with the granting of permission to respond empathically. My impression is that people want to respond empathically and know in theory what that looks like. A difficulty arises in any given stressful situation when people think that they are not the right person for that job, that someone more senior or more qualified ought to be drawn on. We could improve institutional approaches to "challenging behaviour" by finding ways to grant this permission as extensively as possible. 



Monday, 6 July 2020

The imposter's guide to imposter syndrome

Imposter syndrome is a concept that is having its time in the sun. Although first christened almost forty years ago in a professional article, it now seems positively de rigueur. Diagnostic concepts come and go, but the extent to which they take root in the popular imagination is an indication of how far they speak to broader social phenomena. This one co-exists with increasing awareness of gendered discrimination in the workplace. The article linked above is about women's experience specifically, and a recent book (Valerie Young's The secret thoughts of successful women) has expanded on the theme. Perhaps imposter syndrome might be recast as a description of the way that women have been made to feel by male dominated professional spaces.

Its widespread acknowledgment means there are some self-help interventions out there for chronic self doubters, with a dedicated website (impostersyndrome.com, the companion to Young's book) leading the way. These interventions tend to focus on a version of cognitive re-structuring, with a large helping of positive self talk. We are encouraged to "learn to think like non-impostors" by engaging in self directed pep talks: saying out loud that we are awesome, or making a list of "at least 10 things that show you are just as qualified as anyone else for the role you are seeking." This kind of approach involves entering into an argument with yourself about whether you are or are not in fact an imposter.

Having been through that self-argumentative cycle myself multiple times, it seems to me that the big problem with debating yourself out of imposter syndrome is the corrosive skeptical worry that you might be wrong. This doesn't need to feel plausible, only possible. On self examination I can easily find areas of relevant knowledge I feel I don't have, and evidence of times I failed to meet a relevant personal standard. "OK" I tell myself, "but everyone has limitations and failures." "Yes" I snap back, "but yours are worse!"

At this point it has helped me to notice what I am up to. I suspect a degree of characterological inclination toward self-sabotage; an overly aggressive super-ego. Here the ruminations on imposter syndrome become a self indulgent cocoon. A "poor me" reclusiveness, a way to hide in an endless cycle of self-abuse and avoid hard work. I have also noticed a decidedly unattractive inclination toward self deprecation in conversations - picking up on my own defects. The result is often reassurance from others, which is presumably the point.

In fact the whole concept of imposter syndrome sets up a specific question ("am I an imposter?") and invites arguments about how best to reach an answer. The dysthymic self attends to evidence that affirms the proposition ("I definitely fouled up in that meeting"; "I don't know half of what I need to in order to be competent"; "I can account for all my achievements in terms of luck rather than merit"). The positive therapeutic self is supposed to weigh evidence more favourably. But this is an effortful process, and runs the risk that (without the support of an external voice) you conclude that you are an imposter after all.

I have found it more helpful to engage a gestalt shift in attention that leaves aside the question of whether I really am an imposter. It even leaves room for the possibility that I am. The focus moves instead to the facts of any given professional situation, and to considerations of what ought to be done. Instead of looking in at the person, look out at the parameters of the task, regardless of who is undertaking it.

Viewed this way, the "imposter" question evaporates. It can even be viewed as a convenient evasion.

For the real issue in most professional scenarios is not so much you as the task you confront. The question is not "do I belong?" but "what has to happen?" In work situations you have already been selected for a job. Perhaps there was someone else on the interview shortlist who would have been better at it in some sense. Too bad. It is unlikely that person (or anyone else) can replace you imminently. The ethical thing to do is to keep your end up in the now, and that will probably involve hard work. You certainly cannot save the situation by appeals to the idea that you "belong" after all.

Are you going to direct your attention to the job in hand, or are you going to expend energy fretting about whether you really ought to be there? Unless you're in imminent danger of harming people (say, you somehow wound up convincing people to let you perform a surgery or fly a plane without the relevant qualifications), you most likely owe it to those around you to do your job as well as you can.

I realise this approach lacks the positive validating tenor of typical imposter syndrome self help. This is not a comment on the general validity of positive-validating approaches. It is a letter to myself, and so reflects instead my personal preferences. Deliberating over whether I am an imposter has resulted in some half hearted self-praise. But seeing the ways in which that whole game is a distraction has been altogether transformative.

Monday, 6 January 2020

Neurolit




Let me not be mad - A.K. Benjamin.
Bodley Head - 2019.
212 pages.

Into the abyss - Anthony David
Oneworld - 2020.
189 pages.


By the time Oliver Sacks died in 2015 he had become something of an untouchable. Not exactly a "national treasure" - but whatever the transatlantic equivalent of that might be. It seems strange to remember that, although he had attained sage-like status by his death (see Vaughan Bell's obituary post for a lovely example of the justified affection Sacks inspired), he attracted controversy earlier in his career. The ethical worry about Sacks was that he was "the man who mistook his patients for a literary career," writing for the "voyeuristic cognoscenti."

Whatever other legacies Sacks may have left, he arguably created an entirely new literary genre ("neurolit" perhaps?), and every publisher of popular neurological case studies since has been keen to get "Oliver Sacks" onto the cover in some form, to provide the requisite signal to browsers.

I could read Sacks endlessly. After he died I ploughed through many of his books I hadn't yet got to. I also worked through some of the expanding shelf of his literary progeny: Paul Broks, Suzanne O'Sullivan, Jules Montague, the list goes on. As a psychologist this is all work related, but it is also a guilty pleasure - like detective fiction or spy thrillers.

Two recent books in the Sacks tradition have revealed some of the different things it can offer. A.K. Benjamin is the pseudonym for a mysterious clinical neuropsychologist who writes in the idiom of a world weary psychoanalyst. Anthony David is an academic psychiatrist and giant of his field whose first foray into popular writing is a spare but immaculate primer on psychiatry.


Benjamin's book comes with something of a "twist," for we learn by the end that some of the clinical vignettes within pertain to his own mental health problems. This device is personally revealing, but the use of pseudonym necessarily takes some of the edge off. In the end it is far from the most interesting part of the book.

I wasn't taking notes as I read Let me not be mad, but so vivid and honest is the writing that much of it has implanted itself in my mind. Healthcare often involves an element of facade; of adopting a confident professional position and sticking by decisions despite the knowledge they could be wrong. Benjamin sees and - you sense - detests this facade.

In an extraordinary passage he writes of the emotional work that patients (sometimes) do to protect clinicians from the worst of their experiences. Our patients do us a service, Benjamin points out, by dying well. Most of us want a social encounter to feel comfortable and will collude with people, including clinicians, to make it so. This is a form of protection, but it is coursing in the wrong direction. The person receiving the salary ought to be the one doing the protecting. Benjamin invites us to recognise that it is often the other way round. That this is so brings into relief the importance of discomfort. If someone can make you feel uncomfortable, and you can sit with that discomfort and bear it, you might really be doing something for them.



The title of David's book seems to conjure something rather vague and mystical, but it in fact denotes a precise idea. Karl Jaspers once posited an "abyss" between the mechanistic and hermeneutic forms of understanding people. David sees the psychiatrist's task as bridging this abyss, providing a working understanding of people that draws on both. Much has been made in mental health of the biopsychosocial model. David points out that in offering an apparent theory of everything, this idea threatens to explain nothing. Here he is on the way psychiatrists ought to relate to the three strands, bio, psycho and social: "Every time we meet a new patient, we must decide which of the three, if any, is most important." (p.2) This is a punchy and pragmatic version of sense making that I recognise in the referral questions to a neuropsychology department, and it seems potentially at odds with my own profession's sometimes broadly inclusive formulations.

Each case study here seems carefully sculpted to reveal something important about the discipline of psychiatry - there is a patient with Capgras and Cotard's, who allows David to illustrate the meaning of the two-factor theory of delusions; another who illustrates the paradoxical neuropharmacological connection between psychosis and parkinsonism. Unusually for neurolit - David also includes a case that illuminates the issue of race and psychosis - allowing him to weave in some reflections on Frantz Fanon.

My own route into neuropsychology has felt weird to me - like many I was inspired by the intrigue of astonishing neurological phenomena and a desire to understand them. But I came into the field via an interest in psychoanalysis, phenomenology and mental health. What is the appeal of these books to this jobbing clinician? As a psychologist, working in neuropsychology, I realise I read them as something like a form of supervision. Academic texts give you statistical generalities, but there is nothing like a vivid account of the minutiae of clinical work and some startling clinical advice (at one point he abruptly announces to a family affected by head injury that they "will never be the same again") to help you really learn something.

A book like Abyss then is something like a series of lessons - archetypal illustrations of how people can be distressed. It expands one's clinical repertoire, opening up new possibilities for formulating complex situations. This is more or less what you would expect from a prominent expert in psychiatry. The cases are well worked through and served up with aplomb. But they are also like events from a distant past. As a clinician I envied David the unruffled clarity he brings to bear on each situation. A patient appears to recover from a crippling depressive guilt, only to throw himself under a lorry within moments of being discharged. David is unsettled, but seeks solace in Durkheim's writing on anomic suicide and emerges a wiser clinician with what reads like relative ease.

Benjamin's book offers something more emotional, and counterintuitively more reassuring. Here is an author who portrays the startling and graphic events that bring people into contact and inevitable emotional entanglement with neuropsychologists. He is vividly impacted by his patients and has gone searching in some unusual places (Tibet, his own dreams, psychoanalytic theory) to try and make sense of them. But as he shows in a remarkably drawn scene of an NHS team meeting ("the decision is made, the knife is readied, and nobody who was there can quite say what just happened" - p.67), there is never really anything like a complete sense to be made. The best we can hope for is to figure things out a little better and a little more usefully. The note of horror, the terrible senseless randomness in the world that lets brains grow tumours and collide with skulls at high velocity, haunts the pages relentlessly.

Friday, 3 May 2019

Therapy for therapists?

We see periodic flare ups - among psychologists and psychotherapists - of a debate about the importance of personal therapy for therapists. If you want to work in this business providing therapy to others, the central question goes, is it imperative that you undergo your own? The debate usually gets hot headed, perhaps because it is actually a form of culture clash between very different types of professional psychology. Across the spectrum of the "psy professions," the requirements vary. Many (most?) psychotherapy training courses mandate it. Others don't. It is constitutive of what it means to be a psychoanalyst that you are someone who has been psychoanalysed. Meanwhile, most courses in clinical psychology will not insist that their trainees receive any sort of therapy.

It seems easy to argue that personal therapy has value for several rather banal reasons: the experience of being in the “hot seat” (being encouraged to divulge personal details, opening up about your insecurities) can give you a sense of what you are asking people to do. The opportunity to discuss feelings about the ways your work affects you personally is also a way to ward off professional fatigue and "burn out." The privacy of the whole situation (it's hard to see what a therapy session is like without actually sitting in one) also makes personal therapy seem an attractive introduction to how the process ought to look - a sort of apprenticeship. 

There are also arguments that the case in favour of personal therapy is overdone. It is a very resource intensive requirement, hitching entry to the profession to a large financial outlay that some more "hard headed" clinicians argue is simply unnecessary. I sought therapy when I was training to be a therapist, but whatever value it had for me as a person, it was not the only (and maybe not even the best) source of insight about the intersection of my personal and professional development. Intensive supervision on the other hand was profoundly helpful. I had the good fortune to meet with supervisors who took a granular interest in the blow-by-blow of the sessions, and who were not afraid to tell me when they thought I was avoiding material out of my own anxieties, or getting wrapped up in a response that was aimed more at gratifying me than helping the person I was meeting.

Looking at my colleagues, I hesitate to suggest personal therapy is essential. Some of them are among the most gifted and emotionally insightful people I have met, but I suspect this is principally a result of their temperament, interests and life experiences. Many psychologists seem conscientious and rather neurotic, making them good candidates for extensive self-examination in or out of therapy. Others I have met seem somewhat emotionally unaware despite years of personal therapy. Hardly a convincing advert.

It may be that this is a case (one of many in my view) in which the psychoanalysts have made an accurate diagnosis of a problem (therapists without personal insight are a bad thing!), but don’t have a monopoly on the remedy. Much as “mindfulness” is actually a description of a wide ranging mental state, which can be facilitated in many ways other than those that have been made popular since the advent of professional meditation training, so "emotional insight" does not only need to arise from a particular form of two way conversation that became popular in the 20th Century. The flip side of this is that therapy is not a precise science and has no guaranteed outcomes anyway. It would seem weird therefore to put too much store by it as some sort of royal road to clinical wisdom. 

How could we settle the question? Apart from examining the efficacy of therapist therapy on outcomes (those who worry about the value of personal therapy for trainees often ask to see the evidence that it "works"), the most interesting way of looking at the problem would be a straightforward “taste test”. You could (for instance) take a panel of senior psychotherapy teachers who run therapy courses mandating personal therapy and have them interview a sample of psychotherapy trainees, some of whom have had at least a year of therapy and some of whom have had none. Would the trainers be able to discern the “analysed” from the “unanalysed” cases? If the trainers’ judgments about who had had therapy were no better than chance then that would seem a significant challenge to the dogma that personal therapy is doing something clinically relevant.
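If such a taste test were ever run, whether the trainers beat chance could be checked with a simple exact binomial test. A minimal sketch in Python; the function name and the 26-out-of-40 figure are invented for illustration:

```python
from math import comb

def binomial_tail_p(n_correct: int, n_total: int, p: float = 0.5) -> float:
    """One-sided p-value: the probability of getting at least n_correct
    judgments right if the trainers were merely guessing, i.e. the upper
    tail of a Binomial(n_total, p) distribution."""
    return sum(comb(n_total, k) * p**k * (1 - p)**(n_total - k)
               for k in range(n_correct, n_total + 1))

# Hypothetical result: trainers classify 26 of 40 trainees correctly.
p_value = binomial_tail_p(26, 40)
print(f"26/40 correct; one-sided p = {p_value:.3f}")
```

A p-value no better than chance across many trainers would be the "significant challenge" described above: evidence that a year of personal therapy leaves no mark that even its strongest advocates can discern.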

Ultimately the dispute is so intractable because what is at stake is two conflicting visions of the ways that it is possible for us to deceive ourselves. For the cognitively minded, a salient sort of self deception might arise from the various self-justifying biases that are tied up with the insistence upon personal therapy: that it may seduce you into thinking that you’ve got (that it is possible to get) your own psychological house in order; that it bestows an unchallengeable authority on the figure of the "well analysed" therapist ("I have the requisite sort of insight - you do not"), and that it is an arrangement principally suited to keeping psychotherapists in business through a steady flow of trainees who need to sit on the couch.

For the psychoanalytically inclined, the relevant self deception is tied up in the hubris of embarking on "the work" without sufficient knowledge of what lurks in your own unconscious. Enter the room as a psychoanalytic naïf and you will be hit by a storm of transferential and counter-transferential responses that you can’t make sense of. You will likely be pulled into a range of potentially harmful enactments with the people you’re trying to help. You'll be lucky to avoid causing harm to the client and to yourself, let alone providing help. I find this way of thinking fairly compelling. The rampant abuse of patients by mental-healthcare professionals (who didn’t – I assume – enter their field as aspirant sadists) looks like striking evidence of how harmful and surprising our unconscious motivations can be. Personal therapy - under the psychodynamic conception - is a way to start looking at the darkness that lies within you so you can stop it from wreaking havoc on those you work with.

The point of raising these two types of consideration is not to try to arbitrate between them, but to diagnose the whole problem with the debate as it is currently constituted. Both sorts of concern seem valid to me. I am convinced that therapists bring things into therapy that will impact on the work in potentially profound ways. I am also convinced that therapists are motivated to preserve their professional identity by appeal to the special significance of their hard-won clinical insight, and that this can often be overblown. If both sides can retreat from their favoured assumptions, a new way of asking the question might emerge.

Thursday, 31 January 2019

Psychoanalysis' unlikely innovator

There is a popular narrative about the history of psychoanalysis and schizophrenia: that it involved little more than the invocation of schizophrenogenic parents and amounted to victim blaming. This version of history is sometimes raised to engender doubts about any psychological theorizing in this area and shut it down. It’s intellectually healthy to raise such doubts. Contemporary psychoanalysts - worried about repeating historical mistakes - have grappled with them too. Here is an excellent essay by a psychoanalyst warning his colleagues to heed the "cautionary tale" of the schizophrenogenic mother theory.

But historical reality is more nuanced than the narrative that runs "psychoanalytic theory bad; biomedical revolution good": Frieda Fromm-Reichmann’s “schizophrenogenic mother” idea has come to symbolise the theoretical chauvinism of the age, but her legacy is more complex. She didn’t focus on aetiology as much as on therapy. Her heroic efforts are documented in Gail Hornstein’s fantastic biography.

More overtly anti-mother was the work of Theodore Lidz, who devoted a large part of his career to studying the dynamics of families of those diagnosed with schizophrenia. His descriptions of two schizophrenia-creating family patterns (skewed families locate all of the power in one parent while schismatic families split it between them in a perplexing civil war) contain toe-curlingly misogynistic descriptions of mothers. These ideas stuck around a long time. As late as 1994 the notion of schizophrenogenic parenting was still doggedly advocated by some authors, with no attention to the idea’s serious evidential shortcomings.

But there was a more subtle and integrative idea at large during the psychoanalytic heyday of American psychiatry. Sandor Rado was a peculiar figure in American psychoanalysis. Although an elder statesman of the field (he had known Freud well and was selected by him to edit two early psychoanalytic journals), he was cast out from the orthodox New York Psychoanalytic Society for his belief that psychoanalytic knowledge could not be separated from a sound understanding of neurology and genetics. “I believe that the influence of genetics, especially biochemical genetics, is going to be so enormous that it would be bootless to try to outline it,” Rado once said (see page 141 in this).


Rado coined a term that has become ubiquitous in modern academic psychiatry: schizotype. This portmanteau (a collapsing of “schizophrenic genotype”) was used to designate an individual genetically vulnerable to a psychotic decompensation. For Rado (who outlined his ideas in a 1953 paper) a psychotic breakdown represented a combination of this genetic predisposition and the very human process of adapting to the world in light of that predisposition. Although highly speculative and somewhat vaguely couched, Rado’s paper on schizotypy is notable for its almost Laingian level of phenomenological detail. His ideas about the relationship between the constitutional factor (an “integrative pleasure deficit”) and the dynamic contents of the mind were supposed to be the start of a serious mind-body theory of psychosis. But it wasn’t to be.

Although some theorists took note (Paul Meehl brought the concept to academic clinical psychology, where it slowly began to gain traction), American psychoanalysis at the time - which is virtually to say American psychiatry at the time - entirely ignored Rado’s idea. In fact “ignored” might be too suggestive of indifference. Some have gone so far as to suggest that Rado’s influence on psychiatry was repressed: “Rado and his collaborators were shunted out of the mainstream psychoanalytic journals and largely vanished from even their references and citations” (p.975 here). Instead American psychoanalysis became committed to ever more dogmatic assertions of the role of parenting in the development of schizophrenia. It's tantalizing to imagine how different it could have been.

Saturday, 24 November 2018

Matthew Parris misses the point

Matthew Parris has written an essay in The Times arguing against increased mental health funding. This issue is a sacred cow for politicians and the media, so I have some respect for Parris for going against the grain. Mental health is too important to be untouchable. Unfortunately the article is based on a series of misconceptions that undermine its argument.

Here is how the substantive line of reasoning gets going. Treatment should be based on science, Parris is going to say, and psychiatry isn't scientific:


This is a ludicrously limited understanding of science. There is virtually no area of medicine where patients behave like the billiard balls of Parris' schoolboy physics. Medical science is generally probabilistic; surgeries tend to have a particular outcome, babies tend to emerge in particular ways. Psychiatry is more probabilistic than most, dealing as it does with cognitions and affects rather than hearts and livers, but the empirical principles are the same. To try and rein in the uncertainty, patients are grouped together by diagnoses and provided with treatments. What works best is what gets recommended for the NHS. That formula doesn't always get implemented properly, and we can argue about ways to change it, but it's misleading to ignore the fact that it exists as an ideal.

To bolster his case about the unscientific foundations of therapy, Parris makes reference to psychoanalysis:


Parris is talking here about psychoanalytic therapists (the majority of whom are paid privately) as a way of arguing for reduced NHS spending. It's a spurious connection. We could have a whole separate debate about the merits of Freud, Adler and Jung, but in this context they are a distraction.

When he does turn to the dominant mode of NHS therapy, Parris' ignorance is astounding. First he paints a picture in which the only sort of research conducted into therapy's effectiveness is the most casual inquiry on the part of the therapist:


Then, with the sort of comic assuredness that can only come from profound ignorance of the literature, he cooks up a scheme for how such research might be better conducted:


These passages suggest that Parris cannot have read -- nor read about -- a single randomised controlled trial of psychotherapy in preparation for his essay on psychotherapy. Only someone in such a state of epistemic innocence could suggest with a straight face that, hey, psychiatrists might like to conduct a systematic test of their treatments one of these days. The problem he thinks he has identified was foundational for the project of psychotherapy research well over 50 years ago. The broad consensus of that literature is that psychotherapy is helpful. Parris could take issue with those results, but instead he doesn't acknowledge they exist.

There follow some rather waffly personal reflections on the ascendancy of therapy as a culturally available means of dealing with distress, and the vagaries of psychopharmacology. With regard to the first Parris says that he was averse to seeking therapy on the two occasions it was offered to him. This tells us little about the value of therapy per se and only about Parris' attitude toward it. Fair enough - psychotherapy should always be an informed choice. 

On the latter issue he makes some concerned noises, but may as well have pasted in the shrug emoji:


There follow some half-baked worries about Ritalin use in schools and the rising demand for antidepressants, but without any solid idea of what impact these things are having, Parris just sounds like Robert Whitaker without the conviction or the reference list.

All of this is building up to what amounts to a tepid call for hesitation about funding:
This is infuriatingly facile. It's just the old Cameron idea of the Big Society, warmed over. Moreover, however shaky the ground he has trodden to get to this conclusion, the things he has left out are even more damning. Everyone can surely agree that kindness is essential in mental health. What is so frequently missing, and what drives people to campaign for increased funding in the sector, is the means to implement that kindness.

Parris tries (unconvincingly) to debunk therapy and medication, but he doesn't even touch on the most important parts of mental health care. Put aside therapy, put aside medications: a huge amount of mental health care consists of things that are far more basic and important, done by people who aren't particularly interested in Freud, CBT or Ritalin. It can be found in practical support with daily tasks, in messy intervention at times of crisis, in advocacy, in needs and risk assessments, and in simple human contact.

The contemporary mental health crisis emerges not in the consulting room, but in A&E departments, job centres and community mental health teams. It arises because people in intense distress are showing up in all sorts of places that have neither time nor resources to help them. The political push for more mental health funding is not some rarefied thing that can be separated from the kindly pastoral figures that inhabit the conservative imagination. There can be no kindness where there are no people available to offer it. 

Wednesday, 17 October 2018

Making Up Symptoms

I have a short article in Psychiatric Bulletin about the question of historical variation in psychiatric symptoms. It's brief and rather speculative, but I hope fun and interesting.

The essentially private nature of subjective experience means that its occasional misdescription by mental health professionals is virtually guaranteed. Given the centrality of subjective symptoms in assessing psychiatric disorder, such misdescription could have important ramifications.

Here are two anecdotes about language:

During the morning handover meetings in an inpatient unit I once worked on, the shift manager would read a thumbnail description of each resident’s behavior over the last twelve hours. The phrase “responding to internal stimuli” recurred over and again, far more frequently than seemed plausible if you knew the people on the unit at any given time and their propensity to attest to hallucinatory experiences. What was going on here, I suggest, is that in some percentage of these instances, staff were witnessing a set of objectively describable behaviors (speaking aloud to oneself, laughing, ignoring others) and attributing them to inner events that are unobservable from without. The use of the right sounding psychiatric language (“internal stimuli”) was reassuring to staff, who felt they had something professional to say. Unfortunately it also contributed to the elision of the messier and more complex reality. 

In another setting I heard the Bleulerian notion of “blocking” used more than I had ever heard it anywhere else. You could see, in some staff meetings with patients, how the word was applied. It seemed to me that whenever a patient paused, struggled to find the words, or remained silent for any socially awkward period of time, this was apt to be described later as blocking. Thought blocking is often defined with reference to behavior (see the Wikipedia article here), but it is something that can only be identified by reference to subjectivity. Feeling that your thoughts have been blocked is not the same as simply stopping speaking. Additionally, verbalizations such as “he’s blocking” (blocking as a verb) imply something quite different from the passive concept (your thoughts being blocked) that Bleuler initially described. This shifting use of words changes our understanding of what people are experiencing.

It seems likely to me that this process of misdescription takes place frequently; it may be impossible to avoid. Mental health professionals receive, through their training and through clinical lore and fashion, a sort of rubric for how to make sense out of people who are behaving in ways that are hard to understand. Through such misdescription entire swathes of symptomatic experience may be getting essentially overwritten.


But this all still amounts to a fairly basic misapprehension, by one person, of the subjective experience of another. Such misapprehension is in principle rectifiable. But what if the confusion runs deeper? What if the interaction between experience and language through time has wrought a more pervasive form of overwriting? This is the subject of the Bulletin article. I suggest that changes in psychiatric terminology over time (namely the shift toward more homogenous descriptions of psychotic symptoms) have potentially had an impact on the very experiences that terminology tries to describe. This is a simple extension of an argument by Ian Hacking, who claims that new diagnostic categories actually bring new ways of being into existence (read this essay in the LRB for a brief overview of this idea, and to see from where I stole my title).

Unlike Hacking though, I think we need more conceptual resources to understand such change. I draw on the work of philosopher Eric Schwitzgebel (check out his excellent blog here), who has written interestingly about the indeterminacy of psychic experience. I am convinced by Schwitzgebel's argument that, far more than we habitually think, there is no fact of the matter about what many aspects of our experience are like. If that sounds extraordinary to you then I recommend you read his book Perplexities of Consciousness. If it doesn't, then you are some of the way to being persuaded by what I am suggesting. If consciousness is indeterminate to some degree then asking people questions like "do you hear the voice inside your head or outside it?" or "is it a male voice or a female voice?" is likely, in some cases, to introduce more confusion than clarity to our understanding of what an experience is like. Every time we do that, and every time we defer to official definitions of delusions as "beliefs" or hallucinations as "perception like experiences," we are potentially nudging people toward those definitions rather than nudging our definitions toward them.

Monday, 10 September 2018

So long, psychiatric New York

Islands are attractive to the builders of asylums. The seclusion of large bodies of water lends itself well to the de facto banishment of those people deemed easier not to engage with. New York is a city of islands that have functioned to keep certain people out of sight. Rikers Island is instantly recognisable as a notorious prison. Hart Island is less well known but has long served as a burial site for vast numbers of the forgotten residents of New York. The awkward confluence of large seething rivers makes parts of the waterways surprisingly treacherous.

This city has been my home for five of the last six years and I am two days away from the flight that will take me out of it indefinitely. It seems like a good moment to explore parts of the city that I never got to before. A friend has just become licensed as a city tour guide and is piloting his tour of Yorkville. It's a secluded and often overlooked neighbourhood. I join him for a stroll through the drizzle, past Gracie mansion and the apartment building that once housed the Nazi party of America. At the start of the tour a building across the East River catches my eye.

Toward the northern end of Roosevelt Island stands a striking little segment of asylum architecture: The Octagon.


The Octagon is now the entrance to a well appointed apartment building, but it used to be the centrepiece of the New York City Lunatic Asylum. According to my tour guide friend, wealthy people in need of psychiatric care were more likely to end up at the West Side's long-gone Bloomingdale Asylum. The last remaining building from that institution -- Buell Hall -- now sits incongruously on Columbia University's Morningside campus, housing several academic departments. The New York City Lunatic Asylum was state run, and this was the place you got incarcerated if you were poor and insane in New York.


Long before David Rosenhan sent his students off to pose as psychiatric patients, the pioneering journalist Elizabeth Seaman (an amazing character in her own right -- I strongly recommend the linked page) went undercover at the asylum to expose conditions. She gained admittance by feigning a pervasive and global amnesia, causing a mini media sensation in the process with her youth and striking looks. The book she wrote (Ten Days in a Madhouse, written under the nom de plume Nellie Bly) detailed sadism by asylum staff, disgustingly unsanitary conditions, and inedible food. At one point Seaman found a spider baked into her bread. The book's publication led directly to an increase in funds to the asylum and became part of a much broader movement in which the working practices of insane asylums were exposed and reformed.

Benign neglect: The Octagon in 1970

The asylum was closed at the beginning of the 20th century and the buildings taken over by the Metropolitan Hospital. The latter moved off Roosevelt Island in 1955, and the building fell into the kind of neglect that was widespread in parts of New York right through the 1950s, 60s and 70s. This still seems astonishing in the contemporary New York of unceasing development and barely affordable rent. In 1972 (two years after the photo above) the last remaining part of the building -- the Octagon -- was put on the national register of historic places.

A stepped pathway of care

It's a cold and drizzly day when I take the elevated tramway over to Roosevelt. This island has always felt ahistorical to me. It's a warren of newly built apartment buildings with a Starbucks and a Duane Reade. Unless you have access to the gyms or swimming pools dotted around, or want to look at Manhattan from an unusual perspective, there are few reasons to visit.

You have to walk through all that to get to the Octagon complex. Moving north you start to pass tennis courts and coniferous trees. The whole place has the air of a country retreat, and feels about as far away from New York as it is possible to be while still technically within the borough of Manhattan.

I walk in past the concierge and am immediately confronted by the smartly renovated spiral staircase. This takes you up past a billiard room, a gym, and a play area for young children. Somewhere I have read that you aren't supposed to take photos, but no-one is around to care. All is serene. When I reach the top I feel like I could be in a lighthouse. Looking down the centre of the spiral staircase gives you a sense of space that few modern residential buildings manage to achieve.


It's fun to imagine that you are in some way communing with history in these buildings, and the Octagon shines a light on its own past. Prints of historical images from the building's asylum days are framed on the walls as you make your way up the stairs. Apart from these there is little to see. The refurbished interior is tasteful in its simplicity. As I leave I can see another of New York's island institutions from the northern tip of Roosevelt: the modernist concrete bulk of the Manhattan Psychiatric Center -- a state-run inpatient facility on Randall's Island -- looming out of the grey sky.


____________________________________________________________

More links:

Ten Days in a Madhouse full text here

Ten Days in a Madhouse as a free audiobook download.

This page at The Ruin has some good photos of the abandoned asylum buildings. 

Asylum Projects has a good page on the New York City asylum that skips some of the more lurid and sensationalist nonsense you find elsewhere.

Thursday, 9 August 2018

Psychiatric diagnosis and informed consent: challenges and opportunities

It seems that there is an emerging middle ground in the debate about psychiatric diagnosis. Two recent pieces struck a note between polar positions: Jay Watts reminds us that diagnosis can be a lifesaver as well as a destructive force. Akiko Hart argues that diagnosis should be regarded as something over which people have a choice, rather than something to be entirely embraced or rejected.

One response to this uneasy stalemate has been to invoke the ethical requirement of informed consent. This would get around the variation in preferences by handing more choice to the recipients of diagnosis. It seems appropriate because informed consent is already the framework within which other healthcare interventions are offered. I have seen increasing numbers of people describe themselves as being "pro-choice" about diagnosis. This presumably involves taking something like an informed consent approach to the issue.

However, although it obviously moves us in a positive direction by protecting individual choice and autonomy, the idea of informed consent does not apply so easily to diagnosis as it first appears. There are at least three challenges for an informed consent framework in relation to the use of psychiatric diagnoses:

1. Scope 

The first challenge has to do with the scope of the process to which individuals would be consenting. Would informed consent include questions about whether diagnostic categories would be used in an individual's treatment planning or in their clinical notes? Some clinicians already elect to exclude diagnostic labels from their reports - others do not. We can imagine the exclusion of diagnostic terminology from notes, but what if, given a clinician's working understanding of a person's problem, this exclusion were only token, or cosmetic?

Drilling down further, would the informed consent process also be supposed to apply to the process of clinical decision making? Here's a common situation: a psychiatrist elects not to prescribe SSRIs for an individual's depression on the grounds that they believe the individual is at high risk of a manic episode. The SSRI is avoided because these medications are liable to set off mania in people with a bipolar diathesis. Here the reasoning is grounded in diagnosis (suspected risk of a bipolar disorder): the clinician does something in the best interests of an individual on the basis of diagnostic considerations. Does informed consent to diagnosis preclude this sort of reasoning? If I have declined to consent to psychiatric diagnosis, do I also intend to prevent my psychiatrist from making this kind of judgment? What is the ethical clinician to do here? They seem to be acting without the informed consent of the individual (if they have deployed diagnostic reasoning to direct that person's care), but it seems frankly harmful (and therefore clearly unethical) to go ahead and prescribe the SSRIs given what they know.

This is a concern that could be answered if a more detailed account could be given of what "diagnosis" is supposed to include. It seems pretty clear the individual refusing to consent to diagnosis would be declining to understand themselves as "having" an illness called depression, or OCD or what have you. This seems reasonable. Such self-construals are arguably* beyond the purview of clinicians anyway. If a person makes sense of their experience in spiritual or religious or magical terms that are distinct from the language of diagnosis - clinicians may offer an alternative perspective, but they have no mandate to enforce a change in worldview. Clinicians have no right to tell people how to think.

When we propose an informed consent model for psychiatric diagnosis, we need clarity over what people are being asked to consent to.

2. Possibility  

The second challenge has to do with whether informed consent to a diagnosis is practically possible. First there are issues that pertain to the withholding and disclosure of relevant information. You see your psychiatrist and they say, "I have a diagnostic formulation, would you like to know it?" If you have the willpower to say "no" then you have to live with the knowledge that they have formed an opinion that may be relevant to you. If you decide you want to hear what the diagnosis is, you have to undertake some mental gymnastics in order to simultaneously know what you have heard while also discounting it as irrelevant to your self-understanding.

Then, given the transformative nature of diagnosis as a type of information, is informed consent to it even possible? Diagnosis changes your conception of yourself. Once you start to identify in terms of a diagnosis, the way you perceive your self and your life will shift in unanticipated ways. Perhaps you don't want to receive a diagnosis now, but once you know what it is you might change your mind. Perhaps you feel hostile to your diagnosis at first, but as time goes on you come to accept it. Perhaps your initial enthusiasm about a diagnosis turns to regret as you realise how it has limited you. Diagnosis - if it is a transformative experience in the sense intended by LA Paul - may not be the sort of thing to which one can consent.

To return to our opening proposition - that there are some people who find it helpful and some who do not - how can anyone anticipate in advance what sort of a person they will be? We need to have a better understanding of how clinicians would be expected to handle an informed consent process.

3. Coherence

The final challenge has to do with whether it is even coherent to think about providing informed consent to a diagnosis. On the one hand, we are talking about something like a procedure here, so it seems appropriate that people need to consent to it. On the other, a diagnosis is meant to be algorithmic, so (provided you have already consented to some form of assessment) consent seems misplaced. Either you meet criteria for a DSM disorder or you don't. Consider a situation in which a psychologist assesses someone and concludes that they meet criteria for disorder [x]. That is information, whichever way you interpret it. What is changed by withholding such information from someone? Is such withholding even ethical?

The coherence challenge also applies to the ways that diagnosis functions socially. Consider this situation: two people consult the same psychiatrist for mental health treatment. They meet in the waiting room and get talking. Person A says "Dr. Z has been very helpful to me. Since she diagnosed me with depression I have had a whole new way of understanding my bleak mood and my life has become far more liveable." Person B says "that's interesting, Dr. Z also felt I had depression, but I don't like diagnostic labels. She and I discussed the fact that depression is a construct and we agreed that I had a choice about whether to accept the diagnosis. I prefer to regard myself as not having an illness." One outcome here is that the two agree to disagree and take satisfaction in knowing that their provider can accommodate different preferences. But another outcome is that both feel unease. Person A's understanding of their situation may be changed now - if there is choice about this issue, do they perhaps not have an illness after all? As for Person B, they must wonder whether the doctor is simply humoring them. Yes they have agreed to behave "as if" Person B is not suffering from an illness, but a doubt arises: perhaps Dr. Z does actually believe in an illness process and is secretly regarding Person B with this cognitive set in mind.

This highly simplistic scenario gets at the problem that a diagnosis is never just about one person. Every actually-existing instance of a class tells you something about the class. If some sub-set of people within a diagnostic grouping is being told by their clinicians that their diagnosis doesn't track anything real, that has implications for the others in the same grouping. Imagine the situation from such a person's perspective. Their clinician says they have a disorder, but they are also suggesting to other people (ostensibly with the same disorder) that they can opt out of it. This looks thoroughly inconsistent. Whether you think this inconsistency is particularly troubling will depend on your views of diagnosis.

So there is a significant worry here. I suggest we can start to address it in two ways:

The first response is to acknowledge that psychiatric diagnosis is often actually applied with some degree of latitude. A clinician doesn't have to be (in fact shouldn't be) entirely guided by the algorithm. They can conclude that the symptoms aren't qualitatively the same as those described in the DSM and thus avoid diagnosing someone who otherwise meets the criteria. Alternatively they can conclude that the person resembles the prototype for a disorder even if they fall short of the criteria. Additionally, many diagnoses have a criterion that includes this sort of language: "clinically significant distress or impairment in social, occupational, or other important areas of functioning." This introduces further subjectivity and may account for some of the variable reliability of diagnosis.

Given this latitude, we can start to see a way for clinicians to be coherent in their inconsistency. Two people with extremely similar experiences in regard to their mood and behavior may nonetheless interpret these things in very different ways. One feels themselves to be overcome by a hopelessness that cannot be understood; the other believes they are in the midst of a profound spiritual or time-of-life crisis. Might not their functioning be impacted differently, and might it not therefore be appropriate to diagnose one and not the other as a result?

A second response is that diagnosis is open with regard to how far it applies to people's self-identity. Here the get-out has to do with the clinician's work in helping an individual to achieve some degree of distance from any diagnosis that is used for clinical purposes. Imagine a clinician says something like "I conceive of your difficulty as an instance of DSM disorder [x], but look, it is up to you whether and how you internalize that information - the DSM is at best a very rough guide to help us make sense of people's experiences. If you feel disinclined to think of yourself as suffering from a disorder then I would encourage you to develop other ways of making sense of it."

So depending on what is meant, it is not necessarily incoherent to seek informed consent to diagnosis, but more work is needed on how clinicians could start to implement it as a deliberate approach.

There are at least three areas of potential fudge to be worked through if we are to promote a pro-choice attitude to diagnosis. Over what activities would this framework have scope? How would we manage the withholding of classificatory information? And how can we introduce an informed consent model without unduly undermining the desirable social functions of diagnosis?

__________________________________________________________

*I say arguably because this is a bit of an ethical grey area. On the one hand, ethics codes presuppose a respect for people's diverse worldviews. On the other hand, the professional impetus of psychiatry and psychology includes the idea that some worldviews are appropriate targets for change. If a person believes they are being persecuted by the FBI, that is not a belief that is necessarily respected in the same manner as the belief in a benevolent creator. Equally, the idea of "psychoeducation" implies that there is information (such as about the nature of panic attacks) that it behooves clinicians to share.

Thursday, 5 July 2018

How splitting became the patient's problem

I have a paper out in Psychoanalytic Psychotherapy about mental health team splitting in relation to the diagnosis of borderline personality disorder. The paper is a critique of the way that clinicians have come to think and write about the idea of splitting. 

What is splitting in this context? When clinicians say that someone is "splitting" they usually mean that an individual is doing something to a team to make it split. Here (from the paper) is my attempt at a formulation of what splitting is supposed to be:
An individual is said to have split a treating team when their differential behavior toward different members precipitates polarized feelings and opinions about their care and concomitant professional discord. Such a split can manifest in a number of ways, but commonly a staff team becomes organized into two starkly opposed groups. Individuals in these camps may come to have strong positive or negative feelings toward the patient and hold opinions about them that are radically different from those of their colleagues.
I remember when I first heard about splitting. In my first job in mental health I was asked for something by one of the residents. A colleague took me aside and said in hushed tones "she's splitting!" I had no idea what she was talking about.

I have kept on hearing about splitting ever since, and it seems clear to me that, although there is often something important happening when the term is used, the way we think about it is hopelessly over-simplified. I have seen situations where the identification of splitting takes on the quality of an accusation, and seems to function to absolve staff of any responsibility they have for being kind or thoughtful. I have even seen instances of frank cruelty by staff that are later explained away as being produced by the patient through their splitting.

The argument in the paper is that what was once thought of as a complex social phenomenon involving multiple actors has come to be seen as a discrete action that is perpetrated on a team by a particular sort of person. The use of psychoanalytic terminology (which describes complex phenomena but is prone to reification) has helped this process along. To get a feel for how the discussion can be presented, here is an illustrative title page from a 1985 article:


It wasn't always like this. In early descriptions of team splits (Tom Main's "The Ailment" is often cited as the first appearance of the concept in the literature), you tend to see a complex description that takes into account the ways that we are all prone to intense emotions and the resulting interpersonal disputes. Main's essay describes a phenomenon that he observed in a hospital whereby the emotions of staff and patients interacted with one another in a complex way to produce splits. He invokes the emotions of staff, and explicitly says that splits are not caused by patients. 

Main's observations morphed as they were recounted by subsequent authors. The jargon-heavy language of psychoanalysis increasingly entered the picture, and splitting became understood as a distinctive set piece originating in the patient. The 1980s were a turning point. An article by Glen Gabbard from 1989 appears to have been particularly influential in cementing the view that splitting is driven principally by the patient's use of projective identification and by counter-transference responses in clinicians. Both are somewhat useful concepts, but they can end up placing a great deal of responsibility on patients while absolving clinicians of their emotions and behaviors.

Projective identification tends to be written and spoken about in magical ways (talk of people "putting feelings into" one another, which the psychoanalyst Morris Eagle once noted would be considered delusional in some contexts), and counter-transference has that useful "counter" prefix, which implies that the therapist's feelings really belong to the patient (they are just a "counter" to their "transference"). In fairness to Gabbard, he does try to emphasize the role of staff psychology, but the language used seems to imply that splitting is centered, in an important way, on the activity of the patient.

Although there are likely sensitive clinicians and sensitive understandings of splitting, the mainstream clinical literature has been influenced by an increasingly succinct formulation of what is happening when staff splits get going. Here are two relatively recent examples:
“Conflict can arise as a consequence of consumer splitting or projection, whereby the staff act out (externalize) the internal good-bad and blaming dynamics of the person with borderline personality disorder” (Horsfall, 1999, p.428) 
“these patients selectively divide or split the nurses into good or bad persons. The conflicts and splitting of the nursing staff can carry over to the treatment team, and polarization of staff can occur, particularly as transference and countertransference reactions evolve” (Bland & Rossen, 2005 p.510).

Look at the way this is framed: "Conflict can arise as a consequence of consumer splitting" and "these patients selectively divide or split the nurses into good or bad persons." In this language we have inherited an idea that I call "the borderline as splitter." When a person is viewed this way, whatever caveats are added, they are easily seen as doing something nefarious on purpose. A better model would consider how strong emotions affect groups of people. While clinicians and theorists may sometimes acknowledge the complexity and interpersonal quality of professional disagreement (though often they don't), the temptation to blame it entirely on a patient's emotions is strong, especially when the alternative would entail acknowledging your own emotional responses and vulnerabilities.

The "borderline as splitter" idea is plainly a fiction. Think about all the things that have to happen (or fail to happen) in order for a staff team to disagree about a person under their care and for this to escalate into a dysfunctional dispute. It is constitutive of a process like splitting that it involves more than one person. It takes two to tango, and it takes an entire team to split.


Wednesday, 13 June 2018

Diagnostic underwriting

We have become accustomed to the idea of diagnostic overshadowing, where the presence of a psychiatric diagnosis causes doctors to miss physical health problems. A pain or swelling, or a lack of energy is regarded as being the result of a mental health issue and an altogether clearer medical cause is missed (overshadowed).

Anecdotally it seems that people are especially vulnerable to diagnostic overshadowing when they have received a diagnosis of borderline personality disorder. Because clinicians tend to associate this diagnosis with the classical idea of "hysteria" - the supposed eruption of emotional distress into the realm of the physical symptom - physical complaints or apparently neurological signs are apt to be considered psychosomatic. Thus a person with this diagnosis may have clear medical causes of physical pains that fail to get discovered.

We may need a similar terminology for what happens when a diagnosis alters other aspects of our self understanding; when the thing obscured is not a diagnosable medical illness, but a particular understanding of - or relation to - a mental state. 

Consider what can happen when someone is diagnosed with depression (though we could configure this example differently to make it applicable to other diagnoses); the diagnosis changes their understanding of the nature of their mood. What it means to be clinically depressed is for your mood to be significantly down, and for this to be attributable to a process lying beyond your more quotidian miseries. The depression is an illness, or a reaction, or a response, or something that makes the diagnosing clinician feel that it should be treated and not just lived.

But of course even without a depression in the picture, deep feelings of sadness, grief and despair are a part of our lives. We accept this and we live through our sorrows. They teach us about who we are and what our life is. Without a diagnosis of depression, our experience of such feelings is seen as part of the mix of ourselves and our context.

Diagnostic underwriting would occur where a depressed person's ordinary feelings of misery are mistakenly attributed to their depression; chalked up to a disorder that appears to account for things that it can't.

It might look like this: You feel hopeless all of a sudden, or guilty. You would have done regardless of diagnosis; it was something you experienced, thought or did that made it so. But because you have the diagnosis on hand, you don't understand it as a part of yourself but as a part of something else that has attached itself to you. You have attributed part of your experience to a phenomenon it doesn't belong to. Diagnostic underwriting has occurred. 

Note that what I am suggesting here is that an individual with a disorder could come to attribute their ordinary feelings to pathology. I am not making the nearby claim (which might be made by committed opponents of diagnosis) that it is constitutive of psychiatric diagnosis that all emotions in this case are "ordinary" and are being misattributed. For the opponent of diagnosis there are only "understandable" feelings. In the case of diagnostic underwriting, there are feelings that are rightly linked to the diagnosis and there are feelings that should not be. The diagnosis tracks something real, but it also has an impact on how we see other mental states.

Note too that diagnostic underwriting might be imaginable in theory but impossible to discern in reality. Who can say what is really me and what is really my disorder? Who can really discern between ordinary and pathological feelings? In any case, aren't these false distinctions? There is no safe place to stand in teasing this out, but the idea would be to talk about what happens as you claw your way out from under the emotional cloud of an alien experience.

Scared to feel and to trust what they feel, the individual recovering from a mood disorder has a twinge of emotion: "is this sadness OK, or is it going to be the start of my fall back into depression?" The answer may be practically unknowable, but the question is still an important one to grapple with. 

Thursday, 10 May 2018

The nightmare of eclecticism

I have had a rather idealised vision of how a clinical psychologist would go about being a therapist. Rather than just being one type of thing ("a CBT therapist" say), I would seek to possess a sort of mental toolbox that contains skills relevant to a range of issues. Prepared in this way, I would be able to adapt to different problems by drawing on a range of techniques. This is the approach that seems to be promoted by the idea of empirically supported treatments (ESTs). You meet a person with a particular sort of problem, you reach into your toolbox for the requisite tool, and you get to work. Sometimes I might engage in some necessary systematic desensitization; at others I might follow associations to understand more about the emotions a person has not yet been able to access.

This is an integrative inclination. It seems to offer hope for my desire to combine the insights of psychodynamic therapies with those of cognitive-behavioral treatments. We are, I think, animals with a predilection to act without full knowledge of our own motivations, defending ourselves from coming to know the truth about ourselves. We are also learning machines, creatures of habit who are open to some degree of rational and behavioural rejigging. Why not hold both visions in mind at once? I don't like the idea of retreating to the familiar and unattractive warring poles that we see in certain forms of therapeutic modality bashing.

But I'm coming to think that it can't easily work this way. While some different forms of therapy sit relatively easily alongside one another (many of the acronym therapies feel like they are means to the same end, with emphases on different skills) not all do. The more time I spend talking with and listening to people from different therapeutic positions, the less hopeful I feel. The difference between psychodynamic psychotherapy and CBT is not only a difference of technique, it is also a difference of aim.

For advocates of most ESTs, the overriding ethic is that the person seeking therapy should come to feel better as efficiently as possible. This sort of improvement is to be demonstrated concretely by changes in symptom scores. The sine qua non of therapy here is the rapid reduction in a symptom that can be measured in an outcome questionnaire. Some advocates of psychodynamic therapies take this to be the aim of their work too. Jonathan Shedler has repeatedly argued that psychodynamic psychotherapy can be at least as effective (and in the same way) as CBT.

But many other dynamically oriented therapists simply aren't interested in that sort of game. For these people, the overarching ethic is that the person seeking therapy should come to understand themselves as thoroughly as possible, and live in greater freedom as a result. The distinction was drawn rather nicely by Allan Young in his Harmony of Illusions:

Simply put, different doctrines can give different meanings to the same outcome. While behaviorists and cognitive therapists say that a technique is efficacious when it produces enduring changes in disvalued behavior patterns, psychodynamic therapists, particularly clinicians oriented to psychoanalytic perspectives, locate the meaning of altered behaviors elsewhere - in etiologies, symbolic content, and psychological processes.  Simply reducing the intensity of symptoms can be countertherapeutic and may signal the formation of more effective psychological barriers to insight into etiological conflicts. Real efficacy means releasing a potential for inner growth and maturation and enhancing the ability to establish and sustain gratifying social relationships. In these circumstances, the behaviorist and the psychodynamic valuations would be not simply different but incommensurable: they could not be measured by a common set of standards. (p.181-182)
We can see then that therapeutic orientation is essentially an ethical question, not an empirical one. Consider the point raised by the philosopher Charlotte Blease, discussing the treatment of depression by CBT in the light of the phenomenon of depressive realism: "well-being is not synonymous with being realistic about oneself," she points out. Blease has an ethical qualm: certain sorts of therapist might value improvement in their patients' mood over their having an accurate view of their life situation. Psychodynamic therapists might value the realism over the improvement in mood.

This is the "nightmare" of my title. Not only is there a practical difficulty in deciding what sort of therapy to do (which technique is most effective in this situation? - a hard enough question); there is a basic ethical choice that needs to be made. Once the decision is taken you have to remain consistent. You could be a CBT therapist in some parts of your career and a psychodynamic therapist in others - but it would be potentially incoherent to pursue both within the same treatment. When moving from open-ended exploration to symptom relief, how would you know that the move was therapeutically indicated rather than a countertransference enactment? How do you maintain the inevitable frustration that is required to encourage internal reflection, when the patient has come to expect active intervention from you? The move between worldviews requires a dramatic gestalt shift.

Bad news for the early career psychologist who doesn't like joining therapeutic teams. But perhaps there is one positive upshot. Psychodynamic and CBT authors could stop their often unseemly squabbling. They aren't necessarily pursuing the same goals.