Wednesday, 17 October 2018

Making Up Symptoms

I have a short article in Psychiatric Bulletin about the question of historical variation in psychiatric symptoms. It's brief and rather speculative, but I hope fun and interesting.

The essentially private nature of subjective experience means that its occasional misdescription by mental health professionals is virtually guaranteed. Given the centrality of subjective symptoms in assessing psychiatric disorder, such misdescription could have important ramifications.

Here are two anecdotes about language:

During the morning handover meetings in an inpatient unit I once worked on, the shift manager would read a thumbnail description of each resident’s behavior over the last twelve hours. The phrase “responding to internal stimuli” recurred over and over again, far more frequently than seemed plausible if you knew the people on the unit at any given time and their propensity to attest to hallucinatory experiences. What was going on here, I suggest, is that in some percentage of these instances, staff were witnessing a set of objectively describable behaviors (speaking aloud to oneself, laughing, ignoring others) and attributing them to inner events that are unobservable from without. The use of the right-sounding psychiatric language (“internal stimuli”) was reassuring to staff, who felt they had something professional to say. Unfortunately it also contributed to the elision of a messier and more complex reality.

In another setting I heard the Bleulerian notion of “blocking” used more often than I had ever heard it anywhere else. You could see, in some staff meetings with patients, how the word was applied. It seemed to me that whenever a patient paused, struggled to find the words, or remained silent for any socially awkward period of time, this was apt to be described later as blocking. Thought blocking is often defined with reference to behavior (see the Wikipedia article here), but it is something that can only be identified by reference to subjectivity. Feeling that your thoughts have been blocked is not the same as simply stopping speaking. Additionally, verbalizations such as “he’s blocking” (blocking as a verb) imply something quite different from the passive concept (your thoughts being blocked) that Bleuler initially described. This shifting use of words changes our understanding of what people are experiencing.

It seems likely to me that this process of misdescription takes place frequently; it may be impossible to avoid. Mental health professionals receive, through their training and through clinical lore and fashion, a sort of rubric for how to make sense out of people who are behaving in ways that are hard to understand. Through such misdescription entire swathes of symptomatic experience may be getting essentially overwritten.


But this all still amounts to a fairly basic misapprehension, by one person, of the subjective experience of another. Such misapprehension is in principle rectifiable. But what if the confusion runs deeper? What if the interaction between experience and language through time has wrought a more pervasive form of overwriting? This is the subject of the Bulletin article. I suggest that changes in psychiatric terminology over time (namely the shift toward more homogeneous descriptions of psychotic symptoms) have potentially had an impact on the very experiences that terminology tries to describe. This is a simple extension of an argument by Ian Hacking, who claims that new diagnostic categories actually bring new ways of being into existence (read this essay in the LRB for a brief overview of this idea, and to see from where I stole my title).

Unlike Hacking though, I think we need more conceptual resources to understand such change. I draw on the work of philosopher Eric Schwitzgebel (check out his excellent blog here), who has written interestingly about the indeterminacy of psychic experience. I am convinced by Schwitzgebel's argument that, far more than we habitually think, there is no fact of the matter about what many aspects of our experience are like. If that sounds extraordinary to you then I recommend you read his book Perplexities of Consciousness. If it doesn't, then you are some of the way to being persuaded by what I am suggesting. If consciousness is indeterminate to some degree then asking people questions like "do you hear the voice inside your head or outside it?" or "is it a male voice or a female voice?" is likely, in some cases, to introduce more confusion than clarity to our understanding of what an experience is like. Every time we do that, and every time we defer to official definitions of delusions as "beliefs" or hallucinations as "perception like experiences," we are potentially nudging people toward those definitions rather than nudging our definitions toward them.

Monday, 10 September 2018

So long, psychiatric New York

Islands are attractive to the builders of asylums. The seclusion of large bodies of water lends itself well to the de facto banishment of those people deemed easier not to engage with. New York is a city of islands that have functioned to keep certain people out of sight. Rikers Island is instantly recognisable as a notorious prison. Hart Island is less well known but has long served as a burial site for vast numbers of the forgotten residents of New York. The awkward confluence of large seething rivers makes parts of the waterways surprisingly treacherous.

This city has been my home for five of the last six years and I am two days away from the flight that will take me out of it indefinitely. It seems like a good moment to explore parts of the city that I never got to before. A friend has just become licensed as a city tour guide and is piloting his tour of Yorkville. It's a secluded and often overlooked neighbourhood. I join him for a stroll through the drizzle, past Gracie Mansion and the apartment building that once housed the Nazi party of America. At the start of the tour a building across the East River catches my eye.

Toward the northern end of Roosevelt Island stands a striking little segment of asylum architecture: The Octagon.

The Octagon is now the entrance to a well appointed apartment building, but it used to be the centrepiece of the New York City Lunatic Asylum. According to my tour guide friend, wealthy people in need of psychiatric care were more likely to end up at the West Side's long-gone Bloomingdale Asylum. The last remaining building from that institution -- Buell Hall -- now sits incongruously on Columbia University's Morningside campus housing several academic departments. The New York City Lunatic Asylum was state run, and this was the place where you got incarcerated if you were poor and insane in New York.


Long before David Rosenhan sent his students off to pose as psychiatric patients, the pioneering journalist Elizabeth Seaman (an amazing character in her own right -- I strongly recommend the linked page) went undercover at the asylum to expose conditions. She gained admittance by feigning a pervasive and global amnesia, causing a mini media sensation in the process with her youth and striking looks. The book she wrote (Ten Days in a Madhouse, written under the nom de plume Nellie Bly) detailed sadism by asylum staff, disgustingly unsanitary conditions, and inedible food. At one point Seaman found a spider baked into her bread. The book's publication led directly to an increase in funds to the asylum and became part of a much broader movement in which the working practices of insane asylums were exposed and reformed.

Benign neglect: The Octagon in 1970

The asylum was closed at the beginning of the 20th century and the buildings taken over by the Metropolitan Hospital. The latter moved off Roosevelt Island in 1955, and the building fell into the kind of neglect that was widespread in parts of New York right through the 1950s, 60s and 70s. This still seems astonishing in the contemporary New York of unceasing development and barely affordable rent. In 1972 (two years after the photo above) the last remaining part of the building -- the Octagon -- was put on the national register of historic places.

A stepped pathway of care

It's a cold and drizzly day when I take the elevated tramway over to Roosevelt. This island has always felt ahistorical to me. It's a warren of newly built apartment buildings with a Starbucks and a Duane Reade. Unless you have access to the gyms or swimming pools dotted around, or want to look at Manhattan from an unusual perspective, there are few reasons to visit.

You have to walk through all that to get to the Octagon complex. Moving north you start to pass tennis courts and coniferous trees. The whole place has the air of a country retreat, and feels about as far away from New York as it is possible to be while still technically within the borough of Manhattan.

I walk in past the concierge and am immediately confronted by the smartly renovated spiral staircase. This takes you up past a billiard room, a gym, and a play area for young children. Somewhere I have read that you aren't supposed to take photos, but no-one is around to care. All is serene. When I reach the top I feel like I could be in a lighthouse. Looking down the centre of the spiral staircase gives you a sense of space that few modern residential buildings manage to achieve.

It's fun to imagine that you are in some way communing with history in these buildings, and the Octagon shines a light on its own past. Prints of historical images from the building's asylum days are framed on the walls as you make your way up the stairs. Apart from these there is little to see. The refurbished interior is tasteful in its simplicity. As I leave I can see another of New York's island institutions from the northern tip of Roosevelt: the modernist concrete bulk of the Manhattan Psychiatric Center -- a state-run inpatient facility on Randall's Island -- looming out of the grey sky.


More links:

Ten Days in a Madhouse full text here

Ten Days in a Madhouse as a free audiobook download.

This page at The Ruin has some good photos of the abandoned asylum buildings. 

Asylum Projects has a good page on the New York City asylum that skips some of the more lurid and sensationalist nonsense you find elsewhere.

Thursday, 9 August 2018

Psychiatric diagnosis and informed consent: challenges and opportunities

It seems that there is an emerging middle ground in the debate about psychiatric diagnosis. Two recent pieces struck a note between polar positions: Jay Watts reminds us that diagnosis can be a lifesaver as well as a destructive force. Akiko Hart argues that diagnosis should be regarded as something over which people have a choice, rather than something to be entirely embraced or rejected.

One response to this uneasy stalemate has been to invoke the ethical requirement of informed consent. This would get around the variation in preferences by handing more choice to the recipients of diagnosis. It seems appropriate because informed consent is already the framework within which other healthcare interventions are offered. I have seen increasing numbers of people describe themselves as being "pro-choice" about diagnosis. This presumably involves taking something like an informed consent approach to the issue.

However, although it obviously moves us in a positive direction by protecting individual choice and autonomy, the idea of informed consent does not apply to diagnosis as easily as it first appears. There are at least three challenges for an informed consent framework in relation to the use of psychiatric diagnoses:

1. Scope 

The first challenge has to do with the scope of the process to which individuals would be consenting. Would informed consent include questions about whether diagnostic categories would be used in an individual's treatment planning or in their clinical notes? Some clinicians already elect to exclude diagnostic labels in their reports - others do not. We can imagine the exclusion of diagnostic terminology from notes, but what if, given a clinician's working understanding of a person's problem, this exclusion were only token, or cosmetic?

Drilling down further, would the informed consent process also be supposed to apply to the process of clinical decision making? Here's a common situation: a psychiatrist elects not to prescribe SSRIs for an individual's depression on the grounds that they believe the individual is at high risk of a manic episode. The SSRI is avoided because these medications are liable to set off mania in people with a bipolar diathesis. Here the reasoning is grounded in diagnosis (suspected risk of a bipolar disorder): the clinician does something in the best interests of an individual on the basis of diagnostic considerations. Does informed consent to diagnosis preclude this sort of reasoning? If I have declined to consent to psychiatric diagnosis, do I also intend to prevent my psychiatrist from making this kind of judgment? What is the ethical clinician to do here? They seem to be acting without the informed consent of the individual (if they have deployed diagnostic reasoning to direct that person's care), but it seems frankly harmful (and therefore clearly unethical) to go ahead and prescribe the SSRIs given what they know.

This is a concern that could be answered if a more detailed account could be given of what "diagnosis" is supposed to include. It seems pretty clear the individual refusing to consent to diagnosis would be declining to understand themselves as "having" an illness called depression, or OCD or what have you. This seems reasonable. Such self-construals are arguably* beyond the purview of clinicians anyway. If a person makes sense of their experience in spiritual or religious or magical terms that are distinct from the language of diagnosis - clinicians may offer an alternative perspective, but they have no mandate to enforce a change in worldview. Clinicians have no right to tell people how to think.

When we propose an informed consent model for psychiatric diagnosis, we need clarity over what people are being asked to consent to.

2. Possibility  

The second challenge has to do with whether informed consent to a diagnosis is practically possible. First there are issues that pertain to the withholding and disclosure of relevant information. You see your psychiatrist and they say, "I have a diagnostic formulation, would you like to know it?" If you have the willpower to say "no" then you have to live with the knowledge that they have formed an opinion that may be relevant to you. If you decide you want to hear what the diagnosis is, you have to undertake some mental gymnastics in order to simultaneously know what you have heard while also discounting it as irrelevant to your self-understanding.

Then, given the transformative nature of diagnosis as a type of information, is informed consent to it even possible? Diagnosis changes your conception of yourself. Once you start to identify in terms of a diagnosis, the way you perceive your self and your life will shift in unanticipated ways. Perhaps you don't want to receive a diagnosis now, but once you know what it is you might change your mind. Perhaps you feel hostile to your diagnosis at first, but as time goes on you come to accept it. Perhaps your initial enthusiasm about a diagnosis turns to regret as you realise how it has limited you. Diagnosis - if it is a transformative experience in the sense intended by LA Paul - may not be the sort of thing to which one can consent.

To return to our opening proposition - that there are some people who find it helpful and some who do not - how can anyone anticipate in advance what sort of a person they will be? We need to have a better understanding of how clinicians would be expected to handle an informed consent process.

3. Coherence

The final challenge has to do with whether it is even coherent to think about providing informed consent to a diagnosis. On the one hand, we are talking about something like a procedure here, so it seems appropriate that people need to consent to it. On the other, a diagnosis is meant to be algorithmic, so (provided you have already consented to some form of assessment) consent seems misplaced. Either you meet criteria for a DSM disorder or you don't. Consider a situation in which a psychologist assesses someone and concludes that they meet criteria for disorder [x]. That is information, whichever way you interpret it. What is changed by withholding such information from someone? Is such withholding even ethical?

The coherence challenge also applies to the ways that diagnosis functions socially. Consider this situation: two people consult the same psychiatrist for mental health treatment. They meet in the waiting room and get talking. Person A says "Dr. Z has been very helpful to me. Since she diagnosed me with depression I have had a whole new way of understanding my bleak mood and my life has become far more liveable." Person B says "that's interesting, Dr. Z also felt I had depression, but I don't like diagnostic labels. She and I discussed the fact that depression is a construct and we agreed that I had a choice about whether to accept the diagnosis. I prefer to regard myself as not having an illness." One outcome here is that the two agree to disagree and take satisfaction in knowing that their provider can accommodate different preferences. But another outcome is that both feel unease. Person A's understanding of their situation may be changed now - if there is choice about this issue, do they perhaps not have an illness after all? As for Person B, they must wonder whether the doctor is simply humoring them. Yes they have agreed to behave "as if" Person B is not suffering from an illness, but a doubt arises: perhaps Dr. Z does actually believe in an illness process and is secretly regarding Person B with this cognitive set in mind.

This highly simplistic scenario gets at the problem that a diagnosis is never just about one person. Every actually-existing instance of a class tells you something about the class. If some sub-set of people within a diagnostic grouping is being told by their clinicians that their diagnosis doesn't track anything real, that has implications for the others in the same grouping. Imagine the situation from such a person's perspective. Their clinician says they have a disorder, but they are also suggesting to other people (ostensibly with the same disorder) that they can opt out of it. This looks thoroughly inconsistent. Whether you think this inconsistency is particularly troubling will depend on your views of diagnosis.

So there is a significant worry here. I suggest we can start to address it in two ways:

The first response is to acknowledge that psychiatric diagnosis is often actually applied with some degree of latitude. A clinician doesn't have to be (in fact shouldn't be) entirely guided by the algorithm. They can conclude that the symptoms aren't qualitatively the same as those described in the DSM and thus avoid diagnosing someone who otherwise meets the criteria. Alternatively they can conclude that the person resembles the prototype for a disorder even if they fall short of the criteria. Additionally, many diagnoses have a criterion that includes this sort of language: "clinically significant distress or impairment in social, occupational, or other important areas of functioning." This introduces further subjectivity and may account for some of the variable reliability of diagnosis.

Given this latitude, we can start to see a way for clinicians to be coherent in their inconsistency. Two people with extremely similar experiences in regard to their mood and behavior may nonetheless interpret these things in very different ways. One feels themselves to be overcome by an inane hopelessness that cannot be understood; the other believes they are in the midst of a profound spiritual or time of life crisis. Might not their functioning be impacted differently, and might it not therefore be appropriate to diagnose one and not the other as a result?

A second response is that diagnosis is open with regard to how far it applies to people's self identity. Here the get-out provided has to do with the clinician's work in helping an individual to achieve some degree of distance from any diagnosis that is used for clinical purposes. Imagine a clinician says something like "I conceive of your difficulty as an instance of DSM-disorder [x], but look, it is up to you whether and how you internalize that information - the DSM is at best a very rough guide to help us make sense of people's experiences. If you feel disinclined to think of yourself as suffering from a disorder then I would encourage you to develop other ways of making sense of it."

So depending on what is meant, it is not necessarily incoherent to seek informed consent to diagnosis, but more work is needed on how clinicians could start to implement it as a deliberate approach.

There are at least three areas of potential fudge to be worked through if we are to promote a pro-choice attitude to diagnosis. What activities would fall within the framework's scope? How would we manage the withholding of classificatory information? And how can we introduce an informed consent model without unduly undermining the desirable social functions of diagnosis?


*I say arguably because this is a bit of an ethical grey area. On the one hand ethics codes presuppose a respect for people's diverse worldviews. On the other hand, the professional impetus of psychiatry and psychology includes the idea that some worldviews are appropriate targets for change. If a person believes they are being persecuted by the FBI, that is not a belief that is necessarily respected in the same manner as the belief in a benevolent creator. Equally, the idea of "psychoeducation" implies that there is information (such as about the nature of panic attacks) it behooves clinicians to share.

Thursday, 5 July 2018

How splitting became the patient's problem

I have a paper out in Psychoanalytic Psychotherapy about mental health team splitting in relation to the diagnosis of borderline personality disorder. The paper is a critique of the way that clinicians have come to think and write about the idea of splitting. 

What is splitting in this context? When clinicians say that someone is "splitting" they usually mean that an individual is doing something to a team to make it split. Here (from the paper) is my attempt at a formulation of what splitting is supposed to be:
An individual is said to have split a treating team when their differential behavior toward different members precipitates polarized feelings and opinions about their care and concomitant professional discord. Such a split can manifest in a number of ways, but commonly a staff team becomes organized into two starkly opposed groups. Individuals in these camps may come to have strong positive or negative feelings toward the patient and hold opinions about them that are radically different from those of their colleagues.
I remember when I first heard about splitting. In my first job in mental health I was asked for something by one of the residents. A colleague took me aside and said in hushed tones "she's splitting!" I had no idea what she was talking about.

I have kept on hearing about splitting ever since, and it seems clear to me that, although there is often something important happening when the term is used, the way we think about it is hopelessly over-simplified. I have seen situations where the identification of splitting takes on the quality of an accusation, and seems to function to absolve staff of any responsibility they have for being kind or thoughtful. I have even seen instances of frank cruelty by staff that are later explained away as being produced by the patient through their splitting.

The argument in the paper is that what was once thought of as a complex social phenomenon involving multiple actors has come to be seen as a discrete action that is perpetrated on a team by a particular sort of person. The use of psychoanalytic terminology (which describes complex phenomena but is prone to reification) has helped this process along. To get a feel for how the discussion can be presented, here is an illustrative title page from a 1985 article:

It wasn't always like this. In early descriptions of team splits (Tom Main's "The Ailment" is often cited as the first appearance of the concept in the literature), you tend to see a complex description that takes into account the ways that we are all prone to intense emotions and the resulting interpersonal disputes. Main's essay describes a phenomenon that he observed in a hospital whereby the emotions of staff and patients interacted with one another in a complex way to produce splits. He invokes the emotions of staff, and explicitly says that splits are not caused by patients. 

Main's observations morphed as they were recounted by subsequent authors. The jargon-heavy language of psychoanalysis increasingly entered the picture and splitting became understood as a distinctive set piece that could be seen as originating in the patient. The 1980s were a turning point. An article by Glen Gabbard from 1989 appears to have been particularly influential in cementing the view that splitting is driven principally by the use of projective identification by a patient and counter-transference responses in clinicians. Both of these are somewhat useful concepts, but they can end up putting a lot of responsibility on patients and absolving clinicians of their emotions and behaviors.

Projective identification tends to be written and spoken about in magical ways (talk of people "putting feelings into" one another, which the psychoanalyst Morris Eagle once noted would be considered delusional in some contexts), and counter-transference has that useful "counter" prefix, which implies that the therapist's feelings really belong to the patient (they are just a "counter" to their "transference"). In fairness to Gabbard, he does try to emphasize the role of staff psychology but the language used seems to imply that splitting is centered, in an important way, on the activity of the patient.

Although there are likely sensitive clinicians and sensitive understandings of splitting, the mainstream clinical literature has been influenced by an increasingly succinct formulation of what is happening when staff splits get going. Here are two relatively recent examples:
“Conflict can arise as a consequence of consumer splitting or projection, whereby the staff act out (externalize) the internal good-bad and blaming dynamics of the person with borderline personality disorder” (Horsfall, 1999, p.428) 
“these patients selectively divide or split the nurses into good or bad persons. The conflicts and splitting of the nursing staff can carry over to the treatment team, and polarization of staff can occur, particularly as transference and countertransference reactions evolve” (Bland & Rossen, 2005 p.510).

Look at the way this is framed: "Conflict can arise as a consequence of consumer splitting" and "these patients selectively divide or split the nurses into good or bad persons." In this language we have inherited an idea that I call "the borderline as splitter." When a person is viewed this way, whatever caveats are added, they are easily viewed as doing something nefarious on purpose. A better model would be to consider how strong emotions impact groups of people. While clinicians and theorists may sometimes pay lip service to the complexity and interpersonal quality of professional disagreement (but often don't), the temptation to blame it entirely on a patient's emotions is strong, especially when the alternative would entail acknowledging your own emotional responses and vulnerabilities.

The "borderline as splitter" idea is plainly a fiction. Think about all the things that have to happen (or fail to happen) in order for a staff team to disagree about a person under their care and for this to escalate into a dysfunctional dispute. It is constitutive of a process like splitting that it involves more than one person. It takes two to tango, and it takes an entire team to split.

Wednesday, 13 June 2018

Diagnostic underwriting

We have become accustomed to the idea of diagnostic overshadowing, where the presence of a psychiatric diagnosis causes doctors to miss physical health problems. A pain, a swelling, or a lack of energy is regarded as being the result of a mental health issue and an altogether clearer medical cause is missed (overshadowed).

Anecdotally it seems that people are especially vulnerable to diagnostic overshadowing when they have received a diagnosis of borderline personality disorder. Because clinicians tend to associate this diagnosis with the classical idea of "hysteria" - the supposed eruption of emotional distress into the realm of the physical symptom - physical complaints or apparently neurological signs are apt to be considered psychosomatic. Thus a person with this diagnosis may have clear medical causes of physical pains that fail to get discovered.

We may need a similar terminology for what happens when a diagnosis alters other aspects of our self understanding; when the thing obscured is not a diagnosable medical illness, but a particular understanding of - or relation to - a mental state. 

Consider what can happen when someone is diagnosed with depression (though we could configure this example differently to make it applicable to other diagnoses); the diagnosis changes their understanding of the nature of their mood. What it means to be clinically depressed is for your mood to be significantly down, and for this to be attributable to a process lying beyond your more quotidian miseries. The depression is an illness, or a reaction, or a response, or something that makes the diagnosing clinician feel that it should be treated and not just lived.

But of course even without a depression in the picture, deep feelings of sadness, grief and despair are a part of our lives. We accept this and we live through our sorrows. They teach us about who we are and what our life is. Without a diagnosis of depression, our experience of such feelings is seen as part of the mix of ourselves and our context.

Diagnostic underwriting would occur where a depressed person's ordinary feelings of misery are mistakenly attributed to their depression; chalked up to a disorder that appears to account for things that it can't.

It might look like this: You feel hopeless all of a sudden, or guilty. You would have done regardless of diagnosis; it was something you experienced, thought or did that made it so. But because you have the diagnosis on hand, you don't understand it as a part of yourself but as a part of something else that has attached itself to you. You have attributed part of your experience to a phenomenon it doesn't belong to. Diagnostic underwriting has occurred. 

Note that what I am suggesting here is that an individual with a disorder could come to attribute their ordinary feelings to pathology. I am not making the nearby claim (which might be made by committed opponents of diagnosis) that it is constitutive of psychiatric diagnosis that all emotions in this case are "ordinary" and are being misattributed. For the opponent of diagnosis there are only "understandable" feelings. In the case of diagnostic underwriting, there are feelings that are linked to the diagnosis and there are feelings that should not be. The diagnosis tracks something real, but it also has an impact on how we see other mental states.

Note too that diagnostic underwriting might be imaginable in theory but impossible to discern in reality. Who can say what really is me and what really is my disorder? Who can really discern between ordinary and pathological feelings? In any case aren't these false distinctions? There is no safe place to stand in teasing this out, but the idea would be to talk about what happens as you claw your way out from under the emotional cloud of an alien experience.

Scared to feel and to trust what they feel, the individual recovering from a mood disorder has a twinge of emotion: "is this sadness OK, or is it going to be the start of my fall back into depression?" The answer may be practically unknowable, but the question is still an important one to grapple with. 

Thursday, 10 May 2018

The nightmare of eclecticism.

I have had a rather idealised vision of how a clinical psychologist would go about being a therapist. Rather than just being one type of thing ("a CBT therapist" say), I would seek to possess a sort of mental toolbox that contains skills relevant to a range of issues. Prepared in this way, I would be able to adapt to different problems by drawing on a range of techniques. This is the approach that seems to be promoted by the idea of empirically supported treatments (ESTs). You meet a person with a particular sort of problem, you reach into your toolbox for the requisite tool, and you get to work. Sometimes I might engage in some necessary systematic desensitization; at others I might follow associations to understand more about the emotions a person has not yet been able to access.

This is an integrative inclination. It seems to offer hope for my desire to incorporate the insights of psychodynamic therapies with those of cognitive-behavioral treatments. We are, I think, animals with a predilection to act without full knowledge of our own motivations, defending ourselves from coming to know the truth about ourselves. We are also learning machines, creatures of habit who are open to some degree of rational and behavioural rejigging. Why not hold both visions in mind at once? I don't like the idea of retreating to the familiar and unattractive warring poles that we see in certain forms of therapeutic modality bashing.

But I'm coming to think that it can't easily work this way. While some different forms of therapy sit relatively easily alongside one another (many of the acronym therapies feel like they are means to the same end, with emphases on different skills), not all do. The more time I spend talking with and listening to people from different therapeutic positions, the less hopeful I feel. The difference between psychodynamic psychotherapy and CBT is not only a difference of technique, it is also a difference of aim.

For advocates of most ESTs, the overriding ethic is that the person seeking therapy should come to feel better as efficiently as possible. This sort of improvement is to be demonstrated concretely by changes in symptom scores. The sine qua non of therapy here is the rapid reduction in a symptom that can be measured in an outcome questionnaire. Some advocates of psychodynamic therapies take this to be the aim of their work too. Jonathan Shedler has repeatedly argued that psychodynamic psychotherapy can be at least as effective (and in the same way) as CBT.

But many other dynamically oriented therapists simply aren't interested in that sort of game. For these people, the overarching ethic is that the person seeking therapy should come to understand themselves as thoroughly as possible, and live in greater freedom as a result. The distinction was drawn rather nicely by Allan Young in his Harmony of Illusions:

Simply put, different doctrines can give different meanings to the same outcome. While behaviorists and cognitive therapists say that a technique is efficacious when it produces enduring changes in disvalued behavior patterns, psychodynamic therapists, particularly clinicians oriented to psychoanalytic perspectives, locate the meaning of altered behaviors elsewhere - in etiologies, symbolic content, and psychological processes. Simply reducing the intensity of symptoms can be countertherapeutic and may signal the formation of more effective psychological barriers to insight into etiological conflicts. Real efficacy means releasing a potential for inner growth and maturation and enhancing the ability to establish and sustain gratifying social relationships. In these circumstances, the behaviorist and the psychodynamic valuations would be not simply different but incommensurable: they could not be measured by a common set of standards. (pp. 181-182)

We can see, then, that therapeutic orientation is essentially an ethical question, not an empirical one. Consider the point raised by the philosopher Charlotte Blease, discussing the treatment of depression by CBT in the light of the phenomenon of depressive realism: "well-being is not synonymous with being realistic about oneself," she points out. Blease has an ethical qualm: certain sorts of therapist might value improvement in mood in their patients over their having an accurate view of their life situation. Psychodynamic therapists might value the realism over the improvement in mood.

This is the "nightmare" of my title. Not only is there a practical difficulty entailed in deciding what sort of therapy to do (which technique is most effective in this situation? - a hard enough question); there is a basic ethical choice that needs to be made. Once the decision is taken you have to remain consistent. You could be a CBT therapist in some parts of your career, and a psychodynamic therapist in others - but it will be potentially incoherent to pursue them within the same treatment. When moving from open ended exploration to symptom relief, how would you know that it was because it was therapeutically indicated and not better understood as a countertransference enactment? How do you maintain the inevitable frustration that is required to encourage internal reflection, when the patient has come to expect active intervention from you? The move between worldviews requires a dramatic gestalt shift.

Bad news for the early career psychologist who doesn't like joining therapeutic teams. But perhaps there is one positive upshot. Psychodynamic and CBT authors could stop their often unseemly squabbling. They aren't necessarily pursuing the same goals.

Friday, 23 March 2018


The psychologist's interest in boundaries is the source of much well deserved mockery. Apart from the jargonistic deployment of "boundaries" as a justification for various therapeutic prohibitions (second only, perhaps, to the use of "inappropriate"), the enforcement of a boundary often looks like an effective way of keeping a genuine relationship at bay.

Of course there is a certain necessity to boundaries. Apart from the fact that a professional relationship has to begin and end somewhere (you really don't want your therapist following you home), the moments when a boundary is pushed can provide a useful source of discussion. Take the example of a clear start time for psychotherapy sessions. If someone is repeatedly 15 minutes late for therapy, that is something to be interested in. Sometimes life gets in the way and people are late. No one should be getting too hung up on lateness - we're all adults. But if someone is repeatedly late then something might be going on. A polite person, reluctant to hurt a therapist's feelings, might be having reservations about the sessions. Noticing the lateness and discussing it is a way of drawing attention to something that might be important.

But boundaries can definitely be a fetish for psychologists. There is a deliciously daft example of this in Allan Young's "Harmony of Illusions," a history and anthropology of PTSD and its treatment. Young spent some time conducting fieldwork in a VA hospital during the 1980s. As part of this work he sat in on trauma focused psychotherapy groups where Vietnam veterans were encouraged to broach the atrocities they had witnessed or participated in. Because the therapeutic model put a high value on disclosure, the group members were expected to stay with the difficult content of the sessions and not engage in avoidance. Young describes how the group entered a crisis when it seemed to the psychologists that one of the members was going to the bathroom rather a lot during the discussions.

Because these frequent bathroom trips looked (to the psychologists) a lot like avoidance, the psychologists felt they had to address them. A rule was put in place - no bathroom trips during the group sessions. If that sounds unreasonable to you then you can imagine the reactions of group members. There was something of a revolt and, stuck between the need to stand their ground (notice the power struggle that has immediately snuck in) and the need to be reasonable, the facilitators had to find a solution. The apparently face-saving solution elected was for group members to urinate into wastepaper bins in the group room. This met the ordinary human need to urinate without sacrificing the psychologists' insistence on staying in the room to engage with trauma narratives.

Of course, urinating into a bin in a group therapy room is not only undignified, it is patently absurd. It is hard to imagine that the vets in this group weren't aware of this, and Young describes how they availed themselves of the opportunity to relieve themselves with what became an unsustainable frequency. There is a kind of checkmate that happened here. The staff's insistence on rules over good sense allowed the veterans to adhere to the letter of the law while ignoring its spirit. If the facilitators felt any horror at their proximity to increasingly full buckets of pee, they had only themselves to blame.

Versions of this kind of struggle are the bread and butter of inpatient mental health care. It is par for the course that protocols will be set and violated, and that this kind of thing will be grist for discussion. But the descent into naked power struggle is far too frequent. When this happens the staff have the double advantage of setting rules (however unreasonable) and then blaming patients for their violation. If you must leave the room to pee then it has to be your avoidance/aggression/personality disorder that is to blame. This is getting things all wrong. Yes, boundary transgressions (and isn't the language of boundaries so accusatory!) should be discussed. But if some sort of staff/patient power struggle emerges, it is the job of the staff to see this unfolding and to sidestep it. This may involve a climb-down and a dose of good old-fashioned humility. Of course people go to the bathroom as a form of avoidance (at least, I know I do). If that starts happening then discussion is a better way out than ad hoc rule creation. Any sensible polity can only implement laws that don't burden its participants unreasonably.