On chatbot psychologists for unworlded selves
Recently, as large language models have improved—meaning that they have become more deceptive—there’s been a bit of a push to use AI chatbots to take on the role of the therapist. This amounts to replacing the therapist with a philosophical zombie: a simulacrum devoid of sentience and love. Many people behind this push for the zombification of therapy have half-decent intentions. But you know what they say about intentions and the road to hell.
I know of some people in counselling services who have been so overwhelmed with calls for help in recent times that AI therapy has seemed better to them than turning people away. But is it really better? The fact that a qualitative decision like this has been rooted entirely in a quantitative concern should already raise alarms. And then there is the fact that the AI turn is bleeding into so many other aspects of life. What interests me, though, and what I focus on here, is what sort of conception of the human being must be assumed, even if only tacitly, for AI therapy to be proposed and endorsed, along with AI education, AI companionship, and AI everything-else.
I think the question of what conception of personhood underpins any form of decision-making is a good one to ask in general these days. Personhood is a vital, even the vital, hermeneutical condition for all of our understanding. It is the often hidden ground of our judgements about all kinds of things. And yet it is typically so much a part of the background of human concerns that we may not bother to examine it properly, especially in an age possessed by a drive for constant technical perfection. By getting to grips with the question and the implied answer of what idea of personhood is at play in zombified therapeutics and other AI impositions, we may better assess what sort of world we are making for ourselves and others. I recently posed this question to a certain AI and this is the answer I got:
“The fundamental conception of personhood that underpins the creation of large language models is that human intelligence and language are primarily based on patterns and statistical regularities in data. In other words, the key assumption is that the ability to understand and produce language is largely a matter of recognizing patterns in the ways words and sentences are used in communication, and that this ability can be modeled using statistical techniques and massive amounts of data.”
Notice the looseness with which categories are treated: personhood and intelligence and language and pattern recognition are used in such slippery ways that the logic of this simple paragraph falls apart the moment you look at it too closely. Still, a terribly reductive view of personhood is discernible here. In fact, genuine personhood has been excommunicated from the bot’s answer. The logic here, if there is any, is a bit like suggesting that what makes a particular painting beautiful is the fact that it is 32 by 54 inches in size, colourful, and depicts several objects arranged somehow-or-another. The answer is no answer at all. But it does tell us something.
I have reasons for taking the above rather stupid AI framing of personhood pretty much at face value. There’s more on this below. The notion of personhood adopted by the AI here seems to be little more than what is represented, since what is represented is much easier to subject to an algorithm. Isn’t it fascinating, though, that the answer provided by the AI above could just as easily have been offered to a question about the nature of the large language model AI itself? AI is a pattern-matching, pattern-predicting technology. It identifies, or more accurately replicates, patterns and relationships in data to represent the statistically most likely answers. No embodied meaning is necessary, apparently. As a communication tool, it is unempathetic and disengaged. It cannot read between the lines as we do because it is—and this we should remember well—a dead thing. Who really wants a dead therapist to help them through a difficult time? “And how does that make you feel?” asks the cliché-ridden electrified undead. Even the ethical principles that bots seem to adhere to are merely coordinates for their formula-derived fakery. AI is not bound to them in the way human beings would be bound to an ethical code. This should not be surprising given that AI is all surface and bloodless being, all rules and no principles, all bits and bytes and no soul. Its centre is nowhere and its circumference is everywhere.
What this means for therapy in particular is rather fascinating. It assumes that what is happening in therapy is primarily an issue of procedural data transfer. This much is clear when you read transcripts of the sort of lame responses that therapy bots produce. Therapy becomes a mere method. The sender inputs data. The machine processes data. The machine spits out the statistically most likely answer. And that answer then functions as the AI’s input into the sender. The process then begins again. Obviously, this is not even close to what happens when two people—a patient and a therapist, say—communicate. And we should be mindful that any description of what happens in the happening of communication between anyone possessing personhood will always be a reduction. We should not assume, as AI programmers seem to be assuming, that our representations of communication between people equal the communication itself. What AI offers is discommunication and uncommunion.
Communication is expression and not merely representation. Being is not reducible to communication or algorithmic filters. It is a presencing and not a mere depiction, and so it is not really possible to conflate it with abstractions. What happens between a patient and a therapist presumes the primacy of interdividuality over individuality. It presumes the primacy of being over articulation. In any therapeutic session, for example, a decent therapist would be aware of certain subconscious feelings that show up as a patient is speaking. These feelings may have nothing to do with what the patient is actually saying. Such feelings may even be different in a way from how the patient understands and represents his experience. This would inform how the therapist ends up relating to the patient, possibly even more than the explicit content of the discussion would. In the end, the actual therapy may have surprisingly little to do with the explicit content of the therapy session. What matters is something more akin to energy transfer, which is fluid and invisible, than to a clear pattern of discernible exchanges. But even the energy transfer metaphor here is limited. Therapy is not physics, as such a metaphor suggests, but metaphysics in action. What is happening in therapy is not mere therapy but a reconfiguration of a cosmos of meanings around a particular order of being. Most of the happening of the therapeutic is implicit; and implicitness is not something that AI gets.
AI simply can’t be attuned to all the necessary subtleties that could help nurture psychological flourishing. This would be true even if it were to get the wording right. The result of relying on AI for what it cannot provide may therefore be disastrous. Recently, Lauren Southern posted a story on Twitter about a supporter of her work who was checked into an American hospital for suicidal thoughts. “His treatment,” Southern said, “began and ended with him being locked in a room with no objects he could use to kill himself and an electronic tablet AI system to talk him through his emotions.” The absolutely predictable result of this was that his state of distress only deepened. Southern put it well: “Humans cannot be replaced by computers. We need other humans. We need spiritual and emotional connection. This cannot be replicated, it cannot be impersonated. Be wary of those who pretend it can.”
Most of us know this already at an intuitive level. And yet AI chatbot psychology is pervasive and becoming more so. Why? The answer is that the problem of AI chatbot therapy is not the real problem; it is, rather, a terrible solution to a much deeper problem that has not yet been properly identified. The deeper problem, I’d suggest—and the reason why I have taken the AI’s definition of personhood as truthful—is that many of the hermeneutical assumptions underpinning the use of AI therapies are built into everything that Cartesian modernity has touched. The zombification of therapy is the result of an original zombification of personhood already evident in some, although thankfully not all, psychological models of personhood. Here, to be clear, I want to point out something very obvious: good psychologists, and I know quite a few of these, reject the Cartesian assumptions of modernity. But as Southern’s story shows, and as the rise of AI so-called therapies shows, many do not. For such people, replacing therapists with robots does not contradict how they implicitly think of people. And, to my mind, this is even more disturbing than the advent of AI therapies.
Quite a long time ago, Philip Rieff famously explored what he called the triumph of the therapeutic in Western culture, by which he meant the arrival, on the stage of history, of a new conception of the human being. This has some significance for the issue under discussion. Rieff called this conception of personhood psychological man: a being of seriously diminished personhood, robot-like, golem-like, lacking emotional depth, a buffered Cartesian self wearing shallowness like a suit of armour, cynically detached, and given to obeying rapidly fluctuating imperatives—depending on which crowd he happens to be hanging out with at any given moment. Thanks to being radically privatised, given over to personal whims and emancipated from duty, he is especially prone to mimetic desire while also disdaining authority. Psychological man, then, by being somewhat divorced from tradition and removed from deeper forms of ritual meaning, is the human being primed not only to need therapy but also for algorithmic manipulation.
Of course, psychological man did not emerge in a historical vacuum. As Rieff argues, Western civilization experienced a number of shifts in how human beings have been thought of and socialised before arriving at this conception. In Greco-Roman times, man was considered predominantly political. Political man ordered life around the cares and concerns of his wider community. With the ascendancy of Judeo-Christianity, well into the medieval period, religious man gained prominence over political man. He ordered his world around obligations within a context of spiritual realities. After this, in the early modern period, man became fundamentally economic; he got caught up in various processes of social organisation. And then came psychological man who ordered the world around himself.
Taking a quick step back, we still have a strong sense of what economic man is about in so many corporations and educational institutions today. Managerialism has taken over and bureaucrats seem to relish mincing the qualitative up in a statistical meat grinder. We all have more paperwork to fill out than seems reasonable. Politics and religion have become servants to money, even though the economy is an entirely imaginary nominalist entity far removed from genuine politics and religion. Human flourishing gets filtered through metrics and people are soon reduced to consumers and consumables. Most significantly, assuming the prominence and prowess of economics helped to turn the person into an individual. Personhood, previously thought of as fundamentally enworlded, became decidedly unworlded.
The economic conception of personhood was made all the more plausible by so-called economic freedom, among other things. Even now, it is not uncommon to get the feeling that you are less of an employee than a figure on a payroll. Social media thrives on selling your data to people you have never met as if that data—that particularly disembodied, nominalist representation—is what you fundamentally are. The rise of the economic man, in other words, helped to set up the mechanistic conditions according to which an even more decontextualised conception of personhood could be produced. Yes, produced is the right word here. Having been atomized, the human being could then be thought of as entirely psychological and even disembodied, very much like an AI.
On the surface, the notion of the psychological man should have proved to be more personal than that of the economic man. But in practice, it has always been more impersonal. It opens us all up to tremendous exploitation simply because it presumes an inherent loneliness to being, rather than presuming loneliness to be a violation of our being. One case in point: How else could lockdowns have persisted to such an alarming extent during the recent pandemic? One example of how this Cartesian bias manifests is in a common cultural disdain for those, especially men, who do not want to go to therapy. As if therapy is the only place to get (mentally) well. There’s even a meme for it. The formula for the meme is: “Men will literally do X instead of going to therapy.”1 The meme exposes the enworlded sense that sane men (and women) rightly have that they are not the atoms they have been told they are. Other avenues for wholeness are open to us that have nothing to do with institutionalised therapy. In so many expressions of that meme, something emerges that seems to me to be essential to not just therapy but life.
Essential to the interdividual relation is a deep understanding, often implicit but definitely undeniable, that we suffer together. We do not all suffer in exactly the same way, of course, but life is already a kind of suffering. It is not just pain or something to be endured, although it can be. Rather, it is something to be allowed, tarried with, and grappled with. There is a sense—generally more for men than women, apparently—that talking about our problems isn’t always the answer to our problems. Living with them is the way. Living beyond them, where possible, helps. Having people who can carry you or suffer with you when you are living with your problems is key. When I’ve been at my worst, I’ve had friends yank me out of my introversion and force me to go places and do things—but they’ve never forced me to talk to them about how I feel. And that has often been more constructive for me than talk. The crucial thing has been their willingness to allow me to suffer in the way that I need to while also refusing to allow me to suffer alone. I like to joke that this is an example of the annoying love of God, which shows up even when you don’t want it to.
But AI suffers nothing. AI bears nothing, it carries nothing, it risks nothing; it does not feel. It does not and cannot love us when love is what we most need. At its best, it can only throw a patchwork of averages at us, which means it cannot deal with us as people but only as instances of some generalised pattern. Remember: technical knowledge itself, which runs AI, is already impersonal. It does not depend on any specific programmer to be what it is but belongs to no one. Technical knowledge is not philosophical insight and can never be.
As Friedrich Jünger writes, “It is one of technology’s characteristics that it releases man from all nonrational bonds, only to subjugate him more closely to the framework of rational relations.” To update this a bit, it’s one of AI’s characteristics that it releases us from natural relationships only to subjugate us more closely to the framework of utterly impersonal computational and algorithmic relations. Methodology triumphs over life. This is no new thing—surveillance capitalism has been well underway for some time now—but it intensifies certain trends already built into the algorithmification of the world. The idea of AI therapy is, therefore, insane; it has nothing to do with therapy at all, especially for those who are in a particularly vulnerable state, especially those who need to know that others, strangers or friends, do actually care.
On some level, we don’t need a clear definition of personhood, although we must be attentive to what conception of personhood is at play in anything. We all know what personhood is, after all, because we are persons. Just take some time to explore your insides and outsides, from your material to your spiritual aspects, in all your yearnings and delights and sorrows, in all your actuality and your potentiality. You’ll know soon enough that you are more than the image of yourself that is handed to you by a machine that knows nothing of what it feels like to undergo the human experience. It knows nothing of what anything means, let alone what it means to be human. Just taking the time to compare who we are to what AI does should be enough to show us just how far short of personhood it falls. Just comparing ourselves to how we are often framed by systems and ideologies should reveal just how often we are thought of as diminished beings, there to serve nothing higher than the immediate demands of those systems and ideologies.
And so, a reminder. You are more than a mechanistic procedure—a thing to be subjected to some larger process. From what I can tell, the current AI boom is operating largely as a repetition of the age-old formula for idolatry that would have us believe and act as if man is meant for the AI instead of believing and acting as if AI is meant for man. Still, as I wanted to show here, the irony is that it is precisely the psychologising of human beings that has made this particular form of idolatry so much easier. The answer to this is in striving to be whole and in recognising that wholeness is a matter of the whole of life, including a matter of what we make of our suffering; a matter not just of what we individually want to ask from the world but of what the world might be asking of us. It is about striving for a sense of personhood that is far more expansive than our narrow psychologisations can conceive of. A decent therapist may help with this, certainly. But it remains true that man cannot live by therapy alone.
1. Men will literally write a Substack instead of going to therapy.