Degenerative AI
Why I won't be encouraging my students to outsource their intelligence

“Not the external and physical alone is now managed by machinery, but the internal and spiritual also ... The same habit regulates not our modes of action alone, but our modes of thought and feeling. Men are grown mechanical in head and heart, as well as in hand. They have lost faith in individual endeavour, and in natural force, of any kind. Not for internal perfection, but for external combinations and arrangements, for institutions, constitutions – for Mechanism of one sort or another, do they hope and struggle. Their whole efforts, attachments, opinions, turn on mechanism, and are of a mechanical character.”

— Thomas Carlyle, Signs of the Times (1829).

“Cannot you see, cannot all you lecturers see, that it is we that are dying, and that down here the only thing that really lives is the Machine? We created the Machine, to do our will, but we cannot make it do our will now. It has robbed us of the sense of space and of the sense of touch, it has blurred every human relation and narrowed down love to a carnal act, it has paralysed our bodies and our wills, and now it compels us to worship it. The Machine develops—but not on our lines. The Machine proceeds—but not to our goal. We only exist as the blood corpuscles that course through its arteries, and if it could work without us, it would let us die. Oh, I have no remedy—or, at least, only one—to tell men again and again that I have seen the hills of Wessex as Ælfrid saw them when he overthrew the Danes.”

— EM Forster, The Machine Stops (1909).

Imagine the following scenario. A gifted young sportsman, set to become one of the world’s top Olympic sprinters in a few years, goes to his doctor for a routine checkup after recovering from a recent injury. On walking into his consulting room, he is immediately struck by the expression on the doctor’s face. The man is aglow with childlike wonder. The young athlete doesn’t have to wait long to find out why.

“I have some amazing news for you,” says the doctor to him. “We need to amputate your legs!”

The young man is horrified. He asks, “What’s wrong with my legs?”

The doctor replies, “Your legs are just fine! There’s nothing wrong with them at all! But—they have to go.” The athlete, puzzled and anxious, asks for an explanation. “The thing is,” the doctor explains, still positively beaming, “we have this new prosthetic technology. It’s been in development for years and it’s astonishingly impressive. You’ll be gobsmacked at just how impressive it is. But to use it we have to amputate your legs.”

“At least, let me see what you’re talking about,” the athlete says. The doctor goes to a big white cabinet at the side of his consulting room, opens it, and takes out what look very much like two ordinary egg-beaters.

“Those look like ordinary egg-beaters,” says the sportsman.

“Quite right,” the doctor replies. “That’s precisely what they are!”

None of this makes sense to the young man. “You want to chop my legs off and wreck my sports career, destroy my chances of reaching the Olympics, so that you can replace my legs with egg-beaters?” The doctor hesitates for a moment. Bewildered that any sane man would look this particular gift horse in the mouth, he replies, “But, just look at them”—he holds the two egg-beaters out in front of him as if this will help his patient to see them more clearly—“isn’t this just marvellous technology!?”

There are two things in this mad tale I want to home in on. First, it highlights that technology is not just an extension or enhancement of human capacities, but is always, in some sense, an amputation. Second, it reveals, via its absurdity, that when adopting technology as a substitute for some human capacity, it is possible to select a bad substitute. Selecting the right substitute would require having the capacity to evaluate the substitute properly, and this is by no means an automatic given.

By analogy, consider AI, which I want to discuss here. The kind of substitution that AI promises is worrying because it is, I believe, as bad as substituting egg-beaters for legs. But, to make matters worse, it is not obvious to most people that it is such a terrible substitute. It is, at its very core, concerned with deception; it is not even vaguely close to offering what thought can offer, and yet this is precisely why people seem to want it. They believe it can effectively simulate human thought. And, so the story goes, it can do this far quicker than the mind can. But before I say more about this, I first want to talk a bit more about the amputative nature of technology.

Technologies are extensions of ourselves, as Marshall McLuhan has discussed at length in his work. They are meant to be enhancements. Just as an axe enhances human strength, a chainsaw enhances the axe. But McLuhan also emphasises what is typically forgotten, namely that technologies don’t just enhance but also diminish and obsolesce. They don’t just give but also take away. The prosthetic, in its very nature, rests on a necessary amputation. There are degrees and kinds of prosthetics and variations on the theme of amputation. But the point remains that adopting the former means accepting the latter.

There is no taking on the effects of technology without taking on its side effects—which are simply effects by another name. Here’s a tamer example than the one I offer above to put things into perspective. Just recently, I saw a video in which a woman asks four university students this simple question: “What is 15 x 4?” The students fumble around with a few possible answers. One says “23,” then “24. Is it 24?”—offering the second answer as a correction of the first. But then another student more confidently announces “48!” All four of them agree. The answer, they say, is definitely 48. It is, of course, 60. Feeling certain, it turns out, is not nearly the same thing as being right.

Apart from demonstrating why democracy is potentially more dangerous than fascism, and apart from echoing Solomon Asch’s famous experiments in conformity and Stanley Milgram’s in obedience, the video demonstrates the power of the common calculator, or the common calculator app, to destroy the ability of ordinary people to perform exceedingly simple calculations. A skill that is undeveloped and unpracticed is no skill at all. My mathematics skills aren’t so terrible that I couldn’t quickly figure out the answer to the above question before the students answered wrongly. Nevertheless, my maternal grandmother, whom I witnessed working out calculations with remarkable speed and accuracy when I was younger, would have been better at it than I could ever be. She never outsourced her ability to a calculator. Portable calculators only became accessible around the 1960s and 70s, when she was in her prime.

The principle is not difficult to grasp: the tools we use can and often do hamper, if not outright attack, the abilities and potentials we possess. What we don’t practice, we lose. Just as muscles can atrophy when they are not used, skills can be lost. Intelligence is not immune to desecration.

Now consider AI as an assault on human potential. Many people around me have taken the stance that this is just a new technology that we need to get used to. “People have always opposed new technologies,” some have said to me in these words or words like these, “but this changes nothing. Everyone’s going to use AI in the end so there’s no point in fighting progress.” Here’s a question for those people. Progress towards what? Anyway, this somnambulant subservience to the supposedly inevitable is just an argument that peer pressure ought to determine the quality of our decisions. In this case, it’s like saying, “As long as we all agree to use this or that poison, it’s good.” Try and apply this awful principle more widely and soon we’ll have people saying, “Russian roulette is really fun. Let’s mandate it for children!” Or “Chewing cyanide is so popular these days that it should be served at restaurants before meals!” Well, I will not eat the bugs. I will not live in the pod. I will not be an NPC. And I will not be letting AI do my thinking for me.

It helps, mind you, to stop considering this too abstractly. The moment we think of humanity in general and of AI as something humanity uses, we might find no reason to oppose the scraps and dregs of human ingenuity being thrown our way. But stop and consider real human beings and what this might do to them. It is my very own children I think of when I consider the way the mimetic flood is going. Will AI be good for them? Will it bring out the best in them? The answer becomes much plainer when you put it like that. The answer is no.

But this negative answer is even plainer when you study technology and its effects. What astounds me is that the most basic question, the question not of what AI can and cannot do (in its content) but of what AI is and what effects it’ll have on people (in its form), is not a question many policymakers are placing at the centre of the discussion. It’s too difficult a question to answer for people who have already become conformists and NPCs. But, for that reason alone, their reasoning needs to be treated with no small amount of suspicion.

I’ve seen various general guidelines for the use of so-called generative AI in an educational context. All of them treat humanity as a nonentity and the tech as something that must not be told no on any account. The tech must be accepted. We cannot win. That’s the general vibe, anyway. The 1968 slogan, “It is forbidden to forbid,” applies well to technological developments and their adoption. One guideline document, produced by a company called Propello, contains these words: “Artificial intelligence technologies are advancing rapidly and disrupting the way that traditional methods of teaching and instruction operate.” After agency is granted to dead code and disruption is treated as unavoidable, the discussion jumps into the content—not the form—of certain kinds of AI and what it does well and badly.

To make matters worse, the content-fixation turns to political correctness as it considers issues of “Bias and Inequality in AI” and “Overcoming Algorithmic Bias,” as if the real trouble with AI were not that it encourages a kind of brain damage for everyone but that it is not yet advanced enough to make ethical decisions according to liberal ideology. Worse, the technical sense of the word “bias,” which is already prevalent in AI discourse, has been conflated with the contemporary obsession with a particularly insipid form of politeness. Did AI write this document? It doesn’t feel very human to me. It is devoid of helpful insights.

But that’s just the start of the trouble. Were this managerial-speak a unique case, I could dismiss it. But it is perfectly in keeping with trends everywhere. What’s the use of noting the uses and limitations of AI when no attention is paid to what AI is going to do to us? It is, as I’ve already implied, like joyfully discussing the wonders and limitations of an artificial limb when using said limb requires amputation and—moreover—is not even close to being an adequate replacement for real legs. AI is to thought what egg-beaters are to legs. We must first notice how little actual thinking is going on to realise why people are mistaking it for thinking. The real Turing test will be whether people can talk about AI without turning into dimwits. So, what is AI really? What will it do to us?

One of the dominant responses to the arrival of AI, especially among educators, is to adopt the idea that what we need to do is encourage critical thinking about AI—as if critical thinking were the only kind of real thinking. One example of said solution—in this age possessed by the ideology of solutionism—is this: ask a question, get AI to answer it, and then take the trouble to critique the answer. It sounds smart until you think about it phenomenologically. Notably, it doesn’t come close to addressing why one should let one’s answers to any proposed question be conditioned by the answer provided by a machine. Why let the machine set the terms of the discussion?

Moreover, this supposedly clever use of AI reveals just how little thought is given to the fact that engaging critically with AI answers presumes that students can do this well. Already, I’ve heard many students and colleagues admit to struggling with such a task. Thinking has already been eroded to such a degree, possibly mostly by hyperlinking, that it’s difficult for ordinary people to think coherently about any given subject. This already-damaged capacity is only likely to worsen after a few years of living in a world utterly immersed in the recycled AI drivel we are already being bombarded with. What sort of consciousness will people then evaluate AI with? According to which values? Even before the advent of LLMs and the like, much public human thought had started to take on the shape of something passed through an algorithmic meatgrinder.

More broadly, the example above of getting people to critique AI answers falls into the trap noted by McLuhan, of letting the content undermine our awareness. In Understanding Media (1964), McLuhan notes the following:

“The content or uses of [any] medium are as diverse as they are ineffectual in shaping the form of human association. Indeed, it is only too typical that the ‘content’ of any medium blinds us to the character of the medium … Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the ‘content’ of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.”

To get behind the content, to get a sense of the form, I want us to take a step back. I don’t want to get into the nitty-gritty of what AI developments are happening. I don’t want to spend aeons distinguishing between one kind of AI and another or discussing the details of machine learning, etc., etc. I don’t care which developments are happening even though I have, often by virtue of professional accident, learned a lot about them. What concerns me most is what it means to use AI; and I propose that we should tend towards definitely, even defiantly, not using it. Just as pornography creates sexual dysfunction, AI will contribute significantly to intellectual dysfunctions of all kinds. If you think people are stupid now, just wait. The mindrot is only set to get worse. The existential calamities that will arise from using this deceptive tech to chip even more away at our meaning-finding faculties are going to be terrible.

So let’s ask this question: What is human thinking really like? Consider, out of the vast array of mental capacities we possess, our capacity for reasoning. There are, broadly speaking, three kinds of reasoning: deductive, inductive, and abductive. As Erik Larson discusses in his essential book The Myth of Artificial Intelligence (2021), AI research used to be focused on the first of these until it was realised that deductive reasoning is only a way of checking what we already know; it can’t teach us anything new. What AI research focuses on now, as a consequence, is inductive reasoning.

A lot of human reasoning is inductive. It is reasoning that extracts patterns from regularity and enumeration—that is, from the sheer accumulation of data. Induction tends towards averaging and, because of this, towards the banal and the brittle. Induction is often right but not perfectly so, as philosophers like David Hume and Bertrand Russell have explored. Just because something is regular and has a fair degree of predictive power doesn’t mean it is trustworthy for arriving at true conclusions. The turkey who predictably gets fed at 9 o’clock every morning will predict that he’ll be fed the next day, even though it is Christmas and he’s about to lose his head. Induction works in rule-bound games, which are closed systems. But it doesn’t work in life, which is filled with unpredictability and with explosions of otherness that confound expectation. It is the exceptions, the nuances, the micro-differences and subtleties of concrete experience that count in life. And for that you need abduction.
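For readers who want to see the brittleness in the machine’s own terms, here is a minimal sketch of enumerative induction as code—my own toy illustration, not anything from Larson; the names and numbers are invented for the example:

```python
# A toy model of enumerative induction: predict tomorrow by tallying
# yesterday. Everything here (names, numbers) is invented for illustration.
from collections import Counter

# Ninety-nine mornings of perfect regularity.
observations = ["fed at 9 am"] * 99

def induce(history):
    """Extrapolate the most frequent past outcome into the future."""
    return Counter(history).most_common(1)[0][0]

print("Prediction for day 100:", induce(observations))  # -> "fed at 9 am"
# Day 100 is Christmas. The farmer arrives with an axe, not a feed bucket.
# No amount of enumeration encodes the exception; anticipating it would
# take abduction, and there is no abduction anywhere in this function.
```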

Abductive reasoning is often thought of as a kind of guessing. In a way, it is guessing. It is reasonable guessing. It is a form of logical inference that seeks the simplest and most likely explanation for a set of observations. It is not enumerative. It looks for pictures and stories and not just patterns. It steps outside of mathematical rigidity. It embraces Gödel’s incompleteness theorem. It is the ground of our reasoning. It is the primary way we make sense of the world. It is through abduction that meaning becomes real to us. Its main aim is to get past deceptions by attending to nuances, exceptions, difficulties, complexities. No AI can perform abductive reasoning. AI can’t get beyond deception because it is deception par excellence.

In theory, doesn’t this just mean that we should leave induction to machines and lean on our own abductive capacities? In other words, shouldn’t we just outsource what we’re not so good at so that we can get on with what we are good at? This is perhaps one argument for adopting generative AI in its various forms, and perhaps, I will grant, there is something to be said in its favour. However, the human mind is an integrated system—at least, when working well it is. In the mind, deduction, induction and abduction can and should work together, along with the many wild and wondrous other dimensions of consciousness and experience. It is together that the full potential of our thinking can be exercised.

I know some experienced researchers who are using AI to help process vast quantities of data while they provide the interpretive framework and additional reasoning skills to (a) check that the machine isn’t misleading them and (b) find sufficient meaning in the data to add to knowledge. Still, the machine is conditioning thought more than the other way around. The medium is not just some minor tweak to an existing structure but something that reshapes the entire environment, and perception with it.

In theory, since the weaknesses of deduction and induction for AI have been exposed, and AI’s lack of abduction has been noted, shouldn’t researchers just move on to modelling abductive reasoning? Maybe, but no one knows how it works. It’s a mystery. Douglas Hofstadter and Emmanuel Sander, in their monumental book Surfaces and Essences (2013), have suggested what I think is the most plausible possibility, which is that it works by analogy; but this human capacity can’t be translated into code because it rests in a felt and intuited sense of the Gestalt of reality.

Anyway, as I see it, here’s the main trouble with using AI, which is just a glorified calculator, to compensate for human thinking: it’ll force people into amputating the foundation of their thinking, namely abductive, analogical reasoning. Precisely because we have this mode of reasoning at our disposal, we can mistake what the computer is doing for what people do. We can, therefore, also take deliberate steps towards eroding our capacity for perceiving the world in its directness and complexity. Any form of vicariousness taken on as a dominant mode of encountering the world will ultimately erode our sense of meaning. We will become the mere blood in the veins of some much larger machine, alive but barely so, diminished in our sense of the world and in our sense of ourselves.

Am I saying that we should wait until AI researchers catch up to human intellect and then leave the real world to the machines? Will these thought-prosthetics be good enough then? No. I am deaf in my right ear and I permanently wear a hearing aid for this reason. But I can honestly say this: as grateful as I am for this technology, which restores to me a somewhat normal sense of the auditory world, I would prefer that my right ear worked normally. The technology in this case is a compensation for something lost. There is no trade-off in my using it, even if some extra effort is required from my side to get my hearing checked regularly, look after the tech, and so on. And, again, the use of a hearing aid presupposes some kind of amputation.

But with generative AI, there are many trade-offs. Overwhelmingly, it is the richness of the experience of understanding that is diminished. Understanding is an experience, one of the best aspects of this human experience. So much of our experience of the world is bound up in it. Sure, I can ask some AI to explain Hegel’s Phenomenology of Spirit to me, and perhaps it’ll make it easier—if it could only get the basic stuff right, which it jolly well can’t. But the real joy of understanding is found in the fact that one cannot merely read Hegel’s Phenomenology of Spirit. One has to fight with it. One has to be read by it. One has to want to hurl it against a wall because the book feels like an assault sometimes. One has to want to burn the damned book.

Only, one must not burn it; one must read it, and understand and experience what Hegel is trying to say. Then the light will dawn and the suffering will prove meaningful—even if intermingled in that suffering is a distinct sense that Hegel is ultimately wrong. But note: understanding isn’t a disembodied phenomenon, even while we grapple with abstractions. Another analogy might help: it’s one thing to go to a place, to experience it, to feel it, and quite another to see photographs of it. When the internalisation of understanding is sacrificed to vicariousness, we lose contact with our depths.

I think often of that old, brilliant movie, The Truman Show (1998). When he’s a kid, the protagonist Truman has an insatiable appetite for adventure, but the studio that adopted him to star in the show of his own life, without his knowing about it, doesn’t like his adventurous spirit. It’s destined, and they know this, to mess with their plans for him. He’s wrecking their script. He’s confounding their algorithm. They start to deliver strong messages to him, via the actors in the show, that his adventurous spirit should be suppressed. All places everywhere have been discovered, after all. All knowledge has already been documented. And yet, he won’t—he can’t—follow their intentions for him.

And I can’t either. I can’t submit myself to this increasingly algorithmified world because it’s utterly stifling. Maybe I was trained wrong. Maybe I didn’t get all the right inputs from the social engineers of our time. In any case, I don’t think reality works as the behaviourists would have it work. If we would just allow it to speak to us and in us, we’d all know this. Actually, the sheer proliferation of mental illness, rising suicide rates, and the widespread loss of meaning, all of it says we already know at some level. We know this isn’t good. The mimetic sway of things may eventually pulverise most of us into submitting to it. But I won’t.

When the adventure of understanding is taken from us, our inwardness is tyrannised. It is tyrannised by sheer, unbridled artificiality. It is the lie that starts to determine what sort of truths we are willing to accept. Not all of the consequences of this are obvious, but to deny that there is a cost to casually adopting this tech is, to me, a form of wilful insanity. However, as I’ve already intimated, we need to go further. Generative AI is not a substitute for thinking in the way that a prosthetic leg is a substitute for a real leg, or in the way that a good hearing aid fills in for lost hearing. AI, incapable of abductive reasoning, only trains us more and more to accept inductive reasoning as the only legitimate way of constructing a sense of the world; it is like supplying egg-beaters as prosthetics when legs are needed. It’s worse, still: it’s like deliberately, enthusiastically, delightedly lopping off good legs and replacing them with things that are not legs at all.
