If the medium is the message, what is the message of AI?
- Charley Johnson
- Jul 7, 2024
- 5 min read

Marshall McLuhan famously argued that the medium matters more than the content it carries: the medium is the message. Television, for example, has made all news into entertainment. Social media has turned everyone into a content creator, and everything into a collective performance of the crowd. So what’s the message of generative AI? As Ezra Klein rightly suggests, it is that humans are replaceable.
But we’re not, actually. Vaughn Tan, professor at University College London’s School of Management, argues that we’re posing the wrong question as a society. We’re asking, ‘Can AI systems produce outputs that look like human outputs?’ when a more discriminating question would be: ‘What can humans do that AI systems cannot?’ Tan’s initial answer to this question is meaning-making! (Paid subscribers will get access to a conversation between Tan and me next Sunday.)
For Tan, meaning-making is “Deciding or recognizing that a thing or action or idea has (or lacks) value, that it is worth (or not worth) pursuing.” We make subjective decisions about what matters and why all the time — but Tan is concerned we’re no longer conscious of it. That we’re moving through the world unaware of these subtle decisions and judgments. Tan offers four types of meaning-making to make these explicit:
Type 1: Deciding that something is subjectively good or bad. You might decide that “AI is good!” or “AI is bad!” (Or you might read Untangled and decide that whether AI is good/bad misses the point altogether; that these tools are entangled in social systems, and we can’t think of them as conceptually separate phenomena.)
Type 2: Deciding that something is subjectively worth doing (or not). For example, you decided to subscribe to the paid edition of Untangled (right??) because you think it’s worth doing.
Type 3: Deciding what the subjective value-orderings and degrees of commensuration of a set of things should be. For example, I’ve lived in Los Angeles and Brooklyn, and I can tell you that LA is just better.
Type 4: Deciding to reject existing decisions about subjective quality/worth/value-ordering/commensuration. Because we’re unpredictable humans, and who knows why we do anything we do. That’s why we go to therapy.
Technologists and developers make subjective decisions when building AI models all the time: What objective should the machine try to optimize for? Subjective decision! What data should the model be trained upon? Subjective decision! How should that data be classified? Subjective decision! (Read What does it mean to ‘train AI’ anyway for a litany of subjective decisions.) In short, AI models are imbued with the subjectivity and biases of their creators — and the data produced by the subjective decisions you and I make in the world.
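To make that concrete, here is a minimal, purely illustrative sketch in Python of where those judgments live in code. Every name in it is hypothetical, invented for this example rather than drawn from any real training pipeline; the point is only that value judgments end up encoded as ordinary-looking configuration.

```python
# A hypothetical training configuration, sketched to show where
# subjective decisions hide inside ordinary-looking code. Every name
# here is invented for illustration; none comes from a real library.

from dataclasses import dataclass


@dataclass(frozen=True)
class TrainingConfig:
    # Subjective decision: what should the machine optimize for?
    # Choosing "engagement" over "accuracy" or "helpfulness" is a
    # value judgment, not a technical inevitability.
    objective: str = "maximize_engagement"

    # Subjective decision: what data counts as representative of
    # human language and behavior?
    dataset: str = "scraped_web_text"

    # Subjective decision: how should messy human expression be
    # collapsed into a fixed set of labels?
    label_taxonomy: tuple = ("toxic", "non_toxic")


config = TrainingConfig()
print(config)  # every field above is a judgment someone made
```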
Now, just because technologists imbue subjectivity into machines doesn’t mean those machines can therefore make meaning. Shannon Vallor, professor of Ethics of Data & Artificial Intelligence at the University of Edinburgh, has a new book out called The AI Mirror: How to Reclaim Our Humanity In an Age of Machine Thinking. In it, she uses the metaphor of a mirror to describe AI and argues that a mirror is not a mind. She writes:
“Consider the image that appears in your bathroom mirror every morning. The body in the mirror is not a second body taking up space in the world with you. It is not a copy of your body. It is not even a pale imitation of your body. The mirror body is not a body at all. A mirror produces a reflection of your body. Reflections are not bodies. They are their own kind of thing. By the same token, today’s AI systems trained on human thought and behavior are not minds. They are their own new kind of thing — something like a mirror. They don’t produce thoughts or feelings any more than mirrors produce bodies. What they produce is a new kind of reflection.”
Right, as I wrote recently, the computational metaphor — the idea that ‘the computer is a brain’ and ‘the brain is a computer’ — leads us astray. AI systems produce a reflection, which is an altogether different thing. The capital-P ‘Problem’ is that AI systems reflect the status quo. They’re trained on past data shaped by inequitable social systems and human biases; that data drives new actions and decisions in the world, which further encode those historically rooted inequities. As Vallor puts it, “What AI mirrors do is to extract, amplify, and push forward the dominant powers and most frequently recorded patterns of our documented, datafied past.” We therefore never look forward and imagine what we might become; the mirror shows us who we were, and the past follows us into the present.
Our uniqueness as meaning-making meat-bags creates the space between encoding the past and transcending it. Meaning-making, according to Tan, is what allows a human to behave in unexpected ways:
“Humans can decide that an outcome is good and worth pursuing even if the patterns of previous human activity would say otherwise. In other words, humans can intend to be unpredictable — the intentional unpredictability is the result of humans choosing to create new meaning for the outcomes they pursue and the actions they take to achieve those outcomes.”
Machines can’t decide what outcomes matter. Nor should they — that’s our job! By handing over mundane and consequential decisions to machines, we’re renouncing our own agency to imagine better futures for ourselves.
Meaning-making isn’t all we have to reclaim — once you focus on Tan’s question, it becomes clear that there are quite a few things humans can do that AI cannot:
Reflection & Introspection: In a recent conversation with Alison Gopnik for the Los Angeles Review of Books, AI scholar Melanie Mitchell reminds us that reflection is core to our intelligence whereas LLMs “have no calibration for how confident they are about each statement they make other than some sense of how probable that statement is in terms of the statistics of language.”
Mental Models & Concepts: Mitchell further explains that a concept is “a mental model of some aspect of the world that has some grounding in the truth.” It doesn’t have to be true in the physical world. Mitchell offers the example of a unicorn — it’s not a real thing, but it exists in a fictional world and we can reason about it. She goes on to say “I think these mental models of the way the world works, which involve things that cause other things, are ‘concepts.’ And this is something that I don’t think LLMs have, or maybe even can develop, on the scale that humans do.”
Moral Imagination & Practical Wisdom: AI cannot imagine things it hasn’t encountered in its training data. That’s a huge limitation, and as Vallor writes, something that humans can do quite well: “Practical wisdom is the virtue that allows for moral and political innovation. Because it links our reasoning from prior experience to other virtues like moral imagination, it allows us to solve moral problems we haven’t encountered before.”
What else can a human do that AI can’t? Leave your examples in the comments!
The medium of any message is powerful. As Ezra Klein and others have argued before, we can easily draw a straight line from television to everything-is-entertainment to Donald Trump. But as we consider the message of AI (which, of course, as introspective creatures, only we can do), remember another quote from McLuhan: “There is no inevitability as long as there is a willingness to contemplate what is happening.” By focusing on whether we’re replaceable, we’re giving up what makes us irreplaceable: the ability to imagine, decide upon, and build futures that depart from the past.