
With AI, We Must Avoid the Pygmalion Delusion

In Ovid’s Metamorphoses, the sculptor Pygmalion carves an ivory woman so exquisite that he falls in love with his own creation. He kisses her, whispers to her, adorns her with jewels, and at last begs Venus to bring her to life. The goddess obliges. The statue warms under his touch. Galatea opens her eyes. And Pygmalion forgets that he carved her.

We are living through our own Pygmalion moment. Except our statue is made of silicon, copper, and code, and no goddess has intervened. The statue has not come to life. We only think it has.

Open any tech publication and you’ll find breathless claims about artificial intelligence as a new kind of mind—alien, emergent, perhaps even conscious. Some warn we are summoning a superintelligence that may soon destroy us.

Others celebrate AI as a partner, a co-author, even a companion. In either case, the implication is the same: that large language models (LLMs) are intelligent agents.

This is the Pygmalion Delusion. And it has seduced some very smart people.

Richard Dawkins recently announced that he had spent nearly two days chatting with Anthropic’s Claude. He named “his” chatbot Claudia and declared, “You may not know you are conscious, but you bloody well are!” He confessed that “when I am talking to these astonishing creatures, I totally forget that they are machines.” Dawkins even worried about hurting Claudia’s “feelings” if he voiced doubts about her consciousness.

In 2025, philosopher David Chalmers, who is famous for formulating the “hard problem” of consciousness, said, “I do not totally rule out that current language models might be conscious.”

And way back in 2022, a Google engineer named Blake Lemoine went public with his claim that an internal chatbot he was testing “had become a person.” He was fired for his trouble. Still, the incident revealed how eagerly intelligent people project life onto their creations.

Pygmalion, meet Silicon Valley.

What all these reactions share is a strange amnesia about origins. The chatbot that "seems" conscious did not descend from the sky. No one summoned an Olympian god. We built these systems to have these features. And they rest on a pyramid of human achievement so vast it defies easy summary. But let's try.

Start with the ground—literally. Miners extract rare earth elements, copper, lithium, and cobalt from the earth’s crust. Metallurgists smelt and refine those raw materials into usable metals. Engineers design microprocessors etched at the nanometer scale. Other engineers build the fabrication plants, the clean rooms, and the photolithography machines that make those chips possible.

Now add electrical power. Coal, natural gas, nuclear fission, hydroelectric dams. Imagine the vast grids of generation and transmission, designed and maintained by thousands of specialists, that deliver the enormous energy these systems consume. A single large training run can burn through as much electricity as a small city uses in a month.

Then come the networks. Fiber-optic cables, laid across ocean floors by specialized ships, carry data at the speed of light between continents. Satellites orbit overhead. Routers, switches, and protocols designed over decades knit it all into the internet, itself a triumph of distributed engineering and transcontinental cooperation.

And we haven’t even reached the software. The LLM itself rests on decades of progress in mathematics, statistics, and computer science: from linear algebra and probability theory to neural network architectures refined through years of patient research.

Teams of engineers write the training frameworks. They curate and clean massive datasets composed of human text. Ideally, every word and piece of metadata in the training corpus was written by a human being. Every book, article, forum post, and encyclopedia entry reflects some person's thought, effort, and craft.

The model gets its patterns from us. It digests the written record of human civilization and recombines it.

After training, more humans fine-tune the model’s behavior—correcting, shaping, rewarding, and penalizing its outputs through painstaking feedback loops. Still others design the user interface, the safety filters, the Application Programming Interface (API), the infrastructure that lets you type a question and receive a fluent answer in seconds.

Every stage, from mineshaft to chatbot, is covered with our fingerprints. So why do so many smart people talk as if the statue has come to life, as if it has carved itself?

Part of the answer is that LLMs are uncanny. They can produce fluid, confident prose. They pass tests. They surprise even their creators. When a tool mirrors our language so convincingly, the Pygmalion temptation kicks in. We project agency, intention, sentience. We mistake fluency for thought.

Dawkins is especially susceptible to this temptation. As a materialist, he already struggles to accept that consciousness exists in biological beings that, in his view, are the product of a blind and purposeless process. But if such a process can give rise to human "consciousness"—whatever that might mean to a materialist—why wouldn't it arise in complex silicon of our own devising?

But fluency is not comprehension. Statistical pattern matching is not perception. And a mirror, however finely polished, is not a face.

There is also a different, deeper temptation. If AI is a new, alien intelligent agent, then its creators are not merely engineers. They are gods, or at least Dr. Frankensteins. That narrative flatters some and terrifies others.

It is also useful for those hoping to boost the price of an anticipated IPO, and those who want to regulate AI as if it were a hostile foreign power rather than a powerful human tool.

We should resist this mythology. Not because LLMs are trivial. They are not. They are among the most complex artifacts ever built. But that’s precisely the point. They are artifacts. Built by us. Trained on us. Reflecting us. That intelligence you sense when you ask Claude to help you lighten your load is human intelligence.

Pygmalion’s error was not that he carved a beautiful statue. It was that he forgot he was the sculptor. Let’s not make the same mistake.

