Artificial Intelligence (AI) isn’t just made up of data, chips and code – it’s also the product of the metaphors and narratives we use to talk about it. The way we represent this technology shapes how the public imagination understands it and, by extension, how people design it, how they use it, and what impact it has on society at large.
Worryingly, many studies show that the predominant representations of AI – anthropomorphic “assistants”, artificial brains, and the omnipresent humanoid robot – have little basis in reality. These images may appeal to businesses and journalists, but they are rooted in myths that distort the essence, abilities and limitations of current AI models.
If we represent AI in misleading ways, we will struggle to truly understand it. And if we don’t understand it, how can we ever hope to use it, regulate it, and make it work in ways that serve our shared interests?
The myth of autonomous tech
Distorted representations of AI are part of a common misconception that the academic Langdon Winner dubbed “autonomous technology” back in 1977: the idea that machines have taken on a life of their own and act upon society independently, purposefully and often destructively.
AI gives us the perfect incarnation of this, as the narratives surrounding it flirt with the myth of intelligent, autonomous creation – as well as the punishment for assuming this divine function. It is an ancient trope, one that has given us stories ranging from the myth of Prometheus to Frankenstein, Terminator, and Ex Machina.
This myth is already hinted at in the ambitious term “artificial intelligence”, which was coined by computer scientist John McCarthy in 1955. The label took hold in spite of – or perhaps because of – the various misunderstandings it causes.
As Kate Crawford succinctly argues in her Atlas of AI: “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.”
Most problems with the dominant narrative of AI can be attributed to this tendency to represent it as an independent, almost alien entity, as something unfathomable that exists beyond our control or decisions.
Misleading metaphors
The language used by many media outlets, institutions and even experts to discuss AI is deeply flawed. It is riddled with anthropomorphism and animism, images of robots and brains, stories – invariably fabricated – about machines rebelling or acting inexplicably, and debates about their supposed consciousness. All of this is heaped onto a prevailing sense of urgency, panic and inevitability.
This vision culminates in the narrative that has driven the development of AI since its inception: the promise of artificial general intelligence (AGI), a hypothetical human-level or superhuman intelligence that will change the world and even our species. Companies such as Microsoft and OpenAI, and technology leaders like Elon Musk, have been predicting AGI as an ever-imminent milestone for some time now.
However, the truth is that the path to this technology is unclear, and there is not even consensus on whether it will ever be possible.
Narrative, power and the AI bubble
This is not just a theoretical problem. The deterministic and animistic view of AI constructs a predetermined future, as myths of autonomous technology inflate expectations and divert attention from the real challenges AI poses.
This hinders a more informed and open public debate about the technology. A landmark report from the AI Now Institute refers to the promise of AI as “the argument to end all arguments”, a way of avoiding any questioning of the technology itself.
As well as stoking a mixture of exaggerated expectations and fears, these narratives have helped inflate the AI economic bubble that various reports and technology leaders are now warning about. If the bubble exists and eventually bursts, we should remember that it was fuelled not only by technical achievements, but also by a narrative that was as misleading as it was compelling.
Changing the narrative
To repair the broken AI narrative, we have to bring its cultural, social, and political dimensions to the fore. We have to leave behind the myth of autonomous technology and start seeing AI as an interaction between technology and people.
In practice, this means shifting the focus in several ways: from technology to the humans who guide it; from a techno-utopian future to a present that is still under construction; from apocalyptic visions to real and present risks; from presenting AI as unique and inevitable to an emphasis on autonomy, choice, and diversity among people.
We can drive these shifts in a number of ways. In my book, Technohumanism: A Narrative and Aesthetic Design for Artificial Intelligence, I propose several stylistic recommendations for escaping the narrative of autonomous AI. These include not making AI the grammatical subject of a sentence when it is being used as a tool – “researchers used a model to draft the text”, rather than “the AI wrote the text” – and avoiding anthropomorphic verbs such as “thinks”, “understands” or “decides” when we talk about it.
Playing with the term “AI” itself also helps us see how much words can change our perception of technology. Try replacing it in a sentence with, for example, “complex information processing” – one of the least ambitious but most accurate names considered in the field’s early days.
Important debates on AI, from those on regulation to its impact on education and employment, will continue to rest on shaky ground until we correct the way we talk about it. Designing a narrative that highlights the social and technical reality of AI is an urgent ethical challenge. Successfully confronting this challenge will benefit technology and society alike.
Pablo Sanguinetti, Professor of AI and Critical Thinking, IE University
This article is republished from The Conversation under a Creative Commons license. Read the original article. This article was originally published in Spanish.