Next year, it’ll be 40 years since the release of The Terminator – a science fiction film about attempts to disable an artificial intelligence system that has triggered a nuclear war.
What seemed excitingly futuristic in the age of the Sinclair Spectrum and dot matrix printer has recently become the focus of lurid newspaper headlines.
Many of the people who’ve brought AI to its current state of evolution are now declaring belated concerns about its risks.
Indeed, sections of the media are suggesting we’re not far away from a Skynet-like digital entity gaining sentience and deciding humans need to be brought to heel.
We’ve recently seen AI-powered breakthroughs in mobility, medicine and content generation.
In an age when ChatGPT can write poetry, and cars can drive and park themselves, is the threat of AI something we should genuinely fear?
Who’s worried about the threat of AI?
Perhaps the most alarming aspect of recent press coverage is that the people raising concerns are the very same ones who’ve created these tools.
The heads of companies like OpenAI and Google DeepMind recently signed a statement warning that “mitigating the risk of extinction from AI” was as important as reducing the risk of nuclear war.
They cited scenarios as wide-ranging as AI systems being used to build chemical weapons, society being destabilised by fake news, state-level censorship and the enfeeblement of mankind.
The latter was depicted in the film Wall-E, where humans have become immobile and morbidly obese, with no reason or desire to perform even basic functions for themselves.
This is all science fiction, isn’t it?
Ordinarily, it would be tempting to say yes. However, our understanding of AI’s true power (and potential) is lagging way behind its development.
If an AI system ever did gain sentience, whether by design or by accident, it’s hard to imagine it surveying the human race and reaching a positive conclusion about our presence.
It’s hard to deny that mankind is damaging (and potentially destroying) the planet through overpopulation, deforestation, pollution, nuclear armament and other harmful activities.
The most obvious step any AI system would take in such circumstances would be to curtail our power, or attempt to eliminate us just as we sought to eliminate a pandemic virus.
Should I build a fallout shelter?
There’s no need to start prepping just yet. While the media have gleefully seized on the worst-case scenarios of sentient AI, its risks are far more insidious, if less apocalyptic.
It’s easy to imagine how social media could be set ablaze by misinformation, in an age when 40 per cent of under-35s avoid the news and get their worldview from TikTok and Snapchat.
Careers are already being reshaped or replaced by AI. Research has suggested the majority of banking and insurance tasks could be fully automated in the near future.
We might have no need for drivers, scriptwriters, accountants or retail assistants if AI evolves to an extent where it can perform these roles without requiring payment, holidays or canteens.
At the same time, there’d be little comeback against AI decision-making. It could be biased and incontestable in equal measure – the ultimate manifestation of ‘computer says no’.
What’s being done about this?
Rishi Sunak has championed a global body to oversee advanced AI, talking about “guardrails” and the need for regulation against the threat it poses.
The G7 political forum has now agreed to create a working group on AI, bringing together the brightest minds across Europe, Japan and North America.
However, any global action will require buy-in from the biggest nations (China, India), pariah states (Russia, North Korea) and AI pioneers (South Korea, Estonia) alike.
Given ongoing geopolitical tensions surrounding the war in Ukraine and potential conflict in Taiwan, that’s by no means a certainty.
However, it’s going to be necessary to ensure AI doesn’t become too powerful for our own good.