Why the UK didn’t sign up to global AI agreement

A two-day summit in Paris was supposed to bring the world’s leaders together to
create a united front on AI – but the UK and US left without signing anything.

Science and Technology reporter
Wednesday 12 February 2025 18:01, UK
World leaders and tech bros descended on Paris this week, with some determined
to show a united stance on artificial intelligence.
But at the end of the two-day summit
[https://news.sky.com/story/paris-ai-summit-shows-rift-between-regulation-and-innovation-13306565],
the UK and the US walked away empty-handed, having refused to sign a global
declaration on AI.
Earlier on Tuesday, US vice president JD Vance told his audience in Paris that
too much regulation could “kill a transformative industry just as it’s taking
off” and Donald Trump has already signed an executive order removing rules
imposed by Joe Biden.
But for the UK, the declaration did not go far enough.
“The declaration didn’t provide enough practical clarity on global governance
and [didn’t] sufficiently address harder questions around national security,”
said a UK government spokesperson.
So what is the UK government concerned is missing?
Aside from taking jobs and stealing data, there are other existential threats to
worry about, according to Carsten Jung, the head of AI at the Institute for
Public Policy Research (IPPR).
He listed the ways AI can be dangerous: from enabling hackers to break into
computer systems, to losing control of AI bots that “run wild” on the internet,
to even helping terrorists create bioweapons.
“This isn’t science fiction,” he said.
One scientist in Paris warned the people most at risk of unregulated AI are
those with the least to do with it.
“For a lot of us, we’re on our phones all the time and we want that to be less,”
said Dr Jen Schradie, an associate professor at Sciences Po University who sits
on the International Panel on the Information Environment.
“But for a lot of people who don’t have regular, consistent [internet] access or
have the skills and even the time to post content, those voices are left out of
everything.”
They are left out of the data sets fed into AI, as well as the solutions it
proposes for workforces, healthcare and more, according to Dr Schradie.
Without making these risks a priority, some of the attendees in Paris worry
governments will chase after bigger and better AI, without ever addressing the
consequences.
“The only thing they say about how they’re going to achieve safety is ‘we’re
going to have an open and inclusive process’, which is completely meaningless,”
said Professor Stuart Russell, a scientist from the University of California at
Berkeley who was in Paris.
“A lot of us who are concerned about the safety of AI systems were pretty
disappointed.”
One expert compared unregulated AI to unregulated food and medicine.
“When we think about food, about medicines and […] aircraft, there is an
international consensus that countries specify what they think their people
need,” said Michael Birtwistle from the Ada Lovelace Institute.
“Instead of a sense of an approach that slowly rolls these things out, tries to
understand the risks first and then scales, we’re seeing these [AI] products
released directly to market.”
And when these AI products are released, they’re extremely popular.
Just two months after it launched, ChatGPT was estimated to have reached 100
million monthly active users, making it the fastest-growing app in history. A
global phenomenon needs a global solution, according to Mr Jung.
“If we all race ahead and try to come first as fast as possible and are not
jointly managing the risks, bad things can happen,” he said.