Some of AI’s leading lights have called for a six-month moratorium on AI development to prevent a big-business AI arms race.
The open letter from the Future of Life Institute (FLI) includes the signatures of Professor Stuart Russell, author of Human Compatible, occasional world’s richest man Elon Musk, Skype co-founder Jaan Tallinn, and Apple co-founder Steve Wozniak.
Chief among its objectives is the development of AI systems that are safe for humanity, a point the FLI sets out in its invitation.
“As you know, recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful AI systems that no one – not even their creators – can understand, predict, or reliably control.
“If you’re concerned about this, please consider signing our open letter advocating for a temporary pause on AI systems more powerful than GPT-4, to allow time for appropriate precautions,” the invitation stated, going on to add that the move was grounded in precedent.
“There is widespread support – for example, the widely adopted OECD AI Principles require that AI systems ‘function appropriately and do not pose unreasonable safety risk’.”
AI arms race
The move by the FLI reflects growing concern at the speed of AI development, which has spawned competition not just among big tech AI labs but also among governments desperate to win any AI race.
It is a point made by the Finnish hacker Mikko Hypponen in March’s ‘this AI life’ PassW0rd radio programme.
“For example, let’s say that IBM issues a press release that it has made a breakthrough. It believes it is on the verge of superhuman intelligence, it is ready to demonstrate this next month and ship it next year. What would happen? Think about China’s President Xi or Russia’s President Putin. What they would see is that the Americans are going to win the race.
“This is the most important race. If the US win this race, they will win everything. The Americans will be superior in everything forever. They will win every race in every area. They will create every innovation. From now on, they will be the economic superpower forever. They will win every war. If that’s the case, then the obvious thing to do is to steal that technology at any price. Or if you can’t steal that technology, then you must destroy that technology, so your enemies don’t get it.
“To me, it would seem that an innovation at this scale would destabilize global peace, instead of bringing great benefits.”
AI development unease
It is a fear that is now tangible. Innovations like AI and the recent promise of quantum computing have created palpable excitement among governments, venture capitalists and entrepreneurs, with each positioning themselves to take advantage of the developments.
“At the moment the situation is very worrying,” said one of the signatories of the open letter, who asked not to be named because of their employer.
“Imagine a lot of surfers jostling to try to get the best position on a huge oncoming wave. People are not necessarily thinking, they just want to be on the wave, come hell or high water. What we have to do is to try to slow that excitement down a little so that people think a bit. At the moment it’s not being helped by some Governments.”
It may be necessary to put AI development onto a similar footing to genetic engineering and nuclear weapons development, a move many say is needed because of the technology’s potential to create a disaster on an unprecedented scale.
“If you think of the speed, particularly over the internet, at which AI can do things, a piece of code can be quickly copied and a process started that everyone is unaware of because of the way AI operates,” the signatory added. “It does represent a threat unless we have some means of knowing what it is doing.”
It is a point emphasised in the FLI invitation.