Concerns over the rapid development of AI technology have led to growing calls to regulate the use of AI.
Concern has grown with the unexpected emergence of programs such as ChatGPT and Bard. These programs, known as large language models, can produce essays and articles that appear to have been written by a human.
According to leading experts interviewed for the PassW0rd radio programme ‘this AI life’, we now need an AI agency.
“Humans to date regulate new technologies. So, it’s very clear to me that regulation is helpful. Regulation is required,” said Janet Adams, the chief operating officer of SingularityNet, a company founded by AI guru, Dr Ben Goertzel.
“Regulation will help ensure that these technologies are used for the benefit of all. Not just the benefit of a few. That they’re not used in destructive ways to make life harder for people or to deprive us of our privacy.”
AI needs guardrails
Adams’ views were echoed by Triveni Gandhi, Responsible AI Lead at the global AI company Dataiku.
“I think there should be an agency. There should be guardrails in place, I think there will have to be differentiators along the lines of risk. What is a very high-risk model versus a low-risk model?
“A model that predicts if a machine is going to break down on the manufacturing line is probably a low human risk. Things like predicting if somebody is going to need extra medical care? That seems like a high-risk application,” said Gandhi, adding: “I think that things like ChatGPT and other language models are high-risk applications. So, put in guardrails there to say here’s the limits of what we want you to be able to do. That makes sense.”
The concerns about the use of AI have been heightened by the speed of the technology’s adoption. A recent Deloitte study found that over 50% of organisations plan to use AI and automation technologies in 2023.
The development provokes fears of an economic dislocation similar to that caused by the Industrial Revolution 250 years ago, an event which displaced huge numbers of workers from the countryside into towns as machines destroyed their livelihoods.
The ‘intelligence revolution’
The adoption of AI is a process that has been called the ‘Intelligence Revolution’. Many predict it will destroy huge numbers of jobs involving processes such as moving materials or bricklaying, leading to huge changes in manufacturing, building and transport. The Intelligence Revolution, though, will be different: for the first time, AI threatens to replace middle-class professions such as accounting, law and journalism. That threat has been demonstrated by ChatGPT, now the most talked-about writer and thinker of the 21st century, which can not only write newspaper articles and opinion pieces but also produce material for teenagers’ exam courses.
It is a feature of the technology that concerns SingularityNet’s Adams.
“It’s very clear that investment is pouring into AI. It’s here and it’s here to stay. We need to find ways of ensuring that this powerful technology is harnessed and used in ways that benefit humanity. That benefit the planet, and that increase access to wealth, education, and technology and reduce inequalities on our planet.”
Adams adds that it is essential AI is not used to increase the imbalance between those who hold data and the individuals whose data is gathered — particularly, she says, if that data is used in new and powerful ways those individuals do not know about.
Yet control of AI is contentious because of its potential to deliver huge advantages to those powers able to harness it, creating fears that any regulation could mean falling behind in an AI arms race.
To be or not to be
This is a challenge that is dividing lawmakers. Some are putting their weight behind the development of AI agencies to control the deployment of AI systems that can manipulate people. Others argue that particular industry sectors, such as building, should govern the introduction of AI.
“The question is, is it going to be the new shiny European Union AI office? Or is it going to be a national regulatory body? I think there’s still quite a lot out there to be discussed,” said Lord Clement-Jones, head of the influential UK Parliament AI Committee.
“We’re hung up on an innovation friendly context and on specific types of regulation, which is contrary to just, hold the AI. I have been arguing that is not the way forward. I think you need a layer of cross sectoral horizontal governance because there are companies that operate in many different sectors,” said Lord Clement-Jones.
The race for legal control
AI is dependent on the data collected from everyone using the internet. The technology took off because the data people generate on a daily basis allows algorithms to be trained, producing the results behind ChatGPT, Bard and a number of other systems. The result has been an AI feeding frenzy as investors rush to discover and capitalise on the next AI version of Google. It has also led to a focus on the ownership of that data, something the EU is now examining. That focus is not lost on the tech companies, given the lead the EU has established with its universally admired General Data Protection Regulation (GDPR).
A point noted by Lord Clement-Jones: “At the centre of this is data. What people look for is what is California up to, companies are going to have to conform to what California is doing. California has got increasingly strict data protection provisions they are developing in the AI governance field. It’s not true to say there is an unregulated market in the US. The big companies have a governance process and Microsoft states its gold standard is the GDPR.
“Frankly, we will have to conform to what the EU is doing.”