Concerns over the rapid development of AI technology prompted by the rollout of programmes like ChatGPT and Bard have led to growing calls to regulate the use of AI.
According to a number of leading experts interviewed for the March 2023 edition of Future Intelligence’s PassW0rd radio programme, ‘This AI life’, the need for an agency to regulate the use of AI is now clear.
“The way humans to date have regulated new technologies or regulated industry is regulation, and so it’s very clear to me that regulation is helpful. Regulation is required,” said Janet Adams, the chief operating officer of SingularityNet, a company founded by AI guru, Dr Ben Goertzel.
“Regulation will help ensure that these technologies are used for the benefit of all and not just the benefit of a few, and that they’re not used in destructive ways to make life harder for people or to deprive us of our privacy.”
Industry needs guardrails
Adams’ views were echoed by Triveni Gandhi, Responsible AI Lead at the global AI company Dataiku.
“I think there should be an agency. There should be guardrails in place, I think there will have to be differentiators along the lines of risk. What is a very high-risk model versus a low-risk model?
“A model that predicts if a machine is going to break down on the manufacturing line is probably a low human risk, or maybe a medium human risk depending on how many people are around the machine at the time. Things like predicting if somebody is going to need extra medical care? That seems like a high-risk application,” said Gandhi, adding: “I think that things like ChatGPT and other language models are high-risk applications, and so putting guardrails there to say here are the limits of what we want you to be able to do with this, and how it should be deployed.”
The concerns about the use of AI have been heightened by the speed of the technology’s adoption. A recent Deloitte study found that over 50% of organisations are planning to incorporate AI and automation technologies in 2023.
It is a development that has provoked fears of a similar economic dislocation to that caused by the Industrial Revolution 250 years ago, which saw huge numbers of agricultural workers displaced from the countryside and forced into towns as machines destroyed their livelihoods.
Many predict that the adoption of AI – a process dubbed the ‘Intelligence Revolution’ – will produce a similar scenario, destroying huge numbers of occupations built on physical processes, such as the moving of materials or tasks like bricklaying, in areas like manufacturing, building, and driving.
The difference with the Intelligence Revolution is that, for the first time, AI threatens to replace work that has hitherto been the province of the middle classes, in areas such as accounting, the law and journalism. This has been demonstrated by ChatGPT, now the most talked-about writer and thinker of the 21st century, which has proved its ability not only to write newspaper articles and opinion pieces but also to produce material for teenagers’ exam courses.
It is a feature of the technology that concerns SingularityNet’s Adams.
“It’s very clear that investment is pouring into AI. It’s here and it’s here to stay. We need to find ways of ensuring that this powerful technology is harnessed and used in ways that benefit humanity, that benefit the planet, and that increase access to wealth, education, and technology and reduce inequalities on our planet.”
Adams added that it is essential that AI is not used to widen the power imbalance between the big organisations that hold our data and the individuals whose data those organisations gather and use in new and powerful ways of which they are unaware.
Control of AI is a contentious area because of its potential to deliver huge advantages to those power blocs able to harness the technology, creating fears that any regulation could leave the regulating side at a significant disadvantage in an AI arms race.
To be or not to be
It is a challenge that is dividing lawmakers. Some are putting their weight behind the development of AI agencies to control the deployment of AI systems and prevent them being used to manipulate and control people; others argue that particular industry sectors, such as medicine and building, should govern AI’s introduction.
“The question is, is it going to be the new shiny European Union AI office? Or is it going to be a national regulatory body? I think there’s still quite a lot out there to be discussed,” said Lord Clement-Jones, head of the influential UK Parliament AI Committee.
“We’re hung up on an innovation friendly context and on specific types of regulation, which is contrary to just, hold the AI. I have been arguing that is not the way forward. I think you need a layer of cross-sectoral horizontal governance because there are companies that operate in many different sectors,” said Lord Clement-Jones.
The race for legal control
AI is dependent on the data collected from everyone using the internet, and the explosion in the technology’s development has only been possible because of the amount of data people generate on a day-to-day basis, which has allowed the algorithms to be trained. That training is what has produced the results behind ChatGPT, Bard and a number of other systems. The result has been an AI feeding frenzy as investors rush to discover and capitalise on the next AI version of Google, but it has also led to a focus on the ownership of that data. The EU is now focusing on this question, and tech companies are watching closely because of the lead the EU established with its widely admired General Data Protection Regulation.
A point noted by Lord Clement-Jones: “At the centre of this is data. What people look for is what California is up to; companies are going to have to conform to what California is doing. California has got increasingly strict data protection provisions it is developing in the AI governance field. It is not true to say there is an unregulated market in the US. The big companies have a governance process, and Microsoft states its gold standard is the GDPR.
“Frankly, we will have to conform to what the EU is doing.”