A new book by AI and cyber security writer Peter Warren has revealed that a world-first moment, when computers are interrogated in court, is rapidly approaching.
One of the UK’s leading barristers expects to be cross-examining AI systems in the dock within the next 10 years.

Speaking in an interview with Future Intelligence’s Warren for his latest book ‘AI on Trial’, co-written with lawyer Mark Deem of Mishcon de Reya, Sandip Patel QC, a leading legal technology expert, said that, due to the pace of technological change, it is inevitable that AI systems will be put into the dock.
“When I prosecuted LulzSec there was no difficulty showing that they had interfered with an organisation’s computers. They obviously did, so that was not a problem, but I could see the new types of threat that are coming, especially automated machine-based threats, as a problem.
Machines as defendants
“Once AI is introduced then one has to ask who is responsible, who is guilty of this particular offence: the programmer, the person who uses the program, or is the machine the thing that has the intent? I expect within the next 10 years to be cross-examining machines in the dock,” said Patel, the UK Government barrister responsible for prosecuting the LulzSec hackers in 2011, adding: “I could see, in future, criminal cases that involve machines as defendants because they will be so valuable to organisations.”
Such cross-examination will also mean, said Patel, that lawyers will have to become AI programmers in order to understand the culpability of the machine and to discover from those who developed it what their intent was.
Lawyers must know code
“I think it is highly likely for this reason: if there has been a breach of law and the defendant, either in a civil or criminal law context, is a machine, ultimately the responsibility for that machine will lie with a human being. It will be the architect of that machine, or the owner of that machine, or the administrator of that machine.

“Liability will fall, obviously, not on the machine, because you’re not going to send a machine to prison, but one could deprive the owner of that machine, and that machine could be very, very valuable to that owner.
“So, for example, organisations, including in law, are trying to develop AI-based systems which would make the law firm more efficient, more attractive. Now, that technology will be proprietary to that law firm, and a lot of money will have gone into investing in it,” said Patel in the interview, subsequently broadcast on the PassW0rd radio programme ‘The Great AI Trial’ on ResonanceFM.
AI weapons
In the programme, Future Intelligence, drawing on a number of other interviews conducted for ‘AI on Trial’, sets out many of the other challenges raised by the adoption of AI.
One of the greatest of these, according to the AI guru Professor Stuart Russell of the University of California, Berkeley, and best-selling author of ‘Human Compatible’, is that the technology, via the internet, permeates virtually everything that we do.
Meanwhile Professor Toby Walsh of the University of New South Wales, the author of ‘It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots’ and a noted opponent of the use of AI in weapons systems, makes the case for banning AI in warfare.
“Thousands of my colleagues have got rather strong views about this. One of the mistakes I think people make when you start discussing this is to think that the concerns that people like myself working in the field have are fixed in time; actually, the concerns are going to change over time. The concerns I have today, with current capabilities, would be about handing over the controls to machines that would be making lots of mistakes and killing lots of the wrong people. But I can also see, in 10, 20, 30 or 40 years’ time, when the technologies are much more capable, that we will have much greater concerns, some of which are about the fact that these technologies will transform the way we fight war, in a way in which humans will no longer be in charge.
“We will then face the possibility that we might end up with flash wars happening, because we put these complex systems out into the real world and they interact with other systems. Putting competitive systems out into the real world, interacting with each other, we know ends badly.
“We’ve already seen examples of how that ends badly. It’s called the stock market, and we already have to put in circuit breakers and things to make sure that these complex systems don’t behave in undesirable ways,” said Professor Walsh.
The AI arms race
It is a position rejected by the prominent anti-China hawk, Brigadier-General Robert Spalding.
According to Spalding, we are already in an AI war and we have no option but to take part in it: “It’s not the battle of the future, it’s happening now. In 2016, the Russians used artificial intelligence, bots, social media networks and big data to create protests in the United States on both sides of the aisle. In the run-up to the Taiwan elections, the Chinese were using the same kind of technology and techniques within Taiwan, in Malaysia, in the Philippines. It’s happening in Europe today, all over Europe, all over the world, really.
“These tools are being used not only to create economic value for the companies that possess them, but also, if those companies happen to be in totalitarian regimes, to create influence for those regimes.
“This was documented by Samantha Hoffman, a researcher at the Australian Strategic Policy Institute, when she talked about Global Tone Communication Corporation, a big data and AI company in China that does language translation in 65 languages. Its technology is built into Huawei products. It collects two to three petabytes of data per year and sends that data not just for translating languages, but also to the People’s Liberation Army’s intelligence arm and the propaganda arm of the Chinese Communist Party. This is not the future; this is happening now.”
As Patel says, soon the computers will be in the dock defending their decisions.
To listen to the PassW0rd radio programme ‘The Great AI Trial’, click on this link – PassW0rd programme.