We live in a world where artificial intelligence is becoming the norm, often unnoticed, and robots are taking over the workplace. Well, it may be a slight exaggeration to suggest that robots are taking over the workplace, but they certainly have a growing presence. This prompts an array of interesting and challenging questions for the legal profession. Who is accountable for mistakes made by robots? If a robot makes a decision based purely on facts and figures which inadvertently causes an accident, what is the legal position?
Can You Sue A Robot?
Back in April 2018, a House of Lords committee began looking into artificial intelligence, robotics and current liability laws in the UK. With particular emphasis on accountability, foreseeability and what is deemed “reasonable behaviour”, current UK laws were effectively stress-tested to see whether they accommodate artificial intelligence. The short answer is: maybe yes, maybe no.
At this moment in time, investigating accidents and incorrect decision-making by robots would be extremely slow and very expensive. Attempting to hold overseas third parties to account for the actions of their artificial intelligence systems and robots would currently be a legal nightmare. While some believe that current UK laws will cover artificial intelligence, the fact that there is a recognised “grey area” means that action should be taken to address this sooner rather than later.
Lord Clement-Jones, the chair of the House of Lords committee, suggested:
“We should make the most of this environment, but it is essential ethics take centre stage in AI’s development and use.”
UK Leading The Way On Artificial Intelligence
It is fairly common knowledge that the UK is leading the way in artificial intelligence and the creation of extremely powerful and influential robotic systems. These systems are regularly used in the UK and exported to overseas markets, which is what creates the legal uncertainty. While many people may struggle to see how artificial intelligence impacts their daily lives, you would be surprised at the influence these systems have even today.
At this moment in time the leading light in automated vehicles is Tesla, led by the charismatic and often controversial Elon Musk. The company is at the forefront of combining self-driving technology with the latest electric car systems. However, there have been a number of notable incidents involving Tesla and its automated driving system.
The first thing to mention is that the current self-driving system available to Tesla owners should be monitored at all times by a competent driver. From a legal standpoint these are not self-driving systems, but in reality they can drive the vehicle and react to weather conditions, traffic lights and the traffic ahead. We have seen a number of accidents which allegedly involved self-driving systems, one of which resulted in a fatality.
Who Is At Fault?
Data downloaded after the fatal crash showed that, in an unusual scenario, bright sunlight and a white truck in front of the Tesla effectively blinded the automated driving system. As a consequence the vehicle, which should have been monitored by a competent driver, drove under the truck, leading to the fatality. As you would expect, this led to an array of investigations by US authorities and to software changes to be rolled out in the future. However, because these are not legally recognised self-driving vehicles, effectively enhanced cruise control systems, Tesla could not be held liable for the accident.
The idea behind the monitoring of the self-driving software is to ensure that unforeseen circumstances or potential accidents can be avoided with the driver retaking control. Even though artificial intelligence has been around for some time, we have only really seen major strides in development in recent years. By its very definition, artificial intelligence continues to learn and adapt to new environments and ultimately it will only get better. However, as humans can sometimes “go wrong” it is safe to assume that robots and artificial intelligence may also find themselves in similar situations.
The world of diagnostics is full of false positives and many other unhelpful and potentially harmful situations. The fact is that the more involvement human workers have in the diagnostic process, the greater the chance of errors. While the vast majority of errors have been removed by checking, double checking and checking again, nothing is foolproof. Therefore, we have seen, and will continue to see, greater use of robots and artificial intelligence to diagnose illness.
Only last year we saw claims by Babylon Health, a healthcare start-up, that its artificial intelligence “doctor” was able to diagnose illnesses and injuries as accurately as a GP. These claims have been disputed by various health bodies, but apparently, using previous exam questions, the Babylon chatbot scored 82% against an average of 72% for “real doctors” over the previous five years. The fact that these systems are available and “working” should prompt the authorities to improve regulations going forward.
In the case of artificial intelligence, this type of system is based on a database which is continually updated and used for the benefit of all participants. So, artificial intelligence systems attempting to take the place of general practitioners will gather experience from around the world to improve their accuracy. At this moment in time artificial intelligence is unable to accommodate human traits such as:
- Using gut instinct
- Looking beyond the facts and figures
There are many issues in the world of healthcare which are black and white, but some illnesses and conditions can be masked by others. In the event of a misdiagnosis or a car crash, for example, the healthcare body or company would ultimately be the first port of call when pursuing a personal injury claim. Whether it could then sue the company providing the artificial intelligence system is where it becomes a little less clear. Would there be shared liability in the event of injuries caused by an artificial intelligence system?
Alternative Uses Of Artificial Intelligence
In the past we have written about the use of artificial intelligence when reviewing personal injury claims. These very powerful systems can:
- Scan and review past data and recognise patterns
- Monitor voice tones and patterns in phone calls
- Assist in spotting potentially fraudulent activity
We know that many of the world’s leading insurance companies are now using artificial intelligence to reduce the time taken to review personal injury claims. As insurers continue to document and scan claims going back many years, these self-learning systems are able to review huge amounts of data in a fraction of the time it would take a human worker. By creating a list of suspicious or potentially fraudulent actions, they allow insurance companies to focus on potential problem claims. Whether or not all of these claims turn out to be fraudulent is another matter, but waving through bona fide claims frees up more time.
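To make the idea concrete, the flagging step described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the field names, rules and thresholds are assumptions invented for the example, and real insurers use far more sophisticated, self-learning models rather than fixed rules.

```python
# Hypothetical sketch of flagging claims for closer human review.
# All field names and thresholds are illustrative assumptions,
# not any real insurer's criteria.

def flag_suspicious_claims(claims, amount_threshold=10_000,
                           max_prior_claims=3, max_days_to_report=90):
    """Return claims that trip at least one simple rule, with reasons."""
    flagged = []
    for claim in claims:
        reasons = []
        if claim["amount"] > amount_threshold:
            reasons.append("unusually high amount")
        if claim["prior_claims"] > max_prior_claims:
            reasons.append("frequent claimant")
        if claim["days_to_report"] > max_days_to_report:
            reasons.append("late report")
        if reasons:
            flagged.append({"id": claim["id"], "reasons": reasons})
    return flagged

claims = [
    {"id": "C1", "amount": 2_500, "prior_claims": 0, "days_to_report": 5},
    {"id": "C2", "amount": 15_000, "prior_claims": 1, "days_to_report": 120},
    {"id": "C3", "amount": 4_000, "prior_claims": 5, "days_to_report": 10},
]

for entry in flag_suspicious_claims(claims):
    print(entry["id"], "->", ", ".join(entry["reasons"]))
```

In this toy run, only the flagged claims (C2 and C3) would be routed to a human investigator, while the unremarkable claim (C1) is waved through, which is precisely the time saving the article describes.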
There is no doubt that developments in the world of artificial intelligence are impacting many areas of life. These include mortgage approvals, self-driving vehicles, virtual GPs and even personal injury claim reviews, to name just a few. The legal definitions of responsibility will certainly need to be re-evaluated in light of the growing use of artificial intelligence. We are effectively creating systems that “think for themselves”.
Then again, if a decision is based purely on facts and figures, with no empathy or actual experience taken into consideration, could this be deemed a wrong decision? The reality is that the use of artificial intelligence, at least at this moment in time, potentially creates as many problems as it helps to solve.