(Scientific American)
When AI hurts someone, who is to blame?
In December 2019, two people were killed in a traffic collision in Gardena, California, after being struck by the driver of a Tesla equipped with an artificial-intelligence driving system; the driver may face several years in prison. Partly because of this and similar accidents, the US National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board are investigating Tesla crashes, and NHTSA has broadened its probe to examine how drivers interact with Tesla's systems. At the state level, California is considering curtailing the use of self-driving features in these cars.
The liability system in the United States, which determines who is responsible when injuries occur and what compensation is owed, is entirely unprepared for artificial intelligence. Liability rules were built to hold accountable the human end user who caused an injury, whether a doctor, a driver, or someone else. But in AI systems, errors can occur without any human contribution at all, and liability rules must be updated to reflect this new reality.
There is no better moment than now to reconsider legal liability. AI systems are becoming widespread, yet the regulations needed to govern their use are still lacking, and these systems have already caused casualties.
Getting liability right is also essential to unlocking AI's full potential. Ill-defined rules and the prospect of costly lawsuits would discourage investment in AI systems, their development, and demand for their use.
The broad adoption of these systems in health care, self-driving cars, and other industries depends on legal frameworks that determine who, if anyone, is liable when they cause injuries.
The solution is to ensure that all stakeholders, from users and developers to everyone else who handles a product between its development and its use, bear enough liability to guarantee the safety and effectiveness of AI systems.
Insurers, in turn, should protect policyholders from the high litigation costs they might face if an AI system causes injury.