Can Robots Be Prosecuted for Committing a Crime?

ETHICAL AND LEGAL ISSUES SURROUNDING ARTIFICIAL INTELLIGENCE (AI)

“What happens when a machine capable of learning from its algorithm and improving its decision-making from successes and mistakes makes a mistake and commits a crime?”

Humankind has dreamt for millennia of creating an Artificial Being that thinks and acts humanly, in fiction as well as in philosophy.

Our phones have AI assistants that learn which applications we use the most and where we are heading when we start the car’s engine. Robotic nurses and surgeons are no longer fiction, because we are living in the age of AI.

AI is the science of making machines intelligent, giving them the capability to perform tasks that generally require human intelligence.

Driving cars, trading stocks, and selecting military targets in combat are examples of tasks that require human intelligence. Today, AI makes these activities possible without human intervention.

The Question of Liability

The entire history of human law has been built on the assumption that people, not robots, make decisions. In a society where increasingly complicated and important decisions are handed over to algorithms, there is a risk that our existing legal frameworks for liability will be insufficient.

Arguably, the most important near-term legal question associated with AI is who or what should be held liable for tortious, criminal, and contractual misconduct involving AI, and under what conditions.

If we admit that robots have a mind of their own, endowed with human-like free will, autonomy or a moral sense, then our whole legal system would have to be drastically amended. Although this is possible, it is not likely. Nevertheless, robots may affect criminal law in subtler ways.

Examples of robots gone wrong in the past:

·       A security robot failed to detect a toddler and ran over him at a shopping centre in California.

·       A driver was killed in a crash while travelling in an automated car operating on Autopilot. Tesla Motors was absolved of criminal responsibility because the driver took no action when the car issued a warning.

·       PredPol is AI software being piloted for policing in the UK. It has generated controversy because of potential discrimination in its decision-making against Black and ethnic minority offenders.

Intention to Commit a Crime

A crime consists of two elements: a voluntary criminal act or omission (actus reus) and an intention to commit a crime (mens rea).

It may be the case that a robot could have committed a criminal act or omission, but how do we know the robot intended to do what it did? How can we establish the intention of a robot?

Could a robot claim the defenses currently available to people, such as diminished responsibility, provocation, self-defense, necessity, mental disorder or intoxication, should it begin to malfunction or make flawed decisions?

The law as it stands lacks clarity here. For now, fault is said to lie with the robot’s design, programming and manufacturing. This is inadequate for intelligent machines capable of learning from their successes and errors, and will become more so as technology continues to advance.

Who is Responsible?

When considering the possible consequences and misuse of an AI, the key question is: who is responsible for the actions of an AI? Is it the programmers, the manufacturers, the end users, the AI itself, or someone else? Is the answer the same for all AI, or might it differ, for example, for systems capable of learning and adapting their behavior?

According to the European Parliament Resolution (2017) on AI, legal responsibility for an AI’s action (or inaction) is traditionally attributed to a human actor: the owner, developer, manufacturer, or operator of the AI. For instance, self-driving cars in Germany are currently deemed the responsibility of their owner. However, issues arise when considering third-party involvement and advanced systems such as self-learning neural networks: if an action cannot be predicted by the developer because the AI has changed sufficiently from its original design, can the developer be held responsible for that action? Additionally, Atabekov and Yastrebov (2018) argue that the current legislative infrastructure and the lack of effective regulatory mechanisms make it difficult to regulate AI and assign blame, with autonomous AI in particular raising the question of whether a new legal category is required to encompass its features and limitations (European Parliament, 2017).
