A breakthrough in AI applications requires well-functioning regulation

Is there a need for new regulatory mechanisms to cover damage caused by artificial intelligence, or do the existing liability rules suffice?

Artificial intelligence solutions are developing continuously and becoming increasingly prevalent. At a doctor’s appointment, your diagnosis may soon be suggested by a robot, and if you need surgery, another robot may perform it. You may be reading a news article written without a single human contribution, while business letters and advertisements are translated by yet another kind of machine.

But who takes responsibility when a smart thermostat burns down the house or a robot makes a mistake in surgery?

Assistant Professor Katri Havu investigates who should bear responsibility, and under which conditions, when an AI application or a product containing such an application causes harm.

“Even though the general liability rules now in effect also apply to damage caused by artificial intelligence, the handling of such cases under those rules can be unpredictable or inconsistent. This, in turn, can leave those suffering the damage without protection, while products containing AI solutions may carry such unpredictable liability risks that no one is eager to introduce them to the market,” Havu notes.

A research project headed by Havu, entitled Kuinka säännellä tekoälyyn liittyvää vahingonkorvausvastuuta EU:ssa? (‘How to regulate AI-related liability in the EU?’), examines the issue on both the micro and macro levels. On the micro level, the project aims to identify sensible and well-functioning rules, while the key question on the macro level is the relationship between transnational and national law.

The project was recently awarded Academy Project Funding for 2020–2024 by the Academy of Finland.

“This project gives us a unique chance, on the one hand, to concentrate in detail on, for example, the various uses and specific areas of artificial intelligence and the various situations in which damage occurs and, on the other, to look at the big picture: which situations actually require new rules, and which contexts can be managed with the current liability regulations of the EU member states.”

National or transnational?

Viewed from the broader perspective of legal systems, the key question is the level at which decisions on liability are made. How are responsibilities and roles divided between the national law of EU countries and EU law, and is there a need for additional transnational regulation?

For example, the European Commission has recently stated on a number of occasions that liability rules pertaining to AI solutions will most likely need to be developed, at least in part, through special provisions, and that such development should not be carried out by member states individually, lest legislative variance create new practical obstacles within the single market.

The newly launched project will promote societal goals related to AI applications by investigating how responsibility for the risks of increasing AI use can be distributed fairly, efficiently and in a manner sufficiently simple and clear to administer and implement. Legislators on the EU and national levels are far from solving these issues and, in certain cases, have not even properly begun to address them. In practice, the demand for solutions is clear.

“How these issues are solved through legislation affects, for example, when businesses offering various AI applications consider their products reliable, safe and low-risk enough to start marketing them broadly,” Havu says.

The topic has already been investigated fairly extensively on the theoretical level, but the quality and depth of such studies vary considerably. The research group directed by Havu aims, among other things, to develop analytical tools for identifying well-functioning regulatory solutions in the EU.

Groundbreaking legislation not necessarily needed for breakthroughs

The effects of novel phenomena and innovations, and the need to react to them, are easily overestimated. According to Havu, the rules and principles of existing liability law could well apply quite extensively to the current transformation.

“What I find fascinating about this theme is how quick people are, from researchers to officials and politicians, to overreact, almost claiming that all current liability legislation and the related legal tradition are entirely useless in this new context – which is actually not the case at all.”

Havu has noticed that taking part in the discussion appears to be important to many, and even those without in-depth knowledge of the matter wish to express their opinions.

“In very topical questions of principle, it's interesting how everyone wants to have a say, even without familiarising themselves with the matter in any depth. This is why the discussion on developing specific liability regulations for artificial intelligence – or AI law in a more general sense – is truly wide-ranging.”