Ethics in artificial intelligence (AI) refers to the values that guide us when we develop smart technologies, such as machine translation software, facial recognition applications or streaming services that use recommendation algorithms. But why should we be interested in the ethical issues surrounding artificial intelligence?
“Because we are all already part of various AI systems, and they are developing rapidly. Right now we are debating whether it is possible to identify political opinions from a person’s face, or whether we can study underlying bias through algorithms. The further algorithms spread and the more varied their applications, the bigger the impact they have on us. We should all develop AI literacy, and understanding the ethical questions is part of it,” says Anna-Mari Rusanen. She is a university lecturer in cognitive science and the coordinating teacher of the new University of Helsinki course Ethics of AI.
1. Identify what in a technology requires a reaction
Artificial intelligence is already being used wherever large quantities of data are processed and analysed. Professionals in the healthcare sector are debating whether algorithms and data should be used to predict and prevent health risks. Algorithms are already used in calculating our taxes and social benefits. In the future, national security, transportation and urban planning as well as energy and food production will involve robotisation and the use of algorithms to an increasing degree. In social media, algorithms decide what kind of content we see, and in dating apps they calculate which potential partners we should view.
“Every one of these issues comes with several ethical factors. We have to evaluate the benefits and risks of the systems as well as their impact on things such as human rights, and decide who is responsible when the system makes a mistake or unexpected consequences occur,” says Rusanen.
We are also under constant observation, for example when we visit shopping centres where automatic image recognition systems calculate visitor flows.
“Many agree to this for the purposes of coronavirus tracking, but at what point does automated tallying turn into surveillance?”
According to Rusanen, we should identify which aspects of a technology require a reaction.
“The technology itself can be neutral, good in one application and less good in another. It’s handy to be able to unlock your phone with your thumbprint, but there might be cases where fingerprint recognition would not be OK. It’s important to recognise the point where, for example, technology infringes on our privacy in a way that is no longer acceptable. Ethical problems usually arise from the applications of technology and the social context, not the functions of the specific applications themselves.”
2. The ethics of AI are changing
Right now is the moment when we decide what kinds of intelligent systems we will be using in the future. A new technical solution may even shape the relationship between state and citizen by setting new restrictions on the state, for example through automated decision-making. New civil rights are arising, such as the right to be forgotten.
“Politicians and the people creating the actual AI solutions must consider surprisingly profound questions regarding things like the nature of the default citizen around whom AI-based public services are organised,” says Rusanen.
According to Rusanen, politicians have realised this, and the EU will release drafts on new AI regulation later this year.
“They specify things that are allowed and forbidden in AI systems. The new rules emphasise human rights and indicate which kinds of applications come with a high risk of abuse. Facial recognition, for example, could be one such application,” explains Rusanen.
“It is high time for every employee in public administration, companies that process data and the technology industry to wake up and learn the basics of AI ethics.”
3. The ethics of AI are not window dressing
Rusanen points out that the ethics of AI are not window dressing to decorate a finished product, service or system. If a company plans to utilise AI, for example, the ethical principles should be in place at the stage when the systems are being created.
“Developers must consider whom their product or service concerns, how the rights of those people are protected and what consequences they may face. Everyone working with artificial intelligence must understand what the ethical principles actually mean for their daily work. This must be done at each design meeting, and it must guide the process from its very beginning. It’s not enough for an ethical advisor to sign off on the end result,” says Rusanen.
According to Rusanen, there are also some Finnish companies that would do well to improve in this area.
“Many companies promote moral regulation within the EU but have no qualms about selling their products to China where they will be used for mass surveillance. It’s hypocrisy.”