Safeguarding Fundamental Rights amid the Increasing Use of AI
New technologies are leading to the increasing automation of tasks hitherto carried out by humans. Developments in Artificial Intelligence (AI) in particular have received much attention. While public attention on AI used to focus primarily on its potential to strengthen economic growth, it is increasingly directed towards the impact of AI on fundamental rights. As input into the EU's AI Regulation (A European approach to artificial intelligence), the EU Fundamental Rights Agency (FRA) commissioned Ecorys in 2020 to analyse the impact of the use of AI on relevant fundamental rights in four concrete 'use cases'.
The research team conducted 91 interviews with public administration organisations and private companies in five selected EU Member States, as well as 20 interviews with experts dealing with potential fundamental rights challenges of AI, including public bodies, NGOs, and legal experts. On the basis of this input, four main use cases in which AI was found to be actively developed, tested, or employed were selected: the use of AI in the determination of social benefits, predictive policing, the automated analysis of medical records, and the use of AI in online marketing.
The findings from the interviews in these areas first showed that applications relying on AI were already in place to different extents within and across a variety of sectors. Their purpose was primarily to automate straightforward processes, such as the scanning of documents, and thereby make them more efficient. Some applications were also intended to assist in decision-making processes, but most of these were still at the development or testing stage.

The findings then identified a wide range of potential fundamental rights implications of the use of AI systems, extending far beyond data protection. While the interviewees were generally well aware of data protection concerns and had implemented checks and safeguards in line with GDPR requirements, there was less emphasis on fundamental rights assessments in connection with the use of AI. This was found to be primarily due to insufficient awareness of the need to conduct such assessments.

The researchers identified horizontal points which could serve as basic starting points for considering the impact of AI on fundamental rights in the future. These included upholding data protection laws, ensuring fair treatment of protected groups in the outcomes of processing, and enabling affected people to complain and receive effective remedies when faced with discrimination. In addition to the interviews and their analysis, the project team also prepared background reports on the five selected EU Member States, addressing each country's national set-up regarding AI, such as designated or relevant laws, relevant bodies, and recent developments.
Link to final report
While the final report was for the internal use of FRA only, FRA published a separate report (pdf) which incorporates many insights from the study.