Data quality in data infrastructures of migration and border control

Image credit: STS Italia


Migrants’ identity data does not always fit into the standard categories inscribed in information systems. The constraints (and possibilities) built into standards and interfaces may be one reason why data entered in databases can eventually be found to be of low quality. It is no wonder that data quality is frequently identified by policy-makers as an important obstacle to the effective operation of information systems in migration and border control. For example, the EU information systems of Justice and Home Affairs are currently undergoing significant changes to address data quality, a supposed prerequisite for making the systems interoperable. However, despite the wide range of research on data infrastructures of migration and border control in Europe, little research to date has dealt with data quality, and many questions remain. What types of frictions occur during the processes of establishing migrants’ identities? How are practices for processing vulnerable populations shaped by databases containing information that is not always complete, accurate, or reliable? And how do data quality mechanisms and technologies shape relations between different actors? As part of the ERC-funded “Processing Citizenship” project, I present results from my fieldwork at a company that develops technologies for dealing with data frictions in the processes of migration and border control. My research on the use of these technologies at a migration and asylum agency of a Northern European Member State shows a sample of the kinds of data frictions that can occur during the processing of vulnerable populations, and how these frictions are mediated by such technologies. These findings contribute to STS research on data infrastructures, the role of data quality mechanisms in socio-technical architectures, and data frictions in migration and border security.

Jun 17, 2021 4:00 PM — 5:30 PM
Online conference