Digital technology discriminating against people at borders, study finds

Date: Mon 18 Sep 2023

[Image: Surveillance cameras]

Greater transparency and human rights-based governance are needed around the use of technology at borders to stop borders acting as sites of “exclusion, violence and discrimination”, a new study has claimed.

A blanket ban on particular types of digital border technologies should also be considered, according to the study’s co-authors, Professors Lorna McGregor, of Essex Law School, and Petra Molnar, of York University and Harvard.

In partnership with the UN Human Rights Office, the project has been exploring the impact of the use of digital technologies on human rights, and found that people on the move often face discrimination depending on their race, ethnicity, gender identity, sex, national origin or migration status.

The study highlights how digital technologies are used to monitor migration journeys.

They are even being used at some borders to facilitate pushbacks, or as a preventative measure to stop people migrating, regardless of their needs, safety or human rights entitlements.

Professor McGregor, who spearheaded the Economic and Social Research Council's landmark Human Rights, Big Data and Technology project, with which this project intersects, said: “What we need is for digital technologies to be harnessed and used to better protect the human rights of people on the move, rather than infringe them.

“A lack of transparency over how and what technology is being used must be rectified to ensure states and private actors are held accountable for their decisions at borders, that technologies that pose serious risks to human rights are banned, and that robust human rights safeguards are in place.”

Some of the technology scrutinised as part of the research included drones and facial recognition cameras.

Data systems which rely on algorithmic assessments were found to be using biased or incomplete information, making it harder for minority groups to be treated fairly.

Lie detectors are also being used to try to determine what migrants are thinking, but the researchers warn this guesswork is highly susceptible to further bias, stereotyping, and discrimination.

The findings came following a series of one-to-one interviews and meetings with people from across academia, civil society, and international organisations.

“Real people are already being harmed by digital technologies,” said Molnar, an expert in the field of border technologies who co-runs the Refugee Law Laboratory at York University.

“We desperately need a global conversation about strong governance mechanisms, including a ban, to ensure that everyone’s rights are protected.”

The four main conclusions reached are:

  • Technology is causing human rights harm especially where it is being used at securitised and militarised borders, which are seemingly set up to turn migrants away rather than support them
  • Any new border technologies being introduced by states must first be assessed for compliance with robust human rights frameworks. Their use must also be continually monitored, with states held accountable for any human rights infringements
  • Some border technology which inherently conflicts with human rights standards or presents serious risks which cannot be mitigated should be subjected to a complete ban
  • States should carry out a stocktake of their border technologies and publish a list of those in use

To ensure greater transparency, these technologies should be assessed using publicly accessible tests, and those conflicting with human rights law should be discontinued or withdrawn until the issues have been resolved.

The study echoes the request of the UN Secretary-General for the UN Human Rights Office to provide practical recommendations and guidance to states and stakeholders to take a human rights-based approach to the use of digital technology at borders.

You can read the report in full here.