Research Case Study

Impact: Viewing the risks of Artificial Intelligence through a human rights lens


New technology often brings the promise of better or easier lives, but in the case of Artificial Intelligence (AI) it can also carry risks for human rights.

So what do we need to consider when AI is already being used to track our behaviour and make key decisions that affect us all?

Since 2015, the ESRC Human Rights, Big Data and Technology (HRBDT) project has brought together experts in a range of disciplines to respond to the challenges posed by new and emerging technologies and shape global and national policy.

The Challenge

AI may offer many benefits, but it also brings substantial risks, not least for human rights. These risks extend across all human rights and go far wider than concerns about privacy.

While many agree that existing legal frameworks are outdated or ill-suited to the challenges of a rapidly developing environment, dedicated AI regulation has yet to be formulated, with states and businesses often arguing that regulation could stifle innovation.

While many organisations have sought to develop their own strategies, these generally lack specific guidance on how to avoid infringing upon human rights.

So, how do we ensure advances in technology don’t bring negative impacts for our human rights?

What we did

The Human Rights, Big Data and Technology project is a partnership between the universities of Essex and Cambridge and the Geneva Academy of International Humanitarian Law and Human Rights. It brings together a wide range of academic experts to deliver a truly interdisciplinary approach.

HRBDT research was among the first to argue that principles already contained in international human rights law (IHRL) are sufficiently agile and adaptable to apply to AI.

HRBDT researchers have shown how IHRL provides guidance to actors using AI in two key ways:

  • establishing ‘red lines’ where AI cannot be used because it fundamentally conflicts with IHRL or because insufficient safeguards exist to prevent or remedy harms
  • showing how IHRL provides an accountability framework based on prevention, oversight and remedies and a system of procedural safeguards
HRBDT research has demonstrated how the framework provided by IHRL can map onto the full lifecycle of technologies through a due diligence approach, which includes:

  • regular human rights impact assessments
  • the establishment of internal monitoring and oversight mechanisms
  • grievance mechanisms
  • external reporting

What we achieved

The HRBDT project has engaged key decision-makers through reports, policy papers, blogs, events and videos. Through its interaction with business, the project has also influenced the development of corporate AI strategies.

HRBDT research has shaped landmark international standards adopted by the UN Human Rights Council (HRC) on how human rights law applies to the design, development and use of AI technologies.

The impact of HRBDT research can also be seen in the United Nations’ Universal Periodic Review process on the right to privacy in the digital age. Standards embedded in a landmark UN Human Rights Council resolution through HRBDT’s successful outreach in 2017 have so far been used by 14 states to make 30 recommendations.

The United Nations Secretary-General also cited HRBDT research in his report on the Role of New Technologies for the Realization of Economic, Social and Cultural Rights to support his submission that international human rights law provides “a key guiding framework for societies in shaping their responses to the challenges of an ever-changing technological environment”. The UN Office of the High Commissioner for Human Rights has already used this framing in a report on the impact of AI on the right to peaceful assembly.

In the UK, the HRBDT report into the Metropolitan Police Service’s trials of Live Facial Recognition technology framed national policy and advocacy strategies. HRBDT research has also strengthened legislative and policy frameworks, including changing the approach of two national regulators (the UK Investigatory Powers Commissioner on intelligence oversight and the Surveillance Camera Commissioner on facial recognition technology).

During the COVID-19 pandemic, HRBDT research informed how the UK Parliament’s Joint Committee on Human Rights analysed the legality and legitimacy of contact-tracing apps, with the project’s submission cited in the Committee’s final report. In December 2020, HRBDT members wrote the Scottish Commission for Human Rights’ submission to the Scottish government’s digital strategy on human rights.