Human Rights, Big Data and Technology

Misinformation, Disinformation and Deep Fakes


Investigating how mis- and disinformation affect human rights

Misinformation and disinformation campaigns threaten several human rights. They endanger democracy and put journalists and human rights activists at risk. At the same time, many of the responses put in place by companies and states themselves threaten freedom of expression.




Misinformation, disinformation and deep fakes

Misinformation and disinformation are commonly known as ‘fake news’. The mis/disinformation phenomenon has been catalysed by the technological revolution. The increasingly frequent use of social media adds to the volume of data shared and processed by algorithms, and has changed the way news is created and disseminated.

Put simply, misinformation is the sharing of incorrect information, whereas disinformation is the sharing of incorrect information with malicious intent. Mis/disinformation can take many forms, such as memes, misattributed content, and fabricated or cloned websites and news. It also includes altered and fabricated images and videos (also called deep fakes). Bots, trolls and other AI or non-AI agents can extend the reach of mis/disinformation campaigns, while algorithms that create filter bubbles significantly intensify its impact on users.

Misinformation, disinformation, deep fakes and human rights

We assert that misinformation and disinformation amplify discrimination and human bias, having an adverse effect on minority groups that are already discriminated against. Other human rights may also be affected.

The impact on human rights during elections

Article 25 of the International Covenant on Civil and Political Rights (ICCPR) establishes the right of everyone to take part in the conduct of public affairs and to vote. Mis/disinformation campaigns have been widely used during elections and referenda to manipulate voters and shape the outcome of polls. Bots have been used to distort public debate and influence the electorate in both domestic and international elections.

In Ecuador and the Philippines, the presidents admitted using paid trolls during their electoral campaigns. Studies have also confirmed the use of state-led propaganda by the Russian Internet Research Agency as a foreign policy tactic to affect the outcome of the 2016 US presidential election. Misleading and deceptive information was supplied to targeted groups to undermine their right to vote. Similar campaigns have also been used to discredit electoral observation missions.

Threats to human rights defenders and journalists, and the chilling effect

‘Computational propaganda’ can also be used to target or troll journalists and human rights defenders who are critical of governments or political movements. This can include formal state sponsorship as well as other organisational sponsors. The goal of such targeting is to create reputational damage, a societal backlash, or a chilling effect that discourages the continuation of critical work, as the UN Secretary-General warned in his opening remarks to the Human Rights Council in February 2019. These online harassment campaigns can chill the freedom of expression, association and assembly of the targeted individuals, who may refrain from publicly expressing their views and engaging in their normal activities for fear of further verbal and physical attacks.

Hiding behind mis/disinformation and avoiding accountability

Governments around the world have also started using the labels ‘fake news’ and ‘mis/disinformation’ to shut down debate or deny allegations of human rights violations made against them. This strategy represses freedom of expression and evades accountability, threatening many human rights guarantees, such as the duty to investigate violations of the right to life.

Mis/disinformation as a tool for hate speech

A significant proportion of mis/disinformation, and memes in particular, contains hate speech: incitement to hatred, and the justification and promotion of hateful messages based on intolerance, racism and violence. This is compounded by the fact that the digital space allows anonymous posting, resulting in a lack of accountability.

Responses to mis/disinformation and their threats to human rights

States, businesses and international organisations have been developing responses to tackle mis/disinformation, ranging from establishing expert groups and task forces to demoting or removing content and accounts, enacting anti-disinformation laws and running media literacy programmes. Some of these measures may pose further risks to human rights, in particular to freedom of expression.

Our research

We aim to provide a better understanding of how mis/disinformation affects the full spectrum of human rights, and to push the debate forward on how to address this phenomenon without endangering human rights.

Our research areas