Observers of modern politics know that, as the saying goes, “a lie is halfway around the world before the truth has put its boots on”.

The dangers posed by misinformation and disinformation are clear and present. Whether it is tampering in elections, ransomware attacks, fake news factories in Macedonia, or hate groups spreading disinformation about COVID-19, the work of governments and the trust of citizens have been seriously undermined.

Many might think that the war on misinformation is already lost.

It was in this context that the University of Essex hosted a panel discussion at Stationers’ Hall in the City of London, with experts in the fields of data science, digital technology, journalism, and political psychology.

This timely discussion, chaired by Professor Faten Ghosn, Head of the Department of Government, was well attended by experts in the field, academics at different stages of their careers, businesspeople, and alumni.

In her introduction, Professor Ghosn said: “Despite the recent warning from experts, released by the Center for AI Safety, that the risk of extinction by AI should be a global priority, due to AI’s ability to harm the health of millions and weaponize disinformation, discrimination and impersonation, the latest research does suggest that the war on misinformation can be fought – and maybe even won – if we know the enemy and where to fight it.

“The public is more discerning than it is given credit for. AI is not just a villain of the piece; it can be used to identify lies and propagate truth and the recent Fox News pay-out over its promotion of misinformation about the 2020 election shows that the truth can and does fight back.”

What is the role of AI and what is its impact on policy, governance and elections?

While disinformation and propaganda are not new phenomena, the prevalence and rate of spread of misinformation and disinformation have accelerated dramatically in the 21st century. Since the rise of social media, it has been difficult to balance the advantages the platforms provide against the difficulty of discerning truth from untruth, which can have serious consequences.

A chilling example of this was covered by the Washington Post in the article “Crossing jungle and desert, migrants navigate a sea of misinformation”, which reported on the ending of the pandemic-era policy, used by both Presidents Trump and Biden, to deport migrants. According to the article, one TikTok video, viewed more than 17.4 million times, falsely claimed that people arriving at the border would not be deported. The video was shared widely in WhatsApp and Facebook groups, adding to the already treacherous journey of forced migrants, who must navigate conflicting information as well as jungles and deserts.

Ever since the 2016 US presidential election, when then-candidate Trump stumped many within the US establishment along with seasoned political pundits, there has been increased attention to foreign actors meddling in elections and the role of misinformation and disinformation in their campaigns. The false narrative about the 2020 election promoted by President Trump and his allies led to the January 6 insurrection. On this side of the pond, irrespective of where one stands on Brexit, we know that misinformation and disinformation were prevalent throughout the campaign.

Dr Akitaka Matsuo, Lecturer in the Department of Government and a Fellow in the Institute for Analytics and Data Science (IADS) at the University of Essex, talked about the role of data science in assessing the amount and spread of mis/disinformation, in detecting and countering false information, and in using AI platforms to fact-check and dispel misinformation.

Dr Matsuo said: “What becomes clear throughout the panel is that the perceived threat of the intervention of technology, especially AI, in misinformation varies greatly depending on one's position in the world. For those in the developed world, AI is a potential threat to democracy, but one that can be tamed to great advantage.

“In the Global South, on the other hand, misinformation is a threat as a device to bring more blatant violence, and the onslaught of technology can bring an end to fragile rights protections and political freedoms. It is important for anyone with a practical or academic interest in misinformation to be constantly aware of these different perspectives.”

Psychological impact: the susceptibility of the public and election integrity

The history and psychology of misinformation have been studied and analysed for quite some time. In fact, there is a growing academic literature that examines the psychological factors that underlie the spread and acceptance of misinformation in the political domain, as well as the strategies that might be effective in mitigating these problems.

False information is a problem for public trust, especially election-related disinformation. Certain media outlets have characterised this as an existential problem for our democracy, a concern echoed by Dame Wendy Hall, a member of the Government’s AI Council.

With a UK general election expected in the next year or so, how we respond will shape the future of our democracy, especially given that 79% of Britons aged 16 to 24 say they get their news from social media.

Report after report has demonstrated that social media algorithms are biased along gender lines, and in fact users are disproportionately exposed to content that reflects patriarchal and racialised gender norms.

Rob Johns, Professor of Politics at the University of Essex, said: “The frightening technological potential of AI is being harnessed around the world to spread misinformation and to try to influence elections.

“But combating this in our polarised polities is not just about winning the technological battle. It is about social science as well as data science. We must understand that the primary goal of misinformation is to win a battle of ideas, and that attacking it only with facts is therefore unlikely to be enough.

“But then, if fact checkers or scientists wade into that battle of ideas, they risk their credentials as neutral sources. In the same way, if governments wade into regulations designed to combat misinformation, they will also be seen as fighting for one side in a political battle rather than as a neutral referee.

“And this is not just a battle within countries; it is a battle between countries. Control over AI technology is the new resource that matters: as valuable, and potentially as big a source of inequality, as oil ever was.”

In summing up, Professor Ghosn said: “We are so distracted by the technology side of AI that we are losing sight of the human element, both in terms of intentions and actions.

"The concern is with how AI will be used to undermine democratic processes and further erode the fragile trust that exists in our polarised world, but equally with how major players, both governments and tech companies, may use this opportunity to overregulate or not regulate at all and, more importantly, to ensure that whoever controls the AI technology will have an unfair advantage.”

The Panel

Professor Faten Ghosn is Head of the Department of Government at the University of Essex and affiliated faculty (Non-UA) at the Center for Middle Eastern Studies at the University of Arizona.

She has received several research awards, including a Minerva Initiative grant funded by the Department of Defense and the U.S. Army Research Office, as well as several teaching awards at both undergraduate and graduate level. A common theme running through her professional interests is the strategies actors choose to manage their conflicts and disagreements, how such strategies and policies are implemented, and their effectiveness and consequences. Her current research focuses on enemy images as obstacles to cooperation, ethics in fieldwork, forced migration, militarized interstate disputes, and transitional justice.

Dr Akitaka Matsuo is a Lecturer in the Department of Government and a Fellow in the Institute for Analytics and Data Science (IADS) at the University of Essex.

Prior to joining Essex, he worked as a Research Fellow in Data Science in the Department of Methodology at the London School of Economics and Political Science. His research interests revolve around data science and politics. Specifically, he focuses on the statistical methodology for scaling survey responses and legislative behaviour, as well as the natural language processing of political texts, such as social media texts and parliamentary speeches.

Dr Manisha Ganguly is Investigations Correspondent at The Guardian, focusing on international affairs and dissidents.

She is an award-winning journalist and documentary producer, known for pioneering open-source investigations (OSI) to expose war crimes. She is a judge for the International Emmy® Awards, a Forbes Under 30 media honouree, and a two-time Amnesty Award winner. Her documentaries for the BBC World Service, investigating human rights abuses across the Middle East, North Africa, Russia and China, have been broadcast to over 300 million people.

Professor Rob Johns is Professor of Politics at the University of Essex.

His teaching and research are in the fields of public opinion and political psychology: why citizens believe what they believe about politics, and what can change their minds. Recent projects have studied the power of 'post-truth' rhetoric to undermine fact checks, how communicating statistical uncertainty can reduce audience uncertainty, and the impact of common mental health disorders like depression and anxiety on political attitudes and participation.

Dr Stephanie Diepeveen is a Senior Research Fellow at ODI (formerly the Overseas Development Institute), where she leads its Digital Societies Initiative.

She is also a Co-Chair for the T20 Taskforce on Digital Public Infrastructure under India’s G20 Presidency, a Research Associate at the Bennett Institute for Public Policy, University of Cambridge, and a Senior Research Associate with the Jesus College Intellectual Forum. She has published widely on issues of trust, inclusion and digital rights around changing technologies, often taking a global comparative perspective.

Further Reading

Washington Post: Crossing jungle and desert, migrants navigate a sea of misinformation

The Guardian: Risk of extinction by AI should be global priority, say experts

Center for AI Safety: Statement on AI Risk

PBS: AI-generated disinformation poses threat of misleading voters in 2024 election

MoneyTransfers.com: UK Social Media Statistics That Demonstrate Its Massive Impact

Independent: Fox Dominion payout becomes largest media settlement in history