Overview
Machine learning offers powerful tools to understand, predict and, ideally, mitigate the diffusion and effects of toxic content on social media, which has had an increasingly serious impact on society in recent years.
This PhD scholarship is part of the research project “COURAGE: A Social Media Companion Safeguarding and Educating Students”, which is an international collaboration funded by VolkswagenStiftung (Volkswagen Foundation) as part of the Artificial Intelligence and the Society of the Future funding initiative. The project partners include the Universitat Pompeu Fabra (Spain), the Istituto per le Tecnologie Didattiche of the National Council of Research ITD-CNR (Italy), Hochschule Ruhr West (Germany) and the Rhine-Ruhr Institute for System Innovation (Germany).
The project aims to develop a Virtual Social Media Companion that educates and supports teenage school students facing the threats of social media, such as discrimination and bias as well as hate speech, bullying, fake news and other toxic content. The companion will raise students' awareness of potential threats in social media without being intrusive, applying gamification strategies and educational information-selection algorithms.
The Essex team will be involved in developing Bayesian computational models of the temporal dynamics of social media users' beliefs to support governance and educational strategies. These models will also be applied to evaluate socially relevant variables, such as trust and inclusion. We will build on and implement state-of-the-art NLP and AI methods to provide measurements of sentiment, bias, hatefulness, veracity, polarisation, and sensationalism of social media content.
In addition, we will drive forward the state of the art in detecting hate speech and biased content. The companion will actively counteract this kind of content, balancing it with alternative perspectives and proposing themed challenges that adopt ideas from games.
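As an illustrative sketch only (not the project's actual pipeline, which would use learned models), the kind of content-measurement interface described above could be prototyped with a simple lexicon-based score; the lexicon and function name here are hypothetical:

```python
# Illustrative toy lexicon; a real system would use a trained classifier.
TOXIC_TERMS = {"idiot", "stupid", "hate"}

def toxicity_score(text: str) -> float:
    """Return the fraction of tokens matching the toxic lexicon (0.0-1.0)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in TOXIC_TERMS for t in tokens) / len(tokens)

print(toxicity_score("You are such an idiot!"))  # 0.2
```

The same scalar-score interface generalises to the other measurements mentioned (sentiment, polarisation, sensationalism), each backed by its own model rather than a word list.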
The project
The PhD studentship aims to address these challenges by combining dynamic network modelling with automated content analysis (textual or multimedia) using modern machine learning methods, such as deep learning and hierarchical Bayesian models.
The student may extract relevant content features, topics and events from online discussions to (a) predict short- and long-term responses of multiple users, (b) estimate the effects of different information-suggestion strategies in this context, and (c) design interventions to improve model accuracy.
As such, we are particularly interested in PhD candidates who would like to work on one or more of the following topics:
- Modelling the temporal dynamics of social media users' beliefs
- Model-based reinforcement learning algorithms for the governance of social networks
- Semantic analysis of unstructured textual or multimodal data, including sentiment analysis and the detection of biased or fake content, violent language and cyberbullying
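To make the first topic concrete, here is a minimal sketch of belief temporal dynamics as a Beta-Bernoulli update, where each observed interaction (endorsement vs. rebuttal of a claim) shifts a user's belief state. This toy conjugate model is an assumption for illustration; the hierarchical Bayesian models envisaged in the project would be far richer:

```python
def update_belief(alpha: float, beta: float, endorses: bool) -> tuple:
    """One Beta-Bernoulli update of a user's belief state.

    Belief in a claim is modelled as Beta(alpha, beta); each observed
    interaction adds one pseudo-count to the matching side.
    """
    return (alpha + 1, beta) if endorses else (alpha, beta + 1)

def belief_mean(alpha: float, beta: float) -> float:
    """Posterior mean of the belief distribution."""
    return alpha / (alpha + beta)

# Simulate a short stream of interactions for one user.
alpha, beta = 1.0, 1.0  # uniform prior
for endorses in [True, True, False, True]:
    alpha, beta = update_belief(alpha, beta, endorses)

print(round(belief_mean(alpha, beta), 2))  # 0.67
```

Tracking how this posterior mean drifts over time, per user and per topic, is one simple way to quantify the belief dynamics that the governance strategies would act on.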
The successful applicant will join the Essex COURAGE team, comprising Dr Dimitri Ognibene (PI), Professor Ansgar Scherp (Co-I), Dr Aline Villavicencio (Co-I), and Visiting Professor Udo Kruschwitz (Co-I).