In the past two decades the role of Artificial Intelligence (AI) in our society has grown significantly. As more work is done to develop new algorithms, technology becomes cheaper to use and has a broader variety of tasks that it can complete.
On the one hand, this can make many tasks faster to complete: mundane work can be taken out of human hands and delegated to AI.
However, this increased use carries risks. In recent years, multiple algorithms have caused controversy by reinforcing discriminatory behaviours, a consequence of how they were designed and programmed in the first place.
While the majority of programmers, many of whom are young, white and male, do not deliberately create discriminatory algorithms, the result is often the same. They may not consider the difficulties women face when juggling childcare and work, or may not realise that Black graduates may have been unable to build up their work experience through unpaid internships.
This project will focus on the use of AI in the labour market. Ultimately, it aims to develop new methodologies for responsible and trustworthy algorithms that will reduce inequalities in the labour market. We aim to improve practice across the whole labour market, from reducing bias in job advertisements through to improving diversity in hiring practices and professional networking processes.
The first part of the project will involve working with stakeholders to identify when and how AI algorithms are used in the labour market, and what issues arise from them. Essex researchers will then focus on developing and testing new methodologies that can help mitigate these issues.
This project is led by Lancaster University, in partnership with Profusion Technologies, AstraZeneca PLC, and Output Services Inc.
This project is funded by the Economic and Social Research Council (ESRC).