AI recruitment bias to be investigated by UK data watchdog
The UK’s Information Commissioner’s Office (ICO) is to investigate whether artificial intelligence (AI) discriminates against ethnic minorities and neurodivergent people.
The data watchdog announced this week that it has set out a three-year plan that includes a review of how AI used in recruitment could be biased because ethnic minorities and neurodivergent people were under-represented when the software was tested.
The ICO said: “We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds.”
The plan, ICO25, will also focus on other regulatory work such as children’s privacy, the use of algorithms within the government’s benefits system and the impact AI has on ‘predatory’ marketing calls.
David Leslie, director of ethics and responsible research at The Alan Turing Institute, told The Guardian that the use of data-driven AI models in recruitment processes raises a host of thorny ethical issues.
“Predictive models that could be used to filter job applications through techniques of supervised machine learning run the risk of replicating, or even augmenting, patterns of discrimination and structural inequities that could be baked into the datasets used to train them,” he added.
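To illustrate the point Leslie describes (this is not part of the ICO plan or his remarks): a minimal sketch, using synthetic data and the widely used scikit-learn library, of how a screening model trained on historically skewed hiring decisions can reproduce that skew, and how a basic selection-rate check can surface it. All names and numbers here are hypothetical.

```python
# Minimal sketch: a model trained on biased historical hiring labels
# reproduces the bias; a simple disparate-impact check exposes it.
# The dataset is synthetic and purely illustrative.
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

# Synthetic history: group 0 was hired far less often than group 1
# at similar skill levels (bias "baked into" the training labels).
X, y, group = [], [], []
for _ in range(2000):
    g = random.randint(0, 1)            # protected-group indicator
    skill = random.random()             # proxy feature, 0..1
    hired_prob = 0.7 * skill if g == 1 else 0.3 * skill
    X.append([skill, g])                # group membership leaks into the features
    y.append(1 if random.random() < hired_prob else 0)
    group.append(g)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

def selection_rate(g_value):
    """Share of applicants in a group that the model would shortlist."""
    selected = [p for p, g in zip(preds, group) if g == g_value]
    return sum(selected) / len(selected)

rate_0, rate_1 = selection_rate(0), selection_rate(1)
print(f"selection rate, group 0: {rate_0:.2f}")
print(f"selection rate, group 1: {rate_1:.2f}")
# The "four-fifths rule": a ratio below 0.8 is a common red flag for adverse impact.
print(f"disparate-impact ratio: {rate_0 / rate_1:.2f}")
```

Run as-is, the check shows the model shortlisting group 0 at a markedly lower rate, simply because it learned the pattern in its training labels.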
John Edwards, ICO’s Commissioner, said: “My office will focus our resources where we see data protection issues are disproportionately affecting already vulnerable or disadvantaged groups. The impact that we can have on people’s lives is the measure of our success.”
He added: “This is what modern data protection looks like, and it is what modern regulation looks like.”
The UK General Data Protection Regulation (GDPR), which the ICO enforces, gives people rights over how their personal data is processed, while the UK Equality Act 2010 protects against discrimination.
Algorithmic bias has dominated the news headlines this year, including a House of Lords report, published in April, that raised concerns over AI bias in policing.
Earlier this year, twenty international AI experts from organisations including Accenture, The Alan Turing Institute, and UNESCO were interviewed for a study that looked at what tech leaders can do to combat this issue.