Artificial Intelligence: Equality

Question for Department for Digital, Culture, Media and Sport

UIN 187146, tabled on 26 April 2021

To ask the Secretary of State for Digital, Culture, Media and Sport, what steps his Department is taking to ensure that (a) gender and (b) racial discrimination is not incorporated into the development of artificial intelligence systems.

Answered on

28 April 2021

We recognise the need to address gender disparities in AI. In 2019, DCMS, via the joint DCMS/BEIS Office for AI, worked with the Office for Students and DfE to deliver new conversion Masters courses at universities across the country, with scholarships for people from underrepresented backgrounds, including women, black students, and disabled students. The programme launched last September, and the initial cohort of 1,265 students included 40 per cent women, one quarter black students, and 15 per cent disabled students. Among scholarship recipients, the figures were even more encouraging: 76 per cent of scholarships went to women, 45 per cent of recipients identified as black, and 24 per cent as disabled. The upcoming National AI Strategy, being led by the Office for AI, looks to build on such commitments to further improve diversity.

In addition to improving diversity via the conversion Masters programme, in 2019 DCMS partnered with the World Economic Forum to create guidelines for responsible public sector procurement of AI systems. The guidelines were published on GOV.UK in June 2020 and operationalised through the Crown Commercial Service's AI Marketplace, launched in September 2020. The guidelines, which build on the Government's Data Ethics Framework, recommend that AI procurement in Government be conducted by diverse teams, and stipulate specific steps to ensure the Public Sector Equality Duty is upheld, including performing an equality impact assessment alongside data protection impact assessments. The Crown Commercial Service has implemented a baseline ethical standard for suppliers, to be added to the procurement system. These concrete interventions are intended to mitigate gender or racial bias being incorporated into AI systems procured by the public sector, which, at 40 per cent of the economy, sets the standard for AI suppliers in the wider economy.

The Government’s Data Ethics Framework and ‘Guide to Using AI in the Public Sector’, alongside other area-specific guidance available on GOV.UK, support the ethical and safe use of algorithms in the public sector.

Further to this, as part of our commitment in the National Data Strategy, the Cabinet Office is exploring appropriate and effective mechanisms to deliver more transparency on the use of algorithm-assisted decision-making within the public sector and to monitor its impact, and is working with leading organisations in the field of data and AI ethics to do so.

The Centre for Data Ethics and Innovation, in its review into bias in algorithmic decision-making, makes a number of recommendations to Government to reduce or mitigate the propensity for algorithms to encode bias. The Government is currently reviewing those recommendations.