To ask the Secretary of State for Digital, Culture, Media and Sport, with reference to her Department's press release, New UK initiative to shape global standards for Artificial Intelligence, published on 12 January 2022, what ethical considerations her Department plans to include in the new artificial intelligence standards.
Answered on
1 March 2022
The AI Standards Hub pilot aims to grow UK contributions to global AI standards development. As outlined in the National AI Strategy, the UK is taking a global approach to shaping technical standards for AI trustworthiness, seeking to embed accuracy, reliability, security, and other facets of trust in AI technologies from the outset.
The pilot follows the launch of the Centre for Data Ethics and Innovation’s (CDEI) ‘roadmap to an effective AI assurance ecosystem’, which is also part of the National AI Strategy. The roadmap sets out the steps needed to develop world-leading products and services to verify AI systems and accelerate AI adoption. Technical standards are important for enabling effective AI assurance because they give organisations a common basis for verifying AI.
Alongside the AI Standards Hub pilot and AI assurance roadmap, the government has committed, through the National AI Strategy, to undertake a review of the UK’s AI governance landscape and to publish an AI governance white paper. AI standards, assurance, and regulation can be mutually complementary drivers of ethical and responsible AI.
The Alan Turing Institute is leading the AI Standards Hub pilot, supported by the British Standards Institution and the National Physical Laboratory. The pilot is expected to complete its initial activities by the end of 2022.
The AI Standards Hub pilot will involve engagement and collaboration with industry and academia. This includes a series of stakeholder roundtables led by The Alan Turing Institute.
Once the Hub pilot concludes, its impact will be evaluated and reviewed to determine the appropriate next steps.