To ask His Majesty's Government what steps they are taking to create guardrails for safe AI development by the end of 2023.
Answered on
3 July 2023
We published our AI Regulation White Paper on 29 March 2023, which sets out five cross-cutting principles that regulators should apply when considering the use of AI in their own sectors. The principles are: (i) safety, security and robustness, (ii) appropriate transparency and explainability, (iii) fairness, (iv) accountability and governance, and (v) contestability and redress.
Our principles-based approach to AI regulation is focused on outcomes – rather than a more rigid, horizontal approach that regulates the technology without considering context – and is designed to manage risk and enhance trust while also allowing innovation to flourish. The proposals also provide that, depending on how the principles are implemented, we will consider introducing a statutory duty on regulators in due course.
The white paper also proposes a range of new central functions, including a horizon-scanning function intended to support the anticipation and assessment of emerging risks. This will complement the existing work undertaken by regulators and other government departments to identify and address risks arising from AI.
The Government is also investing £100 million in start-up funding for a new Foundation Model Taskforce to ensure UK leadership in foundation models, such as those underpinning ChatGPT and Stable Diffusion. The Taskforce will develop UK sovereign capabilities in this technology and act as a global standard bearer for AI safety.
Finally, on 7 June 2023, the Prime Minister announced that the UK will host the first major global summit on AI safety this autumn. The Summit will consider the risks of AI, including frontier systems, and discuss how they can be mitigated through internationally coordinated action. It will also provide a platform for countries to work together on further developing a shared approach to mitigating these risks.