Artificial Intelligence Regulation White Paper

Statement made on 29 March 2023

Statement UIN HCWS686


I am pleased and excited to announce that today, the government is publishing its Artificial Intelligence Regulation White Paper.

AI is one of this government’s five technologies of tomorrow - bringing stronger growth, better jobs, and bold new discoveries. As a general purpose technology, AI is already delivering wide social and economic benefits, from medical advances to the mitigation of climate change.

The UK has been at the forefront of this progress, placing third in the world for AI research and development. For example, an AI technology developed by DeepMind, a UK-based business, can now predict the structure of almost every protein known to science. This breakthrough has already helped scientists combat malaria, antibiotic resistance, and plastic waste, and will accelerate the development of life-saving medicines. There is more to come. AI has the potential to transform all areas of life and energise the UK economy. By unleashing innovation and driving growth, AI will create new, good quality jobs. AI can also improve work by increasing productivity, and making workplaces safer for employees.

Through the National AI Strategy, this government is committed to strengthening the UK’s position as an AI powerhouse. For example, to boost skills and diversity in AI jobs, the government has announced £23 million towards 2,000 new AI and data science conversion course scholarships; £100 million towards AI Centres for Doctoral Training at universities across the country; and over £46 million towards Turing AI Fellowships, developing the next generation of top AI talent. Through the Technology Missions Fund, we are investing £110 million in missions on AI for health, AI for net zero, trustworthy and responsible AI, and AI adoption and diffusion. These are part of our £485 million investment in the UKRI AI Programme to continue the UK’s leadership in AI and support the transition to an AI-enabled economy.

We want the whole of society to benefit from the opportunities presented by AI and we know that to achieve this, AI has to be trustworthy. While it offers enormous potential, AI can also create new risks and present us with ethical challenges to address. We already know that some irresponsible uses of AI can damage our physical and mental health, create unacceptable safety risks, and undermine human rights. Proportionate regulation which mitigates these risks is key to building public trust and encouraging investment in AI businesses.

Businesses have consistently asked for clear, proportionate regulatory requirements and better guidance and tools to support responsible innovation. When we set out our proposals for a proportionate and pro-innovation approach in July last year, they received widespread support from industry. Our approach is in stark contrast to the rigid approaches taken elsewhere, which risk stifling innovation and placing huge burdens on small businesses.

The recent report led by Sir Patrick Vallance - Regulation for Innovation - identified that the UK has a short window in which to take up a position as a global leader in foundational AI development and create an innovation-friendly approach to regulating AI. We know we need to act now. I am proud to set out a proportionate and future-proof framework for regulating this truly exciting, paradigm-shifting technology.

Our framework for AI regulation is outcome-focused, proportionate, and adaptable. It will be sensitive to context to avoid stifling innovation, and will prioritise collaboration - between government, regulators, industry, academia, civil society and wider stakeholders. The framework will be underpinned by five principles. These principles are a clear statement of what we think good, responsible, trustworthy AI looks like, reflecting the values at the core of our society. They are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. We will work with the UK’s highly regarded regulators and empower them to apply the five principles using their sector-specific expertise.

As automated decision-making systems are increasingly AI-driven, it is important to align the Article 22 reforms in the Data Protection and Digital Information Bill with the UK’s wider approach to AI regulation. The reforms in the Bill cast Article 22 as a right to specific safeguards, rather than as a general prohibition on solely automated decision-making, and also clarify that a ‘solely’ automated decision is one taken without any meaningful human involvement. Meaningful involvement means a human’s participation must go beyond a cursory or ‘rubber stamping’ exercise - it assumes they understand the process and influence the outcome reached for the data subject.

AI opportunities and risks are emerging at an extraordinary pace. We need only look to the sudden increase in public awareness of generative AI over recent months as an example. As such, the framework will initially be introduced on a non-statutory basis and we are deliberately taking an iterative, collaborative approach - testing and learning, flexing and refining the framework as we go. This will allow us to respond quickly to advances in AI and to intervene further if necessary.

We will establish central functions to make sure our approach is working effectively and striking the right balance between supporting innovation and addressing risk. These functions will monitor how the framework is operating, and will also horizon scan so that we understand how AI technology is evolving and how risks and opportunities are changing. Taking forward Sir Patrick Vallance’s recommendation, they will also support the delivery of testbeds and sandbox initiatives to help AI innovators get AI technologies to market.

We are deliberately seeking to find the right balance between more rigid approaches to AI regulation on the one hand, and those who would argue that there is no need to intervene on the other. This position and this approach will protect our values, protect our citizens, and continue the UK’s reputation as the best place in the world to be a business developing and using AI.

Alongside this White Paper, we are also committed to strengthening UK AI capability. We are establishing a Foundation Model Taskforce, a government-industry team which will define and deliver the right interventions and investment in AI foundation models - a type of AI which looks set to be transformative - to ensure the UK builds its capability.

We recognise that there are many voices to be heard, and many ways that we can learn from across the whole of society, industry, academia, and our global partners. We have been engaging with regulators and a range of stakeholders as we develop our proposals, and I actively encourage colleagues and stakeholders across the whole of the economy and society to respond to the consultation. I will be placing copies of the White Paper in the libraries of both Houses, and it is also available on GOV.UK.

Linked statements

This statement has also been made in the House of Lords

Department for Science, Innovation and Technology
Artificial Intelligence Regulation White Paper
Viscount Camrose
Minister for AI and Intellectual Property (House of Lords)
Conservative, Excepted Hereditary
Statement made 29 March 2023