Artificial Intelligence: EU Countries

Question for Department for Science, Innovation and Technology

UIN HL1120, tabled on 12 December 2023

To ask His Majesty's Government what assessment they have made of the provisional EU plan on the use of artificial intelligence; and how, if at all, this will affect the UK's development of regulations.

Answered on

20 December 2023

Many issues relating to AI regulation are global in nature and will require global solutions. This is why we are keen to work with international partners, including the EU, on key issues, to find ways to support our national and international AI ecosystems, and to support businesses operating across jurisdictions. We are also taking an active role in multilateral initiatives addressing AI issues, such as the Global Partnership on AI (GPAI), the OECD, the G7 Hiroshima AI Process, and the Council of Europe, among others. We will also seek further opportunities for open dialogue to share ideas and best practices in this space.

Last March, the government published the AI Regulation White Paper. This context-based approach reflects the UK’s own unique regulatory, business and societal landscape. Rather than setting out a centralised list of rules, as the EU has done, our approach proposes allowing different regulators to take a more tailored and agile approach to the use of AI in a range of settings, reflecting the growing use of AI across many sectors and applications. The proposed framework of cross-sectoral principles will ensure coherence across the regulatory system.

We want to establish a nimble and internationally competitive regulatory approach which drives innovation and growth. This is key to supporting our ambitions to strengthen our position as an AI superpower. The government will be setting out our next steps for the regulatory framework through our white paper consultation response, which is being published in the new year.

In the meantime, we are taking steps to implement our regulatory approach, including the establishment of a central AI risk function, bringing together policymakers and AI experts to identify, assess and report on the risks of AI systems. We have also set up the AI Safety Institute, which aims to ensure that the UK and the world are not caught off guard by progress at the frontier of AI.

We are also engaging closely with regulators across the UK and their sponsoring government departments to understand the organisational capacity they need to regulate AI effectively, across technical, regulatory, and market-specific expertise. This includes the Digital Regulation Cooperation Forum AI and Digital Hub, announced in September, which will offer advice to AI innovators to make it easier for them to navigate the AI regulation landscape so they can bring their products to market more quickly and safely.

Alongside this, the Centre for Data Ethics and Innovation (CDEI) continues to lead the Government’s work to enable responsible, trustworthy innovation using data and AI that safeguards our fundamental values and puts protecting the public first. For example, the CDEI is working closely with the Equality and Human Rights Commission and the Information Commissioner’s Office to support solutions to tackle AI fairness through the delivery of the Fairness Innovation Challenge.