Defence: Artificial Intelligence

Question for Ministry of Defence

UIN HL1998, tabled on 21 July 2022

To ask Her Majesty's Government, further to their policy paper Ambitious, Safe, Responsible: Our approach to the delivery of AI-enabled capability in Defence, published on 15 June, what steps they are taking to ensure that (1) scientists, (2) developers, and (3) industry, can operate in an environment where there are adequate controls to prevent their research and technology from being used in ways which may be problematic.

Answered on

4 August 2022

The Defence AI Strategy (published on 15 June 2022) sets out our clear commitment to use AI safely, lawfully and ethically, in line with the standards, values and norms of the society we serve. This is critical to promoting confidence and trust among our people, our partners and the general public.

We will deliver this commitment through a range of robust people, process and technology measures, including:

- embedding our AI Ethics Principles throughout the entire capability lifecycle;
- independent scrutiny and challenge from our AI Ethics Advisory Panel;
- training to ensure our people understand and can appropriately mitigate AI-related risks;
- publishing as much information as possible about key safeguards, such as our approach to Test and Evaluation;
- specifying (including through Early Market Engagement) how and why we will utilise algorithms and applications; and
- ensuring there are effective pathways for individuals to raise ethical or safety concerns.

As we implement these commitments from the AI Strategy and the associated 'Ambitious, Safe, Responsible' policy, Defence will continue to be outward-facing, working with colleagues across the AI and technology industry to understand concerns and to identify and embed best-practice safeguards.