
The UK government is taking forward a two-part intervention to address the cyber security risks posed by AI. The first part is the development of a voluntary Code of Practice; the second is using that code to help create a global standard, setting baseline security requirements, through the European Telecommunications Standards Institute.
The code sets out how organisations that develop or use AI can protect themselves against a range of cyber threats, including attacks targeting AI systems and system failures.
The code of practice is based on 13 principles: raise awareness of threats and risks; design AI systems for security; evaluate threats and manage risks; enable human responsibility for systems; identify, track and protect assets; secure infrastructure; secure the supply chain; document data, models and prompts; conduct appropriate testing and evaluation; communicate and set up processes with end users and affected entities; maintain regular security updates, patches and mitigations; monitor the system's behaviour; and ensure proper data and model disposal.
The Department for Science, Innovation and Technology emphasises the importance of implementing cyber security training programmes focused on AI vulnerabilities, developing recovery plans for potential cyber incidents, and carrying out robust risk assessments.
The code is voluntary but will form the basis of a new global standard for secure AI, developed through the European Telecommunications Standards Institute.
For further information, see: Code of Practice for the Cyber Security of AI - GOV.UK