A quick look at the NCSC guidelines for AI security

by Black Hat Middle East and Africa

On 27 November 2023, the UK’s National Cyber Security Centre (NCSC) released its new global guidelines for AI security. 

The Guidelines for Secure AI System Development were developed in collaboration with the US Cybersecurity and Infrastructure Security Agency (CISA), with cooperation from 21 other international agencies around the world.

The goal is to establish global standards and collaboration on secure AI development; at the time of writing, 18 countries (including the UK) have endorsed the guidelines.

What does this mean for AI developers? 

These are the first AI security guidelines to be agreed by government organisations around the world. They promise to help AI system developers make decisions informed by security best practices – and to ensure that AI systems are secure by design, with cybersecurity baked in.

This includes developers creating novel AI systems, as well as companies building systems and services using tools provided by third parties.

And the guidelines are divided into four key categories, each with suggested security-enhancing behaviours: 

  • Secure design
  • Secure development
  • Secure deployment
  • Secure operation and maintenance 

Lindy Cameron (CEO at NCSC) said in a statement: 

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

And Jen Easterly (Director at CISA) said:

“As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices. The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution.” 

Reactions so far are largely positive 

It’s important to note that these are guidelines – they’re not legislation, and they’re not enforceable. But so far, the response to this landmark effort has been positive.

Paul Brucciani (Cybersecurity advisor at WithSecure) told Computer Weekly:

“These early days of AI can be likened to blowing glass: while the glass is fluid it can be made into any shape, but once it has cooled, its shape is fixed. Regulators are scrambling to influence AI regulation as it takes shape.” 

“Guidelines are quick to produce since they do not require legislation, nonetheless, NCSC and CISA have worked with impressive speed to corral this list of signatories. Amazon, Google, Microsoft and OpenAI, the world-leading AI developers, are signatories. A notable absentee from the list is the EU.”

For cybersecurity professionals, it’s a positive step towards AI developers being required to take responsibility for the safety and security of the systems they create. Because the alternative is that the cybersecurity industry becomes responsible for securing AI after it’s already out in the wild – and retrospective security is much harder, and less effective, than security by design. 

And beyond AI, any international collaboration on security is an interesting development for the cybersecurity industry as a whole to watch. It shows that we’re moving, very slowly, towards the possibility of organised global security collaboration; and towards agreements on security governance that could increase cyber resilience in the future.

