The development of AI models, systems, and applications is progressing at a rate that has so far outstripped the general understanding of how those systems are built and operated, which in turn has raised concerns about their security. In an effort to encourage AI system developers and end users to consider the risks of and potential threats to these systems, cybersecurity agencies in the United States and UK have released new guidelines for the secure design, development, and deployment of AI systems.
The new document released by the Cybersecurity and Infrastructure Security Agency and the UK's National Cyber Security Centre covers four main areas and does not include regulatory requirements, but rather lays out guidelines for identifying and mitigating risks in AI systems development and usage. Though AI and machine learning systems are still relatively new, they have been deployed in a wide range of scenarios, both user-facing and behind the scenes, and are growing more popular by the day. But, as with any new technology, there are security risks associated with AI systems, and their complexity and opacity can make addressing those risks quite difficult.
“As well as existing cyber security threats, AI systems are subject to new types of vulnerabilities. The term ‘adversarial machine learning’ (AML), is used to describe the exploitation of fundamental vulnerabilities in ML components, including hardware, software, workflows and supply chains. AML enables attackers to cause unintended behaviours in ML systems,” the new guidelines say.
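To make the idea concrete, below is a minimal sketch of an adversarial evasion attack against a toy logistic-regression classifier, in the spirit of a fast-gradient-sign perturbation. The weights, inputs, and perturbation budget are all invented for illustration and are not taken from the guidelines; real attacks target trained production models, but the mechanism is the same: a small, deliberately crafted change to the input flips the model's prediction.

```python
# Minimal sketch: an adversarial perturbation against a toy logistic-regression
# classifier. All parameters and inputs below are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" model parameters
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.8, 0.1, 0.9])   # benign input, confidently classified as class 1
y_true = 1.0

p = sigmoid(w @ x + b)
print(f"clean prediction:       {p:.3f}")   # ~0.82 -> class 1

# Gradient of the cross-entropy loss with respect to the *input* is (p - y) * w.
# The attack nudges the input along the sign of that gradient.
epsilon = 0.6                                # attacker's perturbation budget
x_adv = x + epsilon * np.sign((p - y_true) * w)

p_adv = sigmoid(w @ x_adv + b)
print(f"adversarial prediction: {p_adv:.3f}")  # ~0.30 -> flipped to class 0
```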
The guidelines lay out the major known risks and issues with AI models and systems and provide some advice for avoiding them in practice.
“Artificial intelligence (AI) systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems,” the guidelines say.
“However, AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”
The new guidelines comprise four separate sections: secure design, secure development, secure deployment, and secure operation and maintenance. In general, the guidance follows typical advice for secure application design and development practices, but with some AI-specific additions. For example, the secure design section encourages organizations to understand the risks and threats associated with AI systems and to model the specific threats to their systems. This involves understanding the types of data used in a model and the sensitivity of that data, as well as how an attacker might misuse a system. The section also encourages organizations to consider the security trade-offs of using a specific AI model, or using AI at all.
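One way a team might capture that kind of threat modeling is as structured records that pair each AI asset with its data sensitivity, the misuse scenarios that worry them, and planned mitigations. The sketch below is purely illustrative; the fields and example entries are hypothetical and not a format prescribed by the guidelines.

```python
# Hypothetical sketch of recording AI-specific threat-model entries as data.
from dataclasses import dataclass, field

@dataclass
class AIThreatEntry:
    asset: str                      # model, dataset, or interface at risk
    data_sensitivity: str           # e.g. "public", "internal", "regulated PII"
    threat: str                     # how an attacker might misuse the system
    mitigations: list[str] = field(default_factory=list)

threat_model = [
    AIThreatEntry(
        asset="customer-support chat model",
        data_sensitivity="regulated PII in fine-tuning data",
        threat="prompt injection used to extract training data",
        mitigations=["output filtering", "rate limiting", "red-team testing"],
    ),
    AIThreatEntry(
        asset="fraud-scoring model",
        data_sensitivity="internal transaction records",
        threat="evasion via adversarially crafted transactions",
        mitigations=["input validation", "drift monitoring"],
    ),
]

for entry in threat_model:
    print(f"{entry.asset}: {entry.threat} -> {', '.join(entry.mitigations)}")
```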
“Your choice of AI model will involve balancing a range of requirements. This includes choice of model architecture, configuration, training data, training algorithm and hyperparameters. Your decisions are informed by your threat model, and are regularly reassessed as AI security research advances and understanding of the threat evolves,” the guidelines say.
The newly released recommendations are meant to complement the White House’s recently released executive order on safe and secure AI development practices. The order lays out detailed requirements for government standards organizations to develop AI guidelines for risk management, software development, and other tasks, as well as for attracting AI talent to the federal government. But the order does not hold any binding authority over private companies that develop or deploy AI models or tools, so the guidelines serve as a companion piece to the order, supplying more recommendations and advice for organizations.
Another component of the recommendations involves secure development of AI tools. For many, if not most, organizations, deploying an AI tool or application will involve using an externally developed model, one built and maintained by a third party into which the organization likely has little visibility. That presents challenges in terms of understanding the data in a specific model and any potential risks associated with its use. The guidelines encourage organizations to understand the security of their AI supply chains as fully as possible.
“Where not produced in-house, you acquire and maintain well-secured and well-documented hardware and software components (for example, models, data, software libraries, modules, middleware, frameworks, and external APIs) from verified commercial, open source, and other third-party developers to ensure robust security in your systems,” the document says.
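One small, practical piece of that supply-chain hygiene is pinning and verifying the artifacts an organization pulls in from third parties. The sketch below checks a downloaded model file against a digest recorded at acquisition time before the file is ever loaded; the path and digest are placeholders, and the control is an illustration of the general idea rather than code from the guidelines.

```python
# Minimal sketch: verify a third-party model artifact against a pinned SHA-256
# digest before loading it. File name and digest are placeholders.
import hashlib
from pathlib import Path

# Digest published by the (hypothetical) model provider, recorded at acquisition time.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Raise if the artifact on disk does not match the pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"checksum mismatch for {path}: got {digest}, expected {expected_sha256}"
        )

model_path = Path("models/third_party_model.onnx")  # placeholder path
verify_artifact(model_path, EXPECTED_SHA256)
# Only deserialize and load the model after verification succeeds.
```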
The guidelines also include recommendations for protecting AI data, models, and tools in deployment, which involves implementing typical cybersecurity controls, as well as specific controls to prevent abuse of AI query interfaces.
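One example of such a control is a per-client rate limit in front of a model’s query interface, which can slow down model-extraction or probing attempts. The sliding-window limiter below is a minimal sketch; the limits and client identifiers are illustrative only and are not specified by the guidelines.

```python
# Minimal sketch: per-client sliding-window rate limit for a model query API.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 30

_history: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str, now: float = None) -> bool:
    """Return True if the client is under its query budget for the current window."""
    now = time.monotonic() if now is None else now
    q = _history[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                      # drop timestamps outside the window
    if len(q) >= MAX_QUERIES_PER_WINDOW:
        return False                     # over budget: reject or queue the request
    q.append(now)
    return True

# Example: the 31st query inside one window is refused.
for i in range(31):
    ok = allow_query("client-a", now=100.0 + i)
print(ok)  # False
```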
“The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority,” CISA said.
“The Guidelines apply to all types of AI systems, not just frontier models. We provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems.”