Organizations looking to operate externally developed artificial intelligence systems on premises or in private cloud environments should establish a well-secured deployment environment, then ensure continuous protection and maintenance after implementation, according to new guidelines released by the National Security Agency.
These AI recommendations were laid out this month in a cybersecurity information sheet (CIS) co-authored by the FBI, the Cybersecurity and Infrastructure Security Agency (CISA), and cybersecurity agencies from the U.S.’s partners in the Five Eyes intelligence alliance. The CIS is the first public communication from the NSA’s Artificial Intelligence Security Center (AISC), which was formed in September 2023.
“AI brings unprecedented opportunity, but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise and advanced threat analysis,” said NSA Cybersecurity Director Dave Luber in a corresponding press release issued by the agency.
According to the CIS, the guidelines intentionally align with cross-sector Cybersecurity Performance Goals (CPGs) that were developed by CISA and the National Institute of Standards and Technology (NIST). Moreover, these recommendations build upon two previously published documents: the Guidelines for Secure AI System Development and Engaging with Artificial Intelligence.
The purpose of these guidelines, the CIS continues, is to “improve the confidentiality, integrity and availability of AI systems”; to help organizations mitigate known AI vulnerabilities; and to offer methodologies and controls for both proactive and reactive defense against malicious activity targeting AI systems and their respective data sets.
Tips for securing the deployment environment include establishing proper governance standards; ensuring a robust security architecture through responsible privileged access control and Zero Trust policies; hardening configurations via sandboxing, patching, encryption and authentication; and guarding against threats with detection and response solutions.
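To make the sandboxing and least-privilege guidance concrete, here is a minimal sketch (not drawn from the CIS itself) that launches a hypothetical model-serving container with a hardened configuration using the Docker SDK for Python. The image name, user ID and resource limits are illustrative assumptions.

```python
# Minimal sketch: starting an AI inference container with a hardened,
# least-privilege configuration via the Docker SDK for Python.
# The image "model-server:1.0" and the limits below are illustrative
# assumptions, not values taken from the NSA/CISA guidance.
import docker

client = docker.from_env()

container = client.containers.run(
    "model-server:1.0",                  # hypothetical model-serving image
    detach=True,
    user="10001:10001",                  # run as an unprivileged, non-root user
    read_only=True,                      # immutable root filesystem
    cap_drop=["ALL"],                    # drop every Linux capability
    security_opt=["no-new-privileges"],  # block privilege escalation
    mem_limit="4g",                      # cap memory to contain runaway workloads
    pids_limit=256,                      # limit process count inside the sandbox
)
print(f"sandboxed model server started: {container.short_id}")
```

Dropping all capabilities and mounting the root filesystem read-only are two of the cheaper hardening steps available; anything the model server genuinely needs (a writable scratch directory, a single open port) can then be granted back explicitly.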
Advice for continuous protection of the AI system includes validating through encryption, version control, testing and responsible supply-chain practices; securing exposed APIs; monitoring model behavior; and safeguarding model weights using hardened interfaces, hardware protections and isolation.
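One way to act on the validation advice is to compare the cryptographic hash of model weights against a digest pinned in version control before loading them. The sketch below uses Python’s standard hashlib; the file path and the pinned digest are hypothetical placeholders.

```python
# Minimal sketch: verifying model-weight integrity before loading.
# The path and pinned hash are placeholders; in practice the expected
# digest would come from a signed manifest or version control.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # placeholder: the known-good digest of the weights

def verify_weights(path: Path, expected: str) -> None:
    """Raise if the weights file on disk does not match the pinned digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(f"integrity check failed for {path}")

verify_weights(Path("models/weights.bin"), PINNED_SHA256)
# Only load the model into the serving process after this check passes.
```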
Instructions for secure AI operation and maintenance include enforcing role-based or attribute-based access controls; instituting user awareness and training exercises; conducting audits and pentesting exercises; logging and monitoring; regularly patching; establishing high availability and disaster recovery plans (potentially using an immutable backup storage system); and invoking secure delete capabilities.
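For the role-based access control item, a minimal sketch of how an AI service might gate sensitive operations on a caller’s role follows; the roles, permissions and function names are assumptions chosen for illustration, not terminology from the CIS.

```python
# Minimal sketch: role-based access control for sensitive AI operations.
# The roles and permission names here are hypothetical examples.
from functools import wraps

ROLE_PERMISSIONS = {
    "ml-admin": {"update_weights", "query_model", "view_logs"},
    "analyst":  {"query_model", "view_logs"},
    "auditor":  {"view_logs"},
}

def requires(permission: str):
    """Decorator that rejects callers whose role lacks the permission."""
    def decorator(func):
        @wraps(func)
        def wrapper(role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                raise PermissionError(f"role {role!r} may not {permission}")
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("update_weights")
def update_weights(role: str, path: str) -> None:
    print(f"{role} updated model weights from {path}")

update_weights("ml-admin", "models/weights-v2.bin")   # allowed
# update_weights("analyst", "models/weights-v2.bin")  # raises PermissionError
```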
While these recommendations were initially created for national security systems and the defense industrial base, the NSA noted that any organization planning to adopt and use AI can benefit from them, especially organizations that operate in high-risk environments and present valuable targets to malicious cyber actors.
“In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations and monitoring for issues,” the CIS states. “By taking the steps outlined in this report to secure the deployment and operation of AI systems, an organization can significantly reduce the risks involved. These steps help protect the organization’s intellectual property, models and data from theft or misuse.”