CISA and Joint-Seal AI Publications
As the nation’s cyber defense agency, CISA’s mission sits at the intersection of strengthening cybersecurity and protecting critical infrastructure. CISA plays a key role in advancing the Administration’s goal of ensuring that AI is safe, secure, and resilient, starting with being secure by design. This mission includes engaging with international partners on global AI security, publishing actionable guidance, and promoting the adoption of best practices. Below are key publications and guidance that tie to CISA’s AI mission:
AI Red Teaming: Applying Software TEVV for AI Evaluations
Discover how AI red teaming fits into established software test, evaluation, validation, and verification (TEVV) frameworks to enhance safety and security.
Deploying AI Systems Securely
The National Security Agency’s Artificial Intelligence Security Center (NSA AISC), along with CISA and other U.S. and international partners, published this guidance for organizations deploying and operating externally developed AI systems.
CISA Roadmap for AI
CISA has developed a whole-of-agency plan to address both the benefits and the potential risks of advances in artificial intelligence (AI).
Guidelines for secure AI system development
Guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others.
Software Must Be Secure by Design, and Artificial Intelligence Is No Exception
Like any software system, AI must be Secure by Design. Manufacturers of AI systems must prioritize security throughout the whole lifecycle of the product.
Engaging with Artificial Intelligence (AI)
The Australian Cyber Security Centre (ACSC), along with CISA and other U.S. and international partners, published a paper that summarizes important threats related to AI systems and prompts organizations to consider steps they can take to engage with AI while managing risk.