CISA Joins ACSC-led Guidance on How to Use AI Systems Securely

CISA has collaborated with the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) on Engaging with Artificial Intelligence, ACSC-led joint guidance on how to use AI systems securely. The following organizations also collaborated with ACSC on the guidance:

  • Federal Bureau of Investigation (FBI)
  • National Security Agency (NSA)
  • United Kingdom (UK) National Cyber Security Centre (NCSC-UK)
  • Canadian Centre for Cyber Security (CCCS)
  • New Zealand National Cyber Security Centre (NCSC-NZ) and CERT NZ
  • Germany Federal Office for Information Security (BSI)
  • Israel National Cyber Directorate (INCD)
  • Japan National Center of Incident Readiness and Strategy for Cybersecurity (NISC) and the Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • Norway National Cyber Security Centre (NCSC-NO)
  • Singapore Cyber Security Agency (CSA)
  • Sweden National Cybersecurity Center

The guidance gives users of AI systems an overview of AI-related threats, along with steps that can help them manage risk when engaging with AI systems. It covers the following AI-related threats (the second of which, input manipulation, is illustrated in a brief sketch after the list):

  1. Data poisoning
  2. Input manipulation
  3. Generative AI hallucinations
  4. Privacy and intellectual property threats
  5. Model stealing and training data exfiltration
  6. Re-identification of anonymized data
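
The guidance addresses these threats in prose; the sketch below is not taken from it. To make the second threat concrete, here is a minimal, hypothetical Python example of one basic defense against input manipulation (prompt injection, in the case of generative AI systems): keep trusted instructions separate from untrusted user input and screen that input before it reaches the model. The pattern list, function names, and delimiter are all illustrative assumptions, not part of the guidance.

    import re

    # Hypothetical deny-list of phrases often used to override a model's
    # instructions. Real systems need far more than pattern matching.
    SUSPICIOUS_PATTERNS = [
        r"ignore (all|any|previous) instructions",
        r"disregard .*(rules|guidelines|instructions)",
        r"you are now",
    ]

    def looks_like_injection(text: str) -> bool:
        """Return True if the input resembles an instruction-override attempt."""
        lowered = text.lower()
        return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

    def build_prompt(system_instructions: str, user_input: str) -> str:
        """Keep trusted instructions and untrusted input clearly delimited."""
        if looks_like_injection(user_input):
            raise ValueError("Input rejected: possible prompt injection attempt.")
        # Delimiting untrusted content makes override attempts easier to audit.
        return (f"{system_instructions}\n\n"
                f"--- UNTRUSTED USER INPUT ---\n{user_input}")

    if __name__ == "__main__":
        print(build_prompt("Answer only questions about cybersecurity.",
                           "What is data poisoning?"))

Deny-list screening is easy to bypass and is shown only to illustrate the threat category; the more general principle is to treat all user-supplied input to an AI system as untrusted.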

Note: This guidance is primarily for users of AI systems. CISA encourages developers of AI systems to review the recently published Guidelines for Secure AI System Development.

To learn more about how CISA and our partners are addressing both the cybersecurity opportunities and risks associated with AI technologies, visit CISA.gov/AI.