Partner AI Publications
CISA provides input to and coordinates with DHS-led and interagency processes on AI-enabled software to support the U.S. government’s overall national strategy on AI and a whole-of-DHS approach to AI software policy issues. This work also includes close communication with international partners to advance global AI best practices and principles. Below are key publications from these partners at DHS, across the federal government, and internationally:
DHS Publications
Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure
Discover DHS's groundbreaking guidance on advancing responsible AI use in America’s critical infrastructure, developed collaboratively with experts across the AI supply chain.
Risks and Mitigation Strategies for Adversarial Artificial Intelligence Threats: A DHS S&T Study
This report introduces adversarial artificial intelligence (AAI) concepts and explores future AAI threats, risks, and mitigation strategies to help the Department develop a risk-informed approach to addressing AAI threats and vulnerabilities.
Acquisition and Use of Artificial Intelligence and Machine Learning by DHS Components
This Policy Statement directs actions that all DHS Operational and Support Components shall undertake to establish policy and practices governing the acquisition and use of Artificial Intelligence (AI) and Machine Learning (ML) technology.
Foundation Models at the Department of Homeland Security: Use Cases and Considerations
A foundation model (FM) is a type of machine learning model trained on a broad set of general-domain data so that it can serve as a base on which to build multiple specialized AI applications.
White House and Interagency Publications
Executive Order on Safe, Secure, and Trustworthy Development and Use of AI
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI
FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI
AI Risk Management Framework (RMF)
This Framework from the National Institute of Standards and Technology (NIST) describes four specific functions to help organizations address the risks of AI systems in practice: GOVERN, MAP, MEASURE, and MANAGE.
Blueprint for an AI Bill of Rights
The White House Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People
AI.gov
See the full scope of AI actions from the Biden-Harris Administration
International Partner Publications
NCSC: The near-term impact of AI on the cyber threat
An NCSC assessment focusing on how AI will impact the efficacy of cyber operations and the implications for the cyber threat over the next two years.
NCSC: Principles for the security of machine learning
Alongside 'traditional' cyber attacks, the use of artificial intelligence (AI) and machine learning (ML) leaves systems vulnerable to new types of attack that exploit underlying information processing algorithms and workflows.
NCSC: Intelligent security tools
Guidance for those looking to use an off-the-shelf security tool that employs AI as a core component. This may also be of use to those developing in-house AI security tools or considering AI for some non-security business function.
ACSC: An introduction to Artificial Intelligence
ACSC has released an introduction to Artificial Intelligence (AI) to provide users with an understanding of what AI is and how it may impact the digital systems and services they use.