CISA's Roadmap for Artificial Intelligence FAQs
Background
As noted in the landmark Executive Order 14110, “Safe, Secure, And Trustworthy Development and Use of Artificial Intelligence (AI),” signed by the President on October 30, 2023, “AI must be safe and secure.” As the nation’s cyber defense agency and the national coordinator for critical infrastructure security and resilience, CISA will play a key role in addressing and managing risks at the nexus of AI, cybersecurity, and critical infrastructure. This “2023-2024 Roadmap for Artificial Intelligence” serves as a guide for CISA’s AI-related efforts, ensuring both internal coherence as well as alignment with the whole-of-government AI strategy. This roadmap incorporates key CISA-led actions as directed by Executive Order 14110, along with additional actions CISA is leading to promote AI security and support critical infrastructure owners and operators as they navigate the adoption of AI.
Frequently Asked Questions (FAQs)
- What is CISA’s role in ensuring AI is safe, secure, and resilient?
CISA’s mission sits at the intersection of strengthening cybersecurity and protecting critical infrastructure and therefore plays a key role in advancing the Administration’s goal of ensuring that AI is safe, secure, and resilient. For the key actions CISA will take, our role will be to assess possible risks to critical infrastructure related to the use of AI and provide guidance to owners and operators of critical infrastructure and other key stakeholders. Additionally, we will work to capitalize on AI’s potential to improve U.S. cyber defenses and develop recommendations for red teaming AI systems.
- What is CISA’s role under EO 14110?
Signed by the President on October 30, 2023, Executive Order (EO) 14110, “Safe, Secure, And Trustworthy Development and Use of Artificial Intelligence (AI),” calls on the federal government to ensure that “AI must be safe and secure.” As the nation’s cyber defense agency, CISA has a key role in helping critical infrastructure stakeholders and the rest of the federal government leverage AI benefits while mitigating potential risks posed by this technology.
CISA’s responsibilities under EO 14110 include:
- Protecting critical infrastructure: CISA is assessing potential risks related to the use of AI in critical infrastructure sectors, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks. The agency will consider ways to mitigate these vulnerabilities and will work with stakeholders inside and outside of government to develop AI safety and security guidance for use by critical infrastructure owners and operators. Additionally, CISA will incorporate the NIST AI Risk Management Framework, as well as other appropriate security guidance, into relevant safety and security guidelines and best practices for use by critical infrastructure owners and operators.
- Assuring AI systems: CISA’s cybersecurity mission includes leading the national effort on cybersecurity for software systems. AI systems are a type of software system. CISA will integrate AI system considerations into existing national cybersecurity efforts, such as advocating that AI systems be secure by design and developing recommendations for red teaming AI systems. In the immediate term, the agency will focus on the intersection between “red teaming” generally and “AI red teaming” in particular, applying software systems testing principles to AI software. As CISA expands its AI security expertise, CISA will incorporate more AI-specific red teaming capabilities into its existing red teaming capabilities.
- Using AI for cyber defense: CISA is capitalizing on AI’s potential to improve U.S. cyber defense. The agency will conduct operational tests to evaluate AI-enabled techniques for finding and addressing vulnerabilities in federal civilian government systems.
- What is AI red teaming?
As defined in Executive Order 14110, ‘AI red teaming’ means “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. [AI] red-teaming is most often performed by dedicated ‘red teams’ that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.
- Why is CISA issuing this roadmap now?
AI is a fast-evolving issue that is top of mind for both government and the private sector globally. With the emerging impact of AI on critical infrastructure operations and CISA’s mission to lead the national effort to understand, manage, and reduce risk to the cyber and physical infrastructure that Americans rely on every hour of every day, AI safety and security work is squarely within CISA’s responsibility. In fact, CISA is already using Artificial Intelligence (AI) responsibly to improve its services and cybersecurity on several fronts, while maintaining privacy and civil liberties. You can find use cases with current examples of efforts that are underway at cisa.gov/ai.
- Why is it important that AI be Secure by Design?
Over the last four decades, from the creation of the internet to the mass adoption of software, to the rise of social media, we have witnessed safety and security being forced to take a back seat as companies prioritize speed to market and features over security. The development and implementation of AI software must break the cycle of speed at the expense of security. The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer. Although AI software systems might differ from traditional forms of software, fundamental security practices still apply.
- How will CISA’s Roadmap help integrate Secure by Design principles into AI?
CISA’s Roadmap for Artificial Intelligence builds on the agency’s cybersecurity and risk management programs. Critically, manufacturers of AI systems are encouraged to follow Secure by Design principles:
- Taking ownership of security outcomes for customers
- Leading product development with radical transparency and accountability, and
- Making secure by design a top business priority. As the use of AI grows and becomes increasingly incorporated into critical systems, security must be a core requirement and integral to AI system development from the outset and throughout its lifecycle.
Government, industry, and academia must work together to establish essential guardrails to ensure AI-based software tools - with all their tremendous power, capability, and availability - are not vulnerable to attack or abuse. CISA emphasizes Secure by Design principles in AI adoption and plans to incorporate AI security into its Secure by Design program.
- How does CISA work with the Joint Cyber Defense Collaborative (JCDC) to manage AI-related risks and incidents?
To address potential unique challenges or impacts of cyber incidents involving AI technologies, CISA is leveraging the Joint Cyber Defense Collaborative model and planning authorities to plan for AI-related incident response. In 2024, working with JCDC.AI—which is a team of subject matter experts from AI companies, government partners, and critical infrastructure entities—CISA will be conducting tabletop exercises (TTXs) that explore approaches to AI incident response. JCDC.AI will meet regularly so that team members can provide individual feedback on a draft AI incident response playbook based on JCDC.AI’s insights from the TTXs. CISA plans to publish this playbook in late 2024.
Beyond the AI incident response playbook, JCDC.AI meetings will serve as a mechanism to build an operational community of AI providers, AI security vendors, and critical infrastructure owners/operators. Meetings will provide opportunities for further operational collaborations, including the development of best practices on managing impacts from AI-related incidents based on team insights, ideas, and capabilities.
To learn more about JCDC.AI, email JCDC.AI@cisa.dhs.gov.
- How does CISA work with the Information Technology Sector Coordinating Council (IT SCC) to manage AI-related risks?
CISA formed an AI working group in March 2023—organized under the National Infrastructure Protection Plan Framework as part of the Information Technology Sector Coordinating Council (IT SCC)—to address both the challenges and the benefits AI poses to the cybersecurity of critical infrastructure. Like JCDC.AI, the working group provides a mechanism for collaboration among public and private stakeholders. In addition to fostering information sharing, the AI working group provides a forum for non-federal members to discuss, and achieve consensus on, policy recommendations concerning the challenges and benefits AI poses to the cybersecurity of critical infrastructure.
To learn more about IT SCC’s AI working group, email ITSector@cisa.dhs.gov.
Additional FAQs on Artificial Intelligence
- What is Artificial Intelligence?
The Cybersecurity and Infrastructure Security Agency (CISA) Roadmap for Artificial Intelligence uses the National Artificial Intelligence Initiative Act of 2020 definition of Artificial Intelligence: "A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to – (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action." All AI systems are IT systems, composed of computer hardware and software. AI encompasses machine learning (ML), which, according to Executive Order 14110 is “a set of techniques that can be used to train AI algorithms to improve performance on a task based on data.”
- What are some of the risks and benefits associated with AI?
The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer. Although AI software systems might differ from traditional forms of software, fundamental security practices still apply. As the use of AI grows and becomes increasingly incorporated into critical systems, security must be a core requirement and integral to AI system development from the outset and throughout its lifecycle.
AI software will undoubtedly continue to have profound impacts on our society – improving medical systems, empowering small businesses, and revolutionizing education. But just as it will make our lives better and easier, it will do the same for our adversaries, large and small, to inflict harms that today we can only imagine. We must be methodical and intentional on how it is deployed. As AI systems become more capable and accurate, as well as more prevalent and accessible to bad actors, these risks will grow. At the same time, AI tools can support our cyber defense mission. For example, by translating code into memory-safe languages or using natural language processing to parse threat feeds and automate incident reporting information. As the nation’s civilian cyber defense agency and the national coordinator for critical infrastructure security and resilience, CISA must ensure the nation’s critical infrastructure is prepared and capable of managing the risks while leveraging the benefits of this rapidly evolving technology.
For More Information
To learn more, visit Artificial Intelligence. For more information or to seek additional help, contact CISA-ExternalAffairs@cisa.dhs.gov. For media inquiries, please contact CISA Media at CISAMedia@cisa.dhs.gov.