CISA Artificial Intelligence Use Cases
The widened availability, growing capabilities, and increased adoption of artificial intelligence (AI) tools offer new possibilities to supplement and enhance the Cybersecurity and Infrastructure Security Agency's (CISA) ability to execute its cyber defense mission. From spotting anomalies in network data to drafting public messaging, AI tools are increasingly pivotal components of CISA's security and administrative toolkit. While adopting these new tools, CISA works to ensure consistency with the Constitution and all applicable laws and policies, including those addressing federal procurement, privacy, civil rights, and civil liberties.
On December 16, 2024, in close coordination with the Department of Homeland Security (DHS) and in compliance with actions required by Office of Management and Budget (OMB) Memorandum M-24-10, CISA refreshed its AI Use Case Inventory. This comprehensive, whole-of-agency effort to update the inventory reflects the immense potential for AI, now and in the future, to augment our agency's mission and operations.
This update provides additional context for how CISA identified and evaluated seven active reportable AI use cases, discusses uses of AI at the agency that do not yet qualify as active reportable use cases, and describes potential future uses of AI at the agency. This update reflects CISA’s dedication to radical transparency and accountability, a core tenet of CISA’s Secure by Design approach. This principle is particularly important in AI, where unpredictability can pose unique challenges, and CISA strives to set an example for responsible AI governance in the public sector.
Summary of CISA AI Use Case Inventory
CISA currently has seven active, reportable use cases and has identified zero rights-impacting or safety-impacting AI use cases. Some of these have been previously reported, but names and descriptions have been updated for clarity.
- Automated Detection of Personally Identifiable Information (PII) in Cybersecurity Data (DHS-4)
The Automated Indicator Sharing (AIS) service allows public and private-sector organizations to voluntarily share real-time cyber threat information with CISA. Although the purpose of this service is to collect information directly related to potential cyber threats, there is a possibility that personally identifiable information (PII), such as names or addresses, could be incidentally included in submission notes. To enhance privacy, this AI tool uses natural language processing (NLP) to automatically flag potential PII for review and removal by CISA analysts.
The Automated PII Detection and Review Process uses analytics to identify and manage potential PII in submissions. If PII is flagged, the submission is sent to CISA analysts, who are guided by the AI to review and confirm or reject the detection, redacting information if necessary. Privacy experts monitor the system and provide feedback, and the system learns from this feedback, helping to ensure compliance with privacy regulations and improving efficiency by reducing false positives. Regular audits help to ensure the process remains trustworthy and effective. A minimal, illustrative sketch of this flagging pattern appears after this entry.
Formerly Known As: Automated Indicator Sharing (AIS) Automated PII Detection
AI Technique Used: Keyword Extraction, Information Extraction
Stage of System Development Cycle: Operation and Maintenance
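CISA has not published this capability's implementation details, so the sketch below only illustrates the general pattern of NLP-assisted PII flagging for human review. The regex patterns, entity labels, and spaCy model named here are illustrative assumptions, not the agency's actual tooling.

```python
import re
import spacy  # assumes the small English model en_core_web_sm is installed

# Regexes for structured PII such as email addresses and US phone numbers
# (hypothetical patterns chosen for this sketch).
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

nlp = spacy.load("en_core_web_sm")  # off-the-shelf named-entity recognizer

def flag_potential_pii(note: str) -> list[dict]:
    """Return candidate PII spans for analyst review (not automatic redaction)."""
    findings = [{"kind": "EMAIL", "text": m.group()} for m in EMAIL_RE.finditer(note)]
    findings += [{"kind": "PHONE", "text": m.group()} for m in PHONE_RE.finditer(note)]
    # NER flags likely names and locations for a human to confirm or reject.
    for ent in nlp(note).ents:
        if ent.label_ in {"PERSON", "GPE", "LOC"}:
            findings.append({"kind": ent.label_, "text": ent.text})
    return findings

print(flag_potential_pii("Contact Jane Doe at jane.doe@example.com, 555-123-4567."))
```

Keeping the output as flagged candidates, rather than automatic redactions, mirrors the human-in-the-loop review the entry describes.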
- Confidence Scoring for Cybersecurity Threat Indicators (DHS-5)
The Automated Indicator Sharing (AIS) service allows public and private-sector organizations to voluntarily share real-time cyber threat information with CISA. The Confidence Scoring for Cybersecurity Threat Indicators capability, a feature of the AIS service, uses an AI-driven decision tree process to assign a "confidence score" to a submission. The scoring algorithm evaluates factors such as whether technical details within the submission have been previously observed or verified by CISA analysts. The score represents the reliability and completeness of the information submitted and helps analysts prioritize which information to review first. A set of confidence scores is included along with the other fields in the indicator data set, allowing CISA's AIS partners to contextualize indicator information for improved data system ingest. A simplified sketch of this kind of scoring tree follows this entry.
Formerly Known As: Automated Indicator Sharing (AIS) Scoring & Feedback
AI Technique Used: Classification
Stage of System Development Cycle: Operation and Maintenance
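The actual factors and weights in the AIS scoring algorithm are not public. The following minimal sketch only shows the shape of a decision-tree scoring process; the factors (previously_observed, analyst_verified, fields_populated) and every threshold value are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    previously_observed: bool  # technical details seen before in CISA holdings
    analyst_verified: bool     # confirmed by a human analyst
    fields_populated: float    # fraction of expected indicator fields present (0-1)

def confidence_score(ind: Indicator) -> int:
    """Walk a small hand-written decision tree; return a 0-100 confidence score."""
    if ind.analyst_verified:
        return 90 if ind.previously_observed else 75
    if ind.previously_observed:
        return 60 if ind.fields_populated >= 0.8 else 45
    return 30 if ind.fields_populated >= 0.8 else 10

# Example: an unverified but previously observed, mostly complete submission.
print(confidence_score(Indicator(True, False, 0.9)))  # -> 60
```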
- Malware Reverse Engineering (DHS-107)
CISA receives information about computer security vulnerabilities and threats in the form of malicious code samples (malware) from its federal civilian and critical infrastructure partners. These malware samples require manual reverse engineering to find actionable insights, such as indications of potential compromise or adversary-operated command and control.
This AI capability uses deep learning to assist CISA analysts with understanding the content of malware samples and is one feature within a larger toolbox of reverse engineering analytical methods. This use case delivers improved internal government tools for reverse engineering of malware, speeding the development of cyber threat intelligence shared across the government and with CISA partners. An illustrative sketch of one such deep learning approach follows this entry.
AI Technique Used: Classification, Clustering, Generative AI (Text or Code Generation), Information Extraction, Language Translation Technology (LTT)
Stage of System Development Cycle: Initiation
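Purely as an illustration of how deep learning can assist malware triage, the sketch below scores a file from its byte-frequency histogram with a small feed-forward network. The architecture, features, and two-class setup are assumptions for this sketch; a production system would be trained on a large labeled corpus and would sit alongside, not replace, traditional reverse engineering.

```python
import numpy as np
import torch
from torch import nn

def byte_histogram(path: str) -> torch.Tensor:
    """Normalized 256-bin byte-frequency histogram of a file."""
    data = np.fromfile(path, dtype=np.uint8)
    hist = np.bincount(data, minlength=256).astype(np.float32)
    return torch.from_numpy(hist / max(hist.sum(), 1.0))

# A small feed-forward classifier over byte statistics (untrained here;
# real use requires training on labeled benign/malicious samples).
model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 2),  # two hypothetical classes: benign vs. malicious
)

def malicious_probability(path: str) -> float:
    with torch.no_grad():
        logits = model(byte_histogram(path))
        return torch.softmax(logits, dim=-1)[1].item()  # P(malicious)
```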
- Critical Infrastructure Network Anomaly Detection (DHS-106)
CISA is responsible for providing timely technical assistance, risk management support, and incident response capabilities, upon request, to federal and non-federal critical infrastructure partners. In support of this responsibility, critical infrastructure partners can opt in to the CyberSentry program, which monitors critical infrastructure networks. In addition to a larger suite of analytical tools and methods, the CyberSentry program uses unsupervised machine learning (algorithms that analyze unlabeled datasets) to identify trends, patterns, and anomalies in network data.
This AI capability automates manual data fusion and correlation processes and highlights potential anomalies for CISA analyst review. Analysts use an interactive dashboard interface to access the outputs of the AI process and other rule-based heuristics, further query cybersecurity data, and identify information for potential cybersecurity alerts. This use case delivers improved government tools for CISA analysts to hunt and detect malicious threat actors on critical infrastructure networks. A minimal sketch of one common unsupervised technique follows this entry.
Formerly Known As: Critical Infrastructure Anomaly Alerting
AI Technique Used: Anomaly Detection, Continuous Estimation (Regression, Prediction, and Forecasting), Generative AI (Text or Code Generation)
Stage of System Development Cycle: Initiation
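For illustration only: one widely used unsupervised approach to this kind of network anomaly detection is an isolation forest over per-flow features. The features and values below are synthetic assumptions, not CyberSentry data or its actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: one row per network flow
# (bytes sent, bytes received, duration in seconds, distinct destination ports).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5e4, 2e4, 30, 3], scale=[1e4, 5e3, 10, 1], size=(1000, 4))
odd = np.array([[9e5, 1e2, 600, 120]])  # exfiltration-like outlier
flows = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(flows)
scores = detector.decision_function(flows)  # lower = more anomalous
flagged = np.argsort(scores)[:5]            # top candidates for analyst review
print("flows queued for review:", flagged)
```

As in the entry above, the model's output is a prioritized queue for human analysts, not an automated verdict.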
- Security Operations Center (SOC) Network Anomaly Detection (DHS-2403)
CISA threat hunting and Security Operations Center (SOC) analysts process terabytes of daily network log data from the Cyber Analytic and Data System (CADS) Einstein network traffic sensors. CADS is a sensor grid that monitors network traffic for malicious activity to and from participating government departments and agencies. This AI capability uses methods such as unsupervised machine learning (algorithms that analyze unlabeled datasets) to detect trends, patterns, and anomalies in network data.
The AI capability automates manual data fusion and correlation processes and highlights potential anomalies, allowing CISA analysts to narrow the scope of analysis and prioritize data for review. Analysts use an interface to access the outputs of the AI process and other rule-based heuristics, further query cybersecurity data, and prioritize alerts for further investigation. This use case delivers improved government tools for CISA analysts to hunt and detect malicious threat actors on federal civilian agency networks. This use case now includes Security Information and Event Management (SIEM) Alerting Models (DHS-103) and Advanced Network Anomaly Alerting (DHS-105), two similar, previously reported threat hunting use cases that CISA has since streamlined based on similarity of AI use and governance considerations. A sketch of another unsupervised technique that fits this description follows this entry.
Formerly Known As: Threat Hunting and Security Information and Event Management (SIEM) Alerting Models
AI Technique Used: Anomaly Detection, Continuous Estimation (Regression, Prediction, and Forecasting), Generative AI (Text or Code Generation)
Stage of System Development Cycle: Operation and Maintenance
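Again purely as a sketch: density-based clustering is another unsupervised technique matching this description; records that fall in no dense cluster are surfaced as anomalies. The per-host features and parameters below are hypothetical stand-ins, not the deployed system.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Toy per-host features aggregated from sensor logs:
# (connections per hour, unique destinations, mean payload bytes).
rng = np.random.default_rng(2)
hosts = rng.normal(loc=[120, 15, 800], scale=[30, 5, 200], size=(500, 3))
hosts = np.vstack([hosts, [[2500, 400, 64]]])  # beaconing-like outlier

X = StandardScaler().fit_transform(hosts)
labels = DBSCAN(eps=0.9, min_samples=10).fit_predict(X)

# DBSCAN labels points that fit no dense cluster as -1; those are the
# candidates surfaced to SOC analysts for triage.
print("hosts flagged as anomalous:", np.where(labels == -1)[0])
```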
- Draft Tailored Summaries of Media Materials for Different Publication Channels (DHS-2335)
CISA personnel draft product summaries for information sharing with our stakeholders, including federal and non-federal critical infrastructure partners. This use case is a custom generative AI solution that leverages a Large Language Model (LLM), augmented by Retrieval-Augmented Generation (RAG), to extract key themes from approved CISA products and automatically generate tailored summaries using approved templates.
The AI capability accelerates the process of drafting summarized content for CISA's published products. As established in required employee AI training, personnel using the tool will validate the accuracy of information prior to use in accordance with applicable law and policy. The drafts are reviewed, edited, and coordinated by authorized CISA personnel prior to publication. A generic sketch of the retrieval-augmented pattern follows this entry.
AI Technique Used: Generative AI (Text or Code Generation), Text Summarization
Stage of System Development Cycle: Initiation
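CISA has not published this solution's architecture beyond "LLM plus RAG," so the sketch below shows only the generic retrieval-augmented pattern: rank approved source documents against a request, then assemble a grounded prompt. The documents, template, and TF-IDF retriever are stand-in assumptions, and the actual LLM call is deliberately omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pool of approved, already-published CISA products.
approved_products = [
    "Advisory: threat actor exploiting a VPN appliance vulnerability ...",
    "Guide: secure-by-design practices for software manufacturers ...",
    "Alert: ransomware campaign targeting the healthcare sector ...",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank approved source documents against the request; keep the top k."""
    vec = TfidfVectorizer().fit(docs + [query])
    doc_m, query_v = vec.transform(docs), vec.transform([query])
    sims = cosine_similarity(query_v, doc_m).ravel()
    return [docs[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(query: str, template: str) -> str:
    """Assemble a prompt grounded only in the retrieved approved content."""
    context = "\n---\n".join(retrieve(query, approved_products))
    return template.format(context=context, request=query)

prompt = build_prompt(
    "Summarize current ransomware guidance for hospital executives.",
    "Using ONLY the sources below:\n{context}\n\nTask: {request}",
)
# `prompt` would then go to the agency's approved LLM endpoint; per the entry
# above, the generated draft still requires human review before publication.
```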
- CISAChat (DHS-2306)
CISAChat is a custom generative AI solution that enables authorized CISA personnel to interact with, summarize, and search agency-created materials and internal content. Once prompted, the tool searches through relevant CISA files and delivers a focused response based on the inputs. As established in required employee AI training, personnel using the tool will validate the accuracy of information and use it in accordance with applicable law and policy.
This AI capability streamlines the process of finding information and improves CISA personnel's internal customer experience. Currently, multiple CISA program offices use contractor staff to review pre-production content and other internal materials to develop summaries, identify key themes, and improve clarity. Leveraging CISAChat improves internal agency Customer Experience (CX) and saves staff time. A toy sketch of the document search side of such a tool follows this entry.
AI Technique Used: Generative AI (Text or Code Generation), Language Translation Technology (LTT)
Stage of System Development Cycle: Implementation and Assessment
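The search side of a tool like CISAChat can be approximated, under stated assumptions, by a toy keyword index over internal text files; matching documents would then be handed to the generative model as context. The folder layout, tokenization, and ranking here are all hypothetical.

```python
import re
from collections import defaultdict
from pathlib import Path

def build_index(folder: str) -> dict[str, set[Path]]:
    """Map each token to the internal files that contain it."""
    index: dict[str, set[Path]] = defaultdict(set)
    for path in Path(folder).glob("**/*.txt"):
        for token in set(re.findall(r"[a-z0-9]+", path.read_text().lower())):
            index[token].add(path)
    return index

def search(index: dict[str, set[Path]], question: str) -> list[Path]:
    """Return files matching the most question tokens, best match first."""
    hits: dict[Path, int] = defaultdict(int)
    for tok in re.findall(r"[a-z0-9]+", question.lower()):
        for path in index.get(tok, ()):
            hits[path] += 1
    return sorted(hits, key=hits.get, reverse=True)

# The top-ranked passages would be passed to the generative model as context,
# and the grounded answer returned to CISA personnel, who validate it before use.
```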
Other Uses of Artificial Intelligence at CISA
From conceptualizing new AI-powered projects to incorporating AI-enhanced commercial tools, CISA continues to explore new ways to integrate AI tools to more effectively achieve our mission. Although the following uses of AI at CISA do not qualify as active reportable use cases under the guidelines outlined in OMB Memorandum M-24-10, CISA is excited to highlight the additional ways the agency is exploring AI tools to support our work.
Initial Conception
CISA has a number of AI efforts that are in the conceptual phase and are being internally evaluated for necessity and resource allocation. Three of these efforts were reported in a previous version of the DHS AI Use Case Inventory under prior criteria for what constituted an AI use case, but they no longer constitute active use cases under the criteria outlined in OMB Memorandum M-24-10. These concepts are still being explored, and if they advance past the conceptual stage to become active AI use cases as defined by the above guidance, CISA will update the inventory.
- Advanced Analytic Enabled Forensic Investigation (DHS-104)
CISA deploys forensic specialists to analyze cyber events at Federal Civilian Executive Branch (FCEB) departments and agencies, as well as at State, Local, Tribal, Territorial, and critical infrastructure partners. Forensic analysts can use advanced forensic investigation analytic tooling to better understand anomalies and potential threats. Such tooling could give forensic specialists the capability to comb through event data automatically, using mathematical and probabilistic models to ensure high-fidelity anomalies are detected in a timely manner.
- Cyber Threat Intelligence Feed Correlation (DHS-40)
CISA is exploring using generative AI for Cyber Data Feed Correlation and data fusion capabilities to provide accelerated correlation across multiple incoming information feeds from government, commercial, and open sources. Additionally, this capability could enable timelier enrichment to improve the cyber threat intelligence information feeds.
- Cyber Vulnerability Reporting (DHS-42)
CISA vulnerability analysts require advanced automation tools to process data received through various vulnerability reporting channels as well as aggregate the information for automated sharing. Tools leveraging machine learning and natural language processing could increase the accuracy and relevance of data that is filtered and presented to human analysts and decision-makers.
Research and Development
CISA also engages in AI research and development activities. Some of these activities were reported in previous versions of the DHS AI Use Case Inventory based on prior criteria of what constituted an AI use case. Under current criteria, these are no longer reportable based on the definition of “Covered AI” outlined in OMB Memorandum M-24-10. If these efforts advance past the research stage to become active AI use cases as defined by the above guidance, CISA will update the inventory by retiring the R&D use case and creating a new active use case, in accordance with DHS policy.
- Cyber Incident Reporting (DHS-41)
Cyber incident handling specialists must process large amounts of data received through various threat intelligence and cyber incident channels. Currently, CISA is engaged in research and development with DHS S&T to develop prototypes and proofs of concept that would apply generative AI and natural language processing to incident information. These automation tools could increase the accuracy and relevance of data that is filtered and presented to CISA analysts and assist with aggregating the information in reports for presentation and further analysis.
- AI Security and Robustness (DHS-43)
Frameworks, processes, and testing tools are used to govern the acquisition, development, deployment, and maintenance of AI technologies. CISA technology integrators are exploring using AI-enhanced tools to assure the trustworthy, robust, and secure operation of their AI systems. These tools would use machine learning and natural language processing to enhance the assessment of AI technology within the agency by speeding up test case generation and processing of logs.
Commercial
CISA also uses several commercial products with embedded AI to augment agency operations. Examples include:
- penetration testing software that leverages generative AI to provide remediation guidance for vulnerabilities;
- Wi-Fi management software that uses AI to optimize radio configurations, reducing interference and increasing performance; and
- a cloud security product that uses machine learning algorithms to apply data loss prevention policies without requiring manual configuration.
In accordance with DHS policy, CISA also allows the use of commercial generative AI tools such as large language models and helps ensure compliant use to safeguard data and protect privacy and individual rights. However, none of these uses of AI is reportable as an active AI use case based on the definition of "Covered AI" outlined in OMB Memorandum M-24-10.