10 ChatGPT Prompts L1 SOC Analysts Can Use in Their Daily Work

Security operations center (SOC) analysts are expected to process a constant stream of alerts, often under tight response timelines.
At the same time, they are expected to investigate accurately, document clearly, and communicate findings to both technical and non-technical stakeholders. This is where generative artificial intelligence (GenAI) tools such as ChatGPT can be helpful.
The table below provides a quick reference to 10 ChatGPT prompts that can help L1 SOC analysts.
While these prompts are useful for L1 analysts, they can also support L2/L3 analysts and anyone looking to better understand common incident response (IR) tasks.
Be careful not to enter sensitive data into public AI tools, and consider using these prompts to train an AI agent to help automate parts of the workflow.
| Prompt # | Prompt Use Case | What It Helps With | Why It Matters in a SOC |
|---|---|---|---|
| 1 | Summarize a security alert | Summarizes alert data and translates it for non-technical audiences. | Helps L1 analysts triage and understand alert significance. |
| 2 | Analyze raw logs | Identifies suspicious activity, indicators, and patterns in logs. | Helps with investigation of log data. |
| 3 | Create a triage checklist | Builds a step-by-step investigation workflow, if a playbook is not already available. | Helps L1 analysts respond to unfamiliar alerts. |
| 4 | Draft case notes | Turns rough notes into clean ticket documentation. | Helps improve note quality, handoffs, and auditability. |
| 5 | Write an escalation summary | Creates a concise escalation to L2/L3 analysts. | Helps reduce back-and-forth on ticket data. |
| 6 | Analyze phishing emails | Reviews suspicious emails for red flags. | Helps analysts assess phishing email threats. |
| 7 | Map to MITRE ATT&CK | Aligns observed behavior to TTPs. | Helpful for L1 analysts to improve threat context and reporting. |
| 8 | Generate threat hunting ideas | Suggests hypotheses and follow-up hunt paths. | Helpful for L1/L2 analysts who are newer to threat hunting. |
| 9 | Improve SIEM detections | Suggests detection logic, tuning ideas, and false positive considerations. | Helps support detection coverage. |
| 10 | Write an executive summary | Translates technical findings into business language. | Helps communicate incident data clearly to leadership and stakeholders. |
When used appropriately, AI tools like ChatGPT can help SOC analysts accelerate repetitive tasks such as summarizing alerts, identifying suspicious patterns in logs, drafting ticket notes, and translating technical findings into business language.
However, GenAI and AI agents should not fully replace human judgment. Instead, they should serve as a force multiplier that helps analysts organize information better and reduce time spent on repetitive writing and interpretation tasks. The prompts below are designed to help SOC analysts use ChatGPT as part of their daily operations workflow.
1. Summarize a security alert
Security tools often generate verbose detections filled with vendor-specific language, process details, metadata, or behavior descriptions that can slow down triage — especially for junior analysts.
Prompt: Summarize the following security alert data in simple terms for an L1 SOC analyst. Include what happened, why it matters, likely severity, and the first three investigation steps I should take: [paste alert/log/EDR detection].
This type of prompt helps transform raw alert data into a more readable explanation.
Rather than forcing the L1 analyst to manually decode every field or technical phrase, ChatGPT can produce a concise summary explaining what the alert indicates and why it may be important.
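For teams that want to reuse prompts like this consistently, or feed them into an automated workflow, the prompt text can be kept as a template and filled in programmatically. The sketch below is illustrative; the function name and structure are assumptions, not part of any specific SDK.

```python
# Hypothetical helper that fills the alert-summary prompt template with raw
# alert text. The template wording mirrors the prompt above.

ALERT_SUMMARY_TEMPLATE = (
    "Summarize the following security alert data in simple terms for an L1 SOC "
    "analyst. Include what happened, why it matters, likely severity, and the "
    "first three investigation steps I should take:\n\n{alert}"
)

def build_summary_prompt(alert_text: str) -> str:
    """Return a ready-to-send prompt for the given raw alert text."""
    return ALERT_SUMMARY_TEMPLATE.format(alert=alert_text.strip())

if __name__ == "__main__":
    # Made-up example detection; real alert data should be sanitized first.
    alert = '{"rule": "Suspicious PowerShell EncodedCommand", "host": "WKSTN-042"}'
    print(build_summary_prompt(alert))
```

Keeping the template in one place also makes it easier to refine the wording over time without retraining analysts on a new prompt.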
2. Analyze raw logs for suspicious activity
Reviewing raw log data manually can be time-consuming, especially when analysts are trying to determine whether a pattern is benign or potentially malicious.
Prompt: Analyze the following logs and identify suspicious activity, notable indicators, possible attacker behavior, and recommended next steps for investigation: [paste/attach log data].
This prompt can help L1 analysts identify unusual authentication attempts, repeated failures, odd process launches, suspicious domains, unusual geographic access, or signs of command-and-control (C2) activity.
While the analyst still needs to validate the output, ChatGPT or AI agents can help reduce initial review time and point the analyst in the right direction.
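A lightweight pre-filter can also shrink what gets pasted into the prompt in the first place. The sketch below pulls out source IPs from failed-login lines so the analyst (or an AI agent) starts from a summary rather than the full log dump; the syslog-style format is a made-up example, not a specific vendor's schema.

```python
import re
from collections import Counter

# Illustrative pre-filter: count failed-login lines per source IP before
# handing the (sanitized) data to an AI tool for deeper analysis.

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def failed_logins_by_ip(log_lines):
    """Count lines mentioning a failed login, grouped by source IP."""
    counts = Counter()
    for line in log_lines:
        if "failed" in line.lower():
            for ip in IP_RE.findall(line):
                counts[ip] += 1
    return counts

logs = [
    "Jan 10 09:01:02 sshd: Failed password for root from 203.0.113.7",
    "Jan 10 09:01:05 sshd: Failed password for admin from 203.0.113.7",
    "Jan 10 09:02:11 sshd: Accepted password for alice from 198.51.100.4",
]
print(failed_logins_by_ip(logs))  # Counter({'203.0.113.7': 2})
```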
3. Create a triage checklist for an alert
ChatGPT can help L1 analysts approach unfamiliar alerts with a more consistent, repeatable investigative process when no incident response playbooks exist.
Prompt: Act like an experienced L1 SOC analyst. Based on this alert, create a step-by-step triage checklist I should follow, including what to validate, what evidence to collect, and when to escalate: [paste/attach alert details].
This prompt helps L1 analysts handle alerts such as suspicious PowerShell activity, impossible travel, phishing, or unusual outbound traffic by providing a structured starting point for investigation.
4. Draft professional case notes or ticket updates
Documentation is a core part of SOC work. Analysts are expected to write clear ticket notes that explain what was investigated, what evidence was reviewed, what actions were taken, and the current status. Poor documentation can make handoffs difficult and create confusion later during escalation or deeper incident review.
Prompt: Write a professional SOC case note based on the following investigation details. Keep it concise, clear, and suitable for an incident ticket. Include findings, actions taken, and current status: [paste/attach notes].
This prompt helps analysts turn rough notes into cleaner, more professional case documentation. It can improve consistency in case handling and save time, especially when analysts are juggling multiple tickets at once.
5. Draft an escalation message to Tier 2 or incident response
Not every alert can be resolved at the initial triage stage. When an L1 analyst identifies signs of a more serious threat, such as possible credential compromise, malware execution, suspicious administrative behavior, or ransomware activity, they often need to escalate quickly and clearly.
Prompt: Draft a concise escalation message for an L2/L3 analyst based on the following alert and findings. Include what was observed, why it is concerning, what has already been validated, and recommended next actions: [paste/attach findings].
This helps L1 analysts communicate the information without burying the important details in extra words. Good escalation communication reduces back-and-forth and makes it easier for the receiving team to pick up the investigation efficiently.
6. Analyze suspected phishing emails
Phishing is a common issue that requires analysts to assess email content, sender details, headers, links, and social engineering cues to determine whether a message is malicious or simply spam.
Prompt: Analyze this suspected phishing email and identify red flags, likely attacker tactics, suspicious indicators, and recommended response actions. Include whether this appears to be credential harvesting, malware delivery, business email compromise, or simply spam: [paste/attach email headers/body/URLs].
This prompt can help L1 analysts identify spoofing indicators, suspicious domains, urgency language, impersonation tactics, attachment risks, and possible malicious links. It can also help junior analysts better understand how phishing attacks are structured.
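Some of these red flags can also be checked mechanically before (or alongside) asking an AI tool. The sketch below uses Python's standard-library `email` module to flag one common phishing indicator, a mismatch between the From and Reply-To domains; the sample message and the single-signal scoring are illustrative, and real triage weighs many more signals (SPF/DKIM/DMARC results, URLs, attachments).

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    """Return True when Reply-To points to a different domain than From."""
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

# Made-up example message with illustrative domains.
sample = (
    "From: IT Support <support@example.com>\n"
    "Reply-To: helpdesk@attacker-domain.test\n"
    "Subject: Password expiry notice\n\n"
    "Your password expires today. Log in here to keep access."
)
print(reply_to_mismatch(sample))  # True
```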
7. Map activity to the MITRE ATT&CK framework
The MITRE ATT&CK framework is used to help classify threat actor tactics, techniques, and procedures (TTPs). L1 analysts often need to understand where suspicious activity fits within the broader attack lifecycle.
Prompt: Map the following observed activity to likely MITRE ATT&CK tactics and techniques. Explain why each mapping fits and what additional evidence would help confirm it: [paste findings or event summary].
This helps L1 analysts think more strategically about attacker behavior rather than viewing an alert in isolation. It can also improve reporting quality, threat hunting, and communication with other teams.
8. Generate threat hunting ideas
SOC analysts are not limited to reactive alert handling. In more mature environments, analysts are often expected to proactively hunt for signs of compromise, often starting from indicators of compromise (IOCs).
While threat hunting is traditionally done by more senior-level analysts, the use of AI agents in SOCs is freeing L1 analysts to upskill for threat hunting and deeper threat intelligence activities.
Prompt: Based on this alert or suspicious behavior, suggest 10 threat hunting hypotheses and what data sources or queries I should use to investigate further: [paste/attach alert, IOC, or behavior].
This type of prompt can help L1 analysts expand an isolated detection into broader hunting activity across the environment.
For example, if a suspicious PowerShell command appears on one host, AI can help suggest ways to look for similar execution elsewhere in endpoint, authentication, proxy, or DNS logs.
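That pivot can be sketched in a few lines. The example below searches several illustrative in-memory log sources for a pattern; in practice the equivalent queries would run in the SIEM or EDR console, and the source names, sample lines, and regex are assumptions.

```python
import re

def hunt(pattern: str, sources: dict) -> dict:
    """Return {source_name: matching_lines} for a case-insensitive regex."""
    rx = re.compile(pattern, re.IGNORECASE)
    return {
        name: [line for line in lines if rx.search(line)]
        for name, lines in sources.items()
    }

# Made-up log snippets standing in for real endpoint, proxy, and DNS data.
sources = {
    "endpoint": ["powershell.exe -enc SQBFAFgA ...", "notepad.exe report.txt"],
    "proxy": ["GET http://203.0.113.9/payload.ps1"],
    "dns": ["query A updates.example.com"],
}
hits = hunt(r"powershell|\.ps1", sources)
print({name: lines for name, lines in hits.items() if lines})
```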
9. Improve or create SIEM detection ideas
Although not every L1 SOC analyst is formally part of a detection engineering team, many identify detection or control gaps while investigating incidents.
AI tools like ChatGPT can help analysts think through how suspicious behaviors could be translated into better detection logic.
Prompt: Help me create or improve a SIEM detection for the following suspicious behavior. Include detection logic ideas, key fields to monitor, false positive considerations, and tuning recommendations: [describe activity].
This can be useful for developing ideas around detections of brute-force activity, suspicious PowerShell usage, privilege escalation, unusual service creation, lateral movement, or abnormal authentication patterns.
It also helps L1 analysts think more like defenders who improve visibility and tune their detections, rather than just responding to alerts.
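As a concrete illustration of the kind of detection logic the prompt might help refine, the sketch below flags any source that produces a threshold of failed logins inside a sliding time window, a typical brute-force rule. The threshold, window, field names, and event format are all assumptions to tune per environment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def brute_force_sources(events, threshold=3, window=timedelta(minutes=5)):
    """Return source IPs with >= threshold failures inside the window."""
    by_src = defaultdict(list)
    for ev in events:
        if ev["outcome"] == "failure":
            by_src[ev["src_ip"]].append(ev["time"])
    flagged = set()
    for src, times in by_src.items():
        times.sort()
        # Slide a window of `threshold` consecutive failures over the timeline.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(src)
                break
    return flagged

# Made-up events: three rapid failures from one IP, one failure from another.
t0 = datetime(2024, 1, 10, 9, 0)
events = [
    {"src_ip": "203.0.113.7", "outcome": "failure", "time": t0 + timedelta(seconds=s)}
    for s in (0, 30, 60)
] + [{"src_ip": "198.51.100.4", "outcome": "failure", "time": t0}]
print(brute_force_sources(events))  # {'203.0.113.7'}
```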
10. Write an executive-friendly incident summary
One of the more overlooked SOC skills is communication.
Analysts are often asked to explain incidents to stakeholders who are not deeply technical, such as managers, compliance teams, legal teams, or executives. Technical jargon that makes sense to analysts may be confusing or unhelpful to business stakeholders.
Prompt: Write a non-technical incident summary for a manager or executive audience based on the following investigation details. Explain what happened, business impact, current status, and recommended next steps without using heavy technical jargon: [paste/attach incident details].
This prompt helps analysts practice translating technical findings into business language. That is an important skill because incidents are not just technical events — they often have operational, financial, reputational, and compliance implications as well.
Important reminder: Do not paste sensitive data into public AI tools
While tools like ChatGPT can be useful for SOC workflows, they must be used responsibly.
SOC analysts should never paste sensitive, confidential, regulated, or internal/proprietary security data into a public AI tool unless their organization has explicitly approved that usage.
Examples of data that should not be pasted into unapproved public AI systems include:
- Customer or employee personal data
- Credentials or secrets
- Internal IP addresses or asset inventories
- Proprietary logs
- Sensitive incident details
- Regulated or classified information
- Internal investigation notes containing identifiable system or user data
A safer approach is to sanitize or redact information before using AI. That may include removing usernames, hostnames, domains, email addresses, IP addresses, and file hashes associated with internal systems, as well as any information that could expose the organization or its users.
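Parts of that sanitization step can be scripted. The minimal sketch below redacts a few identifier types (IPv4 addresses, email addresses, and hosts under an illustrative internal domain) before anything is pasted into an external tool; the patterns and the `.corp.internal` suffix are assumptions, and real sanitization should follow the organization's data-handling policy.

```python
import re

# Order matters: IPs first, then emails, then internal hostnames.
PATTERNS = [
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[REDACTED_IP]"),
    (re.compile(r"\b[\w.+-]+@[\w.-]+\.\w+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b[A-Za-z][\w-]*\.corp\.internal\b"), "[REDACTED_HOST]"),
]

def sanitize(text: str) -> str:
    """Replace known identifier patterns with redaction tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

line = "Failed login for j.doe@example.com on fs01.corp.internal from 10.0.0.5"
print(sanitize(line))
# Failed login for [REDACTED_EMAIL] on [REDACTED_HOST] from [REDACTED_IP]
```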
Ideally, SOC teams should use only approved enterprise AI tools that align with the organization’s legal, privacy, and security requirements.
Bottom line
ChatGPT and other AI tools can be practical productivity aids for SOC analysts when used appropriately.
They help analysts summarize, structure, document, interpret, and communicate security information more efficiently, reducing repetitive work, improving consistency, and enabling greater focus on detection engineering, threat hunting, and threat intelligence.
However, the value of AI in the SOC depends on how it is used. Analysts still need to validate findings, apply critical thinking, and follow internal procedures/playbooks.
AI can speed up the work, but it should never fully replace humans. For L1 SOC analysts looking to improve efficiency, these prompts offer a practical starting point for integrating AI into daily workflows.
Editor’s note: This article originally appeared on our sister publication, eSecurityPlanet.
