AI Goat

OWASP Top 10 for LLM Applications

The definitive framework for understanding and mitigating vulnerabilities in Large Language Model applications. Updated for 2025.

What Changed in 2025

Reordered: LLM01 Prompt Injection
Renamed: LLM02 Sensitive Information Disclosure
Reordered: LLM03 Supply Chain Vulnerabilities
Renamed: LLM04 Data and Model Poisoning
Reordered: LLM05 Improper Output Handling
New: LLM07 System Prompt Leakage
New: LLM08 Vector and Embedding Weaknesses
New: LLM09 Misinformation
Renamed: LLM10 Unbounded Consumption
LLM01

Prompt Injection

Manipulating LLM behavior through crafted inputs that override system instructions. Includes direct injection, indirect injection via external data sources, and multi-step jailbreak attacks.

Risk / Impact

Complete control over LLM outputs, data exfiltration, unauthorized actions, bypassing safety guardrails.
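One common mitigation is screening untrusted text (for example, a retrieved web page used in indirect injection) before it reaches the model. Below is a minimal sketch of such a heuristic filter; the pattern list and the `flag_possible_injection` helper are illustrative assumptions, and pattern matching alone cannot stop prompt injection, it only flags obvious attempts for logging or quarantine.

```python
import re

# Illustrative phrases commonly seen in injection attempts (not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def flag_possible_injection(untrusted_text: str) -> bool:
    """Return True if the untrusted text matches a known injection phrase."""
    lowered = untrusted_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A flagged document can then be dropped from the context window or routed to review rather than fed to the model verbatim.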

LLM02

Sensitive Information Disclosure

LLMs revealing confidential information from training data, system prompts, or connected data sources through carefully crafted queries.

Risk / Impact

Exposure of PII, credentials, proprietary data, system architecture details.
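A typical defense-in-depth layer is redacting obvious secrets and PII from model output before it reaches the user. The sketch below is a hypothetical post-processing filter; the regexes (and the `sk-` key format) are illustrative assumptions, and production systems should use dedicated DLP tooling rather than a handful of patterns.

```python
import re

# Illustrative redaction patterns; real deployments need far broader coverage.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"sk-[A-Za-z0-9]{16,}"),  # assumed key format
}

def redact(output: str) -> str:
    """Replace matched secrets/PII in model output with labeled placeholders."""
    for label, pattern in REDACTIONS.items():
        output = pattern.sub(f"[REDACTED {label}]", output)
    return output
```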

LLM03

Supply Chain Vulnerabilities

Risks from compromised training data, pre-trained models, plugins, and third-party components integrated into LLM applications.

Risk / Impact

Backdoored models, data poisoning through training pipelines, malicious plugins.
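One concrete supply-chain control is pinning and verifying artifact digests: check a downloaded model file against a known SHA-256 hash before loading it, so a tampered file from a model hub or mirror is rejected. A minimal sketch (the `verify_artifact` helper is a hypothetical name):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Stream the file and compare its SHA-256 digest to a pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

The expected digest should come from a trusted channel (e.g. a signed manifest), not from the same source as the artifact itself.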

LLM04

Data and Model Poisoning

Corrupting training data, fine-tuning datasets, or RAG knowledge bases to manipulate model behavior and introduce backdoors.

Risk / Impact

Biased outputs, backdoor triggers, degraded model performance, misinformation propagation.

LLM05

Improper Output Handling

Failing to validate, sanitize, or properly handle LLM-generated outputs before passing them to downstream systems or users.

Risk / Impact

XSS, SSRF, privilege escalation, remote code execution via LLM outputs.
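The core fix is treating model output like any other untrusted user input. A minimal sketch, assuming the output is destined for an HTML page: escape it before embedding, so a response containing markup cannot execute as script (the wrapper markup here is illustrative).

```python
import html

def render_llm_output(raw: str) -> str:
    # Escape model output before embedding it in HTML to block XSS;
    # the same principle applies to SQL, shell commands, and URLs.
    return f"<div class='llm-answer'>{html.escape(raw)}</div>"
```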

LLM06

Excessive Agency

LLM systems granted too many permissions, functions, or autonomy, enabling them to perform unintended or harmful actions.

Risk / Impact

Unauthorized data access, unintended system modifications, privilege abuse.
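Excessive agency is usually mitigated with least privilege: the model may only invoke tools from an explicit allowlist, and anything else is rejected rather than executed. A hypothetical dispatcher sketch (tool names are assumptions):

```python
# Illustrative allowlist: read-only tools only, no write/delete capability.
ALLOWED_TOOLS = {"search_docs", "get_weather"}

def dispatch(tool_name: str, handlers: dict):
    """Invoke a model-requested tool only if it is explicitly permitted."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not permitted")
    return handlers[tool_name]()
```

Denied calls should be logged, since a model repeatedly requesting forbidden tools can indicate an injection attempt.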

LLM07

System Prompt Leakage

Extracting hidden system prompts that contain sensitive business logic, security controls, or proprietary instructions.

Risk / Impact

Exposure of security controls, business logic, API keys embedded in prompts.
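Beyond keeping secrets out of prompts entirely, a response-side guard can check whether the output reproduces a long verbatim chunk of the system prompt before returning it. The sliding-window check below is a naive illustrative sketch (the helper name and the 40-character threshold are assumptions):

```python
def leaks_system_prompt(response: str, system_prompt: str,
                        min_overlap: int = 40) -> bool:
    """Return True if the response echoes min_overlap+ consecutive
    characters of the system prompt verbatim."""
    for i in range(max(1, len(system_prompt) - min_overlap + 1)):
        if system_prompt[i:i + min_overlap] in response:
            return True
    return False
```

Note this only catches verbatim leakage; paraphrased prompt contents pass through, which is why prompts should never contain secrets in the first place.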

LLM08

Vector and Embedding Weaknesses

Exploiting vulnerabilities in vector databases and embedding pipelines used in RAG architectures to poison retrieval results.

Risk / Impact

Manipulated search results, incorrect context injection, knowledge base corruption.
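One partial mitigation is provenance filtering at retrieval time: only pass chunks from trusted sources into the context window, shrinking the blast radius of a poisoned knowledge base. A minimal sketch (source names and chunk schema are assumptions; this does not address embedding-space attacks themselves):

```python
# Illustrative source allowlist for retrieved RAG chunks.
TRUSTED_SOURCES = {"internal_wiki", "product_docs"}

def filter_chunks(chunks: list[dict]) -> list[dict]:
    """Keep only retrieved chunks whose 'source' is on the allowlist."""
    return [c for c in chunks if c.get("source") in TRUSTED_SOURCES]
```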

LLM09

Misinformation

LLMs generating false, misleading, or fabricated information (hallucinations) that users may trust and act upon.

Risk / Impact

Incorrect decisions, reputational damage, legal liability, safety risks.

LLM10

Unbounded Consumption

Attacks that cause LLM applications to consume excessive resources, leading to denial of service, cost escalation, or model degradation.

Risk / Impact

Service disruption, financial loss from API costs, resource exhaustion.