Learn AI Security
Tutorials, videos, writeups, and hands-on guides for mastering LLM security, prompt injection, RAG attacks, and the OWASP Top 10 for LLM Applications.
All Articles
March 10, 2025 · 12 min
Getting Started with AI Goat
A complete setup guide for AI Goat — the open-source, intentionally vulnerable AI security lab. Covers prerequisites, installation, login credentials, defense levels, and your first attack.
March 8, 2025 · 10 min
RAG Poisoning Explained: Attacking Retrieval-Augmented Generation Systems
How attackers poison RAG knowledge bases to manipulate AI responses, exfiltrate data, and bypass safety guardrails. Practical examples and defenses.
March 1, 2025 · 8 min
Understanding Prompt Injection: The Most Critical LLM Vulnerability
A deep dive into prompt injection attacks — how they work, why LLMs are vulnerable, and how to defend against them. Includes hands-on examples you can try in AI Goat.
Content Categories
Tutorials
Step-by-step guides on LLM security concepts and AI Goat features.
Video Walkthroughs
Watch demonstrations of attack techniques and defense strategies.
Writeups
Detailed challenge writeups and exploit analysis from the community.
How-To Guides
Practical recipes for specific AI security tasks and configurations.
Coming Soon
Topics we are actively working on.
Your First Prompt Injection Attack
Understanding OWASP LLM Top 10
RAG Poisoning: A Practical Guide
System Prompt Extraction Techniques
Multi-Step Jailbreak Strategies
Defense Level Comparison: L0 vs L1 vs L2
Running AI Goat for OWASP Workshops
Building Custom Attack Scenarios
AI Security for Developers: Key Takeaways
Want to contribute a tutorial or writeup?
We welcome community-authored content covering AI security, LLM exploitation, and defense techniques.