Prompt Injection is a security vulnerability where attackers craft malicious inputs designed to manipulate an AI system's behavior. These crafted prompts can override or bypass the system's intended constraints, potentially leading to unintended outputs, leakage of confidential information, or malicious actions. It highlights the importance of input sanitization and robust validation to ensure AI systems operate securely and as intended.