Prompt Leaking

Prompt leaking occurs when attackers trick AI models into revealing sensitive information from their training data or system prompts. This technique exploits model vulnerabilities to extract confidential details, potentially compromising privacy and security of AI systems.

Visit the following resources to learn more: