Hacker News
- Gandalf – Game to make an LLM reveal a secret password https://gandalf.lakera.ai/ 351 comments
- Gandalf – Lakera – Prompt Injection Challenge https://gandalf.lakera.ai/ 8 comments
Linking pages
- Edition 21: A framework to securely use LLMs in companies - Part 1: Overview of Risks https://boringappsec.substack.com/p/edition-21-a-framework-to-securely 25 comments
- Gandalf AI game reveals how anyone can trick ChatGPT into performing evil acts | Evening Standard https://www.standard.co.uk/tech/gandalf-ai-chatgpt-openai-cybersecurity-lakera-prompt-b1082927.html 7 comments
- [Guest post] Edition 24: Pentesting LLM apps 101 https://boringappsec.substack.com/p/guest-post-edition-24-pentesting 1 comment
- Reverse-engineering GPTs for fun and data https://andrei.fyi/blog/reverse-engineering-gpts/ 1 comment
- Generally AI Episode 1: Large Language Models https://www.infoq.com/podcasts/large-language-models/ 0 comments
- 7 methods to secure LLM apps from prompt injections and jailbreaks [Guest] https://artificialintelligencemadesimple.substack.com/p/mitigate-prompt-attacks 0 comments