Hacker News
- The Dual LLM pattern for building AI assistants that can resist prompt injection https://simonwillison.net/2023/Apr/25/dual-llm-pattern/ 109 comments
Lobsters
- The Dual LLM pattern for building AI assistants that can resist prompt injection https://simonwillison.net/2023/Apr/25/dual-llm-pattern/ 7 comments (tags: ai, security)
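
For readers arriving from the discussions above, the linked post's core idea is: a privileged LLM has tool access but only ever sees trusted input, a quarantined LLM processes untrusted content but has no tool access, and ordinary controller code passes the quarantined model's output around by variable reference (e.g. $VAR1) rather than by content, so the privileged LLM never reads injected text. Below is a minimal sketch of such a controller; the class and function names are illustrative assumptions, not code from the post, and the LLM calls are left as injectable callables.

```python
# Sketch of the Dual LLM pattern's controller (names are assumptions, not from the post).
import re
from typing import Callable

class DualLLMController:
    """Plain (non-LLM) code mediating between the two models.

    - The privileged LLM may direct tools but only ever sees trusted input,
      plus opaque tokens like $VAR1 standing in for untrusted content.
    - The quarantined LLM reads untrusted content but has no tool access;
      its output is stored by reference, never echoed to the privileged LLM.
    """

    def __init__(self,
                 privileged_llm: Callable[[str], str],
                 quarantined_llm: Callable[[str], str]):
        self.privileged_llm = privileged_llm
        self.quarantined_llm = quarantined_llm
        self.variables: dict[str, str] = {}  # untrusted text lives only here

    def quarantine(self, instruction: str, untrusted_text: str) -> str:
        """Run the quarantined LLM over untrusted text; return an opaque name."""
        result = self.quarantined_llm(f"{instruction}\n\n{untrusted_text}")
        name = f"$VAR{len(self.variables) + 1}"
        self.variables[name] = result
        return name  # the privileged LLM only ever sees this token

    def plan(self, trusted_prompt: str) -> str:
        """Ask the privileged LLM what to do, referring to variables by name."""
        return self.privileged_llm(trusted_prompt)

    def render(self, template: str) -> str:
        """Substitute variable contents only at the final display step,
        outside any LLM context, so injected instructions never execute."""
        return re.sub(r"\$VAR\d+",
                      lambda m: self.variables.get(m.group(0), m.group(0)),
                      template)

# Hypothetical usage: summarize an untrusted email, then let the privileged
# LLM compose a reply that references the summary only by name.
#   var = controller.quarantine("Summarize this email.", email_body)
#   reply = controller.plan(f"Draft a reply acknowledging {var}.")
#   print(controller.render(reply))
```

The key design choice, per the post, is that substitution of untrusted content happens only in non-LLM code at render time, which is what keeps injected instructions from ever reaching a model that can act on them.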
Linking pages
- GitHub - paulpierre/RasaGPT: 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram https://github.com/paulpierre/RasaGPT 120 comments
- AI Agents #3: Practical Approaches to AI Agent Security https://laiyer.substack.com/p/ai-agents-3-practical-ai-agent-security 3 comments
- How We Broke LLMs: Indirect Prompt Injection https://kai-greshake.de/posts/llm-malware/ 1 comment
- Taming the Wild Genius of Large Language Models https://danshiebler.com/2023-05-15-taming-large-language-models/ 1 comment
- ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery · Embrace The Red https://embracethered.com/blog/posts/2023/chatgpt-webpilot-data-exfil-via-markdown-injection/ 0 comments
- ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data · Embrace The Red https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./ 0 comments
- GitHub - jthack/PIPE: Prompt Injection Primer for Engineers https://github.com/jthack/PIPE 0 comments
- GitHub - tldrsec/prompt-injection-defenses: Every practical and proposed defense against prompt injection. https://github.com/tldrsec/prompt-injection-defenses 0 comments
- Automatic Tool Invocation when Browsing with ChatGPT - Threats and Mitigations · Embrace The Red https://embracethered.com/blog/posts/2024/llm-apps-automatic-tool-invocations/ 0 comments