Linking pages
- Prompt Injections are bad, mkay? https://greshake.github.io/ 158 comments
- The Dual LLM pattern for building AI assistants that can resist prompt injection https://simonwillison.net/2023/Apr/25/dual-llm-pattern/ 116 comments
- Johann Rehberger on Twitter: "👉 Let ChatGPT visit a website and have your email stolen. Plugins, Prompt Injection and Cross Plug-in Request Forgery. Not sharing “shell code” but… 🤯 Why no human in the loop? @openai Would mitigate the CPRF at least #OPENAI #ChatGPT #plugins #infosec #ai #humanintheloop https://t.co/w3xtpyexn3" / Twitter https://twitter.com/wunderwuzzi23/status/1659411665853779971 55 comments
- ChatGPT Vulnerable to Prompt Injection via YouTube Transcripts | Tom's Hardware https://www.tomshardware.com/news/chatgpt-vulnerable-to-youtube-prompt-injection 9 comments
- Confused deputy problem - Wikipedia https://en.wikipedia.org/wiki/Confused_deputy_problem 2 comments
- Indirect Prompt Injection via YouTube Transcripts · Embrace The Red https://embracethered.com/blog/posts/2023/chatgpt-plugin-youtube-indirect-prompt-injection/ 1 comment
- AI Injections: Untrusted LLM responses and why context matters · Embrace The Red https://embracethered.com/blog/posts/2023/ai-injections-threats-context-matters/ 0 comments
- ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery · Embrace The Red https://embracethered.com/blog/posts/2023/chatgpt-webpilot-data-exfil-via-markdown-injection/ 0 comments
Article: ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data · Embrace The Red