Hacker News
- Indirect Prompt Injection on Bing Chat https://greshake.github.io/ 143 comments
- Prompt Injections are bad, mkay? https://greshake.github.io/ 2 comments
- Indirect Prompt Injection on Bing Chat https://greshake.github.io/ 2 comments (r/artificial)
- Indirect Prompt Injection on Bing Chat https://greshake.github.io/ 9 comments (r/netsec)
Linking pages
- Researchers create AI worms that can spread from one system to another | Ars Technica https://arstechnica.com/ai/2024/03/researchers-create-ai-worms-that-can-spread-from-one-system-to-another/ 69 comments
- GitHub - greshake/llm-security: New ways of breaking app-integrated LLMs https://github.com/greshake/llm-security 53 comments
- The Dark Side of LLMs | Medium https://medium.com/@kgreshake/the-dark-side-of-llms-we-need-to-rethink-large-language-models-now-6212aca0581a 10 comments
- Here Come the AI Worms | WIRED https://www.wired.com/story/here-come-the-ai-worms/ 9 comments
- The Hacking of ChatGPT Is Just Getting Started | WIRED https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/ 3 comments
- 🎁 Your guide to AI: April 2023 https://nathanbenaich.substack.com/p/your-guide-to-ai-april-2023 1 comment
- The Hacking of ChatGPT Is Just Getting Started | WIRED UK https://www.wired.co.uk/article/chatgpt-jailbreak-generative-ai-hacking 1 comment
- How We Broke LLMs: Indirect Prompt Injection https://kai-greshake.de/posts/llm-malware/ 1 comment
- In Escalating Order of Stupidity https://kai-greshake.de/posts/in-escalating-order-of-stupidity/ 1 comment
- LLMs’ Data-Control Path Insecurity - Schneier on Security https://www.schneier.com/blog/archives/2024/05/llms-data-control-path-insecurity.html 1 comment
- GitHub - rzhade3/adversarial-ai-reading-list: Reading list of more resources to learn about Adversarial Attacks on AI Systems https://github.com/rzhade3/adversarial-ai-reading-list 0 comments
- The Dark Side of LLMs | Better Programming https://betterprogramming.pub/the-dark-side-of-llms-we-need-to-rethink-large-language-models-now-6212aca0581a 0 comments
- GitHub - newhouseb/clownfish: Constrained Decoding for LLMs against JSON Schema https://github.com/newhouseb/clownfish 0 comments
- clownfish/README.md at main · newhouseb/clownfish · GitHub https://github.com/newhouseb/clownfish/blob/main/README.md 0 comments
- The AI-risk of a synthetic Theory of Mind - by René Walter https://goodinternet.substack.com/p/the-ai-risk-of-a-synthetic-theory 0 comments
- ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data · Embrace The Red https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./ 0 comments