Hacker News
- Exploiting GPT-3 prompts that order the model to ignore previous directions https://twitter.com/goodside/status/1569128808308957185 15 comments
Linking pages
- The Anatomy of Autonomy: Why Agents are the next AI Killer App after ChatGPT https://www.latent.space/p/agents 153 comments
- Reverse Prompt Engineering for Fun and (no) Profit https://lspace.swyx.io/p/reverse-prompt-eng 82 comments
- TechScape: AI’s dark arts come into their own | Artificial intelligence (AI) | The Guardian https://www.theguardian.com/technology/2022/sep/21/ais-dark-arts-come-into-their-own 2 comments
- Prompt injection: GPT-3 has a serious security flaw https://the-decoder.com/prompt-injection-gpt-3-has-a-serious-security-flaw/ 1 comment
- Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack | Ars Technica https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/ 0 comments
- 7 methods to secure LLM apps from prompt injections and jailbreaks [Guest] https://artificialintelligencemadesimple.substack.com/p/mitigate-prompt-attacks 0 comments