Hacker News
Linking pages
- Inject My PDF: Prompt Injection for your Resume https://kai-greshake.de/posts/inject-my-pdf/ 13 comments
- AI Hacking Games (Jailbreak CTFs) – Security Café https://securitycafe.ro/2023/05/15/ai-hacking-games-jailbreak-ctfs/ 5 comments
- The Hacking of ChatGPT Is Just Getting Started | WIRED https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/ 3 comments
- Guide for AI Newcomers | AI.JSX https://docs.ai-jsx.com/guides/brand-new 3 comments
- What's New in Generative AI - 2023-02-27 - by Scott Swigart https://scottswigart.substack.com/p/whats-new-in-generative-ai-2023-02 1 comment
- Hacker demonstrates security flaws in GPT-4 just one day after launch | VentureBeat https://venturebeat.com/security/hacker-demonstrates-security-flaws-in-gpt-4-just-one-day-after-launch/ 1 comment
- Discussions about AI: A curated, live and occasionally updated "no comment" list about AI, its applications and the discussion it generates. https://bayindirh.lists.sh/Discussions%20about%20AI 1 comment
- The Hacking of ChatGPT Is Just Getting Started | WIRED UK https://www.wired.co.uk/article/chatgpt-jailbreak-generative-ai-hacking 1 comment
- In Escalating Order of Stupidity https://kai-greshake.de/posts/in-escalating-order-of-stupidity/ 1 comment
- GitHub - 0xeb/TheBigPromptLibrary: A collection of prompts, system prompts and LLM instructions https://github.com/0xeb/TheBigPromptLibrary 1 comment
- Viral Anger and the Teenage Mental Health Crisis https://goodinternet.substack.com/p/viral-anger-and-the-teenage-mental 0 comments
- GPT-4 vs GPT-3: What you should know. https://explodinggradients.com/gpt-4-vs-gpt-3-what-you-should-know 0 comments
- Uncertain Simulators Don't Always Simulate Uncertain Agents | Daniel D. Johnson https://www.danieldjohnson.com/2023/03/27/uncertain_simulators/ 0 comments
- The Doomsday Clock and A Power Too Great to Understand https://turquoisesound.substack.com/p/the-doomsday-clock-and-a-power-too 0 comments
- Adversarial Attacks on LLMs | Lil'Log https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/ 0 comments
- 7 methods to secure LLM apps from prompt injections and jailbreaks [Guest] https://artificialintelligencemadesimple.substack.com/p/mitigate-prompt-attacks 0 comments
- 7 methods to secure LLM apps from prompt injections and jailbreaks [Guest] http://mitigate-prompt-attacks.aitidbits.ai 0 comments
Related searches:
Search whole site: site:www.jailbreakchat.com
Search title: Jailbreak Chat