Hacker News
- New hack uses prompt injection to corrupt Gemini's long-term memory https://arstechnica.com/security/2025/02/new-hack-uses-prompt-injection-to-corrupt-geminis-long-term-memory/ 2 comments
Linked pages
- Hacker plants false memories in ChatGPT to steal user data in perpetuity | Ars Technica https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/ 406 comments
- Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information · Embrace The Red https://embracethered.com/blog/posts/2024/m365-copilot-prompt-injection-tool-invocation-and-data-exfil-using-ascii-smuggling/ 6 comments