Linking pages
- Researchers figure out how to make AI misbehave, serve up prohibited content | Ars Technica https://arstechnica.com/ai/2023/08/researchers-figure-out-how-to-make-ai-misbehave-serve-up-prohibited-content/ 50 comments
- A New Attack Impacts ChatGPT—and No One Knows How to Stop It | WIRED https://www.wired.com/story/ai-adversarial-attacks/ 5 comments
- Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business https://edition.cnn.com/2023/08/10/tech/chatgpt-ai-hackers-las-vegas/index.html 3 comments
- This command can bypass chatbot safeguards | Popular Science https://www.popsci.com/technology/jailbreak-llm-adversarial-command/ 2 comments
- LLMs’ Data-Control Path Insecurity - Schneier on Security https://www.schneier.com/blog/archives/2024/05/llms-data-control-path-insecurity.html 1 comment
- Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York Times https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html 0 comments
- Researchers Publish Attack Algorithm for ChatGPT and Other LLMs https://www.infoq.com/news/2023/08/llm-attack/ 0 comments
- The future of Internet Search in the era of LLMs https://www.aitidbits.ai/p/future-of-internet-search 0 comments
- GitHub - llm-attacks/llm-attacks: Universal and Transferable Attacks on Aligned Language Models https://github.com/llm-attacks/llm-attacks 0 comments
- 7 methods to secure LLM apps from prompt injections and jailbreaks [Guest] https://artificialintelligencemadesimple.substack.com/p/mitigate-prompt-attacks 0 comments
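
The most technical entry above is the llm-attacks repo, which implements the paper's greedy coordinate gradient (GCG) search: it optimizes an adversarial suffix so that appending it to a harmful prompt maximizes the probability of an affirmative model reply. As a rough, self-contained intuition for what "greedy coordinate" optimization means, here is a toy Python sketch; the vocabulary, the stand-in `score` objective, and the suffix length are all illustrative assumptions, not the repo's actual code (the real attack scores candidates against the target model's logits and uses embedding gradients to propose token swaps).

```python
# Toy sketch of greedy coordinate search, the optimization idea behind the
# universal adversarial-suffix attack. Everything here is a stand-in: the
# real attack operates over model tokens and scores suffixes by the
# log-probability of an affirmative response prefix (e.g. "Sure, here is").

import random

VOCAB = [chr(c) for c in range(33, 127)]  # printable ASCII as a toy "token" set
SUFFIX_LEN = 20

def score(suffix: str) -> int:
    """Stand-in objective. In the real attack this would query the target
    model; here we just reward matching a fixed hidden string so the search
    loop has something to climb."""
    target = "!optimized-suffix!!!"  # hypothetical 20-char optimum
    return sum(a == b for a, b in zip(suffix, target))

def greedy_coordinate_search(steps: int = 500) -> str:
    suffix = random.choices(VOCAB, k=SUFFIX_LEN)
    best = score("".join(suffix))
    for _ in range(steps):
        pos = random.randrange(SUFFIX_LEN)     # pick one coordinate (token slot)
        for cand in random.sample(VOCAB, 16):  # try a handful of candidate swaps
            trial = suffix.copy()
            trial[pos] = cand
            s = score("".join(trial))
            if s > best:                       # keep only improving swaps
                best, suffix = s, trial
    return "".join(suffix)

if __name__ == "__main__":
    adv_suffix = greedy_coordinate_search()
    print("optimized suffix:", adv_suffix)
    print("attack prompt   :", "<harmful request> " + adv_suffix)
```

The coordinate structure is the point: only one token slot changes per step, which is what makes gradient-guided candidate proposals tractable in the real attack and what lets a single optimized suffix transfer across prompts and models.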