- Why You Should Treat Large Language Models as Potential Attackers https://www.cyberark.com/resources/threat-research-blog/ai-treason-the-enemy-within 2 comments netsec
Linked pages
- Prompt injection and jailbreaking are not the same thing https://simonwillison.net/2024/Mar/5/prompt-injection-jailbreaking/ 11 comments
- Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | LMSYS Org https://lmsys.org/blog/2023-03-30-vicuna/ 7 comments
- [2404.02151] Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks https://arxiv.org/abs/2404.02151 1 comment
- [2309.02926] Demystifying RCE Vulnerabilities in LLM-Integrated Apps https://arxiv.org/abs/2309.02926 0 comments