- The safety techniques used to limit AI chatbots are weak enough that other AI programs can figure out how to break them 42.5% of the time. https://www.scientificamerican.com/article/jailbroken-ai-chatbots-can-jailbreak-other-chatbots/?mc_cid=4841a47cba 122 comments futurology
Linked pages
- Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day - The Verge http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist 174 comments
- [2311.03348] Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation https://arxiv.org/abs/2311.03348 0 comments