- I Created the biggest Open Source Project for Jailbreaking LLMs https://github.com/General-Analysis/GA 7 comments opensource
Linked pages
- [2307.15043] Universal and Transferable Adversarial Attacks on Aligned Language Models https://arxiv.org/abs/2307.15043 3 comments
- [2312.02119] Tree of Attacks: Jailbreaking Black-Box LLMs Automatically https://arxiv.org/abs/2312.02119 2 comments
- Google Colab https://colab.research.google.com/github/General-Analysis/GA/blob/main/notebooks/General_Analysis_TAP_Jailbreak.ipynb 0 comments
- Google Colab https://colab.research.google.com/github/General-Analysis/GA/blob/main/notebooks/General_Analysis_AutoDAN_Turbo_Jailbreak.ipynb 0 comments
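The TAP paper linked above (Tree of Attacks: Jailbreaking Black-Box LLMs Automatically) describes an attacker/evaluator/target/judge loop over a tree of candidate prompts. The sketch below is a hypothetical, simplified illustration of that loop, not the GA repository's actual API; the helper names (`attacker_generate`, `on_topic`, `target_respond`, `judge_score`) and the scoring threshold are assumptions standing in for LLM-backed calls.

```python
# Hypothetical sketch of a TAP-style (Tree of Attacks with Pruning) loop.
# All helpers are placeholders; this is not the GA repo's interface.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str                                            # candidate jailbreak prompt
    history: list[tuple[str, str]] = field(default_factory=list)  # (prompt, response) pairs

def attacker_generate(node: Node, goal: str, branches: int) -> list[str]:
    """Placeholder: attacker LLM refines node.prompt toward `goal`, returning `branches` variants."""
    raise NotImplementedError

def on_topic(prompt: str, goal: str) -> bool:
    """Placeholder: evaluator LLM checks whether a candidate still pursues the goal."""
    raise NotImplementedError

def target_respond(prompt: str) -> str:
    """Placeholder: query the black-box target model."""
    raise NotImplementedError

def judge_score(response: str, goal: str) -> int:
    """Placeholder: judge LLM rates (1-10) how fully the response achieves the goal."""
    raise NotImplementedError

def tap_attack(goal: str, depth: int = 5, branches: int = 4, width: int = 10):
    frontier = [Node(prompt=goal)]
    for _ in range(depth):
        scored_children = []
        for node in frontier:
            for candidate in attacker_generate(node, goal, branches):
                if not on_topic(candidate, goal):          # phase-1 pruning: drop off-topic branches
                    continue
                response = target_respond(candidate)
                score = judge_score(response, goal)
                if score >= 10:                            # judged as a successful jailbreak
                    return candidate, response
                child = Node(candidate, node.history + [(candidate, response)])
                scored_children.append((score, child))
        # phase-2 pruning: keep only the `width` highest-scoring nodes for the next depth level
        scored_children.sort(key=lambda pair: pair[0], reverse=True)
        frontier = [child for _, child in scored_children[:width]]
        if not frontier:
            break
    return None
```

The two pruning phases, filtering off-topic candidates before querying the target and then keeping only the top-scoring nodes, are what distinguish a tree-structured attack of this kind from a plain single-path iterative refinement loop.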