- [R] [P] New ways of breaking app-integrated LLMs with prompt injection https://github.com/greshake/lm-safety (9 comments, machinelearning)