- OpenAI research paper proposes de-anonymization and government-level GPU restrictions unless 'personhood' is proved as ways to fight AI-built misinformation... They want you to be on a watch list if you buy a GPU. https://cdn.openai.com/papers/forecasting-misuse.pdf 4 comments privacy
Linking pages
- Weapons of Mass Disruption: Artificial Intelligence and the Production of Extremist Propaganda – GNET https://gnet-research.org/2023/02/17/weapons-of-mass-disruption-artificial-intelligence-and-the-production-of-extremist-propaganda/ 33 comments
- OpenAI, Georgetown, Stanford study finds LLMs can boost public opinion manipulation | VentureBeat https://venturebeat.com/ai/openai-georgetown-stanford-study-finds-llms-can-boost-public-opinion-manipulation/ 15 comments
- Researchers: Large language models will revolutionize digital propaganda campaigns - CyberScoop https://www.cyberscoop.com/large-language-models-influence-operatio/ 2 comments
- Forecasting potential misuses of language models for disinformation campaigns—and how to reduce risk | FSI https://cyber.fsi.stanford.edu/io/news/forecasting-potential-misuses-language-models-disinformation-campaigns-and-how-reduce-risk 1 comment
- 🔥 Your guide to AI: February 2023 - Guide to AI https://nathanbenaich.substack.com/p/your-guide-to-ai-february-2023 1 comment
- Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations | FSI https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and 0 comments
- GitHub - filipecalegario/awesome-generative-ai: A curated list of Generative AI tools, works, models, and references https://github.com/filipecalegario/awesome-generative-ai 0 comments