Hacker News
Linked pages
- Artists: AI Image Generators Can Make Copycat Images in Seconds https://www.businessinsider.com/ai-image-generators-artists-copying-style-thousands-images-2022-10 1497 comments
- New models and developer products announced at DevDay | OpenAI https://openai.com/blog/new-models-and-developer-products-announced-at-devday 568 comments
- CEOs of OpenAI, Google and Microsoft to join other tech leaders on federal AI safety panel | CNN Business https://edition.cnn.com/2024/04/26/tech/openai-altman-government-ai-safety-panel/index.html 344 comments
- LLM Powered Autonomous Agents | Lil'Log https://lilianweng.github.io/posts/2023-06-23-agent/ 177 comments
- Why a Conversation With Bing’s Chatbot Left Me Deeply Unsettled - The New York Times https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html 167 comments
- Streisand effect - Wikipedia https://en.wikipedia.org/wiki/Streisand_effect 108 comments
- GPT-4 Turbo and GPT-4 | OpenAI Platform https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4 107 comments
- [2302.10149] Poisoning Web-Scale Training Datasets is Practical https://arxiv.org/abs/2302.10149 95 comments
- The Times Sues OpenAI and Microsoft Over A.I.’s Use of Copyrighted Work - The New York Times https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html 91 comments
- [2402.06664] LLM Agents can Autonomously Hack Websites https://arxiv.org/abs/2402.06664 21 comments
- [2401.05566] Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training https://arxiv.org/abs/2401.05566 18 comments
- [1912.03817] Machine Unlearning https://arxiv.org/abs/1912.03817 14 comments
- [2203.05482] Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time https://arxiv.org/abs/2203.05482 14 comments
- Catch me if you can! How to beat GPT-4 with a 13B model | LMSYS Org https://lmsys.org/blog/2023-11-14-llm-decontaminator/ 9 comments
- Fair use - Wikipedia http://en.wikipedia.org/wiki/Fair_use 5 comments
- [2310.02238] Who's Harry Potter? Approximate Unlearning in LLMs https://arxiv.org/abs/2310.02238 1 comment
- [2401.08406] RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture https://arxiv.org/abs/2401.08406 1 comment
- Many-shot jailbreaking \ Anthropic https://www.anthropic.com/research/many-shot-jailbreaking 1 comment
- [2403.03218] The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning https://arxiv.org/abs/2403.03218 1 comment
- [2206.13353] Is Power-Seeking AI an Existential Risk? https://arxiv.org/abs/2206.13353 0 comments
Source article: Machine Unlearning in 2024 - Ken Ziyu Liu - Stanford Computer Science (ai.stanford.edu)