Hacker News
- AI text generator not released for concerns about implications https://blog.openai.com/better-language-models/#sample5 50 comments
- Better Language Models and Their Implications https://blog.openai.com/better-language-models/ 2 comments
- Better Language Models and Their Implications https://blog.openai.com/better-language-models/ 130 comments
Lobsters
- Better Language Models and Their Implications https://blog.openai.com/better-language-models/ 2 comments ai
- New AI from OpenAI can generate a story from only 2 sentences of context. https://blog.openai.com/better-language-models/ 15 comments science
- Has anyone seen this yet? https://blog.openai.com/better-language-models/#sample8 5 comments deeplearning
- Big advance in unsupervised language modeling from OpenAI, impressive generated stories https://blog.openai.com/better-language-models/ 3 comments artificial
Linking pages
- Dear OpenAI: Please Open Source Your Language Model https://thegradient.pub/openai-please-open-source-your-language-model/ 124 comments
- OpenAI built a text generator so good, it's considered too dangerous to release | TechCrunch https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/ 59 comments
- GitHub - xenova/transformers.js: State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server! https://github.com/xenova/transformers.js 55 comments
- Researchers, scared by their own work, hold back “deepfakes for text” AI | Ars Technica https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/ 50 comments
- Lessons from the GPT-4Chan Controversy https://thegradient.pub/gpt-4chan-lessons/ 28 comments
- GitHub - huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. https://github.com/huggingface/transformers 26 comments
- Researchers create 'malicious' writing AI - BBC News https://www.bbc.com/news/technology-47249163 18 comments
- How Transformers Work. Transformers are a type of neural… | by Giuliano Giacaglia | Towards Data Science https://medium.com/@giacaglia/transformers-141e32e69591 17 comments
- GitHub - andrewt3000/DL4NLP: Deep Learning for NLP resources https://github.com/andrewt3000/DL4NLP 14 comments
- Introducing GPipe, an Open Source Library for Efficiently Training Large-scale Neural Network Models – Google AI Blog http://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html 12 comments
- Generating Fake Conversations by fine-tuning OpenAI’s GPT-2 on data from Facebook Messenger | Svilen Todorov http://tenoke.github.io/blog/gpt-finetune 7 comments
- How Transformers Work. Transformers are a type of neural… | by Giuliano Giacaglia | Towards Data Science https://towardsdatascience.com/transformers-141e32e69591 6 comments
- Angry Metal Album Art Neural Networks | Angry Metal Guy https://www.angrymetalguy.com/angry-metal-album-art-neural-networks/ 5 comments
- GitHub - openai/gpt-2: Code for the paper "Language Models are Unsupervised Multitask Learners" https://github.com/openai/gpt-2 2 comments
- GitHub - Hellisotherpeople/CX_DB8: a contextual, biasable, word-or-sentence-or-paragraph extractive summarizer powered by the latest in text embeddings (Bert, Universal Sentence Encoder, Flair) https://github.com/Hellisotherpeople/CX_DB8 2 comments
- OpenAI unveils multitalented AI that writes, translates, and slanders - The Verge https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2 1 comment
- About OpenAI’s unsupervised language model, or unicorns in South America | Flancia https://flancia.org/posts/about-unicorns-in-south-america/ 1 comment
- OpenAI’s GPT-2: the model, the hype, and the controversy | by Ryan Lowe | Towards Data Science https://medium.com/@lowe.ryan.t/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8 1 comment
- The unreasonable effectiveness of recipe generation with the GPT-2 sample model | Peter Krantz https://www.peterkrantz.com/2019/recipes-with-gpt2/ 0 comments
- GitHub - jcpeterson/openwebtext: Open clone of OpenAI's unreleased WebText dataset scraper. This version uses pushshift.io files instead of the API for speed. https://github.com/jcpeterson/openwebtext 0 comments
Linked pages
- Preparing for malicious uses of AI https://blog.openai.com/preparing-for-malicious-uses-of-ai/ 228 comments
- OpenAI Status https://status.openai.com 195 comments
- OpenAI Charter https://blog.openai.com/openai-charter/ 177 comments
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- Improving language understanding with unsupervised learning https://blog.openai.com/language-unsupervised/ 18 comments
- ICLR 2023 https://iclr.cc 10 comments
- Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security by Robert Chesney, Danielle Keats Citron :: SSRN https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954 9 comments
- GitHub - openai/gpt-2: Code for the paper "Language Models are Unsupervised Multitask Learners" https://github.com/openai/gpt-2 2 comments
- [1901.11373] Learning and Evaluating General Linguistic Intelligence https://arxiv.org/abs/1901.11373 0 comments