Hacker News
- BloombergGPT https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/ 13 comments
- BloombergGPT https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/ 8 comments (r/wallstreetbets)
- Introducing BloombergGPT 50 billion parameter model for Finance https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance 28 comments (r/technology)
- BloombergGPT, Bloomberg’s 50-billion parameter large language model, purpose-built from scratch for finance https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/ 16 comments (r/artificial)
Linking pages
- The LLama Effect: How an Accidental Leak Sparked a Series of Impressive Open Source Alternatives to ChatGPT https://thesequence.substack.com/p/the-llama-effect-how-an-accidental 504 comments
- These books are being used to train AI. No one told the authors | CNN https://edition.cnn.com/2023/10/08/style/ai-books3-authors-nora-roberts-cec/index.html 153 comments
- Revealed: The Authors Whose Pirated Books Are Powering Generative AI - The Atlantic https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/ 86 comments
- Why You (Probably) Don't Need to Fine-tune an LLM - Tidepool by Aquarium https://www.tidepool.so/2023/08/17/why-you-probably-dont-need-to-fine-tune-an-llm/ 73 comments
- Virtual humans - H+ Weekly - Issue #409 - by Conrad Gray https://www.hplusweekly.com/p/issue-409 33 comments
- One LLM won't rule them all https://generatingconversation.substack.com/p/one-llm-wont-rule-them-all 5 comments
- AI Fundamentals: Datasets 101 - Latent Space https://www.latent.space/p/datasets-101 1 comment
- Large Language Models and the Future of the ML Infrastructure Stack | Outerbounds https://outerbounds.com/blog/llm-infrastructure-stack/ 0 comments
- Should publishers go to war over AI? - by Brian Morrissey https://www.therebooting.com/p/should-publishers-go-to-war-over 0 comments
- Reinforcement learning is all you need, for next generation language models. https://yuxili.substack.com/p/reinforcement-learning-is-all-you 0 comments
- GitHub - nlpfromscratch/nlp-llms-resources: Master list of curated resources on NLP and LLMs https://github.com/nlpfromscratch/nlp-llms-resources 0 comments