Linking pages
- Why Google failed to make GPT-3 + why Multimodality for Knowledge Work is the path to AGI - with David Luan of Adept https://www.latent.space/p/adept 40 comments
- The Winds of AI Winter - Latent Space https://www.latent.space/p/mar-jun-2024 2 comments
- Worthwhile Research for building SOTA LLMs (Jan 2024 Recap) https://www.latent.space/p/jan-2024 0 comments
- One standard to deploy them all - with Ben Firshman of Replicate https://www.latent.space/p/replicate 0 comments
- Open Source AI is AI we can Trust — with Soumith Chintala of Meta AI https://www.latent.space/p/soumith 0 comments
- Making Transformers Sing - with Mikey Shulman of Suno https://www.latent.space/p/suno 0 comments
- Why Google failed and couldn’t Adept - with David Luan of Adept https://www.latent.space/i/142817627/why-google-couldnt-make-gpt 0 comments
- The Unbundling of ChatGPT (Feb 2024 Recap) - Latent Space https://www.latent.space/p/feb-2024 0 comments
Linked pages
- Paving the way to efficient architectures: StripedHyena-7B, open source models offering a glimpse into a world beyond Transformers https://www.together.ai/blog/stripedhyena-7b 72 comments
- RedPajama-Data-v2: An open dataset with 30 trillion tokens for training large language models https://together.ai/blog/redpajama-data-v2 60 comments
- Together AI — Fast Inference, Fine-Tuning & Training https://together.ai 27 comments
- SlimPajama: A 627B token cleaned and deduplicated version of RedPajama - Cerebras https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama 7 comments
- The Accidental AI Canvas - with Steve Ruiz of tldraw https://www.latent.space/p/tldraw 2 comments
- Voyage AI | Home https://www.voyageai.com/ 1 comment
- Introducing the Together Embeddings endpoint — Higher accuracy, longer context, and lower cost https://www.together.ai/blog/embeddings-endpoint-release 1 comment
- Why StackOverflow usage is down 50% — with David Hsu of Retool https://www.latent.space/p/retool 1 comment
- Latent Space | swyx | Substack https://www.latent.space/ 0 comments
- FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI https://www.latent.space/p/flashattention 0 comments
- Inference Race To The Bottom - Make It Up On Volume? https://www.semianalysis.com/p/inference-race-to-the-bottom-make 0 comments
- NeurIPS 2023 Recap — Best Papers - by swyx - Latent Space https://www.latent.space/p/neurips-2023-papers 0 comments
- RLHF 201 - with Nathan Lambert of AI2 and Interconnects https://www.latent.space/p/rlhf-201 0 comments
- How to train your own Large Multimodal Model — with Hugo Laurençon & Leo Tronchon of HuggingFace M4 Research https://www.latent.space/p/idefics 0 comments