Linking pages
- Licensing is neither feasible nor effective for addressing AI risks https://aisnakeoil.substack.com/p/licensing-is-neither-feasible-nor 157 comments
- GitHub - xenova/transformers.js: State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server! https://github.com/xenova/transformers.js 55 comments
- AMD AI Software Solved – MI300X Pricing, Performance, PyTorch 2.0, Flash Attention, OpenAI Triton https://www.semianalysis.com/p/amd-ai-software-solved-mi300x-pricing 26 comments
- GitHub - AI4Finance-Foundation/FinGPT: FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace. https://github.com/AI4Finance-Foundation/FinGPT 3 comments
- Local Large Language Models - beginners guide - int8.io https://int8.io/local-large-language-models-beginners-guide/ 2 comments
- Aman's AI Journal • Primers • Overview of Large Language Models https://aman.ai/primers/ai/LLM/ 1 comment
- The History of Open-Source LLMs: Better Base Models (Part Two) https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better 1 comment
- GitHub - ritchieng/the-incredible-pytorch: The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch. https://github.com/ritchieng/the-incredible-pytorch 0 comments
- MPT-7B and The Beginning of Context=Infinity — with Jonathan Frankle and Abhinav Venigalla of MosaicML https://www.latent.space/p/mosaic-mpt-7b 0 comments
- GitHub - AIoT-MLSys-Lab/Efficient-LLMs-Survey: Efficient Large Language Models: A Survey https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey 0 comments
- Replit — Building LLMs for Code Repair https://blog.replit.com/code-repair 0 comments
Linked pages
- Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1) https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1 15 comments
- Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs https://www.mosaicml.com/blog/mpt-7b 11 comments
- Mosaic LLMs (Part 2): GPT-3 quality for <$500k https://www.mosaicml.com/blog/gpt-3-quality-for-500k 7 comments
- GitHub - HazyResearch/flash-attention: Fast and memory-efficient exact attention https://github.com/HazyResearch/flash-attention 3 comments
- Mosaic LLMs (Part 1): Billion-Parameter GPT Training Made Easy https://www.mosaicml.com/blog/billion-parameter-gpt-training-made-easy 0 comments
- mosaicml/mpt-7b-storywriter · Hugging Face https://huggingface.co/mosaicml/mpt-7b-storywriter 0 comments