Lobsters
- LLaMA2-Accessory: An Open-source Toolkit for LLM Development https://github.com/Alpha-VLLM/LLaMA2-Accessory 3 comments ai
Linked pages
- Microsoft · GitHub https://github.com/Microsoft 1168 comments
- Google Bard https://bard.google.com/ 795 comments
- GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. https://github.com/karpathy/nanoGPT 366 comments
- LAION-5B: A NEW ERA OF OPEN LARGE-SCALE MULTI-MODAL DATASETS | LAION https://laion.ai/blog/laion-5b/ 104 comments
- GitHub - InternLM/InternLM: Official release of InternLM2.5 base and chat models. 1M context support https://github.com/InternLM/InternLM 89 comments
- Google · GitHub https://github.com/google/ 85 comments
- GitHub - Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. https://github.com/Lightning-AI/lit-llama 69 comments
- GitHub - facebookresearch/ImageBind: ImageBind One Embedding Space to Bind Them All https://github.com/facebookresearch/ImageBind 44 comments
- [2305.11206] LIMA: Less Is More for Alignment https://arxiv.org/abs/2305.11206 44 comments
- GitHub - allenai/mmc4: MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. https://github.com/allenai/mmc4 20 comments
- GitHub - tloen/alpaca-lora: Instruct-tune LLaMA on consumer hardware https://github.com/tloen/alpaca-lora 11 comments
- GitHub - ShishirPatil/gorilla: Gorilla: An API store for LLMs https://github.com/ShishirPatil/gorilla 8 comments
- GitHub - artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs https://github.com/artidoro/qlora 5 comments
- GitHub - lm-sys/FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. https://github.com/lm-sys/FastChat 4 comments
- GitHub - bigcode-project/starcoder: Home of StarCoder: fine-tuning & inference! https://github.com/bigcode-project/starcoder 3 comments
- Fully Sharded Data Parallel: faster AI training with fewer GPUs | Engineering at Meta https://engineering.fb.com/2021/07/15/open-source/fsdp/ 2 comments
- GitHub - tatsu-lab/stanford_alpaca https://github.com/tatsu-lab/stanford_alpaca 2 comments
- theblackcat102/evol-codealpaca-v1 · Datasets at Hugging Face https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1 2 comments
- GitHub - kakaobrain/coyo-dataset: COYO-700M: Large-scale Image-Text Pair Dataset https://github.com/kakaobrain/coyo-dataset 1 comment
- GitHub - microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. https://github.com/microsoft/DeepSpeed 1 comment
Related searches:
Search whole site: site:github.com
Search title: GitHub - Alpha-VLLM/LLaMA2-Accessory: An Open-source Toolkit for LLM Development