Hacker News
- Show HN: A fully open-source (Apache 2.0) implementation of LLaMA https://github.com/Lightning-AI/lit-llama 52 comments
- Lit-LLaMA is a language model implementation that claims to run on consumer devices https://github.com/Lightning-AI/lit-llama 2 comments deeplearning
- [P] An implementation of LLaMA based on nanoGPT https://github.com/Lightning-AI/lit-llama 6 comments machinelearning
- [D] Do model weights have the same license as the model architecture? https://github.com/Lightning-AI/lit-llama 9 comments machinelearning
Linking pages
- AI and Open Source in 2023 - by Sebastian Raschka, PhD https://magazine.sebastianraschka.com/p/ai-and-open-source-in-2023 67 comments
- Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters https://sebastianraschka.com/blog/2023/llm-finetuning-llama-adapter.html 4 comments
- Ahead of AI #9: LLM Tuning & Dataset Perspectives https://magazine.sebastianraschka.com/p/ahead-of-ai-9-llm-tuning-and-dataset 4 comments
- GitHub - Alpha-VLLM/LLaMA2-Accessory: An Open-source Toolkit for LLM Development https://github.com/Alpha-VLLM/LLaMA2-Accessory 3 comments
- awesome-marketing-datascience/awesome-ai.md at master · underlines/awesome-marketing-datascience · GitHub https://github.com/underlines/awesome-marketing-datascience/blob/master/awesome-ai.md 1 comment
- State of LLaMA 2023/Q1: A mind map for AI/ML ChatGPT | by katopz | Apr 2023 | Better Programming https://betterprogramming.pub/state-of-llama-2023-q1-663905c37a5e 0 comments
- Ahead of AI #7: Large Language Models 3.0 https://magazine.sebastianraschka.com/p/ahead-of-ai-7-large-language-models 0 comments
- Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA) - Lightning AI https://lightning.ai/pages/community/tutorial/lora-llm/ 0 comments
- Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA) https://sebastianraschka.com/blog/2023/llm-finetuning-lora.html 0 comments
- Accelerating Large Language Models with Mixed-Precision Techniques - Lightning AI https://lightning.ai/pages/community/tutorial/accelerating-large-language-models-with-mixed-precision-techniques/ 0 comments
- Accelerating Large Language Models with Mixed-Precision Techniques https://sebastianraschka.com/blog/2023/llm-mixed-precision.html 0 comments
- GitHub - Hannibal046/Awesome-LLM: Awesome-LLM: a curated list of Large Language Model https://github.com/Hannibal046/Awesome-LLM 0 comments
- Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch - Lightning AI https://lightning.ai/pages/community/tutorial/pytorch-memory-vit-llm/ 0 comments
Linked pages
- Microsoft · GitHub https://github.com/Microsoft 1168 comments
- GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. https://github.com/karpathy/nanoGPT 366 comments
- GitHub - microsoft/LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" https://github.com/microsoft/LoRA 156 comments
- PyTorch http://pytorch.org/ 100 comments
- [2303.16199] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention https://arxiv.org/abs/2303.16199 52 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- GitHub - tatsu-lab/stanford_alpaca https://github.com/tatsu-lab/stanford_alpaca 2 comments
- GitHub - TimDettmers/bitsandbytes: 8-bit CUDA functions for PyTorch https://github.com/TimDettmers/bitsandbytes 0 comments