Linked pages
- LAION-5B: A New Era of Open Large-Scale Multi-Modal Datasets | LAION https://laion.ai/blog/laion-5b/
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685
- Training checkpoints | TensorFlow Core https://www.tensorflow.org/guide/checkpoint#object_tracking
- [2106.09226] Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning https://arxiv.org/abs/2106.09226
- DreamBooth https://dreambooth.github.io/
- GitHub - XavierXiao/Dreambooth-Stable-Diffusion: Implementation of DreamBooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion https://github.com/XavierXiao/Dreambooth-Stable-Diffusion
- [1412.6980] Adam: A Method for Stochastic Optimization http://arxiv.org/abs/1412.6980
- Parameter-Efficient Fine-Tuning using 🤗 PEFT https://huggingface.co/blog/peft
- ChatGPT and Generative AI Are Booming, but at a Very Expensive Price https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html
- What Are Foundation Models? - Foundation Models in Generative AI Explained - AWS https://aws.amazon.com/what-is/foundation-models/
Source article: The Damage From Fine-Tuning an AI Model Can Easily Be Recovered, Research Finds - Unite.AI