Linking pages
- GitHub - eric-mitchell/direct-preference-optimization: Reference implementation for DPO (Direct Preference Optimization) https://github.com/eric-mitchell/direct-preference-optimization 0 comments
- GitHub - BobaZooba/xllm: 🦖 X—LLM: Cutting Edge & Easy LLM Finetuning https://github.com/BobaZooba/xllm 0 comments
Linked pages
- GitHub - psf/black: The uncompromising Python code formatter https://github.com/psf/black 8 comments
- GitHub - microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. https://github.com/microsoft/DeepSpeed 1 comment
- GitHub - alpa-projects/alpa: Training and serving large-scale neural networks https://github.com/alpa-projects/alpa 1 comment
- isort https://pycqa.github.io/isort/ 0 comments
- tensor_parallel int8 LLM | Kaggle https://www.kaggle.com/code/blacksamorez/tensor-parallel-int8-llm 0 comments
GitHub - BlackSamorez/tensor_parallel: Automatically split your PyTorch models on multiple GPUs for training & inference