Linking pages
- GitHub - bentoml/OpenLLM: Operating LLMs in production https://github.com/bentoml/OpenLLM 172 comments
- GitHub - brexhq/prompt-engineering: Tips and tricks for working with Large Language Models like OpenAI's GPT-4. https://github.com/brexhq/prompt-engineering 105 comments
- OpenAI vs Open Source LLM Comparison for Document Q&A | Jou-ching (George) Sung https://georgesung.github.io/ai/llm-qa-eval-wikipedia/ 13 comments
- GitHub - kir-gadjello/zipslicer: A library for incremental loading of large PyTorch checkpoints https://github.com/kir-gadjello/zipslicer 8 comments
- GitHub - eugeneyan/open-llms: 🤖 A list of open LLMs available for commercial use. https://github.com/eugeneyan/open-llms 2 comments
- Is Google Flan-T5 better than OpenAI GPT-3? | by Daniel Avila | Feb, 2023 | Medium https://medium.com/@dan.avila7/is-google-flan-t5-better-than-openai-gpt-3-187fdaccf3a6 0 comments
- GitHub - promptslab/Awesome-Prompt-Engineering: This repository contains hand-curated resources for Prompt Engineering with a focus on Generative Pre-trained Transformer (GPT), ChatGPT, PaLM etc. https://github.com/promptslab/Awesome-Prompt-Engineering 0 comments
- LLM Collection | Prompt Engineering Guide https://www.promptingguide.ai/models/collection 0 comments
- ML pipelines for fine-tuning LLMs | Dagster Blog https://dagster.io/blog/finetuning-llms 0 comments
- Vanilla GPT-3 quality from an open source model on a single machine: GLM-130B - The Full Stack https://fullstackdeeplearning.com/blog/posts/running-llm-glm-130b/ 0 comments
- Selecting GPUs for LLM serving on GKE | Google Cloud Blog https://cloud.google.com/blog/products/ai-machine-learning/selecting-gpus-for-llm-serving-on-gke/ 0 comments