Linking pages
- GitHub - IDEA-Research/Grounded-Segment-Anything: Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything https://github.com/IDEA-Research/Grounded-Segment-Anything 15 comments
- GitHub - OpenGVLab/InternChat: InternChat allows you to interact with ChatGPT by clicking, dragging and drawing using a pointing device. https://github.com/OpenGVLab/InternChat 1 comment
- GitHub - ttengwang/Caption-Anything: Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with diverse controls for user preferences. https://github.com/ttengwang/Caption-Anything 0 comments
Linked pages
- GitHub - hwchase17/langchain: ⚡ Building applications with LLMs through composability ⚡ https://github.com/hwchase17/langchain 77 comments
- GitHub - IDEA-Research/GroundingDINO: The official implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" https://github.com/IDEA-Research/GroundingDINO 6 comments
- GitHub - CompVis/stable-diffusion: A latent text-to-image diffusion model https://github.com/CompVis/stable-diffusion 4 comments
- GitHub - lllyasviel/ControlNet: Let us control diffusion models https://github.com/lllyasviel/ControlNet 1 comment
- [2303.04671] Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models https://arxiv.org/abs/2303.04671 0 comments
- GitHub - facebookresearch/segment-anything: The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model. https://github.com/facebookresearch/segment-anything 0 comments
Search title: GitHub - microsoft/TaskMatrix