- Run Nous-Hermes-2 Mistral MoE locally + across devices with a full portable AI inference App https://www.secondstate.io/articles/nous-hermes-2-mixtral-8x7b-sft/ 2 comments learnmachinelearning
Linked pages
- Fast and Portable Llama2 Inference on the Heterogeneous Edge https://www.secondstate.io/articles/fast-llm-inference/ 98 comments
- GitHub - WasmEdge/WasmEdge: WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices. https://github.com/WasmEdge/WasmEdge 33 comments
- https://localhost:8080/ 26 comments
- GitHub - second-state/LlamaEdge: The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge https://github.com/second-state/LlamaEdge 5 comments