Hacker News
- WebLLM: Llama2 in the Browser https://webllm.mlc.ai/ 31 comments
- Run Llama2-70B in Web Browser with WebGPU Acceleration https://webllm.mlc.ai 6 comments
Linking pages
- GitHub - mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. https://github.com/mlc-ai/mlc-llm 228 comments
- Web LLM lets you run LLMs natively in your frontend using the new WebGPU standard. | Monarch Wadia https://www.monarchwadia.com/2024/02/23/running-llms-in-the-browser.html 4 comments
- LLMs Everywhere: Running 70B models in browsers and iPhones using MLC — with Tianqi Chen of CMU / OctoML https://www.latent.space/p/llms-everywhere#details 1 comment
- How to build privacy-protecting AI | Proton https://proton.me/blog/how-to-build-privacy-first-ai 0 comments
- GitHub - sudoghut/trio: Welcome to the TRIO Web App, a tool for performing multiple text tasks using AI-driven offline models. It supports text rewriting, cleaning, summarization, and more, with options to send results to an external API. The project uses WebLLM to download and run models locally in the browser. https://github.com/sudoghut/trio 0 comments