Hacker News
- Llama.cpp guide – Running LLMs locally on any hardware, from scratch https://steelph0enix.github.io/posts/llama-cpp-guide/ 86 comments
Linked pages
- GitHub - ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++ https://github.com/ggerganov/llama.cpp 286 comments
- LM Studio - Discover, download, and run local LLMs https://lmstudio.ai 165 comments
- MSYS2 https://www.msys2.org/ 80 comments
- Hugging Face – The AI community building the future. https://huggingface.co/ 57 comments
- Disqus – The #1 way to build your audience https://disqus.com 32 comments
- mmap(2) - Linux manual page https://man7.org/linux/man-pages/man2/mmap.2.html 9 comments
- Download Visual Studio Tools - Install Free for Windows, Mac, Linux https://visualstudio.microsoft.com/downloads/ 8 comments
- CMake - Upgrade Your Software Build System https://cmake.org/ 4 comments
- What is Flash Attention? - Hopsworks https://www.hopsworks.ai/dictionary/flash-attention 1 comment
- [2407.01082] Min P Sampling: Balancing Creativity and Coherence at High Temperature https://arxiv.org/abs/2407.01082 1 comment
- Ninja, a small build system with a focus on speed https://ninja-build.org/ 0 comments
- Explore and Compare the Parameters of the Top-Performing Large Language Models (LLMs) https://llm.extractum.io/ 0 comments
- Your settings are (probably) hurting your model - Why sampler settings matter : LocalLLaMA https://old.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/ 0 comments
- Exclude Top Choices (XTC): A sampler that boosts creativity, breaks writing clichés, and inhibits non-verbatim repetition by p-e-w · Pull Request #6335 · oobabooga/text-generation-webui · GitHub https://github.com/oobabooga/text-generation-webui/pull/6335 0 comments