Hacker News
- Browser-first analytics in natural language with DuckDB and Ollama https://tobilg.com/chat-with-a-duck 0 comments
- Show HN: Local RAG with Ollama, Gemma and RETSim https://elie.net/blog/ai/wingardium-trivia-osa-on-device-sorting-hatbot-powered-by-gemma-ollama-usearch-and-retsim 0 comments
- Firebase Genkit: Ollama https://firebase.google.com/docs/genkit/plugins/ollama 0 comments
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B https://github.com/ollama/ollama/releases/tag/v0.1.33-rc5 64 comments
- Ollama: Acknowledge the work done by Georgi and team https://github.com/ollama/ollama/commit/9755cf9173152047030b6d080c29c829bb050a15 2 comments
- Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split https://github.com/ollama/ollama/releases/tag/v0.1.32 55 comments
- npm i ollama https://github.com/ollama/ollama-js 3 comments
- Show HN: Cloud-native Stack for Ollama - Build locally and push to deploy https://github.com/ollama-cloud/get-started 4 comments
- Ollama now supports AMD graphics cards https://ollama.com/blog/amd-preview 224 comments
- Show HN: Cross-platform Chat App for OpenAI, Ollama et al. https://chatkit.app 2 comments
- Gemma, Ollama and LangChainGo https://eli.thegreenplace.net/2024/gemma-ollama-and-langchaingo/ 22 comments
- Show HN: NotesOllama – I added local LLM support to Apple Notes (through Ollama) https://smallest.app/notesollama/ 31 comments
- Open WebUI: ChatGPT-Style WebUI for Ollama https://github.com/open-webui/open-webui 3 comments
- Ollama is now available on Windows in preview https://ollama.com/blog/windows-preview 153 comments
- Ollama releases Python and JavaScript Libraries https://ollama.ai/blog/python-javascript-libraries 150 comments
- Ollama AI code completion plugin for VSCode, 100% free and 100% private https://github.com/rjmacarthy/twinny 2 comments
- Ollama is now available as an official Docker image https://ollama.ai/blog/ollama-is-now-available-as-an-official-docker-image 48 comments
- Ollama for Linux – Run LLMs on Linux with GPU Acceleration https://github.com/jmorganca/ollama/releases/tag/v0.1.0 54 comments
- Ollama – Easily Run LLMs on your laptop https://ollama.ai/ 2 comments
- Show HN: Ollama – Run LLMs on your Mac https://github.com/jmorganca/ollama 94 comments
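Several of the links above (the official Python/JavaScript libraries, the Docker image, the platform releases) revolve around Ollama's local REST API. As a minimal sketch, assuming the default server on localhost:11434 and an example model tag, a request body for the `/api/generate` endpoint can be built like this (the `build_generate_request` helper is for illustration only):

```python
import json

def build_generate_request(model: str, prompt: str) -> str:
    """Build the JSON body for a POST to Ollama's /api/generate endpoint.

    The field names follow Ollama's REST API; "stream": False asks for a
    single complete response rather than streamed chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

# "llama3" is just an example model tag; any locally pulled model works.
body = build_generate_request("llama3", "Why is the sky blue?")
print(body)
```

The resulting body would then be sent to the local server, e.g. `curl http://localhost:11434/api/generate -d "$body"`.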
Lobsters
- ollama-bot: Bridge IRC to LLMs running locally https://2mb.codes/~cmb/ollama-bot 2 comments ai, show
- Introducing the Ollama-Laravel Package: Seamless Integration with the Ollama API for Laravel Developers https://packagist.org/packages/cloudstudio/ollama-laravel 2 comments laravel
- CrayEye (mobile multimodal lab) now supports local/FOSS models! (like those hosted via Ollama) https://github.com/alexdredmon/crayeye?tab=readme-ov-file#local-models 0 comments artificial
- Free AI webinar from our partners: 'How to Build Local LLM Apps with Ollama & SingleStore for Max Security' (May 20, 2024, 10:00am PDT) https://pxl.to/6t3vpqx 0 comments machinelearningnews
- Alpaca: an ollama client to interact with LLMs locally or remotely https://flathub.org/apps/com.jeffser.Alpaca 2 comments linux
- Run Phi-3 SLM on your machine with C# Semantic Kernel and Ollama https://laurentkempe.com/2024/05/01/run-phi-3-slm-on-your-machine-with-csharp-semantic-kernel-and-ollama/ 2 comments csharp
- Build your own AI ChatGPT/Copilot with Ollama AI and Docker and integrate it with vscode https://youtu.be/OUz--MUBp2A?si=RiY69PQOkBGgpYDc 20 comments selfhosted
- tlm - using Ollama to build a GitHub Copilot CLI alternative for the command line https://github.com/yusufcanb/tlm 7 comments opensource
- Ollama now supports AMD graphics cards https://ollama.com/blog/amd-preview 14 comments amd
- Ollama now supports AMD graphics cards https://ollama.com/blog/amd-preview 19 comments selfhosted
- tlm - using Ollama to build a GitHub Copilot CLI alternative for the command line https://github.com/yusufcanb/tlm 20 comments selfhosted
- Ollama Shell Helper (osh): translate English to Unix-like shell commands using local LLMs with Ollama https://github.com/charyan/osh 3 comments commandline
- I made simple changes to my Racket AI book code for using any local Ollama LLM model https://github.com/mark-watson/Racket-AI-book-code/tree/main 4 comments racket
- Aichat: a CLI chatbot supporting GPT-4(V), Gemini, LocalAI, Ollama, and other LLMs https://github.com/sigoden/aichat 2 comments commandline
- Ollama - super easy to host local LLM https://github.com/jmorganca/ollama 11 comments selfhosted
- Now that OpenAI is destabilizing, I made an Ollama demo gist for Colab https://gist.github.com/newsbubbles/0490cbe8603690711c3403aa589231fb 2 comments artificial
- Tutorial: Different models for different tasks. Local AI with Ollama https://lommix.de/article/ai_on_steriods 2 comments neovim
- Ollama for Linux https://github.com/jmorganca/ollama/releases/tag/v0.1.0 2 comments linux
- Ollama - a project to package and run large language models https://github.com/jmorganca/ollama 2 comments devops
- Ollama: open source tool built in Go for running and packaging ML models (Currently for mac; Windows/Linux coming soon) https://github.com/jmorganca/ollama 2 comments golang
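Many of the clients listed above (Alpaca, aichat, ollama-bot, the Laravel package) talk to the same local API, typically through the `/api/chat` endpoint, which takes an OpenAI-style message list rather than a bare prompt. A minimal sketch, again assuming the default localhost:11434 server and an example model tag (the `build_chat_request` helper is illustrative, not part of any library above):

```python
import json

def build_chat_request(model: str, messages: list) -> str:
    """Build the JSON body for a POST to Ollama's /api/chat endpoint.

    messages is a list of {"role": ..., "content": ...} dicts, with roles
    "system", "user", or "assistant".
    """
    return json.dumps({"model": model, "messages": messages, "stream": False})

body = build_chat_request(
    "llama3",  # example model tag; substitute any locally pulled model
    [{"role": "user", "content": "Summarize this IRC log in one line."}],
)
print(body)
```

The message-list format is what lets these clients carry multi-turn conversation state, which `/api/generate` does not model directly.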