767 results for “topic:local-ai”
Privacy-first AI meeting assistant with 4x faster Parakeet/Whisper live transcription, speaker diarization, and Ollama summarization, built on Rust. 100% local processing, no cloud required. Meetily (Meetly Ai - https://meetily.ai) is the #1 self-hosted, open-source AI meeting note taker for macOS & Windows.
Use OCR on Windows quickly and easily with Text Grab, with an optional background process and notifications.
Olares: An Open-Source Personal Cloud to Reclaim Your Data
⚡ Python-free Rust inference server — OpenAI-API compatible. GGUF + SafeTensors, hot model swap, auto-discovery, single binary. FREE now, FREE forever.
Open-Source AI Camera Skills Platform, AI NVR & CCTV Surveillance. Local VLM video analysis with Qwen, DeepSeek, SmolVLM, LLaVA, MiniMax. LLM-powered agentic security camera agent — watches, understands, remembers & guards your home via Telegram, Discord or Slack. Pluggable AI skills. OpenAI, Google, Anthropic or local AI. Runs on Mac Mini & AI PC.
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
A curated list of awesome platforms, tools, practices and resources that helps run LLMs locally
A minimal LLM chat app that runs entirely in your browser
A privacy-preserving home security camera that uses end-to-end encryption. (Secluso was previously named Privastead.)
NativeMind: Your fully private, open-source, on-device AI assistant
🎙️ AI Dictation App - Open Source and Local-first ⚡ Type 3x faster, no keyboard needed. 🆓 Powered by open source models, works offline, fast and accurate.
The Swiss Army Knife of Offline AI. Chat, Speak, and Generate Images - Privacy First, Zero Internet. Download an LLM and use it on your mobile device. No data ever leaves your phone. Supports text-to-text, vision, and text-to-image.
Open‑WebUI Tools is a modular toolkit designed to extend and enrich your Open WebUI instance, turning it into a powerful AI workstation. With a suite of over 15 specialized tools, function pipelines, and filters, this project supports academic research, agentic autonomy, multimodal creativity, workflows, and more.
MemoryCache is an experimental development project to turn a local desktop environment into an on-device AI agent
Shinkai is a two-click-install app that lets you create local AI agents in 5 minutes or less using a simple UI. Supports MCPs, remote and local AI, crypto, and payments.
🦙 Ollama Telegram bot, with advanced configuration
On-device AI for Android — LLM chat (GGUF/llama.cpp), vision models (VLM), image generation (Stable Diffusion), tool calling, AI personas, RAG knowledge packs, TTS/STT. Fully offline, zero subscriptions, open-source.
Blueprint by Mozilla.ai for generating podcasts from documents using local AI
High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model discovery across local and remote inference backends.
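A minimal sketch of the failover behavior such a proxy performs: try each inference backend in priority order and fall back to the next on error. The function name `route_with_failover` and the backend list are illustrative assumptions, not the project's actual API.

```python
# Illustrative failover routing: attempt backends in priority order and
# fall back on connection failure. Names here are hypothetical.

def route_with_failover(backends, send):
    """Return (backend, response) from the first backend that succeeds.

    `backends` is an ordered list of backend identifiers; `send` is a
    callable that forwards the request to one backend and may raise
    ConnectionError when that backend is unreachable.
    """
    last_error = None
    for backend in backends:
        try:
            return backend, send(backend)
        except ConnectionError as exc:
            last_error = exc  # backend unreachable: try the next one
    raise RuntimeError("all inference backends failed") from last_error
```

In a real proxy the `send` callable would issue the HTTP request to the chosen backend; here it is injected so the routing logic stays testable in isolation.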
Like ChatGPT's voice conversations with an AI, but entirely offline/private/trade-secret-friendly, using local AI models such as Llama 2 and Whisper.
One command to a fully local AI stack — LLM inference, chat UI, voice, agents, workflows, RAG, and image generation. No cloud, no subscriptions.
🤖 Visual AI agent workflow automation platform with local LLM integration - build intelligent workflows using drag-and-drop interface, no cloud dependencies required.
Lilium AI: The ultimate personal AI agent framework for autonomous computer control. Featuring browser automation, shell execution, and multi-channel integration (WeChat/Telegram/Discord).
lcpp is a Dart implementation of llama.cpp used by the mobile artificial intelligence distribution (maid).
AgC is the open-core platform that powers Open Agentic Compute — a new compute substrate purpose-built for deploying, running, and orchestrating AI agents at scale.
Talk to your Mac, query your docs, no cloud required. On-device voice AI + RAG
MVP of an idea using multiple local LLMs to simulate and play D&D.
Wraps a local Ollama instance in an OpenAI-compatible API to bypass fixed-model limitations in IDEs such as TRAE.
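A sketch of the request translation such a wrapper performs: an OpenAI-style chat completions payload is mapped onto Ollama's native `/api/chat` request shape, remapping the model name the IDE hard-codes to a locally pulled one. The names `openai_to_ollama` and `MODEL_ALIASES` are illustrative assumptions, not taken from the project.

```python
# Hypothetical translation layer from an OpenAI /v1/chat/completions
# request body to an Ollama /api/chat request body.

MODEL_ALIASES = {
    # IDEs often hard-code a model name; remap it to a local model.
    "gpt-4o": "llama3.1:8b",
}

def openai_to_ollama(request: dict) -> dict:
    """Translate an OpenAI-style chat request into an Ollama /api/chat body."""
    body = {
        "model": MODEL_ALIASES.get(request["model"], request["model"]),
        "messages": [
            {"role": m["role"], "content": m["content"]}
            for m in request["messages"]
        ],
        "stream": request.get("stream", False),
    }
    # OpenAI's max_tokens corresponds to Ollama's num_predict option.
    if "max_tokens" in request:
        body["options"] = {"num_predict": request["max_tokens"]}
    return body
```

The response would need the inverse mapping (Ollama's `message` field back into an OpenAI `choices` array); recent Ollama versions also expose an OpenAI-compatible `/v1` endpoint natively, which such wrappers extend with aliasing like the above.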
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
No description provided.