52 results for “topic:private-ai”
Open framework for confidential AI
⚠️ Archived — BrainDrive is building a new system on personalaiarchitecture.org
Priveedly: A Django-based content reader and recommender for personal and private use
Emotional AI companions for personal relationships
Obrew Server: A self-hostable machine learning engine. Build agents and schedule workflows that remain private to you.
🔒 100% Private RAG Stack with EmbeddingGemma, SQLite-vec & Ollama - Zero Cost, Offline Capable
Run IBM Granite 4.0 locally on Raspberry Pi 5 with Ollama. This is privacy-first AI: your data never leaves your device because everything runs 100% locally, with no cloud uploads and no third-party tracking.
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
A modern, RAG-powered AI chat application that integrates with Ollama for local AI inference. Chat with various Ollama models while leveraging your own documents for context-aware, intelligent responses.
The Private AI Setup Dream Guide for Demos automates the installation of the software needed for a local private AI setup, utilizing AI models (LLMs and diffusion models) for use cases such as general assistance, business ideas, coding, image generation, systems administration, marketing, planning, and more.
Deploy a complete, self-hosted AI stack for private LLMs, agentic workflows, and content generation. One-command Docker Compose deployment on any cloud.
Apache 2.0-licensed open source operations stack for private AI inference with open models. Run LLMs (7B-70B) locally with vLLM, OpenAI-compatible API, web dashboard, chat UI, admin panel, and hardware monitoring.
Local LLM integration for Odoo 18 - chat with AI directly in Odoo using Ollama, LM Studio, or any OpenAI-compatible API.
Meta-agentic AI orchestration platform for developers and data scientists building full-cycle (E2E) AI projects
Record system audio and mic on your Mac to generate diarized transcripts and meeting notes.
SnapDoc AI processes everything on-device, ensuring your sensitive information never leaves your control. Supports on-device voice and text processing for use in organizations.
Offline AI journaling app that gives insights based on your entries and runs locally with no cloud or data sharing.
Ready-to-use Blueprint for the Casual Character Chat app for private use - free, uncensored, private, independent.
🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama drop-in for Apple Silicon — statistically identical accuracy, 54× faster cold starts.
Ongoing research into implementing homomorphic encryption and federated learning for electric utility infrastructure defect detection, using an object detection model within a Private AI framework.
Private AI Infrastructure - Deploy OpenWebUI + Ollama + Qdrant in one command. Local LLMs, vector search, 12 AI tools. No cloud, full privacy.
INTENTIO is a local-first, private cognitive environment for designing how AI pays attention. Not a chatbot or cloud service, but an interpretive system that operates inside a bounded, intentional knowledge space you define.
A complete, menu-driven AI model interface for Windows that simplifies running local GGUF language models with llama.cpp. This tool automatically manages dependencies, provides multiple interaction modes, and prioritizes user privacy through fully offline operation.
Complete guide to deploying private, on-premise AI and LLMs: hardware selection, model comparison (ollama vs vLLM vs llama.cpp), security hardening, and AI governance policy templates. By Petronella Technology Group.
Open source Node.js runtime for local LLM inference, on-device AI, and private model execution.
Local-first cognitive engine with asymmetric dual-model inference, offline RL routing, chunk stability filtering, and sub-token consensus generation.
Internship Project at Stratigus: Cybersecurity and Privacy Challenges in the Age of Generative AI
A powerful, self-hosted AI chat interface with local RAG (Retrieval-Augmented Generation), file uploads, and support for Google Gemini, OpenAI, Anthropic, and OpenRouter models. Includes session-specific context and persistent history.
Self-hosted AI chat interface with RAG, long-term memory, and admin controls. Works with TabbyAPI, Ollama, vLLM, and any OpenAI-compatible API.
Fast, private Android chat front-end for Ollama. Engineered with a cohesive UI to be the most reliable, confusion-free local AI experience available for mobile.
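Many of the results above expose local inference through an OpenAI-compatible endpoint (Ollama, vLLM, LM Studio, TabbyAPI). A minimal sketch of what such a client request looks like, assuming Ollama's default port (11434) and a hypothetical model name; the code only constructs the request, it does not contact a server:

```python
import json

def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:11434"):
    """Build the URL and JSON body for an OpenAI-compatible chat
    completion request, as served locally by Ollama, vLLM, LM Studio,
    and similar runtimes. The model name passed in is illustrative."""
    url = f"{base_url}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a token stream
    }
    return url, json.dumps(body)

# Example: point any OpenAI-compatible client at the local endpoint.
url, payload = build_chat_request("llama3", "Summarize this document.")
```

Because the wire format is shared, the same request works against any of the self-hosted stacks listed above by changing only `base_url` and `model`.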