12 results for “topic:clip-embeddings”
PostgreSQL-native semantic search engine with multi-modal capabilities. Add AI-powered search to your existing database without separate vector databases, vendor fees, or complex setup. Features text + image search using CLIP embeddings, native SQL joins, and 10-minute Docker deployment.
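A minimal sketch of the kind of query such an engine runs, assuming the CLIP vectors sit in a pgvector column; the `items` table, its columns, and the 512-dim size are hypothetical, not taken from the project:

```python
# Sketch: cosine-distance search over CLIP embeddings stored in pgvector.
# The table items(id, caption, embedding vector(512)) is hypothetical.
import psycopg2

def semantic_search(conn, query_vec, k=5):
    literal = "[" + ",".join(str(x) for x in query_vec) + "]"  # pgvector text format
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, caption, embedding <=> %s::vector AS distance "
            "FROM items ORDER BY distance LIMIT %s",
            (literal, k),
        )
        return cur.fetchall()  # nearest items, smallest cosine distance first
```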
Semantic Art Search – Explore art through meaning-driven search
Web interface for querying the LAION-5B dataset using CLIP embeddings.
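Generating the query vector for an interface like this typically looks like the sketch below; the Hugging Face checkpoint named here is one common CLIP choice, not necessarily the one this project uses:

```python
# Sketch: embed a free-text query with CLIP so it can be matched against
# precomputed LAION-5B image embeddings. The checkpoint is an assumption.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(text=["impressionist seascape at dusk"],
                   return_tensors="pt", padding=True)
with torch.no_grad():
    text_vec = model.get_text_features(**inputs)
# L2-normalize so dot products against image vectors act as cosine similarity.
text_vec = text_vec / text_vec.norm(dim=-1, keepdim=True)
```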
AI-powered fashion recommendation engine combining CLIP (visual) and SBERT (text) embeddings with FAISS HNSW search. Features a FastAPI backend and multimodal fusion across 44k+ products.
The engine analyzes video and image content, identifies fashion items, and finds similar products in a catalog (fusion sketch below).
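A rough sketch of that fusion-plus-HNSW step, with illustrative dimensions and dummy vectors standing in for the real catalog:

```python
# Sketch: naive late fusion of CLIP image vectors and SBERT text vectors,
# indexed with FAISS HNSW. Dimensions (512/384) are typical, assumed values.
import faiss
import numpy as np

def fuse(clip_vecs, sbert_vecs):
    # Normalize each modality, then concatenate so both contribute equally.
    clip_vecs = clip_vecs / np.linalg.norm(clip_vecs, axis=1, keepdims=True)
    sbert_vecs = sbert_vecs / np.linalg.norm(sbert_vecs, axis=1, keepdims=True)
    return np.hstack([clip_vecs, sbert_vecs]).astype("float32")

rng = np.random.default_rng(0)
fused = fuse(rng.random((1000, 512), dtype=np.float32),
             rng.random((1000, 384), dtype=np.float32))  # dummy catalog

index = faiss.IndexHNSWFlat(fused.shape[1], 32)  # 32 links per HNSW node
index.add(fused)
distances, ids = index.search(fused[:1], 10)  # 10 nearest to the first product
```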
AI-powered fashion visual search engine using multimodal embeddings (CLIP + SentenceTransformer), FAISS indexing, and the Gemma-3 LLM for intelligent outfit recommendations via a Flask REST API.
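One plausible shape for such a Flask endpoint, as a sketch; the route, payload, and stub helpers are assumptions rather than the project's real API:

```python
# Sketch of a hypothetical /recommend endpoint; embed_query and knn_search
# stand in for the project's actual CLIP/SBERT encoding and FAISS lookup.
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def embed_query(text):
    return np.zeros(896, dtype="float32")  # stub: real code would encode `text`

def knn_search(vector, k):
    return list(range(k))  # stub: real code would query the FAISS index

@app.route("/recommend", methods=["POST"])
def recommend():
    query = request.get_json().get("query", "")
    ids = knn_search(embed_query(query), k=5)
    return jsonify({"query": query, "results": ids})
```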
A distributed system for large-scale image data processing with CLIP embeddings and FAISS indexing. Built on a five-node AlmaLinux cluster with SLURM, Ansible, and NFS. Supports modular embedding, FAISS shard merging, and capacity benchmarking with Prometheus.
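The shard-merging step could look roughly like the following, assuming each node writes a flat FAISS index to shared NFS storage; the paths and dimension are hypothetical:

```python
# Sketch: combine per-node FAISS shards into one searchable index.
import faiss

shard_paths = ["/nfs/shards/node1.index", "/nfs/shards/node2.index"]  # hypothetical
dim = 512  # CLIP embedding size, an assumed value

combined = faiss.IndexShards(dim)
for path in shard_paths:
    combined.add_shard(faiss.read_index(path))
# combined.search(...) now fans out to every shard and merges the results.
```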
This repo contains an integrated framework for cross-border digital marketing automation: an AI-driven system that combines multimodal content generation, dynamic cross-platform allocation, and ROI prediction to address inefficiencies in multilingual creative production and delayed strategy adaptation.
Multi-modal hybrid recommender system for product recommendation on the Amazon Fashion metadata dataset.
This project is a real-time multimodal recommendation system built on top of Reddit data. It processes image-caption pairs using CLIP to create joint embeddings, stores them in Qdrant, and supports semantic retrieval based on text or image input.
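A compact sketch of the Qdrant side of such a pipeline; the collection name, 512-dim vector size, and payload fields are assumptions:

```python
# Sketch: store a CLIP joint embedding in Qdrant and retrieve neighbors.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # local in-memory instance for illustration
client.create_collection(
    collection_name="reddit_posts",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE),
)
client.upsert(
    collection_name="reddit_posts",
    points=[PointStruct(id=1, vector=[0.0] * 512, payload={"caption": "example"})],
)
hits = client.search(collection_name="reddit_posts",
                     query_vector=[0.0] * 512, limit=3)
```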
AI Image Search
🧠 NeuraLens Backend – Fast, Scalable, and Secure API powering intelligent image analysis.