97 results for “topic:openai-clip”
Run OpenAI's CLIP and Apple's MobileCLIP model on iOS to search photos.
:dart: Task-oriented embedding tuning for BERT, CLIP, etc.
Simple implementation of OpenAI CLIP model in PyTorch.
Official implementation for "Blended Diffusion for Text-driven Editing of Natural Images" [CVPR 2022]
A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI.
Just playing with getting CLIP-guided diffusion running locally, rather than having to use Colab.
Sort a folder of images according to their similarity with provided text in your browser (uses a browser-ported version of OpenAI's CLIP model and the web's new File System Access API)
KoCLIP: Korean port of OpenAI CLIP, in Flax
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
Easy-to-use, efficient code for extracting OpenAI CLIP (global/grid) features from images and text.
CLIPfa: Connecting Farsi Text and Images
CLIP as a service - embed images and sentences; object recognition, visual reasoning, image classification and reverse image search
CLIP-MoE: Mixture of Experts for CLIP
Zero-shot object detection with CLIP, utilizing Faster R-CNN for region proposals.
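The zero-shot detector above scores the CLIP embedding of each Faster R-CNN region proposal against the query text and keeps the best match. A minimal numpy sketch of that scoring step (the embeddings here are stand-ins; in the real pipeline they would come from CLIP's image and text encoders):

```python
import numpy as np

def best_region(region_embs: np.ndarray, text_emb: np.ndarray, threshold: float = 0.2):
    """Pick the region proposal whose embedding best matches the text query.

    Returns (index, score), or (None, score) if no region clears the threshold.
    region_embs: (R, D) embeddings of proposed crops; text_emb: (D,) query embedding.
    """
    # L2-normalize so the dot product is cosine similarity.
    region_embs = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb)
    scores = region_embs @ text_emb
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return None, float(scores[best])   # no region matches the query well enough
    return best, float(scores[best])
```

The threshold is a hypothetical knob: without it, argmax always "detects" something, even when the object is absent from the image.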
Code for studying OpenAI's CLIP explainability
Text to Image & Reverse Image Search Engine built upon Vector Similarity Search utilizing CLIP VL-Transformer for Semantic Embeddings & Qdrant as the Vector-Store
A dead-simple image search / retrieval and image-text matching system for Bangla using CLIP
Run CLIP inference on the ImageNet dataset, use the predictions as labels to train other models, then evaluate the trained models on the ImageNet validation set against either the original labels or the CLIP labels.
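Generating CLIP labels this way is zero-shot classification: compare each image embedding with one text embedding per class (e.g. prompts like "a photo of a {class}") and take the argmax. A numpy sketch, assuming the embeddings have already been produced by CLIP's encoders; the 100x logit scale matches CLIP's learned temperature:

```python
import numpy as np

def clip_pseudo_labels(image_embs: np.ndarray, class_text_embs: np.ndarray):
    """Zero-shot class assignment: argmax of scaled cosine similarity.

    image_embs: (N, D) image embeddings; class_text_embs: (C, D) text embeddings.
    Returns (labels, probs), where probs are per-image softmax confidences.
    """
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    class_text_embs = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    logits = 100.0 * image_embs @ class_text_embs.T  # CLIP's learned logit scale
    logits -= logits.max(axis=1, keepdims=True)      # stabilize the softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs
```

The confidence column makes it easy to filter out low-confidence pseudo-labels before training the student model.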
CLIP (Contrastive Language–Image Pre-training) for Bangla.
An experiment with movie scenes and contrastive learning
Powerful CLIP-based computer vision system for natural language-driven object and scene localization in images. Features smart query expansion, adaptive detection, and interactive web UI.
A list of projects that use OpenAI's CLIP model.
🚀 ClipServe: A fast API server for embedding text, images, and performing zero-shot classification using OpenAI’s CLIP model. Powered by FastAPI, Redis, and CUDA for lightning-fast, scalable AI applications. Transform texts and images into embeddings or classify images with custom labels—all through easy-to-use endpoints. 🌐📊
SpaceVector is a platform for semantic search over satellite images using state-of-the-art AI, aiming to make satellite imagery easier to use.
CLIP & SigLIP model training from scratch
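Training CLIP from scratch centers on a symmetric InfoNCE loss over the in-batch image-text similarity matrix (SigLIP swaps this for a pairwise sigmoid loss). A numpy sketch of the CLIP objective, with a fixed logit scale standing in for the learned temperature:

```python
import numpy as np

def clip_contrastive_loss(image_embs: np.ndarray, text_embs: np.ndarray,
                          logit_scale: float = 100.0) -> float:
    """Symmetric cross-entropy over the (N, N) batch similarity matrix.

    The i-th image and i-th text form the positive pair; every other
    pairing in the batch acts as a negative.
    """
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * image_embs @ text_embs.T

    def cross_entropy(l: np.ndarray) -> float:
        l = l - l.max(axis=1, keepdims=True)                    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        diag = np.arange(len(l))
        return float(-log_probs[diag, diag].mean())             # targets are the diagonal

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

When each image embedding exactly matches its paired text embedding and is orthogonal to the rest, the loss approaches zero; shuffling the pairs drives it up.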
OpenAI CLIP + Faiss image semantic search
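Pairing CLIP with Faiss works because, once embeddings are L2-normalized, maximum inner product equals maximum cosine similarity, so Faiss's exact `IndexFlatIP` can serve the search. A dependency-free numpy equivalent of that top-k lookup (in production you would add the matrix to a `faiss.IndexFlatIP` instead):

```python
import numpy as np

def topk_search(index_embs: np.ndarray, query_emb: np.ndarray, k: int = 3):
    """Exact top-k inner-product search over L2-normalized embeddings.

    index_embs: (N, D) image embeddings in the index; query_emb: (D,) text
    or image query. Returns (indices, similarities), best match first.
    """
    index_embs = index_embs / np.linalg.norm(index_embs, axis=1, keepdims=True)
    query_emb = query_emb / np.linalg.norm(query_emb)
    sims = index_embs @ query_emb
    top = np.argsort(-sims)[:k]        # Faiss returns the same ranking for IndexFlatIP
    return top, sims[top]
```

Brute-force search like this is already fast for tens of thousands of images; Faiss earns its keep at millions, where its approximate indexes trade a little recall for speed.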
Visual search with OpenAI CLIP
CLIFS (CLIP-based Frame Selection) is a Python function that takes a video file and a text prompt, and uses the CLIP (Contrastive Language-Image Pre-training) model to find the video frame most similar to the prompt.
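Frame selection of this kind reduces to scoring one CLIP image embedding per frame against the prompt's text embedding and taking the argmax. A numpy sketch with a hypothetical `stride` parameter (scoring only every stride-th frame is a common speedup for long videos; the real tool would first decode frames and embed them with CLIP):

```python
import numpy as np

def best_frame(frame_embs: np.ndarray, text_emb: np.ndarray, stride: int = 1) -> int:
    """Return the index of the frame most similar to the text prompt.

    frame_embs: (F, D) one CLIP image embedding per decoded frame;
    text_emb: (D,) CLIP text embedding of the prompt.
    """
    sampled = np.arange(0, len(frame_embs), stride)  # subsample frames for speed
    embs = frame_embs[sampled]
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb)
    sims = embs @ text_emb
    return int(sampled[np.argmax(sims)])             # map back to the original index
```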
🏍️ A clustering tool providing exact and near de-duplication of images using vector embeddings.