12 results for “topic:sbert-implementation”
Powerful document-clustering models are essential for efficiently processing large document sets. They are useful in many fields, including research: searching large corpora of publications is slow and tedious, and such models can significantly cut that time.
https://matthieuvion.github.io/lmd_viz/ 236k Le Monde comments on Ukraine, used as a proxy to measure people's engagement. Semantic search and SBERT model testing via Sentence-Transformers / Faiss.
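The core idea behind projects like this one is ranking corpus embeddings by cosine similarity to a query embedding. A minimal sketch of that idea, using plain NumPy in place of Faiss and hypothetical toy vectors in place of real SBERT embeddings:

```python
import numpy as np

# Toy stand-ins for SBERT sentence embeddings (in the real projects these
# come from sentence_transformers; the vectors here are hypothetical).
corpus_embeddings = np.array([
    [0.9, 0.1, 0.0],   # e.g. a comment about the war
    [0.0, 1.0, 0.1],   # e.g. a comment about energy prices
    [0.1, 0.0, 1.0],   # e.g. a comment about sports
])
query_embedding = np.array([1.0, 0.2, 0.0])

def semantic_search(query, corpus, top_k=2):
    """Rank corpus vectors by cosine similarity to the query."""
    corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = corpus_norm @ query_norm
    top = np.argsort(-scores)[:top_k]
    return [(int(i), float(scores[i])) for i in top]

print(semantic_search(query_embedding, corpus_embeddings))
```

Faiss (or ANNOY) replaces the brute-force dot product with an approximate nearest-neighbor index, which is what makes this scale to hundreds of thousands of comments.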
Recommender systems built on the TMDB movie dataset, combining content-based filtering and collaborative filtering.
ko-sentence-BERT implementation and experiments with fine-tuning strategies.
Search relevance algorithm for news articles using a Sentence-BERT model and the ANNOY library, with deployment on AWS using Docker.
Evaluating-STS-benchmark-dataset-using-SBERT
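STS benchmark evaluation is typically reported as the Spearman correlation between model cosine similarities and human similarity ratings. A minimal sketch of that metric, using hypothetical precomputed embeddings in place of real SBERT output:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical embeddings for three sentence pairs (real ones would come
# from sentence_transformers); gold_scores are human ratings on a 0-5 scale.
emb_a = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
emb_b = np.array([[0.9, 0.1], [0.0, 1.0], [1.0, 0.1]])
gold_scores = np.array([4.8, 2.5, 0.5])

def pairwise_cosine(a, b):
    """Cosine similarity of each row of a with the matching row of b."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

model_scores = pairwise_cosine(emb_a, emb_b)
corr, _ = spearmanr(model_scores, gold_scores)
print(round(float(corr), 3))  # 1.0 here, since the toy ranking matches gold
```

Spearman is preferred over Pearson because only the ranking of similarities matters, not their absolute scale.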
Country Data Source Finder is a search engine web app that helps you find the best data sources for the country-level statistics you seek.
An NLP project that scores the genres for a movie (from a list of genres) using the movie's summary.
The tool analyzes a candidate's work experience, compares it vectorially against a reference set of Data skills (120+ skills), and determines the best-fit job profile (Data Engineer, Data Scientist, etc.) with an AI-generated progression plan.
Sockpuppet detection app using Flask and machine learning. Analyzes Wikipedia comments with S-BERT embeddings, sentiment analysis, and a RandomForest model to identify deceptive online identities.
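The classification step in a pipeline like this feeds sentence embeddings into a random-forest classifier. A minimal sketch, assuming the comments have already been embedded (the clustered toy vectors below are hypothetical stand-ins for S-BERT output):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins for S-BERT comment embeddings: sockpuppet comments
# are assumed to cluster in one region of embedding space, genuine ones in
# another, so the classes are cleanly separable.
sock = rng.normal(loc=1.0, scale=0.3, size=(40, 8))
genuine = rng.normal(loc=-1.0, scale=0.3, size=(40, 8))
X = np.vstack([sock, genuine])
y = np.array([1] * 40 + [0] * 40)  # 1 = sockpuppet, 0 = genuine

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new comment embedding that lands near the sockpuppet cluster.
new_comment = np.full((1, 8), 0.9)
print(int(clf.predict(new_comment)[0]))
```

In a real app the feature vector would also concatenate sentiment scores and other metadata alongside the embedding before fitting the forest.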
In this demo, we illustrate the possibility of combining semantic search with recognizing textual entailment, using Gradio to build an automated fact-checking tool.
SBERT implemented with Jittor.