81 results for “topic:distilbert-model”
Sentiment analysis with the distilbert-base-uncased model on a movies dataset.
This paper describes Humor Analysis using Ensembles of Simple Transformers, the winning submission at the Humor Analysis based on Human Annotation (HAHA) task at IberLEF 2021.
The official repository for the PSYCHIC model
Deep learning for Natural Language Processing (FNNs, RNNs, BERT)
This project classifies Internet Hinglish memes using multimodal learning. It combines text and image analysis to categorize memes by sentiment and emotion, leveraging the Memotion 3.0 dataset.
This repository contains my work on the prevention and anonymization of dox content on Twitter. It contains Python code and a demo of the proposed solution.
A Deep Learning Based Voice Analytics toolkit
Fine-tuning pre-trained transformer models in TensorFlow and PyTorch for question answering.
Using BERT models to perform sentiment analysis on women's clothing reviews.
This app searches Reddit posts and comments to determine whether a product or service has positive or negative sentiment, and predicts top product mentions using Named Entity Recognition.
Analyzes emotions in text chunks per chapter using a sentiment analysis model, visualizing chunk scores as line graphs and showing dominant emotions per chapter as pie charts, to highlight emotional variation across a text. Developed using the Transformers library.
Developed a fine-tuned DistilBERT transformer model that predicts the overall sentiment of a piece of financial news with roughly 81.5% accuracy.
Public validation of Collapse Index (CI) on SST-2 dataset: 42.8% flip rate, AUC 0.698. Reveals model brittleness beyond 90%+ accuracy under perturbations!
This project analyzes and compares the Wikipedia articles of Xi Jinping and Vladimir Putin over 20 years, uncovering differences in portrayal, sentiment, and biases to measure public perception of each leader.
Sentiment analysis using Transformers (DistilBERT) from Hugging Face.
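For reference, DistilBERT sentiment analysis with Hugging Face usually takes only a few lines. A minimal sketch, assuming the library's standard SST-2 fine-tuned checkpoint (the sample sentence is illustrative):

```python
from transformers import pipeline

# Load a sentiment-analysis pipeline backed by a DistilBERT checkpoint
# fine-tuned on SST-2.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# The pipeline returns one dict per input, with a label and a confidence score.
result = classifier("This movie was a delight from start to finish.")[0]
print(result["label"], round(result["score"], 3))
```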
Fine-tuning a BERT-based language model to predict whether a tweet is toxic.
Performing a named entity extraction task using Hugging Face Transformers.
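Descriptions like this one typically boil down to the Transformers token-classification pipeline. A minimal sketch (the pipeline's default NER checkpoint and the sample sentence are assumptions, not taken from the repository):

```python
from transformers import pipeline

# Token classification (NER) via the Transformers pipeline.
# No model is specified, so the library's default NER checkpoint is used;
# any token-classification checkpoint could be substituted.
ner = pipeline("ner", aggregation_strategy="simple")

entities = ner("Hugging Face is based in New York City.")
for ent in entities:
    # Each aggregated entity carries a group label, the matched text span,
    # and a confidence score.
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))
```

`aggregation_strategy="simple"` merges word pieces back into whole entity spans, which is usually what a downstream application wants.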
This project streamlines the recruitment process with two key functionalities: a job-and-resume matching system and an LLM-powered chatbot for applicants.
Custom Italian-language chatbot that chats about weather, news, finance, manages TODOs, and even tells jokes.
A reinforcement learning-based system designed to detect and prevent jailbreak attempts in AI models, ensuring safe and controlled model behavior under adversarial conditions.
Manning Live Project: Sentiment Analysis and Natural Language Processing for Marketing
A classification project that detects hate speech in social media comments using fine-tuned transformer models, zero-shot learning, and baseline classifiers on the UC Berkeley D-Lab dataset.
Fine-tuned a pretrained DistilBERT transformer model that classifies social media text into one of four cyberbullying labels (ethnicity/race, gender/sexual, religion, not cyberbullying) with 99% accuracy.
We explored recent studies on question answering systems, then tried out three different QA models (BERT and DistilBERT variants) for the sake of learning.
Fine-tune the DistilBERT transformer model with the PyTorch framework, then run inference on a dataset using the fine-tuned model via the Transformers Pipeline.
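The fine-tune-then-infer workflow this description names can be sketched end to end. This is a toy illustration, not the repository's actual training script: the two-example dataset, single optimizer step, and hyperparameters are all assumptions.

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TextClassificationPipeline,
)

# Load DistilBERT with a fresh 2-class classification head.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Toy training data (assumed for illustration): label 1 = positive, 0 = negative.
texts = ["great movie", "terrible movie"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

# One plain-PyTorch fine-tuning step: forward pass with labels yields the loss.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

# Wrap the fine-tuned model in a pipeline for inference.
pipe = TextClassificationPipeline(model=model.eval(), tokenizer=tokenizer)
pred = pipe("great movie")[0]
print(pred["label"], round(pred["score"], 3))
```

In a real run the single step above would be replaced by a full training loop (or the `Trainer` API) over the actual dataset.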
Developing a feedback-theory-informed natural language processing (NLP) model to enable large-scale evaluation of written feedback, and analysing a large set of feedback extracted from Moodle with this model to understand the presence of student-centred feedback elements and the commonalities and differences in feedback provision across disciplines.
Fine-tune BERT on a question answering dataset, then further fine-tune it on finance data to answer questions posed by senior leadership.