219 results for “topic:affective-computing”
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
:computer: :robot: A summary of our attempts at using Deep Learning approaches for Emotional Text-to-Speech :speaker:
Official implementation of the paper "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", Antoine Toisoul, Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos and Maja Pantic, Nature Machine Intelligence, 2021
Learning to ground explanations of affect for visual art.
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
A curated list of awesome affective computing 🤖❤️ papers, software, open-source projects, and resources
This is my reading list for my PhD in AI, NLP, Deep Learning and more.
A machine learning application for emotion recognition from speech
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
😎 Awesome lists about Speech Emotion Recognition
🚀 Pre-process, annotate, evaluate, and train your Affective Computing (e.g., Multimodal Emotion Recognition, Sentiment Analysis) datasets ALL within MER-Factory! (LangGraph-based Agent Workflow)
Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
Toolbox for Emotion Analysis using Physiological signals
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
personal repository
Self-supervised ECG Representation Learning - ICASSP 2020 and IEEE T-AFFC
IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20
FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
ABAW6 (CVPR-W): We achieved second place in the valence-arousal challenge of ABAW6
EmoInt provides a high-level wrapper for combining various word embeddings and creating ensembles from multiple trained models
Using deep recurrent networks to recognize horses' pain expressions in video.
IEEE Transactions on Affective Computing, 2022
VAD analysis of text using some affective lexicon (ANEW, SENTIWORDNET, and VADER)
Diploma thesis analyzing emotion recognition in conversations using physiological signals (ECG, HRV, GSR, TEMP) and an attention-based LSTM network
Supplementary codes for the K-EmoCon dataset
PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation"