23 results for “topic:gesture-generation”
[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
Awesome Gesture Generation
DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)
[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation
The official implementation of the ICMI 2020 Best Paper Award winner "Gesticulator: A framework for semantically-aware speech-driven gesture generation"
This is the official implementation for IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation".
[NeurIPS 2024] The official code of MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models
PATS Dataset. Aligned Pose-Audio-Transcripts and Style for co-speech gesture research
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
Deep Non-Adversarial Gesture Generation
Code for CVPR 2024 paper: ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
Official Repository for the paper Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach published in ECCV 2020 (https://arxiv.org/abs/2007.12553)
This is an official PyTorch implementation of "Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation" (IROS 2022).
This repository contains the gesture generation model from the paper "Moving Fast and Slow" (https://www.tandfonline.com/doi/full/10.1080/10447318.2021.1883883) trained on the English dataset
No description provided.
Scripts for numerical evaluations for the GENEA Gesture Generation Challenge
This fork adapts Gesticulator, the semantically-aware speech-driven gesture generation model, for integration with conversational agents in Unity.
DeepGesture Unity
Thesis Project: A multimodal transformer-based generative model that creates listener avatars conditioned on personality traits, producing realistic non-verbal responses (facial expressions, body and hand gestures) during dyadic conversations. Built with PyTorch and trained on the UDIVA dataset, achieving state-of-the-art FID/P-FID performance.
Parcel project to visualize gestures using three.js
GestureScore
https://genea.pages.dev/