51 results for “topic:human-action-recognition”
Real-Time Spatio-Temporally Localized Activity Detection by Tracking Body Keypoints
deep learning sex position classifier
Human action classification system with pose-based (MediaPipe) and video-based (3D CNN) models. Features 100+ architectures for real-time pose classification and temporal models pretrained on UCF-101/HMDB51.
Keras implementation of Human Action Recognition for the State Farm Distracted Driver Detection dataset (Kaggle)
Source code for "Learning Graph Convolutional Network for Skeleton-based Human Action Recognition by Neural Searching", AAAI2020
Surveillance Perspective Human Action Recognition Dataset: 7759 videos from 14 action classes, aggregated from multiple sources, all cropped spatio-temporally and filmed from a surveillance-camera-like position.
A Comprehensive Tutorial on Video Modeling
This repository contains the MPOSE2021 Dataset for short-time pose-based Human Action Recognition (HAR).
[AAAI-2024] HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors
This repository provides implementation of a baseline method and our proposed methods for efficient Skeleton-based Human Action Recognition.
Computer Vision Project : Action Recognition on UCF101 Dataset
[TPAMI 2020] "Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset" by Zhenyu Wu, Haotao Wang, Zhaowen Wang, Hailin Jin, and Zhangyang Wang
MSR Action Recognition Datasets and Codes
Real-time multi-stream inference with YOWOv3 (spatio-temporal action detection) on the UCF101-24 dataset. The repo is an extension of https://github.com/Hope1337/YOWOv3, https://arxiv.org/pdf/2408.02623
Activity Recognition using Temporal Optical Flow Convolutional Features and Multi-Layer LSTM
Deep learning model that predicts human action in a given video feed using pose estimation
Use machine learning for human activity recognition and counting based on a phone's six-axis IMU data, deployed on the phone via a cloud server (ECS) and a WeChat mini-program.
Human Activity Recognition Research Repository
Code for HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly Unlabeled Mobile Sensor Data, IEEE PerCom CoMoRea 2022
A novel method to measure the quality of actions performed in Olympic weightlifting using human action recognition in videos. Human action recognition is well studied in computer vision, whereas action quality assessment has received comparatively little attention, largely due to the lack of datasets suitable for assessing action quality. In this research, we introduce a method to assess lifter technique in weightlifting using skeleton-based human action recognition, together with a new weightlifting video dataset annotated at the frame level. The aim is a viable automated scoring system, built on action recognition, that would be beneficial to the sports industry.
Synthetically Generated Surveillance Perspective Human Action Recognition Dataset: 6901 videos from 10 action classes, generated in a 3D simulation, all cropped spatio-temporally and filmed from a surveillance-camera-like position.
Repository for the paper Accuracy Comparison of CNN, LSTM, and Transformer for Activity Recognition Using IMU and Visual Markers, containing all the datasets and Jupyter notebooks used for experiments
Implementation of some popular skeleton-based Human Action Recognition methods based on Deep Neural Networks.
Source code of experiments performed in paper: Human Action Recognition in Videos Based on Spatiotemporal Features and Bag-of-Poses
A human action dataset collected from Elder Scrolls V: Skyrim
Implementation of ST-GCN for continuous inference, plus a novel lightweight real-time RT-ST-GCN
An AI-powered Human Action Recognition system that classifies 15 common human activities using deep learning and computer vision. Built with TensorFlow, Keras, and OpenCV, the system supports real-time predictions from live camera feeds or uploaded images/videos through a user-friendly PyQt interface.
Human Activity Detection with TensorFlow and Python.
[ECCV 2024] Temporary code for "Ad-HGformer: An Adaptive HyperGraph Transformer for Skeletal Action Recognition"
A Human Action Recognition (HAR) model combining 3D CNN and LSTM networks to recognize actions in videos via spatial-temporal feature extraction. Trained on UCF-50, it outperforms existing architectures.
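Many of the entries above classify actions from pose keypoints (MediaPipe, skeleton-based GCNs). As a minimal illustration of the core idea, and not the method of any specific repository, a frame can be represented as a flat vector of keypoint coordinates and classified by nearest training centroid. All names and the toy data below are hypothetical:

```python
# Minimal sketch of pose-keypoint action classification (illustrative only):
# each frame is a flat vector of 2D keypoint coordinates, and we assign
# the action class whose training centroid is nearest in Euclidean distance.
import math
from collections import defaultdict

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def fit(samples):
    """samples: list of (keypoint_vector, action_label) pairs."""
    by_label = defaultdict(list)
    for vec, label in samples:
        by_label[label].append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, vec):
    """Return the label whose centroid is closest to vec."""
    def dist(label):
        return math.dist(vec, model[label])
    return min(model, key=dist)

# Toy example: 2 keypoints (4 coordinates) per "frame", 2 classes.
train = [
    ([0.1, 0.9, 0.1, 0.1], "standing"),
    ([0.2, 0.8, 0.2, 0.2], "standing"),
    ([0.5, 0.5, 0.5, 0.4], "sitting"),
    ([0.6, 0.4, 0.6, 0.5], "sitting"),
]
model = fit(train)
print(predict(model, [0.15, 0.85, 0.15, 0.15]))  # standing
```

Real systems in this list replace the centroid step with deep models (3D CNNs, LSTMs, ST-GCNs) and add temporal aggregation across frames, but the input representation is the same flattened-keypoint idea.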