1,593 results for “topic:self-supervised-learning”
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials (迁移学习, "transfer learning").
Easy-to-use speech toolkit including self-supervised learning models, SOTA/streaming ASR with punctuation, streaming TTS with a text frontend, a speaker verification system, end-to-end speech translation, and keyword spotting. Won the NAACL 2022 Best Demo Award.
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
OpenMMLab Pre-training Toolbox and Benchmark
A Python library for self-supervised learning on images.
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
Self-Supervised Speech Pre-training and Representation Learning Toolkit
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
A library for graph deep learning research
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation" and [TPAMI'23] "ViTPose++: Vision Transformer for Generic Body Pose Estimation"
An all-in-one toolkit for computer vision
Papers about pretraining and self-supervised learning on graph neural networks (GNNs).
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
solo-learn: a library of self-supervised methods for visual representation learning, powered by PyTorch Lightning
A semantically controllable self-supervised learning framework that learns general human representations from massive unlabeled human images, benefiting a broad range of downstream human-centric tasks.
SCAN: Learning to Classify Images without Labels, incl. SimCLR. [ECCV 2020]
Code for TKDE paper "Self-supervised learning on graphs: Contrastive, generative, or predictive"
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
A PyTorch-based library for semi-supervised learning (NeurIPS'21)
All-in-one training for vision models (YOLO, ViTs, RT-DETR, DINOv3): pretraining, fine-tuning, distillation.
A comprehensive list of awesome contrastive self-supervised learning papers (see the minimal contrastive-loss sketch after this list).
Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™
Bio-computing platform featuring large-scale representation learning and multi-task deep learning: the "螺旋桨" ("Propeller") bio-computing toolkit.
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
Awesome Deep Graph Clustering is a collection of SOTA, novel deep graph clustering methods (papers, code, and datasets).
A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.).
DIPY is the paragon 3D/4D+ medical imaging library in Python. It contains generic methods for spatial normalization, signal processing, machine learning, statistical analysis, and visualization of medical images, along with specialized methods for computational anatomy including diffusion, perfusion, and structural imaging.
[MICCAI 2019 Young Scientist Award] [MEDIA 2020 Best Paper Award] Models Genesis
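Several of the entries above (e.g., SimCLRv2, lightly, solo-learn, and the contrastive self-supervised learning paper list) center on contrastive pretraining. The sketch below shows the NT-Xent (normalized temperature-scaled cross-entropy) loss that such methods typically optimize, written in plain PyTorch. All names, shapes, and the temperature value are illustrative assumptions, not the API of any repository listed above.

```python
# Minimal NT-Xent (SimCLR-style) contrastive loss sketch in plain PyTorch.
# Assumption: z1 and z2 are projection-head outputs for two augmented views
# of the same batch of images; function and variable names are hypothetical.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, D), unit norm
    sim = z @ z.t() / temperature                                  # pairwise cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                     # exclude self-similarity
    # For row i < N the positive is row i + N (the other view), and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: in practice z1/z2 come from encoder(augment(x)) plus a projection head.
z1 = torch.randn(8, 128, requires_grad=True)
z2 = torch.randn(8, 128, requires_grad=True)
nt_xent_loss(z1, z2).backward()
```

Production implementations usually add extras such as gathering negatives across devices and tuned temperature schedules; consult the libraries listed above for their actual APIs.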