515 results for “topic:perception”
Cross-platform, customizable ML solutions for live and streaming media.
GTSAM is a library of C++ classes that implement smoothing and mapping (SAM) in robotics and vision, using factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices.
Python sample code and documentation on autonomous vehicle control algorithms. This project can serve as a technical guide for beginners studying the algorithms and software architectures.
Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM
[CVPR'23] Universal Instance Perception as Object Discovery and Retrieval
Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments
⚡️The spatial perception framework for rapidly building smart robots and spaces
Perception toolkit for sim2real training and validation in Unity
OpenEMMA, a permissively licensed open source "reproduction" of Waymo’s EMMA model.
An open-source computer vision framework to build and deploy apps in minutes
LSD (LiDAR SLAM & Detection) is an open-source perception architecture for autonomous vehicles and robots
Notebook-based book "Introduction to Robotics and Perception" by Frank Dellaert and Seth Hutchinson
An OpenAI gym wrapper for CARLA simulator
Platform for Situated Intelligence
Fast, efficient and accurate multi-resolution, multi-sensor 3D occupancy mapping
Modular autonomous driving platform running on the CARLA simulator and real-world vehicles.
A curated list of awesome papers on Embodied AI and related research/industry-driven resources.
A LiDAR SLAM system that just works
Simulations for TurtleBot3
(CVPR 2022) A minimalist, mapless, end-to-end self-driving stack for joint perception, prediction, planning and control.
LV-DOT: LiDAR-Visual Dynamic Obstacle Detection and Tracking (C++/Python/ROS)
[ECCV2024 Oral🔥] Official Implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface"
[ICRA 2022] CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation
A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Platform for General Robot Intelligence Development
Unity's privacy-preserving human-centric synthetic data generator
Object detection / tracking / fusion based on the Apollo r3.0.0 perception module, in ROS
PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds (CVPR 2023)
Artificial Intelligence for Kinematics, Dynamics, and Optimization
Perception-Aware Trajectory Planner in Dynamic Environments
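The GTSAM entry above describes smoothing and mapping (SAM) via factor graphs rather than raw sparse matrices. The core idea can be sketched on a toy 1D pose chain: each factor (a prior or an odometry measurement) contributes a residual, and with Gaussian noise the maximum-a-posteriori smoothing problem reduces to linear least squares. The sketch below uses only pure Python; it is not GTSAM's actual API, and all names are illustrative.

```python
# Toy factor-graph smoothing on a 1D pose chain (illustrative, not GTSAM's API).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def smooth(n_poses, factors):
    """Least-squares smoothing over 1D poses.

    Each factor is (i, j, z): a unary prior x_i = z when j is None,
    otherwise an odometry constraint x_j - x_i = z. Builds the normal
    equations (J^T J) x = J^T z with unit noise weights and solves.
    """
    A = [[0.0] * n_poses for _ in range(n_poses)]
    b = [0.0] * n_poses
    for i, j, z in factors:
        if j is None:                 # prior factor, residual x_i - z
            A[i][i] += 1.0
            b[i] += z
        else:                         # odometry factor, residual (x_j - x_i) - z
            A[i][i] += 1.0
            A[j][j] += 1.0
            A[i][j] -= 1.0
            A[j][i] -= 1.0
            b[i] -= z
            b[j] += z
    return solve(A, b)

# Prior x0 = 0, two odometry steps of 1.0, plus a conflicting
# measurement x2 = 2.5; smoothing spreads the disagreement:
poses = smooth(3, [(0, None, 0.0), (0, 1, 1.0), (1, 2, 1.0), (2, None, 2.5)])
# poses == [0.125, 1.25, 2.375]
```

Because every factor touches only one or two variables, the normal-equation matrix stays sparse; exploiting that sparsity (via variable elimination on the graph) is what makes factor-graph smoothers like GTSAM efficient at scale.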