38 results for “topic:lidar-camera-fusion”
FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
A robust, real-time, RGB-colored, LiDAR-Inertial-Visual tightly coupled state estimation and mapping package
A Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry (LIVO).
A Collection of LiDAR-Camera-Calibration Papers, Toolboxes and Notes
Xtreme1 is an all-in-one data labeling and annotation platform for multimodal training data; it supports 3D LiDAR point clouds, images, and LLMs.
LIV-Eye: A Low-Cost LiDAR-Inertial-Visual Fusion 3D Sensor for Robotics and Embodied AI.
A ROS package that projects a point cloud obtained from a Velodyne VLP-16 3D LiDAR onto an image from an RGB camera.
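Projecting a LiDAR scan onto a camera image, as several of these packages do, reduces to applying the extrinsic transform and then the camera intrinsics. A minimal sketch, assuming a plain pinhole model with no lens distortion (the function name, matrix values, and test points are illustrative, not taken from any listed repository):

```python
import numpy as np

def project_points(points, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points       -- (N, 3) array in the LiDAR frame.
    T_cam_lidar  -- (4, 4) homogeneous extrinsic transform (LiDAR -> camera).
    K            -- (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for the points in front of the camera.
    """
    # Homogenize and transform into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points with positive depth (in front of the camera).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T[:, :2]
    return uv

# Illustrative intrinsics and identity extrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 2.0],   # on the optical axis
                [1.0, 0.0, 2.0]])  # 1 m to the right at 2 m depth
uv = project_points(pts, T, K)
# A point on the optical axis lands at the principal point (320, 240).
```

Real pipelines additionally undistort the image (or apply the distortion model during projection) and cull points outside the image bounds before sampling pixel colors.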
[CVPR2023] LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion
[CVPR 2023] MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection
ROS package to calibrate the extrinsic parameters between LiDAR and Camera.
Automatic LiDAR-camera calibration based on maximizing the mutual information between LiDAR intensity and image intensity
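The mutual-information approach scores a candidate extrinsic by how statistically dependent the LiDAR reflectance values are on the grayscale values they project onto; the calibration then searches for the extrinsic that maximizes this score. A minimal sketch of the scoring step, using a joint-histogram MI estimate (the function and the synthetic data below are illustrative, not from the listed repository):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information between two aligned 1-D samples
    (e.g. LiDAR intensities and the grayscale pixels they project onto)
    from a joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Correctly aligned samples share structure; a misaligned (shuffled)
# pairing destroys it, so its MI score drops.
rng = np.random.default_rng(0)
intensity = np.linspace(0.0, 1.0, 2000)
mi_aligned = mutual_information(intensity, intensity)
mi_shuffled = mutual_information(intensity, rng.permutation(intensity))
```

In the full method this score is evaluated over perturbations of the 6-DoF extrinsic and optimized, e.g. with a gradient-free search.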
[T-RO 2022] Official Implementation for "LiCaS3: A Simple LiDAR–Camera Self-Supervised Synchronization Method," in IEEE Transactions on Robotics, doi: 10.1109/TRO.2022.3167455.
[IV2024] MultiCorrupt: A benchmark for robust multi-modal 3D object detection that evaluates LiDAR-camera fusion models in autonomous driving. Includes diverse corruption types (e.g., misalignment, miscalibration, weather) and severity levels to assess model performance under challenging conditions.
A ROS node that subscribes to camera (Hikvision) and LiDAR (Livox) data, fuses them, and publishes the colored point cloud for display in RViz.
ADAS car with Collision Avoidance System (CAS) on Indian roads using low-level LiDAR-camera sensor fusion. A DIY gadget built with a Raspberry Pi, RP LIDAR A1, Pi Cam V2, LED SHIM, NCS 2, and accessories such as a speaker and power bank.
Manual, target-less LiDAR-camera calibration software
[Information Fusion 2025] CoreNet: Conflict Resolution Network for point-pixel misalignment and sub-task suppression of 3D LiDAR-camera object detection
BIM-based AI-supported LiDAR-Camera Pose Refinement
Project: Generating an overhead bird's-eye-view occupancy grid map with semantic information from LiDAR and camera data.
Extrinsic Calibration of Monocular Camera and Lidar using Planar Point To Plane Constraint
This package introduces the concept of optimizing target shape to remove pose ambiguity for LiDAR point clouds. Both simulation and experimental results confirm that, using the optimal shape and the global solver, we achieve centimeter-level translation error and rotation error of a few degrees, even when a partially illuminated target is placed 30 meters away.
[JAG 2026] The official implementation of the paper "Dense 3D displacement estimation for landslide monitoring via fusion of TLS point clouds and embedded RGB images".
A simple implementation of V-LOAM
Awesome Multi-Modal 3D Detection
A research-purposed, GUI-powered, Python-based framework that allows easy development of dynamic point-cloud (and accompanying image) data processing pipelines.
This repository contains tools for deriving supraglacial lake bathymetry from ATM aerial imagery using the Ames Stereo Pipeline.
ROS2 package that integrates L3CAM sensors using L3CAM SDK
Projects a LiDAR point cloud onto equirectangular images and colorizes the point cloud
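Unlike the pinhole case, projecting onto an equirectangular (360°) image maps each 3D point's azimuth and elevation linearly to pixel coordinates. A minimal sketch, assuming z-up with azimuth measured from the +x axis (function name and image size are illustrative):

```python
import numpy as np

def to_equirect(points, width, height):
    """Map Nx3 points (sensor frame, z-up) to equirectangular pixels.

    Azimuth spans [-pi, pi] across the image width;
    elevation spans [-pi/2, pi/2] across the image height.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)        # [-pi, pi], 0 along +x
    elevation = np.arcsin(z / r)      # [-pi/2, pi/2], 0 on the horizon
    u = (azimuth + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - elevation) / np.pi * height  # top row = zenith
    return np.stack([u, v], axis=1)

# A point straight ahead along +x lands at the image center row/column.
pts = np.array([[1.0, 0.0, 0.0]])
uv = to_equirect(pts, 2048, 1024)
```

Colorizing is then a lookup of the image pixel at each (u, v), with the usual care that the panorama's azimuth origin matches the LiDAR's after applying the extrinsics.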
A simple visualization toolbox for 3D vision sensors and tasks