FastGS: Training 3D Gaussian Splatting in 100 Seconds
🌟 What Makes FastGS Special?
FastGS is a general acceleration framework that dramatically speeds up 3D Gaussian Splatting training while maintaining comparable rendering quality. Our method stands out with:
- ⚡ Blazing Fast Training: Achieves SOTA results within 100 seconds; 3.32× faster than DashGaussian on the Mip-NeRF 360 dataset and 15.45× faster than vanilla 3DGS on Deep Blending
- ⚡ High Fidelity: Rendering quality comparable to SOTA methods
- 🎯 Easy Integration: Seamlessly integrates with various backbones (vanilla 3DGS, Scaffold-GS, Mip-splatting, etc.)
- 🛠️ Multi-Task Ready: Proven effective across dynamic scenes, surface reconstruction, sparse-view, large-scale, and SLAM tasks
- 💡 Memory-Efficient: Low GPU memory requirements make it accessible on a wide range of hardware
- 🔧 Easy Deployment: Simple post-training tool for feed-forward 3DGS that works out of the box
📢 Latest Updates
🔥 [2025.11.16] Code Released - Get Started Now! 🚀
🎯 Coming Soon
- [2025.12.31] 🎯 Multi-Task Expansion:
  - Dynamic scene reconstruction: Deformable-3D-Gaussians
  - Autonomous driving scenes: street_gaussians
  - Surface reconstruction: PGSR
  - Sparse-view reconstruction: DropGaussian
  - Large-scale reconstruction: OctreeGS
  - SLAM: Photo-SLAM
- [2025.12.31] 🚀 Backbone Enhancing: popular 3DGS variants (vanilla 3DGS, Scaffold-GS, Mip-splatting, Taming-3DGS)
🏗️ Training Framework
Our training pipeline leverages PyTorch and optimized CUDA extensions to efficiently produce high-quality trained models in record time.
💻 Hardware Requirements
- GPU: CUDA-ready GPU with Compute Capability 7.0+
- Memory: 24 GB VRAM (for paper-quality results; we recommend an NVIDIA RTX 4090)
📦 Software Requirements
- Conda (recommended for streamlined setup)
- C++ compiler compatible with PyTorch extensions
- CUDA SDK 11 (or compatible version)
⚠️ Important: Ensure your C++ compiler and CUDA SDK versions are compatible
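You can check the installed toolchain versions before building the extensions (standard commands, shown for Linux; Windows users should check their MSVC version instead):
# Print CUDA toolkit and host compiler versions
nvcc --version
g++ --version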
🚀 Quick Start
📥 Clone the Repository
git clone https://github.com/fastgs/FastGS.git --recursive
cd FastGS
⚙️ Environment Setup
We provide a streamlined setup using Conda:
# Windows only
SET DISTUTILS_USE_SDK=1
# Create and activate environment
conda env create --file environment.yml
conda activate fastgs
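Before preparing data, you can optionally confirm that PyTorch detects your GPU (a standard PyTorch check, not FastGS-specific):
# Should print True and the CUDA version PyTorch was built against
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"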
📁 Dataset Organization
Organize your datasets in the following structure:
datasets/
├── mipnerf360/
│   ├── bicycle/
│   ├── flowers/
│   └── ...
├── db/
│   ├── playroom/
│   └── ...
└── tanksandtemples/
    ├── truck/
    └── ...
The MipNeRF360 scenes are hosted by the paper authors here. You can find our SfM data sets for Tanks&Temples and Deep Blending here.
🎯 Training & Evaluation
⚡ FastGS (Standard)
Train the base model with an optimal balance of speed and quality:
bash train_base.sh
🎨 FastGS-Big (High Quality)
For enhanced quality with slightly longer training time:
bash train_big.sh
📋 Advanced: Command Line Arguments for train.py
--loss_thresh
--grad_abs_thresh
--highfeature_lr
--lowfeature_lr
--grad_thresh
--dense
--mult
Multiplier for the compact box to control the tile number of each splat
--source_path / -s
Path to the source directory containing a COLMAP or Synthetic NeRF data set.
--model_path / -m
Path where the trained model should be stored (output/<random> by default).
--images / -i
Alternative subdirectory for COLMAP images (images by default).
--eval
Add this flag to use a MipNeRF360-style training/test split for evaluation.
--resolution / -r
Specifies the resolution of the loaded images before training. If 1, 2, 4 or 8 is provided, uses original, 1/2, 1/4 or 1/8 resolution, respectively. For all other values, rescales the width to the given number while maintaining the image aspect ratio. If not set and the input image width exceeds 1.6K pixels, inputs are automatically rescaled to this target.
--data_device
Specifies where to put the source image data, cuda by default. We recommend cpu when training on a large or high-resolution dataset; this reduces VRAM consumption but slightly slows down training. Thanks to HrsPythonix.
--white_background / -w
Add this flag to use white background instead of black (default), e.g., for evaluation of NeRF Synthetic dataset.
--sh_degree
Order of spherical harmonics to be used (no larger than 3). 3 by default.
--convert_SHs_python
Flag to make pipeline compute forward and backward of SHs with PyTorch instead of ours.
--convert_cov3D_python
Flag to make pipeline compute forward and backward of the 3D covariance with PyTorch instead of ours.
--debug
Enables debug mode if you experience errors. If the rasterizer fails, a dump file is created that you may forward to us in an issue so we can take a look.
--debug_from
Debugging is slow. You may specify an iteration (starting from 0) after which the above debugging becomes active.
--iterations
Number of total iterations to train for, 30_000 by default.
--ip
IP to start GUI server on, 127.0.0.1 by default.
--port
Port to use for GUI server, 6009 by default.
--test_iterations
Space-separated iterations at which the training script computes L1 and PSNR over the test set, 7000 30000 by default.
--save_iterations
Space-separated iterations at which the training script saves the Gaussian model, 7000 30000 <iterations> by default.
--checkpoint_iterations
Space-separated iterations at which to store a checkpoint for continuing later, saved in the model directory.
--start_checkpoint
Path to a saved checkpoint to continue training from.
--quiet
Flag to omit any text written to the standard output pipe.
--feature_lr
Spherical harmonics features learning rate, 0.0025 by default.
--opacity_lr
Opacity learning rate, 0.05 by default.
--scaling_lr
Scaling learning rate, 0.005 by default.
--rotation_lr
Rotation learning rate, 0.001 by default.
--position_lr_max_steps
Number of steps (from 0) where position learning rate goes from initial to final. 30_000 by default.
--position_lr_init
Initial 3D position learning rate, 0.00016 by default.
--position_lr_final
Final 3D position learning rate, 0.0000016 by default.
--position_lr_delay_mult
Position learning rate multiplier (cf. Plenoxels), 0.01 by default.
--densify_from_iter
Iteration where densification starts, 500 by default.
--densify_until_iter
Iteration where densification stops, 15_000 by default.
--densify_grad_threshold
Limit that decides if points should be densified based on 2D position gradient, 0.0002 by default.
--densification_interval
How frequently to densify, 100 (every 100 iterations) by default.
--opacity_reset_interval
How frequently to reset opacity, 3_000 by default.
--lambda_dssim
Influence of SSIM on total loss from 0 to 1, 0.2 by default.
--percent_dense
Percentage of scene extent (0--1) a point must exceed to be forcibly densified, 0.01 by default.
Note that, similar to MipNeRF360 and vanilla 3DGS, we target images at resolutions in the 1–1.6K pixel range. For convenience, arbitrary-size inputs can be passed and will be automatically resized if their width exceeds 1600 pixels. We recommend keeping this behavior, but you may force training to use your higher-resolution images by setting -r 1.
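For reference, a direct train.py call combining the flags above might look like the following; the scene path and flag values are illustrative placeholders, not recommended settings:
# Hypothetical run: Mip-NeRF 360 scene with an evaluation split,
# 1/4-resolution images, and source images kept in CPU memory to save VRAM
python train.py -s datasets/mipnerf360/bicycle -m output/bicycle --eval -r 4 --data_device cpu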
🎬 Interactive Viewers
Our 3DGS representation is identical to that of vanilla 3DGS, so you can use the official SIBR viewer for interactive visualization.
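For example, assuming you have built the SIBR viewers as described in the vanilla 3DGS documentation, a trained model can be opened like so (paths are placeholders):
# Launch the real-time viewer on a trained model
./<SIBR install dir>/bin/SIBR_gaussianViewer_app -m <path to trained model>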
🎯 Quick Facts
| Feature | FastGS | Previous Methods |
|---|---|---|
| Training Time | 100 seconds | 5–30 minutes |
| Gaussian Efficiency | ✅ Strict Control | ❌ Redundant Growth |
| Memory Usage | ✅ Low Footprint | ❌ High Demand |
| Task Versatility | ✅ 6 Domains | ❌ Limited Scope |
🙏 Acknowledgements
This project is built upon 3DGS, Taming-3DGS, and Speedy-Splat. We extend our gratitude to all the authors for their outstanding contributions and excellent repositories!
License: Please adhere to the licenses of 3DGS, Taming-3DGS, and Speedy-Splat.
Special thanks to the authors of DashGaussian for their generous support!
Citation
If you find this repo useful, please cite:
@article{ren2025fastgs,
title={FastGS: Training 3D Gaussian Splatting in 100 Seconds},
author={Ren, Shiwei and Wen, Tianci and Fang, Yongchun and Lu, Biao},
journal={arXiv preprint arXiv:2511.04283},
year={2025}
}
⭐ Star this repo to get notified about new releases!
Note: This is a preview README. Full documentation and code examples will be available upon release.
