Non-blind deconvolution methods benchmark
A benchmark for non-blind deconvolution methods: classical algorithms vs SOTA neural models.
Installation
- Install the requirements (Python >= 3.9): `make install`
- Download the prepared data:

  TODO

  or download the raw data:

  TODO

  and unpack it: `make prepare_raw_data`
Validation
Simply run `make test`.
Sources of data
Kernels:
- Motion blur:
  1.1. Levin et al. Understanding and Evaluating Blind Deconvolution Algorithms. Paper, data source. Total: 8 kernels.
  1.2. Sun et al. Edge-based Blur Kernel Estimation Using Patch Priors. Paper & data source. Total: 8 kernels (`img_1_kernel{i}_OurNat5x5_kernel.png`).
  1.3. Generated with the simulator taken from RGDN. Source code: `src/data/generate/motion_blur.py`.
- Eye PSF:
  2.1. 90 kernels (30 big, 30 medium, 30 small) taken from the SCA-2023 dataset.
- Gauss:
  3.1. Generated with this script.
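A normalized Gaussian PSF of this kind can be generated along the following lines (a minimal sketch: the function name and its `size`/`sigma` parameters are illustrative, not taken from the actual script):

```python
import numpy as np

def gauss_kernel(size: int = 15, sigma: float = 2.0) -> np.ndarray:
    """Generate a 2D Gaussian PSF of shape (size, size), normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0          # coordinates centered at 0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()                      # energy-preserving PSF

psf = gauss_kernel(15, 2.0)
```

Normalizing the kernel to unit sum keeps the blurred image at the same mean brightness as the sharp one.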
Ground truth images
- SCA-2023 dataset (539 images in 6 categories: animals, city, faces, texts, icons, nature).
Discretization
There are only two types of images in these datasets: PNG with floating-point values and JPEG with uint8 dtype. Both are stored in sRGB.
To model the blurring process properly, the convolution with the PSF must be done in linear space, so the first step is to convert sRGB values to linear floating-point values. The full pipeline is described here.
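The forward model can be sketched as follows (a minimal illustration assuming the standard sRGB transfer function and circular boundary conditions; the repository's actual pipeline may differ):

```python
import numpy as np

def srgb_to_linear(img: np.ndarray) -> np.ndarray:
    """Invert the sRGB transfer function; img is float in [0, 1]."""
    return np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(img: np.ndarray) -> np.ndarray:
    """Apply the sRGB transfer function to linear values in [0, 1]."""
    return np.where(img <= 0.0031308, img * 12.92, 1.055 * img ** (1 / 2.4) - 0.055)

def blur(img_srgb: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Convolve a 2D image with the PSF in linear space, then return to sRGB."""
    linear = srgb_to_linear(img_srgb)
    # Circular convolution via FFT: zero-pad the PSF to the image size
    # and roll its center to the origin.
    k = np.zeros_like(linear)
    kh, kw = psf.shape
    k[:kh, :kw] = psf
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(linear) * np.fft.fft2(k)))
    return linear_to_srgb(np.clip(blurred, 0.0, 1.0))
```

Skipping the sRGB-to-linear step would apply the PSF to gamma-compressed values, which distorts the relative weight of bright and dark pixels in the blur.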
Models and algorithms
- Wiener filter (as a baseline): source code in `src/deconv/classic/wiener.py`;
- USRNet: source code in `src/deconv/neural/usrnet`;
- DWDN: source code in `src/deconv/neural/dwdn`;
- KerUnc: source code in `src/deconv/neural/kerunc`;
- RGDN: source code in `src/deconv/neural/rgdn`.
An inference example for each model can be found here.
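For reference, the Wiener baseline reduces to a one-line frequency-domain filter (a minimal sketch assuming circular boundary conditions and a known noise-to-signal ratio `nsr`; this is not the repository's implementation, which lives in `src/deconv/classic/wiener.py`):

```python
import numpy as np

def wiener_deconv(blurred: np.ndarray, psf: np.ndarray, nsr: float = 1e-3) -> np.ndarray:
    """Wiener deconvolution: X = conj(H) * Y / (|H|^2 + nsr)."""
    # Zero-pad the PSF to the image size and roll its center to the origin
    # so that its FFT matches the circular-convolution forward model.
    k = np.zeros_like(blurred)
    kh, kw = psf.shape
    k[:kh, :kw] = psf
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(k)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + nsr)   # regularized inverse filter
    return np.real(np.fft.ifft2(X))
```

The `nsr` term keeps the division stable at frequencies where the PSF response `H` is close to zero; with `nsr = 0` this degenerates to the plain (noise-amplifying) inverse filter.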
Testing robustness to kernel errors
Testing was done with the algorithm from the paper
Deep Learning for Handling Kernel/model Uncertainty in Image Deconvolution.
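One simple way to produce erroneous kernels for such a robustness test is to corrupt the ground-truth PSF with additive noise and renormalize (a hypothetical illustration, not the paper's exact perturbation procedure; the function name and `noise_level` parameter are made up):

```python
import numpy as np

def perturb_kernel(psf: np.ndarray, noise_level: float = 0.05, rng=None) -> np.ndarray:
    """Corrupt a PSF with additive Gaussian noise, clip negatives, renormalize."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = psf + noise_level * psf.max() * rng.standard_normal(psf.shape)
    noisy = np.clip(noisy, 0.0, None)     # a PSF must stay non-negative
    return noisy / noisy.sum()            # ...and sum to one
```

Running each model on images blurred with the true kernel but deconvolved with a perturbed one shows how gracefully it degrades under kernel misestimation.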
Tips
SQL
- If you work in VS Code, you can use this extension for SQLite to make your work easier.
- To calculate statistics (e.g. std and median), this extension is used here. Just download the precompiled binaries suitable for your OS and unpack them into a folder (`sqlean` in my case). That's it!
- SQL queries for analyzing the benchmarking results can be found here.
Running the code
- If the old torch version (we use 1.7.1, since we took the source code for the neural models as is) is not compatible with your CUDA version, you can run this code in a Docker container. Instructions are below.
How to run the Docker container
- Build the image: `make build`
- Run the container: `make run`
- Enter the container: `make exec`
- Inside the container, run: `make test`
Benchmarking results
TBA
