# Autoencoder

This repository implements a convolutional autoencoder with SetNet, trained on the Stanford Cars Dataset.
## Dependencies
- Python 3.5
- PyTorch 0.4
## Dataset
We use the Cars Dataset, which contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, with each class divided roughly 50-50 between the two.
You can download it from the Cars Dataset page:

```bash
$ cd Autoencoder/data
$ wget http://imagenet.stanford.edu/internal/car196/cars_train.tgz
$ wget http://imagenet.stanford.edu/internal/car196/cars_test.tgz
$ wget --no-check-certificate https://ai.stanford.edu/~jkrause/cars/car_devkit.tgz
```
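The downloaded archives can then be unpacked. A minimal sketch in Python — the `extract_all` helper and the flat `data/` layout are assumptions for illustration, not part of the repository:

```python
# Sketch: unpack every downloaded .tgz archive into the data folder.
# `extract_all` is a hypothetical helper; the repository's own scripts
# may handle extraction differently.
import tarfile
from pathlib import Path

def extract_all(data_dir="data"):
    """Extract every .tgz archive found in data_dir into data_dir."""
    for archive in sorted(Path(data_dir).glob("*.tgz")):
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(path=data_dir)

if __name__ == "__main__":
    extract_all()
```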
## Usage
### Data Pre-processing
Extract the 8,144 training images and split them 80:20 (6,515 for training, 1,629 for validation):
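The 80:20 split can be sketched roughly as follows — `split_train_valid` is a hypothetical helper for illustration; the actual `pre_process.py` may differ:

```python
# Sketch of the 80:20 train/validation split described above.
# `split_train_valid` is a hypothetical helper, not the repository's code.
import random

def split_train_valid(filenames, valid_ratio=0.2, seed=42):
    """Shuffle the filenames and split them into (train, valid) lists."""
    names = list(filenames)
    random.Random(seed).shuffle(names)          # deterministic shuffle
    n_valid = round(len(names) * valid_ratio)   # 8,144 -> 1,629 validation
    return names[n_valid:], names[:n_valid]
```

Applied to the 8,144 training images, this yields 6,515 training and 1,629 validation files, matching the counts above.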
```bash
$ python pre_process.py
```

### Train
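Conceptually, training minimizes a reconstruction loss between each image and the network's output. A minimal PyTorch sketch — the `ConvAutoencoder` class and its layer sizes are illustrative only, not the repository's actual SetNet model:

```python
# Minimal convolutional autoencoder sketch (illustrative, not the repo's
# SetNet): the encoder downsamples, the decoder upsamples back, and the
# training target is the input image itself.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # H -> H/2
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # H/2 -> H/4
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2),    # H/4 -> H/2
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2),     # H/2 -> H
            nn.Sigmoid(),                               # pixels in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, batch, optimizer, criterion):
    """One reconstruction step: the target is the input itself."""
    optimizer.zero_grad()
    loss = criterion(model(batch), batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```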
```bash
$ python train.py
```

### Demo
Download the pre-trained model weights into the "models" folder, then run:

```bash
$ python demo.py
```

Then check the results in the "images" folder; you should see something like:
*(Pairs of input images and their reconstructed outputs; images not preserved here.)*
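Under the hood, the demo step amounts to loading the saved weights and running images through the model in eval mode. A rough sketch — the checkpoint path and the `reconstruct` helper are assumptions, not the repository's actual code:

```python
# Sketch of the demo step: load pre-trained weights and reconstruct an
# image. The checkpoint name "models/autoencoder.pt" is an assumption.
import torch

def reconstruct(model, image_tensor, checkpoint_path="models/autoencoder.pt"):
    """Load weights into `model` and reconstruct one (C, H, W) image tensor."""
    state = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state)
    model.eval()
    with torch.no_grad():
        # Add a batch dimension for the forward pass, then drop it again.
        return model(image_tensor.unsqueeze(0)).squeeze(0)
```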
Latest release: v1.0 (October 10, 2018), under the Apache License 2.0.
Created October 8, 2018; last updated February 12, 2026.