# Rust Lighter
This project started as a Rust exercise of mine: an abstraction over Candle (https://github.com/huggingface/candle), the minimalist Rust ML framework, that offers a more convenient way of programming neural network machine learning models.
The behaviour is inspired by the Python library Keras (https://keras.io), and the initial step was based on the Rust-Keras-Like code (https://github.com/AhmedBoin/Rust-Keras-Like).
So let's call the project Candle Lighter 🕯️, because it helps to light the candle and makes implementation even easier.
Examples can be found in the lib/examples/ directory.
To use it as a library, just run `cargo add candlelighter`.
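The crate's actual API may differ; as a rough illustration of the Keras-like "stack of layers" idea described above, here is a self-contained sketch in plain Rust (the `Layer`, `Double`, `Bias`, and `Sequential` names are made up for this example and are not candlelighter's API):

```rust
// Hedged sketch of the Keras-style Sequential pattern: layers are
// stacked and applied in the order they were added. All names here
// are illustrative, not candlelighter's real API.

/// A layer maps an input vector to an output vector.
trait Layer {
    fn forward(&self, input: Vec<f32>) -> Vec<f32>;
}

/// Doubles every element; stands in for a real Dense/activation layer.
struct Double;
impl Layer for Double {
    fn forward(&self, input: Vec<f32>) -> Vec<f32> {
        input.into_iter().map(|x| x * 2.0).collect()
    }
}

/// Adds a constant bias to every element.
struct Bias(f32);
impl Layer for Bias {
    fn forward(&self, input: Vec<f32>) -> Vec<f32> {
        let b = self.0;
        input.into_iter().map(|x| x + b).collect()
    }
}

/// Keras-like Sequential container: chains layers front to back.
struct Sequential {
    layers: Vec<Box<dyn Layer>>,
}

impl Sequential {
    fn new() -> Self {
        Sequential { layers: Vec::new() }
    }

    /// Builder-style `add`, mirroring Keras's `model.add(...)`.
    fn add(mut self, layer: Box<dyn Layer>) -> Self {
        self.layers.push(layer);
        self
    }

    /// Feed the input through every layer in order.
    fn forward(&self, input: Vec<f32>) -> Vec<f32> {
        self.layers.iter().fold(input, |x, l| l.forward(x))
    }
}

fn main() {
    let model = Sequential::new()
        .add(Box::new(Double))
        .add(Box::new(Bias(1.0)));
    // (1*2)+1 = 3, (2*2)+1 = 5
    let out = model.forward(vec![1.0, 2.0]);
    assert_eq!(out, vec![3.0, 5.0]);
    println!("{:?}", out);
}
```

The builder-style `add` that consumes and returns `self` is a common Rust idiom for this kind of fluent, Keras-like model construction.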
**Maintainers and contributors are highly welcome!**
Note: This project is by no means production-ready and is only used for my own training purposes. No warranty or liability is given. I am a private person and do not pursue any commercial benefit.
## Supported layer types
| Meta Layer | Type | State | Example |
|---|---|---|---|
| Sequential model | - | ✅ | |
| - | Feature scaling | 🚧 | DNN and TNN |
| - | Dense | ✅ | DNN |
| - | Convolution | ✅ | CNN |
| - | Pooling | ✅ | - |
| - | Normalization | ✅ | - |
| - | Flatten | ✅ | - |
| - | Recurrent | ✅ | RNN 1st throw |
| - | Regulation | ✅ | - |
| - | Autoencoder | 🚧 | - |
| - | Feature embedding | ✅ | S2S 1st throw |
| - | Attention | 🚧 | TNN 1st throw |
| - | Mixture of Experts | 🚧 | ENN 1st throw |
| - | Feature masking and quantization | 🚧 | - |
| - | KAN-Dense | 🚧 | - |
| Model fine tuning (PEFT) | - | 🚧 | In development: DNN2 & DNN3 |
| Parallel model (in sense of split) | - | 🚧 | PNN 1st throw |
| Parallel model | Merging | 🚧 | PNN 1st throw |
| Transformer models | see below | 🚧 | |
| * BERT | Text similarity | ✅ | LLM |
| * LLAMA | Completion (Chat) | ✅ | LLM2 |
| Reinforcement models | see below | 🚧 | |
## License
Triple-licensed to be compatible with the Rust project and the source roots.
Licensed under the MPL 2.0, the MIT license, or the Apache License, Version 2.0, at your option.