# TritonBench

TritonBench is a collection of PyTorch custom operators, with example inputs, used to evaluate the performance of Triton and its integration with PyTorch.
## Installation
The benchmark suite is intended to be self-contained with respect to its dependencies. To install it, follow the steps below.
Step 1: clone the repository and check out all submodules

```bash
git clone https://github.com/meta-pytorch/tritonbench.git
cd tritonbench
git submodule update --init --recursive
```

Step 2: run `install.py`

```bash
python install.py
```

By default, this installs the latest PyTorch nightly release and uses the Triton version bundled with it.
## Basic Usage

To benchmark an operator, run the following command:

```bash
python run.py --op gemm
```

## Install as a Library

To install TritonBench as a library:

```bash
pip install -e .
```

In your own benchmark script:
```python
import tritonbench
from tritonbench.utils import parser

op_args = parser.parse_args()
addmm_bench = tritonbench.load_opbench_by_name("addmm")(op_args)
addmm_bench.run()
```

## Submodules
We depend on the following projects as a source of customized Triton or CUTLASS kernels:
- (CUDA, HIP) generative-recommenders
- (CUDA, HIP) Liger-Kernel
- (CUDA, HIP) tilelang
- (CUDA) xformers
- (CUDA) flash-attention
- (CUDA) FBGEMM
- (CUDA) ThunderKittens
- (HIP) AITer
## License
TritonBench is BSD 3-Clause licensed, as found in the LICENSE file.