Instructions for adding distributed benchmarks to the continuous run:

- You can add your benchmark file under the tensorflow/benchmarks/scripts
  directory. The benchmark should accept `task_index`, `job_name`, `ps_hosts`
  and `worker_hosts` flags. You can copy-paste the following flag definitions:

  ```python
  tf.app.flags.DEFINE_integer("task_index", None, "Task index, should be >= 0.")
  tf.app.flags.DEFINE_string("job_name", None, "job name: worker or ps")
  tf.app.flags.DEFINE_string("ps_hosts", None,
                             "Comma-separated list of hostname:port pairs")
  tf.app.flags.DEFINE_string("worker_hosts", None,
                             "Comma-separated list of hostname:port pairs")
  ```
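Downstream of these flags, a distributed benchmark typically splits the comma-separated host strings into per-job lists before building a cluster spec. A minimal sketch of that parsing step in plain Python (the helper name `make_cluster_dict` is ours, not part of the repo; the resulting dict has the shape `tf.train.ClusterSpec` accepts):

```python
def make_cluster_dict(ps_hosts, worker_hosts):
    """Split comma-separated host strings into a job-name -> hosts mapping.

    The returned dict can be passed to tf.train.ClusterSpec, e.g.
    tf.train.ClusterSpec(make_cluster_dict(FLAGS.ps_hosts, FLAGS.worker_hosts)).
    """
    return {
        "ps": ps_hosts.split(","),
        "worker": worker_hosts.split(","),
    }

print(make_cluster_dict("ps0:2222", "worker0:2222,worker1:2222"))
# → {'ps': ['ps0:2222'], 'worker': ['worker0:2222', 'worker1:2222']}
```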
- Report benchmark values by calling `store_data_in_json` from your benchmark
  code. This function is defined in `benchmark_util.py`.
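For the real signature, consult `benchmark_util.py` itself. Purely as an illustration of the general idea (a stand-in function of our own, serializing a list of metric entries to a JSON file; the field names `name` and `value` are assumptions, not the real API):

```python
import json
import tempfile

def store_data_in_json_sketch(entries, output_path):
    # Hypothetical stand-in for benchmark_util.store_data_in_json:
    # write a list of metric-entry dicts to a JSON file.
    with open(output_path, "w") as f:
        json.dump({"entries": entries}, f)

path = tempfile.mktemp(suffix=".json")
store_data_in_json_sketch([{"name": "images_per_sec", "value": 1250.0}], path)
with open(path) as f:
    print(json.load(f)["entries"][0]["name"])  # → images_per_sec
```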
- Create a Dockerfile that sets up dependencies and runs your benchmark. For
  example, see `Dockerfile.tf_cnn_benchmarks`.
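Such a Dockerfile can stay quite small. The sketch below is hypothetical (the base image tag and the script name `my_benchmark.py` are placeholders of ours); see `Dockerfile.tf_cnn_benchmarks` for the real pattern:

```dockerfile
# Hypothetical sketch; see Dockerfile.tf_cnn_benchmarks for a real example.
FROM tensorflow/tensorflow:latest
COPY my_benchmark.py /benchmark/my_benchmark.py
# Flags such as --job_name and --task_index are supplied at run time
# (for example via the args list in benchmark_configs.yml).
ENTRYPOINT ["python", "/benchmark/my_benchmark.py"]
```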
- Add the benchmark to `benchmark_configs.yml`:
  - Set `benchmark_name` to a descriptive name for your benchmark and make
    sure it is unique.
  - Set `worker_count` and `ps_count`.
  - Set `docker_file` to the Dockerfile path starting with the `benchmarks/`
    directory.
  - Optionally, you can pass flags to your benchmark by adding an `args` list.
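Putting those fields together, a config entry might look like the following sketch. The exact layout of `benchmark_configs.yml` (a top-level list is assumed here) and all values are illustrative placeholders:

```yaml
# Hypothetical entry; check benchmark_configs.yml for the actual layout.
- benchmark_name: my_distributed_benchmark   # must be unique
  worker_count: 2
  ps_count: 1
  docker_file: benchmarks/scripts/Dockerfile.my_benchmark
  args:
    - --batch_size=64
```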
- Send a PR with the changes to annarev.
Currently running benchmarks:
https://benchmarks-dot-tensorflow-testing.appspot.com/
For any questions, please contact annarev@google.com.