
Benchmark code

Instructions for adding distributed benchmarks to the continuous run:

  1. Add your benchmark file under the
    tensorflow/benchmarks/scripts directory. The benchmark should accept task_index, job_name, ps_hosts, and worker_hosts flags (a sketch of how a benchmark consumes them follows this list). You can copy-paste the following flag definitions:

    import tensorflow as tf

    # Cluster-configuration flags supplied by the continuous run.
    tf.app.flags.DEFINE_integer("task_index", None, "Task index, should be >= 0.")
    tf.app.flags.DEFINE_string("job_name", None, "Job name: worker or ps.")
    tf.app.flags.DEFINE_string("ps_hosts", None, "Comma-separated list of hostname:port pairs.")
    tf.app.flags.DEFINE_string("worker_hosts", None, "Comma-separated list of hostname:port pairs.")
  2. Report benchmark values by calling store_data_in_json from your benchmark
    code. This function is defined in benchmark_util.py; a hedged usage sketch
    follows this list.

  3. Create a Dockerfile that sets up dependencies and runs your benchmark. For
    an example, see Dockerfile.tf_cnn_benchmarks; a minimal sketch also follows
    this list.

  4. Add the benchmark to
    benchmark_configs.yml (an illustrative entry follows this list).

    • Set benchmark_name to a descriptive name for your benchmark and make sure
      it is unique.
    • Set worker_count and ps_count.
    • Set docker_file to the Dockerfile path, starting from the benchmarks/
      directory.
    • Optionally, pass flags to your benchmark by adding an args list.
  5. Send a PR with the changes to annarev.
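
For step 1, here is a minimal sketch of how a benchmark can consume these flags, assuming a TF 1.x-style script; the model-building code is left out:

    import tensorflow as tf

    FLAGS = tf.app.flags.FLAGS

    def main(_):
      # Build the cluster description from the comma-separated host lists.
      ps_hosts = FLAGS.ps_hosts.split(",")
      worker_hosts = FLAGS.worker_hosts.split(",")
      cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})

      # Start the in-process server for this task.
      server = tf.train.Server(
          cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_index)

      if FLAGS.job_name == "ps":
        # Parameter servers block here and serve variables to workers.
        server.join()
      else:
        # Workers build and run the benchmark graph, typically placing
        # variables via tf.train.replica_device_setter(cluster=cluster).
        pass

    if __name__ == "__main__":
      tf.app.run()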
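
For step 2, the sketch below shows where a benchmark would record and report a value. The exact parameters of store_data_in_json are defined in benchmark_util.py; the call shape and entry format used here are assumptions, so check the real definition before copying:

    import time

    import benchmark_util  # lives under tensorflow/benchmarks/scripts

    start_time = time.time()

    # ... run the benchmark and measure, e.g., throughput ...
    images_per_sec = 0.0  # placeholder for a measured value

    # Hypothetical call shape -- see benchmark_util.py for the actual
    # signature and expected entry format.
    benchmark_util.store_data_in_json(
        [("images_per_sec", images_per_sec)], start_time)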
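
For step 3, a minimal Dockerfile sketch; the base image, file names, and entry point are assumptions, and Dockerfile.tf_cnn_benchmarks is the authoritative example:

    # Hypothetical minimal setup; adjust the base image and paths.
    FROM tensorflow/tensorflow:1.8.0

    COPY scripts/my_benchmark.py /my_benchmark.py

    # The continuous run supplies task_index, job_name, ps_hosts and
    # worker_hosts as command-line flags when starting the container.
    ENTRYPOINT ["python", "/my_benchmark.py"]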
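
For step 4, an illustrative benchmark_configs.yml entry using the keys described above; the surrounding schema, file names, and flag values are assumptions:

    # Hypothetical entry for benchmark_configs.yml.
    - benchmark_name: my_distributed_benchmark  # must be unique
      worker_count: 2
      ps_count: 1
      docker_file: benchmarks/Dockerfile.my_benchmark
      args:  # optional flags passed through to the benchmark
        - --batch_size=32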

Currently running benchmarks:
https://benchmarks-dot-tensorflow-testing.appspot.com/

For any questions, please contact annarev@google.com.
