From 11b01d344dbd2508fffccf645568dc672ccf4715 Mon Sep 17 00:00:00 2001
From: Alexander Andreev
Date: Tue, 15 Apr 2025 12:33:56 +0100
Subject: [PATCH 1/2] Update example commands in README

---
 README.md | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 7a8c8078..9aa9d8df 100755
--- a/README.md
+++ b/README.md
@@ -44,22 +44,34 @@ conda env create -n rapids --solver=libmamba -f envs/conda-env-rapids.yml
 
 ### Benchmarks Runner
 
-How to run benchmarks using the `sklbench` module and a specific configuration:
+How to run sklearnex benchmarks on CPU using the `sklbench` module and the regular scope of benchmarking cases:
 
 ```bash
-python -m sklbench --config configs/sklearn_example.json
+python -m sklbench --configs configs/regular \
+    --filters algorithm:library=sklearnex algorithm:device=cpu \
+    --environment-name ENV_NAME --result-file result_sklearnex_cpu_regular.json
+# same command with shorter argument aliases for typing convenience
+python -m sklbench -c configs/regular \
+    -f algorithm:library=sklearnex algorithm:device=cpu \
+    -e ENV_NAME -r result_sklearnex_cpu_regular.json
 ```
 
 The default output is a file with JSON-formatted results of benchmarking cases. To generate a better human-readable report, use the following command:
 
 ```bash
-python -m sklbench --config configs/sklearn_example.json --report
+python -m sklbench -c configs/regular \
+    -f algorithm:library=sklearnex algorithm:device=cpu \
+    -e ENV_NAME -r result_sklearnex_cpu_regular.json \
+    --report --report-file report-sklearnex-cpu-regular.xlsx
 ```
 
-By default, output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:
-
+To download datasets ahead of the benchmark runs and get more verbose output, use the `--prefetch-datasets` and `-l INFO` arguments:
 ```bash
-python -m sklbench --config configs/sklearn_example.json --report --result-file result_example.json --report-file report_example.xlsx
+python -m sklbench -c configs/regular \
+    -f algorithm:library=sklearnex algorithm:device=cpu \
+    -e ENV_NAME -r result_sklearnex_cpu_regular.json \
+    --report --report-file report-sklearnex-cpu-regular.xlsx \
+    --prefetch-datasets -l INFO
 ```
 
 For a description of all benchmarks runner arguments, refer to [documentation](sklbench/runner/README.md#arguments).
@@ -69,7 +81,9 @@ For a description of all benchmarks runner arguments, refer to [documentation](s
 
 ### Report Generator
 
 To combine raw result files gathered from different environments, call the report generator:
 
 ```bash
-python -m sklbench.report --result-files result_1.json result_2.json --report-file report_example.xlsx
+python -m sklbench.report \
+    --result-files result_1.json result_2.json \
+    --report-file report_example.xlsx
 ```
 
 For a description of all report generator arguments, refer to [documentation](sklbench/report/README.md#arguments).
From 384a26acbd2fb7906965bf4f61c4522cd248e5f0 Mon Sep 17 00:00:00 2001
From: Alexander Andreev
Date: Tue, 15 Apr 2025 12:39:25 +0100
Subject: [PATCH 2/2] Add estimator filter example

---
 README.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9aa9d8df..c541bc77 100755
--- a/README.md
+++ b/README.md
@@ -50,7 +50,7 @@ How to run sklearnex benchmarks on CPU using the `sklbench` module and regular s
 python -m sklbench --configs configs/regular \
     --filters algorithm:library=sklearnex algorithm:device=cpu \
     --environment-name ENV_NAME --result-file result_sklearnex_cpu_regular.json
-# same command with shorter argument aliases for typing convenience
+# Same command with shorter argument aliases for typing convenience
 python -m sklbench -c configs/regular \
     -f algorithm:library=sklearnex algorithm:device=cpu \
     -e ENV_NAME -r result_sklearnex_cpu_regular.json
@@ -74,6 +74,13 @@ python -m sklbench -c configs/regular \
     --prefetch-datasets -l INFO
 ```
 
+To measure only a few algorithms, extend the filter (`-f`) argument:
+```bash
+# ...
+    -f algorithm:library=sklearnex algorithm:device=cpu algorithm:estimator=PCA,KMeans
+# ...
+```
+
 For a description of all benchmarks runner arguments, refer to [documentation](sklbench/runner/README.md#arguments).
 
 ### Report Generator