Set up and run IDAES performance tests

Overview

IDAES has a pre-existing set of performance tests that can be run to profile current performance. These tests run automatically after every update to the main idaes-pse branch, but they can also be run locally. Anyone can view the results of the automatic runs on the IDAES performance suite page.

Prerequisites

Clone the IDAES repository and install IDAES in development mode. The following commands assume a Conda environment named my-idaes-env.

conda activate my-idaes-env
git clone https://github.com/IDAES/idaes-pse && cd idaes-pse
pip install -r requirements-dev.txt
cd ..
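
The performance tests build and solve full process models, so a working solver installation is also needed. A quick sanity check, assuming the standard IDAES command-line tools (idaes get-extensions installs Ipopt and the other binary extensions; the version check simply confirms the development install is importable):

# Install Ipopt and the other binary extensions used by the tests
idaes get-extensions
# Confirm the development install is importable
python -c "import idaes; print(idaes.__version__)"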

Clone a copy of the Pyomo repository.

git clone https://github.com/Pyomo/pyomo

NOTE: From here on, we assume that idaes-pse and pyomo sit at the same level in the directory tree.
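
That is, the commands below assume a layout such as (directory names are whatever you chose when cloning):

parent/
├── idaes-pse/   (run the performance tests from here)
└── pyomo/       (provides the performance driver and compare scripts)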

Running performance tests locally

The general usage of the Pyomo performance driver is:

usage: main.py [-h] [-o OUTPUT] [-d OUTPUT_DIR] [-p PROJECTS] [-n REPLICATES] [--with-cython]

options:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        Store the test results to the specified file.
  -d OUTPUT_DIR, --dir OUTPUT_DIR
                        Store the test results in the specified directory. If -o is not specified, then a file name is
                        automatically generated based on the first "main project" git branch and hash.
  -p PROJECTS, --project PROJECTS
                        Main project (used for generating and recording SHA and DIFF information)
  -n REPLICATES, --replicates REPLICATES
                        Number of replicates to run.
  --with-cython         Cythonization enabled.

Remaining arguments are passed to pytest

From within the idaes-pse directory:

# The IDAES performance test suite must be run from the root idaes-pse
# directory because pytest needs the pytest.ini file located there.
# The JSON and XML output files can be given any names.
python \
  ../pyomo/scripts/performance/main.py \
  --project idaes \
  -o {filename}.json \
  -n 1 -v \
  --junitxml={filename}.xml \
  . \
  --performance

This runs the entire performance test suite included in idaes-pse and creates a JSON file that can be inspected for timing values.
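
Because the remaining arguments are forwarded to pytest, the same driver can also run a subset of the suite. A sketch, using one of the suite's test files as the pytest target (any pytest path or selection option works the same way; the output file name is a placeholder):

python \
  ../pyomo/scripts/performance/main.py \
  --project idaes \
  -o subset.json \
  -n 1 -v \
  idaes/models/unit_models/tests/test_heat_exchanger_1D.py \
  --performance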

Comparing test runs

In addition to the main performance driver, Pyomo also contains a compare.py script that allows you to compare individual runs for performance differences.

Usage: compare.py <base> <test>
    <base>: comma-separated list of performance JSON files
    <test>: comma-separated list of performance JSON files
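
For example, assuming compare.py sits alongside main.py and two earlier runs were saved as base.json and test.json (placeholder names for whatever was passed to -o):

python ../pyomo/scripts/performance/compare.py base.json test.json

Multiple files can be given on either side as a comma-separated list, e.g. base1.json,base2.json.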

Comparing two files prints a per-test timing summary followed by a table of per-test differences (negative time(Δ) values indicate the test became faster), such as:

test_time build model final solve initialize unit consistency test_name
-----------------------------------------------------------------------
    1.911       0.599       0.064      0.312            0.132 idaes/models/unit_models/tests/test_heat_exchanger_1D.py::Test_HX1D_Performance::test_performance
    2.265       0.402       0.151      1.071               -- idaes/models_extra/gas_solid_contactors/unit_models/tests/test_FB1D.py::Test_FixedBed1D_Performance::test_performance
    4.850       0.166       0.053      3.818            0.051 idaes/models_extra/column_models/tests/test_tray_column.py::Test_TrayColumn_Performance::test_performance
   11.452       1.002       0.304      8.211            1.020 idaes/models_extra/column_models/tests/test_MEAsolvent_column.py::Test_MEAColumn_Performance::test_performance
   50.944       0.287       3.653     30.055           13.288 idaes/models/properties/modular_properties/examples/tests/test_HC_PR.py::Test_HC_PR_Performance::test_performance
   76.140       0.654       0.299      8.823           62.561 idaes/models/properties/modular_properties/transport_properties/tests/test_shell_and_tube_1D_transport.py::TestCubicTransportPerformance::test_performance
-----------------------------------------------------------------------
  147.563       3.110       4.525     52.291           77.052 [ TOTAL ]

time(Δ) time(%) test_time build model final solve initialize unit consistency test_name
---------------------------------------------------------------------------------------
 -1.248   -2.39    50.944      -0.000       0.208     -1.655            0.215 idaes/models/properties/modular_properties/examples/tests/test_HC_PR.py::Test_HC_PR_Performance::test_performance
 -0.365   -6.99     4.850       0.003       0.003     -0.392            0.001 idaes/models_extra/column_models/tests/test_tray_column.py::Test_TrayColumn_Performance::test_performance
 -0.309  -13.92     1.911      -0.125      -0.006     -0.113           -0.039 idaes/models/unit_models/tests/test_heat_exchanger_1D.py::Test_HX1D_Performance::test_performance
  0.039    1.77     2.265       0.003       0.003      0.013               -- idaes/models_extra/gas_solid_contactors/unit_models/tests/test_FB1D.py::Test_FixedBed1D_Performance::test_performance
  0.369    3.33    11.452       0.003      -0.002      0.391            0.006 idaes/models_extra/column_models/tests/test_MEAsolvent_column.py::Test_MEAColumn_Performance::test_performance
  4.544    6.35    76.140      -0.047       0.002     -0.095            4.668 idaes/models/properties/modular_properties/transport_properties/tests/test_shell_and_tube_1D_transport.py::TestCubicTransportPerformance::test_performance
---------------------------------------------------------------------------------------
  3.030    2.10   147.563      -0.164       0.208     -1.852            4.851 [ TOTAL ]
  2.096      **        **      -5.007       4.820     -3.420            6.719 [ %diff ]

Adding new performance tests

Performance tests are identified using pytest markers. To include a test in the performance suite, add the @pytest.mark.performance decorator to the test class (or function), for example:

import pytest
import unittest

# PerformanceBaseClass (IDAES's timed build/initialize/solve test harness) and
# the module-level build_model()/initialize_model() helpers are defined
# elsewhere in the test module; only the marked test class is shown here.

@pytest.mark.performance
class Test_HC_PR_Performance(PerformanceBaseClass, unittest.TestCase):
    def build_model(self):
        # Delegates to the module-level build_model() helper
        return build_model()

    def initialize_model(self, model):
        initialize_model(model)
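
To check that a newly marked test is picked up by the suite, you can invoke pytest directly with the same --performance flag the driver passes through (the test path below is a placeholder):

pytest --collect-only --performance idaes/path/to/test_my_model.py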