[Test]: Benchmark on v0.10.2 #881
Draft · wants to merge 3 commits into main
Conversation

@datejada (Member) commented Oct 15, 2024

Run the benchmark on version 0.10.2

@datejada added the "benchmark PR only - Run benchmark on PR" label on Oct 15, 2024
github-actions bot (Contributor) commented Oct 15, 2024

Benchmark Results

Benchmark in progress...

@datejada changed the title from "[Test]: Dummy change to run the benchmark on main" to "[Test]: Dummy change to run the benchmark on v 0.10.2" on Oct 15, 2024
@datejada changed the title from "[Test]: Dummy change to run the benchmark on v 0.10.2" to "[Test]: Dummy change to run the benchmark on v0.10.2" on Oct 15, 2024
@datejada marked this pull request as draft on October 15, 2024 at 09:01
@datejada changed the title from "[Test]: Dummy change to run the benchmark on v0.10.2" to "[Test]: Benchmark on v0.10.2" on Oct 15, 2024
@abelsiqueira (Member) commented:

I don't remember what we did, but the tests now run in 20 minutes and create_model takes 44 s. It was much longer some time ago, right?

@datejada (Member, Author) commented:

> I don't remember what we did, but the tests now run in 20 minutes and create_model takes 44 s. It was much longer some time ago, right?

This is due to the flexible temporal resolution defined in the files benchmark/EU/assets-rep-periods-partitions.csv and benchmark/EU/flows-rep-periods-partitions.csv.

The values in those files ensure we exercise the flexible temporal resolution in the EU case. However, we might want to benchmark both situations: flexible temporal resolution and the default (i.e., hourly for all assets and flows). This can be done through code. If that makes sense, please let me know and we can update the benchmark files.
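One way to benchmark both situations "through code" is to rewrite the partitions CSVs to a uniform hourly partition before a run. The sketch below does this generically in Python; the column name `partition` and the sample rows are assumptions for illustration, so check the actual headers of assets-rep-periods-partitions.csv before using it.

```python
import csv
import io


def to_hourly(partitions_csv: str, partition_col: str = "partition") -> str:
    """Rewrite a partitions CSV so every row uses a uniform hourly
    partition ("1"), leaving all other columns untouched.

    NOTE: the column name `partition` is a hypothetical placeholder;
    verify it against the real benchmark file headers.
    """
    reader = csv.DictReader(io.StringIO(partitions_csv))
    rows = [{**row, partition_col: "1"} for row in reader]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()


# Hypothetical sample input with a coarse (non-hourly) partition:
sample = "asset,rep_period,partition\nccgt,1,24\nwind,1,3\n"
print(to_hourly(sample))
```

Running the benchmark once with the original files and once with the rewritten ones would then cover both the flexible and the default hourly resolution.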

@datejada (Member, Author) commented:

Results with flexible temporal resolution (saving for reference):

Benchmark Results

| | 0488653... | fe45cd1... | ratio (0488653.../fe45cd19d2db1b...) |
|---|---|---|---|
| energy_problem/create_model | 44.8 ± 0.53 s | 44 ± 0.79 s | 1.02 |
| energy_problem/input_and_constructor | 8.86 ± 0.35 s | 9.21 ± 0.2 s | 0.963 |
| time_to_load | 4.03 ± 0.0061 s | 4.07 ± 0.0035 s | 0.989 |

| | 0488653... | fe45cd1... | ratio |
|---|---|---|---|
| energy_problem/create_model | 0.645 G allocs: 22.5 GB | 0.645 G allocs: 22.5 GB | 1 |
| energy_problem/input_and_constructor | 9.56 M allocs: 0.696 GB | 9.56 M allocs: 0.696 GB | 1 |
| time_to_load | 0.157 k allocs: 11.1 kB | 0.157 k allocs: 11.1 kB | 1 |

@datejada (Member, Author) commented Oct 18, 2024

I have run the benchmarks locally in a TNO machine with the following specifications:

  • Processor: Intel(R) Xeon(R) Gold 5215 CPU @ 2.50 GHz
  • Cores: 10
  • Logical processors: 20
  • RAM: 144 GB (143 GB usable)
  • OS: Windows 10 Enterprise

The results for this branch with the EU case:

  • Flexible temporal resolution:
EnergyProblem:
  - Time creating internal structures (in seconds): 10.3398634
  - Time computing constraints partitions (in seconds): 2.627876
  - Model created!
    - Time for creating the model (in seconds): 78.1544507
    - Number of variables: 1556419
    - Number of constraints for variable bounds: 1248359
    - Number of structural constraints: 2303880
  - Model not solved!

Total memory allocations from the log file: 27.1 GiB

  • Hourly:
EnergyProblem:
  - Time creating internal structures (in seconds): 10.3677841
  - Time computing constraints partitions (in seconds): 15.930762
  - Model created!
    - Time for creating the model (in seconds): 165.8964277
    - Number of variables: 4020899
    - Number of constraints for variable bounds: 3241259
    - Number of structural constraints: 5939280
  - Model not solved!

Total memory allocations from the log file: 32.7 GiB
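As a quick sanity check on the counts above, the hourly model is roughly 2.6 times the size of the flexible-resolution one across all three reported dimensions (a small sketch using the numbers from this comment):

```python
# Model sizes reported in the two local runs above.
flexible = {"variables": 1_556_419, "bounds": 1_248_359, "structural": 2_303_880}
hourly = {"variables": 4_020_899, "bounds": 3_241_259, "structural": 5_939_280}

for key in flexible:
    factor = hourly[key] / flexible[key]
    print(f"{key}: hourly is {factor:.2f}x the flexible-resolution count")
```

This is consistent with flexible temporal resolution being the main reason the benchmark got faster: the model it builds is much smaller.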

@datejada (Member, Author) commented Oct 18, 2024

@abelsiqueira I added the information about running the benchmark locally, but I cannot get the memory-usage information from the benchmark itself. Would you happen to know how to get it when running locally? For now I took the memory usage from our model's log report and added it to the previous comment.

BTW: the changes in this PR work locally to change the data using DuckDB, so I assume the error when running it on the server might be a memory-usage limit on GitHub 🤷‍♂️
