Improve performance of Arrhenius rate evaluations #1217
Conversation
- Use Eigen to vectorize rate evaluations
- Only do calculations for rates that are actually temperature-dependent
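The two ideas can be sketched as follows. This is a minimal NumPy illustration of the approach, not the actual Cantera/Eigen code; the array names and the `update` signature are hypothetical:

```python
import numpy as np

# Per-reaction Arrhenius parameters: k(T) = A * T^b * exp(-Ea / (R*T))
A = np.array([1.0e13, 2.5e6, 4.0e8])   # pre-exponential factors
b = np.array([0.0, 1.5, 0.0])          # temperature exponents
Ea_R = np.array([0.0, 5000.0, 0.0])    # activation energies divided by R

# Rates with b == 0 and Ea == 0 reduce to k = A: constant in T.
temp_dependent = (b != 0.0) | (Ea_R != 0.0)

k = A.copy()  # constant rates are set once and never recomputed

def update(T, logT, recipT):
    """Recompute only the temperature-dependent rates, vectorized."""
    idx = temp_dependent
    k[idx] = A[idx] * np.exp(b[idx] * logT - Ea_R[idx] * recipT)

T = 1500.0
update(T, np.log(T), 1.0 / T)
```

The mask is built once at construction time, so each `update` call touches only the reactions whose rate constants actually change with temperature.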
Codecov Report
```
@@            Coverage Diff             @@
##             main    #1217      +/-   ##
==========================================
+ Coverage   65.43%   65.46%   +0.02%
==========================================
  Files         320      320
  Lines       46249    46308      +59
  Branches    19657    19687      +30
==========================================
+ Hits        30265    30315      +50
- Misses      13454    13459       +5
- Partials     2530     2534       +4
```
This is an interesting concept. Presumably, vectorization could be done for each/most of the other

As an aside, I'm surprised that your speed tests put the CTI/XML ahead of YAML. I currently have YAML with an edge of around 4% on
@speth … do you think that the speedup arises from vectorization via Eigen or from omitting calculations? If I understand correctly, only the former creates complications for the test you ran in the context of #1211 (comment)?
As I said in the initial PR description, the bulk of the benefit seems to come from skipping rate updates for rates that are constant. |
Changes proposed in this pull request
If applicable, provide an example illustrating new features this pull request is introducing
Testing this on a couple of different machines shows a 7-9% speed increase for an adiabatic flame speed calculation using GRI 3.0 (using a modified version of `adiabatic_flame.py`), and about a 4% speed increase for the `custom-reactions.py` benchmark:

Before:

After:
The bulk of the benefit seems to come from skipping rate updates for rates that are constant (for reference, there are 100 such rates in the GRI 3.0 mechanism). I'm not quite sure why vectorizing the rate evaluation for the remaining reactions has so little impact.
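A simple way to probe where the gains come from on your own machine is to time a full recompute against a masked recompute. This is a hedged NumPy sketch; the parameter arrays are synthetic (randomly generated), not taken from GRI 3.0, though the reaction count (325) and the number of constant rates (100) match the mechanism:

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
n = 325                              # GRI 3.0 has 325 reactions
A = rng.uniform(1e6, 1e13, n)
b = np.zeros(n)
Ea_R = np.zeros(n)
# Leave b == 0 and Ea == 0 for the first 100 entries, mimicking the
# 100 constant rates in GRI 3.0.
b[100:] = rng.uniform(0.0, 2.0, n - 100)
Ea_R[100:] = rng.uniform(0.0, 2e4, n - 100)
dep = (b != 0.0) | (Ea_R != 0.0)

T, logT, recipT = 1500.0, np.log(1500.0), 1.0 / 1500.0

def update_all():
    # Recompute every rate, including the constant ones.
    return A * np.exp(b * logT - Ea_R * recipT)

def update_dependent():
    # Recompute only the temperature-dependent subset.
    k = A.copy()
    k[dep] = A[dep] * np.exp(b[dep] * logT - Ea_R[dep] * recipT)
    return k

t_all = timeit.timeit(update_all, number=10000)
t_dep = timeit.timeit(update_dependent, number=10000)
print(f"all rates: {t_all:.3f}s, dependent only: {t_dep:.3f}s")
```

Both variants produce identical rate constants; only the amount of work per call differs.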
Checklist
- All tests pass (`scons build` & `scons test`) and unit tests address code coverage