
Improve reinitialization efficiency for contact #29069

Open
dewenyushu opened this issue Nov 13, 2024 · 5 comments
Assignees: lindsayad
Labels: C: Framework, T: task (An enhancement to the software.)

Comments

@dewenyushu
Contributor

Motivation

When dealing with problems involving intermittent contact or sliding between material bodies, the elements in contact are continually changing. This leads to frequent alterations in the sparsity patterns of the Jacobian matrix, necessitating frequent reinitialization. Preallocating memory in such cases can be extremely expensive.

To address this, hash table assembly has been developed and implemented in the following PR/MR:

However, hash table assembly may be slower than preallocation when preallocation is accurate, which is often the case for scenarios with minimal relative motion between contact pairs. Therefore, it's preferable to first determine if contact pairs have changed before attempting to reset the hash state.

Design

Initial design:

  • Check for contact pair changes: Determine if any contact pairs have changed to help identify when a reset of the hash state is necessary.

  • Leverage PETSc postcheck callbacks: To efficiently check whether contact pairs have changed, consider utilizing PETSc's line-search postcheck callbacks rather than relying on the previous residual evaluation (a minimal sketch follows below).
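As a rough illustration of the second bullet, the sketch below registers a PETSc SNESLineSearch postcheck callback and uses it only to record whether the active contact pairs have changed, so the assembly code could reset the hash state only when that flag is set. This is a sketch against the plain PETSc SNES API rather than MOOSE's actual hookup; ContactContext and contactPairsChanged() are hypothetical placeholders, not existing MOOSE or PETSc routines.

```cpp
// Sketch only (assumes PETSc >= 3.18 for PetscCall): register an SNES line-search
// postcheck that records whether the active contact pairs changed after each
// tentative Newton update. ContactContext and contactPairsChanged() are
// hypothetical placeholders for the application-side bookkeeping.
#include <petscsnes.h>

struct ContactContext
{
  bool pairs_changed = false; // set when the active contact pairs differ from the last evaluation
};

// Hypothetical geometric check of the candidate solution against the stored pair set.
static bool
contactPairsChanged(Vec /*candidate_solution*/, ContactContext & /*ctx*/)
{
  // Placeholder: compare the currently detected contact pairs with the cached set.
  return false;
}

static PetscErrorCode
contactPostCheck(SNESLineSearch /*linesearch*/,
                 Vec /*X*/,          // previous solution
                 Vec /*Y*/,          // search direction
                 Vec W,              // candidate (post line search) solution
                 PetscBool * changed_y,
                 PetscBool * changed_w,
                 void * ctx)
{
  auto * contact_ctx = static_cast<ContactContext *>(ctx);

  // The step and candidate solution are left untouched; we only record whether
  // the contact state changed so the hash/sparsity state is reset only when needed.
  *changed_y = PETSC_FALSE;
  *changed_w = PETSC_FALSE;
  contact_ctx->pairs_changed = contactPairsChanged(W, *contact_ctx);
  return 0;
}

// Registration during solver setup.
PetscErrorCode
registerContactPostCheck(SNES snes, ContactContext * ctx)
{
  SNESLineSearch linesearch;
  PetscCall(SNESGetLineSearch(snes, &linesearch));
  PetscCall(SNESLineSearchSetPostCheck(linesearch, contactPostCheck, ctx));
  return 0;
}
```

In MOOSE the registration would presumably live wherever the nonlinear solver is configured, and the flag would be consulted before the next Jacobian assembly.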

Impact

Significantly improve efficiency for contact simulations.

dewenyushu added the T: task (An enhancement to the software.) label Nov 13, 2024
lindsayad self-assigned this Nov 13, 2024
lindsayad moved this to Todo in NEAMS MP TA 2025 on Nov 13, 2024
@dewenyushu
Contributor Author

dewenyushu commented Nov 14, 2024

Here is an example input converted from an existing MOOSE contact test. The contact pairs do not change frequently during the simulation, which shows the computational advantage of preallocation over hash table initialization. Specifically, with uniform_refine = 5, the run time is 72 s for hash table assembly vs. 55 s for preallocation on 1 processor. For coarser cases or multiple processors, preallocation is consistently faster than hash table initialization, by a few seconds of runtime.

Inputs.zip

@maxnezdyur
Contributor

A few questions.

  1. Does the preallocation problem occur with mortar contact too?
  2. How bad would the memory vs. computational speed trade-off be if users had the ability to ask for "full" preallocation between two contact pairs, so that the sparsity pattern remains valid even with large relative motion?

@lindsayad
Member

Yes, the preallocation issue occurs with mortar contact as well. PETSc automatically shrinks unused sparsity pattern entries, so we would have to explicitly insert zeroes for contact pairs that are uncoupled at any given Jacobian evaluation; I do not know what that means for the memory/CPU performance penalty. The performance losses that @dewenyushu shared for the hash-table-based assembly are much better than what she reported to me on Slack. I guess I am not all that sad about 55 -> 72 seconds when I've seen improvements from multiple hours to tens of seconds for Jacobian assembly for an assessment case that @vanwdani pointed me to.
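For reference, one way the "explicitly insert zeroes" idea above could look is adding a block of zeros into the Jacobian for each currently uncoupled contact pair, which keeps those locations in the nonzero pattern when PETSc assembles the matrix (explicitly inserted zeros are retained unless MAT_IGNORE_ZERO_ENTRIES is set). The helper below is purely illustrative, not an existing MOOSE or PETSc routine; in MOOSE the dof index lists would come from the contact pair data.

```cpp
// Illustrative sketch only: keep Jacobian sparsity entries alive for a contact
// pair that is currently uncoupled by inserting an explicit block of zeros.
#include <petscmat.h>
#include <vector>

PetscErrorCode
keepUncoupledPairEntries(Mat jac,
                         const std::vector<PetscInt> & primary_dofs,
                         const std::vector<PetscInt> & secondary_dofs)
{
  // Dense block of zeros spanning the potential coupling between the two dof
  // sets; adding it prevents these locations from being compressed out of the
  // nonzero pattern during assembly.
  std::vector<PetscScalar> zeros(primary_dofs.size() * secondary_dofs.size(), 0.0);

  PetscCall(MatSetValues(jac,
                         static_cast<PetscInt>(primary_dofs.size()), primary_dofs.data(),
                         static_cast<PetscInt>(secondary_dofs.size()), secondary_dofs.data(),
                         zeros.data(), ADD_VALUES));
  return 0;
}
```

Whether the memory and assembly-time cost of carrying such zero blocks is acceptable is exactly the open question raised above.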

@dewenyushu
Contributor Author

> Yes, the preallocation issue occurs with mortar contact as well. [...]

Yeah, that's the runtime difference I can get using an existing test case. I can probably share the actual problem we are looking at with you privately, which shows a more-than-5x slowdown (15,515 s -> 85,680 s), if you are interested.

@lindsayad
Member

Yes, I am interested, as long as it's checked into git somewhere.
