Injection of hot electron beam raises error `double free or corruption (out)` (#349)
Comments
Dear @weipengyao, I am studying your issue and so far I could not reproduce your problem. If you use a supercomputer, can you show me your launch script (the one you use to launch the simulation) or the exact configuration that you use (number of MPI tasks, OpenMP threads, ...)? Thank you
Dear @xxirii, Thanks for your time and reply. I checked again with the namelist attached and found that this error occurred (at timestep 200) with 160 cores, but not with 40 cores (where it might happen later). I am running this on the supercomputer Niagara, and I use the
Let me know if you need anything else. Best,
Thank you. Do you use any particular OMP environment variable, like a specific scheduler or thread placement?
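For reference, a quick way to check which OpenMP control variables are set in the job environment (a generic shell snippet, not specific to Niagara):

```shell
# List any OpenMP control variables currently exported in this shell;
# empty output from grep means the OpenMP runtime is using its defaults.
env | grep '^OMP_' || echo "no OMP_* variables set"
```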
I don't think I do. Here's the script I use to compile Smilei on Niagara (hope it can help anyone else using Smilei on Niagara). To save you the time of downloading, it reads:
I only have something 'special' for MPI-tag-related issues (#307). I checked my Thanks!
Thank you, I have managed to reproduce the bug using exactly your configuration. It does not appear when you use a hybrid mode with more than 1 OpenMP thread per MPI task. I will investigate, but you should be able to run your case in hybrid mode if you need the results soon for science. Moreover, in my case, I have an HDF5 issue when I use the variable
For instance, using 16 MPI tasks and 10 OpenMP threads per task, I am at iteration 3700 after 8 minutes.
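A hybrid launch along those lines could look like the sketch below (the SLURM directives, binary path, and `OMP_SCHEDULE` value are assumptions for illustration; adapt to your cluster and the Smilei documentation):

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a hybrid MPI+OpenMP run:
# 16 MPI tasks with 10 OpenMP threads each (the values quoted above).
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=10

export OMP_NUM_THREADS=10     # OpenMP threads per MPI task
export OMP_SCHEDULE=dynamic   # scheduling policy (assumed setting)

mpirun -np 16 ./smilei test.py
```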
Dear @xxirii, Thanks for the timely reply. For me, I need to use ten times more cores, i.e. 1600, with more particles like About the I hope it helps.
Right, it's surprising to see that it works with 159 MPI tasks and segfaults with 160. Very strange.
Note that the bug only occurs when I use strictly 160 cores. When I use more it seems to work. Have you tried a case with more ppc and more MPI tasks that crashes? |
Yes, I have. Please see this output file for example. HEB2D_dep2_Inj128_Z10_T100_np1_Th1k_FixIon_SBC_Collee.py-4673320.out.txt |
Description
I am using the injection module for hot electron transport in a solid target.
When the temperature of the injected electron beam is high, like Te = 100 keV, the code runs for hundreds of steps and then crashes with the error
`double free or corruption (out): 0x0000000003b5f6f0 ***`
while when I reduce the temperature, e.g. Te = 50 eV, the code runs fine (at least within the simulation time).
Please find the related output files here:
a.out.txt
test.py.txt
a.err.txt
Steps to reproduce the problem
To reproduce the problem, just use the namelist above and compare the two cases with different temperatures.
And this info about iterator validity might be helpful.
Parameters
make env
gives: