Expected speedup of the LAMMPS plugin #111
-
Let's say we have a system of a few thousand atoms, with a NequIP model that has most of its options set to their default values, and we execute simulations on a machine with 8 cores and a single V100 GPU. I'm trying to get an idea of how much faster the LAMMPS plugin would be compared to native ASE MD using the NequIP Calculator interface. In theory, the most expensive part of a single force evaluation is the evaluation of the network itself, and since that is handled by PyTorch, I don't expect big runtime differences. The only real difference will probably be that in the LAMMPS case, construction of the neighbor list will be somewhat faster, because ASE's pure-Python implementation is rather slow. Other than that, the evaluation of the network should proceed in much the same manner, right?

EDIT: If this is indeed the case, are there any other significant benefits that the LAMMPS plugin has over native ASE?
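As context for why a Python-side neighbor list can be slow, here is a minimal brute-force neighbor search of the kind a pure-Python/NumPy implementation effectively performs (a sketch only: no periodic boundaries, and the box size, cutoff, and random coordinates are illustrative, not from the thread). LAMMPS instead uses compiled, spatially binned lists that scale roughly linearly with atom count.

```python
import numpy as np

def brute_force_neighbors(positions, cutoff):
    """O(N^2) all-pairs distance check: simple, but costly for large systems
    compared to LAMMPS's compiled, binned neighbor lists."""
    diff = positions[:, None, :] - positions[None, :, :]          # (N, N, 3)
    dist2 = np.einsum("ijk,ijk->ij", diff, diff)                  # squared distances
    within = (dist2 < cutoff**2) & ~np.eye(len(positions), dtype=bool)
    i, j = np.nonzero(within)                                     # neighbor pairs
    return i, j

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 20.0, size=(1000, 3))   # ~a thousand "atoms" in a 20 A box
i, j = brute_force_neighbors(pos, cutoff=4.0)
```

Even vectorized with NumPy, the quadratic memory and time cost of this approach is why the question above expects the neighbor-list step, not the network evaluation, to differ between the two interfaces.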
Replies: 1 comment
-
Hi @svandenhaute, that is correct, yes. For most reasonable network sizes and most materials, the force call will take longer than the LAMMPS neighbor list + integration step, so you should not see a massive difference between ASE and LAMMPS. As you said, you will see it in the neighbor lists for large systems. You might also see it for small networks, where NequIP inference time can become quite small, but that would be an extreme case.

Another note is that our default settings currently lie more on the accuracy side of the efficiency-accuracy tradeoff. If long and large simulations are important to you, we suggest reducing the network to l=1 and smaller feature sizes (something like 16x0e + 16x1e for the feature hidden irreps and a corresponding edge_sh of 0e and 1e with 3 layers often works surprisingly well). This will result in much faster inference and in most cases will still give really good accuracy, but obviously that is a function of the system you're trying to study.

The other main benefit of using LAMMPS is that it is a much more versatile, more widely used, and overall better-maintained code than ASE. It has extensive documentation, a large user base, and a lot of people are familiar with it. However, if ASE implements the functionality you need and you know ASE, there is no reason not to use it; that's why we made both plugins available.
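For concreteness, the reduced-cost settings suggested above might look like the following fragment of a NequIP training YAML. This is a sketch based on the key names used in the example configs shipped with the nequip repository; verify them against the configs for your installed nequip version before use.

```yaml
# Reduced-cost NequIP settings (illustrative fragment, not a complete config)
num_layers: 3                          # 3 interaction layers, as suggested above
l_max: 1                               # restrict to l = 1
feature_irreps_hidden: 16x0e + 16x1e   # smaller hidden feature irreps
irreps_edge_sh: 0e + 1e                # matching edge spherical harmonics
```

All other keys (cutoff radius, dataset, training options) would be kept as in your existing config.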