Cache model functions for iterative workflows #7177
ricardoV94 started this conversation in Ideas
Replies: 3 comments 2 replies
- I like the options in this order: 1, 2, 3.
- I'm a fan of option 1.
- One thing that would be nice for deployment is if I could pre-compile a model, so that when a user wants to run something it doesn't have to compile at that point.
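For the deployment use case, PyMC does already expose building blocks for manual precompilation: `Model.compile_logp` (and related `compile_*` methods) return reusable compiled functions, so a service can compile once at startup and evaluate cheaply per request. A minimal pure-Python sketch of that precompile-once pattern (the `compile_model_fn` helper is a hypothetical stand-in for the expensive compile step, not PyMC API):

```python
import functools

COMPILE_CALLS = 0  # counts how many times the "expensive" compile actually runs

@functools.lru_cache(maxsize=None)
def compile_model_fn(model_id: str):
    """Hypothetical stand-in for an expensive compilation step
    (in real PyMC, think model.compile_logp())."""
    global COMPILE_CALLS
    COMPILE_CALLS += 1
    return lambda x: -0.5 * x * x  # stand-in for the compiled logp

# At service startup: pre-compile once.
logp = compile_model_fn("my_model")

# Per request: reuse the compiled function; no recompilation happens.
results = [compile_model_fn("my_model")(x) for x in (0.0, 1.0, 2.0)]
```

The key point is that the expensive step runs exactly once, no matter how many requests reuse the cached function.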
Users are often misled into believing that when defining models with `MutableData`, PyMC will avoid recompiling the logp / prior / posterior predictive functions when only the `MutableData` has changed. This is not the case! PyMC always recompiles functions when you call `pm.sample` or `pm.sample_posterior_predictive`. We used to cache some of the model methods, but this was removed because models are not static objects and new variables can be added at any time: 983d444#diff-f593a76ecc6f9a5c5abdd7bbd3e9de9add74a068e75e64f66b7b1424a279a0dd

There's a couple of approaches we could take to give the caching functionality back to users:
1. Cache the model functions, and invalidate the cache whenever `model.register_rv` is called or `model.rvs_to_transforms` is changed. We could easily forbid the latter now that we have https://www.pymc.io/projects/docs/en/v5.10.3/api/model/generated/pymc.model.transform.conditioning.change_value_transforms.html
2. Cache at the `compiled_pymc` level.
3. Introduce a frozen model that can never be mutated, so that caching would be safe.

We may need to include the contents of `pytensor.config` as part of the caching / freezing, so that users can request the same functions in different backends.

Linked issues:
- `pymc.compute_log_likelihood` #7073
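To make the `pytensor.config` point concrete: the cache key would have to include whichever config entries affect compilation (e.g. the backend/mode), so that requesting the same function under a different backend yields a separate compiled copy instead of a stale cache hit. A minimal sketch, with a plain dict standing in for `pytensor.config` and `compile_fn` as a hypothetical stand-in for compilation:

```python
# Cache of compiled functions, keyed by (model fingerprint, relevant config).
_cache = {}

def cache_key(model_fingerprint, config):
    # Only config entries that affect compilation should enter the key;
    # "mode" stands in for the backend selection (C, JAX, Numba, ...).
    relevant = ("mode", "floatX")
    return (model_fingerprint,) + tuple((k, config[k]) for k in relevant)

def get_compiled(model_fingerprint, config, compile_fn):
    key = cache_key(model_fingerprint, config)
    if key not in _cache:
        _cache[key] = compile_fn()  # only compile on a cache miss
    return _cache[key]

calls = []
# Same model + same config -> compiled once, then a cache hit.
fn_c = get_compiled("m1", {"mode": "C", "floatX": "float64"}, lambda: calls.append("C") or "fn_c")
fn_c2 = get_compiled("m1", {"mode": "C", "floatX": "float64"}, lambda: "other")
# Changing the backend -> a fresh compilation under a new key.
fn_jax = get_compiled("m1", {"mode": "JAX", "floatX": "float64"}, lambda: calls.append("JAX") or "fn_jax")
```

With the config baked into the key, switching backends never returns a function compiled for the wrong one.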
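Similarly, the invalidation rule in option 1 can be sketched as a cache that is dropped whenever the model is structurally mutated (the class and method bodies here are hypothetical stand-ins, not PyMC internals):

```python
class CachingModel:
    """Sketch of option 1: cached compiled functions, invalidated on mutation."""

    def __init__(self):
        self._fn_cache = {}
        self.compile_count = 0

    def register_rv(self, name):
        # Any structural change to the model invalidates all cached functions.
        self._fn_cache.clear()

    def compile_logp(self):
        if "logp" not in self._fn_cache:
            self.compile_count += 1
            self._fn_cache["logp"] = lambda x: -0.5 * x * x  # stand-in
        return self._fn_cache["logp"]

m = CachingModel()
m.compile_logp()
m.compile_logp()       # cache hit: no recompilation
m.register_rv("new_var")  # mutation drops the cache
m.compile_logp()       # recompiles
```

Repeated calls without mutation reuse the cached function; registering a new variable forces exactly one recompilation.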