Replies: 2 comments
-
- In this case (and in most other cases :) ) we would recommend using the flexible interface; here's a tutorial: https://github.com/sbi-dev/sbi/blob/main/tutorials/02_flexible_interface.ipynb. You can then specify your custom density estimator as described in https://github.com/sbi-dev/sbi/blob/main/tutorials/04_density_estimators.ipynb to turn off normalization, e.g., in …
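A minimal sketch of what that could look like, reusing `simulation` and `prior` from the question below (the `z_score_x`/`z_score_theta` arguments have changed type across sbi releases, so treat the exact values as an assumption and check your version's signature):

```python
from sbi.inference import SNLE, simulate_for_sbi
from sbi.utils import likelihood_nn

# Build a density estimator for SNLE with internal z-scoring disabled,
# so neither theta nor x is standardised before training.
density_estimator_builder = likelihood_nn(
    model="maf",
    z_score_x="none",      # older sbi versions expect a boolean (False)
    z_score_theta="none",  # older sbi versions expect a boolean (False)
)

inference = SNLE(prior=prior, density_estimator=density_estimator_builder)
theta, x = simulate_for_sbi(simulation, prior, num_simulations=10_000)
likelihood_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(likelihood_estimator)
```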
-
Ahh, I see. Okay, thanks for the help! I will have a play with the flexible interface and see if I can grab the Jacobian I need.
-
I'm trying to use the SNLE implementation to generate a joint likelihood function (analytically intractable) for two correlated observations from two different experiments, which I can then sample over with Nested Sampling to get a joint evidence.
My call to the sbi package looks something like

```python
posterior = infer(simulation, prior, method="SNLE", num_simulations=10000)
```
and I am aware that this is a non-normalised posterior, but my prior is uniform, so I can just do $\log L(D_1, D_2|\theta) = \log P(\theta|D_1, D_2) - \log \pi(\theta)$ to get back to the joint likelihood.
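For concreteness, that subtraction might look like the following sketch, where `theta` is a hypothetical batch of parameter samples (whether `posterior.log_prob` is exposed for SNLE posteriors depends on the sbi version):

```python
# Under a uniform prior, the (unnormalised) joint log-likelihood is the
# learned log-posterior minus the log-prior at the same parameter values.
log_posterior = posterior.log_prob(theta)   # log P(theta | D_1, D_2), up to a constant
log_prior = prior.log_prob(theta)           # log pi(theta)
log_likelihood = log_posterior - log_prior  # log L(D_1, D_2 | theta)
```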
My question is: are the data and parameters normalised by default with the SNLE method, and can I turn this off?
This is not immediately clear in the documentation and has an impact on the recovered $\log L$ and the corresponding evidence from the NS run. I am aware that normalisation is advisable when training flows, but I can implement this myself.
I think if the data and parameters are normalised, i.e. $D \rightarrow \tilde{D}$ and $\theta \rightarrow \tilde{\theta}$, then the posterior recovered from

```python
posterior.log_prob(samples)
```

is $\log P(\tilde{\theta} | \tilde{D}_1, \tilde{D}_2) = \log L(\tilde{D}_1, \tilde{D}_2 | \tilde{\theta}) + \log \pi(\tilde{\theta})$, which, when rearranged for $L$ and sampled over with NS, gives me $\tilde{Z} = P(\tilde{D}_1, \tilde{D}_2)$ when I want $Z = P(D_1, D_2)$. This should be fixable via a Jacobian transformation, $\log L(D_1, D_2 | \theta) = \log L(\tilde{D}_1, \tilde{D}_2 | \tilde{\theta}) + \log \left| \det \frac{\partial \tilde{D}}{\partial D} \right|$ or something similar. If the data and parameters are normalised, is this Jacobian accessible? Note that this isn't the usual normalising-flow Jacobian but an additional correction on top of it.
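If the normalisation is an elementwise z-score, $\tilde{D} = (D - \mu)/\sigma$, I believe the Jacobian is diagonal and the correction reduces to a constant. A sketch, assuming a hypothetical `sigma` holding the per-feature standard deviations used for standardisation (sbi estimates these internally from the training data and does not expose them publicly, which is why disabling z-scoring and standardising the data yourself, as suggested above, is the easier route):

```python
import torch

# Elementwise z-scoring D_tilde = (D - mu) / sigma has a diagonal Jacobian,
# so log|det(dD_tilde/dD)| = -sum_i log(sigma_i), independent of theta.
log_abs_det_jac = -torch.log(sigma).sum()

# Undo the data transform on the log-likelihood:
log_like = log_like_tilde + log_abs_det_jac
```

Because this correction is independent of $\theta$, it shifts $\log L$, and hence the NS $\log Z$, by the same constant; the parameter transform $\theta \rightarrow \tilde{\theta}$ should need no extra Jacobian for the likelihood itself, since $\theta$ enters only as a conditioner.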
Thanks for the help!