About log operator in LMC equation #5
Hello @zwxdxcm,

Thank you! But I still have a question. In line 280, the code is:
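```python
net_grad = net_grad / (
    (grad_scaler._scale * (correction * loss_per_pix).unsqueeze(1))
    + torch.finfo(net_grad.dtype).eps
)
```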
Here we ignore the scale factor and eps. `correction = 1/[Q(x)]^alpha`, and `net_grad` is the gradient of the total loss. How does this quotient equal `grad(Q(x)) / Q(x)`? To me it looks like `grad(L) / (1/[Q(x)]^alpha * L)`. Is there anything I have overlooked? Thank you again!
Here is my understanding:
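In short (a sketch of the cancellation, assuming `correction` $c = 1/Q(x)^\alpha$ is detached so the backward pass treats it as a constant, with $s$ = `grad_scaler._scale` and $L$ = `loss_per_pix`):

$$
\begin{aligned}
\texttt{net\_grad} &= \nabla_x\big(s \cdot c \cdot L(x)\big) = s \cdot c \cdot \nabla_x L(x),\\
\frac{\texttt{net\_grad}}{s \cdot c \cdot L(x) + \varepsilon}
 &\approx \frac{\nabla_x L(x)}{L(x)} = \nabla_x \log L(x) = \frac{\nabla_x Q(x)}{Q(x)}.
\end{aligned}
$$

The key point is that `net_grad` is not the gradient of $L$ alone but of the already-corrected loss $s \cdot c \cdot L$, so the $1/Q^\alpha$ factor appears in both numerator and denominator and cancels, leaving $\nabla_x Q(x)/Q(x)$.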
Hello, I have the same question about the code in line 280 of `examples/train_ngp_nerf_prop.py`. According to the previous code, the per-pixel loss that is backpropagated is `correction * loss_per_pix`, and `net_grad` is its gradient with respect to the sample positions. Then I still don't see why

```python
net_grad = net_grad / (
    (grad_scaler._scale * (correction * loss_per_pix).unsqueeze(1))
    + torch.finfo(net_grad.dtype).eps
)
```

equals `grad(Q(x)) / Q(x)`. Maybe my understanding of the partial derivative of the loss is wrong.
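To make my question concrete, here is a toy version of that normalization step (a sketch: the quadratic loss, the shapes, and the scale value below are synthetic stand-ins, not the repo's actual pipeline):

```python
import torch

torch.manual_seed(0)

x = torch.randn(8, 2, requires_grad=True)   # sample coordinates
scale = 128.0                               # stand-in for grad_scaler._scale
alpha = 0.5

def per_sample_loss(x):
    # Hypothetical smooth, positive per-sample loss Q(x).
    return (x ** 2).sum(dim=1) + 1.0

loss_per_pix = per_sample_loss(x)                  # Q(x), shape (8,)
correction = 1.0 / loss_per_pix.detach() ** alpha  # 1/Q(x)^alpha, detached

# Backpropagate the corrected, scaled loss: s * c * L.
(scale * (correction * loss_per_pix).sum()).backward()
net_grad = x.grad.detach()

# The normalization step under discussion.
net_grad = net_grad / (
    (scale * (correction * loss_per_pix.detach())).unsqueeze(1)
    + torch.finfo(net_grad.dtype).eps
)

# Reference: gradient of log Q(x) computed directly.
x_ref = x.detach().clone().requires_grad_(True)
torch.log(per_sample_loss(x_ref)).sum().backward()

print(torch.allclose(net_grad, x_ref.grad, atol=1e-5))  # True
```

Running this prints `True`, i.e. after the division the result numerically matches $\nabla_x \log Q(x)$, so the cancellation described above does seem to be the intent; I just want to confirm.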
Hi,

Thanks for your contribution. I am wondering why there is no log operator in the codebase: the update in `lmc.py` is computed without one, but equation (10) in the paper is written with a log.
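For reference, the log term appears in the usual Langevin Monte Carlo update, which I believe is what equation (10) states (this is the generic LMC form with step size $a$ and noise hyperparameter $b$, not a verbatim quote of the paper):

$$
x_{t+1} = x_t + a\,\nabla_x \log Q(x_t) + \sqrt{2ab}\,\epsilon_t,
\qquad \epsilon_t \sim \mathcal{N}(0, I).
$$

Since $\nabla_x \log Q(x) = \nabla_x Q(x)/Q(x)$, an implementation can apply the log term by dividing the raw gradient by $Q(x)$ rather than ever calling a log function, which is presumably why no explicit log shows up in the code.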