Hi,
Thanks for such great research and well-organized code!
I have one question about the alt_cuda_corr implementation.
In the code, alt_cuda_corr is called as alt_cuda_corr.forward(fmap1, fmap2, coords, radius).
However, the backward call is never used anywhere in the code.
I also saw that a multi-GPU run reports unused parameters, which implies that gradients are not fully propagated through the model.
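To illustrate what I mean, calling the raw extension directly does not record anything in the autograd graph. This is just a rough check with made-up shapes (the real code passes channels-last feature maps and a coordinate grid, and I am assuming the extension returns a list whose first element is the correlation volume), so treat the layouts as placeholders:

```python
import torch
import alt_cuda_corr  # the compiled CUDA extension (needs to be built first)

# Placeholder shapes only -- the real code uses channels-last feature maps
# and a per-pixel coordinate grid; exact layouts may differ.
fmap1 = torch.randn(1, 48, 64, 128, device="cuda", requires_grad=True)
fmap2 = torch.randn(1, 48, 64, 128, device="cuda", requires_grad=True)
coords = torch.zeros(1, 1, 48, 64, 2, device="cuda")
radius = 4

# Calling the raw extension bypasses autograd: no graph node is recorded,
# so nothing can flow back to fmap1 / fmap2.
corr = alt_cuda_corr.forward(fmap1, fmap2, coords, radius)[0]
print(corr.grad_fn)  # prints None
```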
Please tell me if there is something I missed!
Thanks
@wonozlo I think alt_cuda_corr is only an efficient implementation used to measure memory consumption when processing high-resolution input at test time. The authors did not use it for training, so the backward function of alt_cuda_corr is never called. By the way, I remember the backward implementation is wrong.
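If you did want gradients to flow through this path, the usual pattern would be to wrap the extension in a torch.autograd.Function so that its backward kernel is actually invoked. Below is a rough sketch; the backward signature (saved inputs plus the incoming gradient and radius, returning gradients for fmap1, fmap2 and coords) is my assumption about the extension, and as mentioned, the shipped backward kernel itself may be incorrect:

```python
import torch
import alt_cuda_corr  # the compiled CUDA extension

class AltCorr(torch.autograd.Function):
    """Hypothetical wrapper so autograd actually invokes the custom backward kernel."""

    @staticmethod
    def forward(ctx, fmap1, fmap2, coords, radius):
        ctx.save_for_backward(fmap1, fmap2, coords)
        ctx.radius = radius
        # Assuming the extension returns a list whose first element is the correlation volume.
        corr = alt_cuda_corr.forward(fmap1, fmap2, coords, radius)[0]
        return corr

    @staticmethod
    def backward(ctx, grad_corr):
        fmap1, fmap2, coords = ctx.saved_tensors
        # Assumed signature: saved inputs + incoming gradient + radius -> input gradients.
        grad_fmap1, grad_fmap2, grad_coords = alt_cuda_corr.backward(
            fmap1, fmap2, coords, grad_corr.contiguous(), ctx.radius)
        return grad_fmap1, grad_fmap2, grad_coords, None

# Usage inside the correlation block:
#   corr = AltCorr.apply(fmap1, fmap2, coords, radius)
```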