In the provided code, `attn = k - q[:,:,None,:] + pos` is followed by `attn = self.attn_fc(attn)`. However, according to Fig. 2.a and Alg. 1, there should be no `self.attn_fc` component. Could you give an explanation?
Thank you for pointing it out! Yes, there is an error in our pseudo-code in Algorithm 1 (although f_a(·) was defined, we never used it). However, our implementation details in the text do discuss this (Appendix B - Memory-efficient Cross-View Attention).
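
For readers following along, here is a minimal sketch of the attention pattern being discussed: the logits are produced by a small layer (`attn_fc`) applied to the subtractive term `k - q + pos`, rather than by a plain dot product. This is not the authors' exact module; the class name, tensor shapes, and the choice of a single `nn.Linear` for `attn_fc` are assumptions made purely for illustration.

```python
# Sketch only: illustrates attn = attn_fc(k - q[:,:,None,:] + pos),
# not the repository's actual implementation.
import torch
import torch.nn as nn


class SubtractiveAttentionSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # attn_fc maps each per-pair feature (k - q + pos) to a scalar logit.
        # (Assumed to be a single linear layer here.)
        self.attn_fc = nn.Linear(dim, 1)

    def forward(self, q, k, v, pos):
        # Assumed shapes:
        #   q:   (B, Nq, C)      queries
        #   k,v: (B, Nq, Nk, C)  keys/values gathered per query
        #   pos: (B, Nq, Nk, C)  positional embedding
        attn = k - q[:, :, None, :] + pos        # (B, Nq, Nk, C)
        attn = self.attn_fc(attn).squeeze(-1)    # (B, Nq, Nk) logits
        attn = attn.softmax(dim=-1)              # normalize over keys
        return torch.einsum('bqk,bqkc->bqc', attn, v)


if __name__ == "__main__":
    B, Nq, Nk, C = 2, 4, 8, 16
    m = SubtractiveAttentionSketch(C)
    out = m(torch.randn(B, Nq, C), torch.randn(B, Nq, Nk, C),
            torch.randn(B, Nq, Nk, C), torch.randn(B, Nq, Nk, C))
    print(out.shape)  # torch.Size([2, 4, 16])
```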