Hi there,

I read your paper, and you mentioned that you used padding to ensure the input and output of each convolutional layer have the same length. However, in the code you set padding=int(kernel_size/2), which does not guarantee that. For example, if the input is 111 and the kernel_size is 4, the input after padding would be 0011100, and the length of the output would be 4.

I googled and found that PyTorch seems to have no equivalent of TensorFlow's 'same' padding for convolutional layers. Is there any workaround to achieve this?

Thanks a lot.
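The arithmetic in the example can be checked with the standard stride-1 output-length formula; a quick sketch (the helper name `conv_out_len` is illustrative, not from the thread):

```python
def conv_out_len(length, kernel_size, padding):
    # Output length of a stride-1 convolution with symmetric zero padding
    # on both sides: L_out = L_in + 2*padding - kernel_size + 1.
    return length + 2 * padding - kernel_size + 1

# Odd kernel: padding = kernel_size // 2 preserves the length.
print(conv_out_len(3, 3, 3 // 2))  # -> 3

# Even kernel (the example above): length 3, kernel 4, padding int(4/2) = 2.
print(conv_out_len(3, 4, 4 // 2))  # -> 4, not 3
```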
Yeah, the lack of "same" padding is frustrating. You're right: technically, with this code the convolution output size is only preserved for odd-width filters. One workaround for even-width filters is torch.nn.functional.pad, which lets you manually pad zeros on either side of a Variable in any dimension. In practice I'm not sure it will make a noticeable difference, though.
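A minimal sketch of that workaround for a 1-D convolution, assuming stride 1 (the function name `same_pad_conv1d` and the asymmetric left/right split are illustrative, not the repo's code): an even kernel needs kernel_size − 1 total padding, which F.pad can split unevenly across the two sides.

```python
import torch
import torch.nn.functional as F

def same_pad_conv1d(x, weight):
    # x: (batch, in_channels, length); weight: (out_channels, in_channels, k).
    # For stride 1, "same" output needs k - 1 total padding; for even k this
    # cannot be split symmetrically, so pad one extra zero on the right.
    k = weight.shape[-1]
    left = (k - 1) // 2
    right = (k - 1) - left
    x = F.pad(x, (left, right))  # zero-pad the last (length) dimension
    return F.conv1d(x, weight)

x = torch.zeros(1, 1, 3)       # length-3 input, as in the example above
w = torch.zeros(1, 1, 4)       # even kernel width 4
y = same_pad_conv1d(x, w)
print(y.shape[-1])             # output length matches the input length: 3
```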