
Does Concatenate order matter? #995

Open
ecilay opened this issue Feb 29, 2024 · 1 comment

Comments

ecilay commented Feb 29, 2024

I have the code below, which concatenates zero padding onto the input tensor before a conv layer. This is the original corresponding PyTorch code:
x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
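
For illustration, assuming the tensor is laid out as (N, H, W, Cin) like the AIT graph below, and that the padding appends one zero row at the end of H and one zero column at the end of W (which is what the concatenations below do), the pad call would look roughly like this; the concrete pad tuple is an assumption, not taken from the original code:

import torch
import torch.nn.functional as F

x = torch.randn(1, 896, 1152, 4)  # placeholder (N, H, W, Cin) input

# F.pad pads dimensions from last to first, so for an NHWC tensor the tuple
# (0, 0, 0, 1, 0, 1) leaves Cin untouched, appends one zero column to W and
# one zero row to H, matching the two concatenations in the AIT code below.
x = F.pad(x, pad=(0, 0, 0, 1, 0, 1), mode="constant", value=0)
print(x.shape)  # torch.Size([1, 897, 1153, 4])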

In AIT, assume the input tensor shape is (N, H, W, Cin).
Conversion code 1, pad H first, followed by W:

T1 = ops.full()(shape=[N, 1, W, Cin], fill_value=0)
R = ops.concatenate()(inputs=[x, T1], dim=1)
T2 = ops.full()(shape=[N, H + 1, 1, Cin], fill_value=0)
x = ops.concatenate()(inputs=[R, T2], dim=2)

and this gives different results than
Conversion code 2, pad W first, followed by H:

T1 = ops.full()(shape=[N, H, 1, Cin], fill_value=0)
R = ops.concatenate()(inputs=[x, T1], dim=2)
T2 = ops.full()(shape=[N, 1, W+1, Cin], fill_value=0)
x = ops.concatenate()(inputs=[R, T2], dim=1)

This code is run many times during inference. I find that when I use the 1st block of code (H-first concat), the conversion only works for H > W; the 2nd conversion (W-first concat) only works when W > H. Square inputs work with either approach. Example dimensions to try for (H, W): (896, 1152), (1152, 896), (1024, 1024).
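
For reference, a minimal plain-PyTorch sketch (N and Cin are placeholder values; H and W taken from the list above) showing that the two concat orders should produce identical padded tensors, so the divergence appears to be on the AIT side:

import torch

N, H, W, Cin = 1, 896, 1152, 4
x = torch.randn(N, H, W, Cin)

# Conversion 1: pad H first, then W
r1 = torch.cat([x, torch.zeros(N, 1, W, Cin)], dim=1)
r1 = torch.cat([r1, torch.zeros(N, H + 1, 1, Cin)], dim=2)

# Conversion 2: pad W first, then H
r2 = torch.cat([x, torch.zeros(N, H, 1, Cin)], dim=2)
r2 = torch.cat([r2, torch.zeros(N, 1, W + 1, Cin)], dim=1)

print(torch.equal(r1, r2))  # True: both orders yield the same (N, H+1, W+1, Cin) tensor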

Contributor

kadeng commented Mar 1, 2024

Thanks for your report, we will try to reproduce the issue and then get back to you.
