FYI: The models team has been hitting a few correctness issues with multi-device reshape/transpose. They are tough to repro, but Ammar has now found that the unit tests may not be catching them because the failures only occur on multi-device.
Description
It seems that `ttnn.reshape` is inconsistent between single-device and multi-device execution. Specifically, the inconsistency shows up when doing a `[W, X, Y*Z]` -> `[W, X, Y, Z]` reshape on a tensor that lives on a single device vs. a tensor that is replicated across devices. For a single-device tensor the outputs match torch, but for a multi-device tensor they do not.
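For reference, the reshape in question only splits the last dimension, so it should never move data. Below is a minimal sketch of the expected (torch-matching) semantics, illustrated with numpy since torch and numpy share row-major reshape behavior; the concrete shape values are hypothetical, chosen only for the example.

```python
import numpy as np

# A [W, X, Y*Z] -> [W, X, Y, Z] reshape is a pure metadata change in
# row-major layout: the flattened element order must be untouched.
W, X, Y, Z = 2, 3, 4, 5          # hypothetical small shapes for illustration
t = np.arange(W * X * Y * Z).reshape(W, X, Y * Z)
expected = t.reshape(W, X, Y, Z)

assert expected.shape == (W, X, Y, Z)
assert np.array_equal(expected.ravel(), t.ravel())
# Element (w, x, y, z) of the output equals element (w, x, y*Z + z) of the input.
assert expected[1, 2, 3, 4] == t[1, 2, 3 * Z + 4]
```

A correct multi-device replicated path should satisfy the same invariants on each replica, which is what the single-device path already does.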
Repro steps
1. Check out the branch `avora/reshape_bug`.
2. Run `pytest models/demos/t3000/llama2_70b/tests/test_rope_reshape.py`