Values in (2^15, 2^23) are not rounded by rounding ops #13544

Open
Tracked by #13795
jdh8 opened this issue Oct 7, 2024 · 2 comments
Assignees: mouliraj-mcw
Labels: bug (Something isn't working) · op_cat: eltwise · precision (Precision issues) · pytorch-compiler

Comments

jdh8 (Contributor) commented on Oct 7, 2024

Currently, rounding ops ignore values outside the range of i16, i.e. [-2^15, 2^15):

v_if (v <= SHRT_MIN || v >= SHRT_MAX) {
    // Out-of-i16-range inputs are passed through unrounded
    result = v;
}
v_endif;

Since there are 24 significand bits (23 explicit) in f32, the maximum finite non-integer in f32 is 2^23 − 0.5. As a result, non-integers with absolute value in (2^15, 2^23) are returned as-is and not rounded.
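For reference, here is a small standalone C check (plain C, not TT kernel code) of the representability argument: 2^23 − 0.5 is still a non-integer in f32, and a value such as 40000.5, which lies in (2^15, 2^23), has a well-defined rounded result that the op currently fails to produce.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    // Largest finite non-integer representable in f32: 2^23 - 0.5 = 8388607.5.
    float max_noninteger = 8388608.0f - 0.5f;
    printf("%.1f is an integer? %d\n", max_noninteger,
           max_noninteger == truncf(max_noninteger));   // prints 0 (no)

    // A non-integer in (2^15, 2^23): above the i16 cutoff of 2^15 = 32768,
    // but still carrying a fractional bit in f32.
    float v = 40000.5f;
    printf("roundf(%.1f) = %.1f\n", v, roundf(v));      // 40001.0
    return 0;
}
```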

Either of the following would solve this issue:

  • Work on i32 instead of i16
  • A second iteration that works on magnitudes > 2^15 (a plain-C sketch of this idea follows below)
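As a rough illustration of the second option (plain C under strict IEEE-754 semantics, i.e. no -ffast-math; not SFPI kernel code), any |v| < 2^23 can be rounded without a float-to-int conversion by adding and subtracting 2^23, which forces the FPU to drop the fractional bits; a compare-and-adjust then turns the nearest-even result into floor/ceil as needed.

```c
#include <math.h>

// Round-to-nearest (ties to even) for |v| < 2^23, no int conversion needed.
static float round_nearest_large(float v) {
    const float two_23 = 8388608.0f;                 // 2^23
    if (fabsf(v) >= two_23)
        return v;                                    // already an integer in f32
    return (v >= 0.0f) ? (v + two_23) - two_23
                       : (v - two_23) + two_23;
}

// Example adjustment: floor derived from the nearest-integer result.
static float floor_large(float v) {
    float r = round_nearest_large(v);
    return (r > v) ? r - 1.0f : r;                   // step down if we overshot
}
```

Ceil and round-half-away-from-zero would use analogous one-step adjustments; this is only meant to show that the (2^15, 2^23) range is coverable without int32 support.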
jdh8 added the bug (Something isn't working) label on Oct 7, 2024
jdh8 changed the title from "Values in (2¹⁵, 2²³) are not rounded by rounding ops" to "Values in (2^15, 2^23) are not rounded by rounding ops" on Oct 7, 2024
jdh8 added the op_cat: eltwise and precision (Precision issues) labels and removed the community label on Oct 15, 2024
mouliraj-mcw self-assigned this on Nov 26, 2024
mouliraj-mcw (Contributor) commented:

Hi @jdh8
As noted in the C kernel, support for float-to-int32 conversion is currently unavailable, so we will wait until that support is added.

Hi @rdjogoTT, @rtawfik01
Is support for float-to-int32 conversion possible?

cc: @eyonland

rdjogoTT (Contributor) commented:

@mouliraj-mcw I do not believe support for float-to-int32 will become available as a single call similar to float_to_int16, because the underlying instruction does not support int32 as the output of the conversion. We do have a manually implemented conversion that I made for the Typecast OP, found here: https://github.com/tenstorrent/tt-llk-wh-b0/blob/eadfdb466dcc738803969f9eda76aacbb33b5ad6/common/inc/sfpu/ckernel_sfpu_typecast.h#L62-L97
I wouldn't recommend simply dropping this kernel in place of float_to_int, because that would be very slow. It would be better to combine the two so that there isn't as much of a perf hit.
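For context on why such a manual conversion is costly per element, here is a hedged, standalone C sketch of what a bit-level float32-to-int32 conversion (truncation toward zero, saturating on overflow) has to do; it is illustrative only and not the code behind the ckernel_sfpu_typecast.h link above.

```c
#include <stdint.h>
#include <string.h>

// Bit-level f32 -> i32 conversion: decompose sign/exponent/mantissa by hand.
static int32_t f32_to_i32_trunc(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                          // reinterpret bits

    uint32_t sign = bits >> 31;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127;    // unbiased exponent
    uint32_t mant = (bits & 0x7FFFFF) | 0x800000;            // implicit leading 1

    if (exp < 0)  return 0;                                  // |f| < 1 truncates to 0
    if (exp > 30) return sign ? INT32_MIN : INT32_MAX;       // saturate on overflow

    // mant is a 24-bit integer scaled by 2^(exp - 23); shift it into place.
    int32_t mag = (exp <= 23) ? (int32_t)(mant >> (23 - exp))
                              : (int32_t)(mant << (exp - 23));
    return sign ? -mag : mag;
}
```

Folding this kind of shift-and-mask work directly into the rounding op, rather than invoking a full per-element typecast, is presumably what "combine the two" refers to above.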
