Canonicalization of GEPs to i8 #1292

Open
alan-baker opened this issue Jan 29, 2024 · 10 comments

@alan-baker
Collaborator

alan-baker commented Jan 29, 2024

LLVM now canonicalizes some (constant) GEPs to perform address calculations in terms of bytes. This is part of a larger change of moving towards ptradd instead of getelementptr. It will require changes in clspv to keep pace.
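
For illustration only (a hand-written sketch, not output copied from LLVM or clspv), a constant GEP that used to be expressed in terms of the element type now shows up with the offset pre-scaled into bytes:

; before: address calculation in units of the element type (half is 2 bytes)
%p.old = getelementptr inbounds half, ptr addrspace(1) %buf, i32 3
; after canonicalization: the same address expressed as a byte offset
%p.new = getelementptr inbounds i8, ptr addrspace(1) %buf, i32 6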

At the moment I'm seeing two kinds of failures:

  • unit tests with incorrect codegen expectations (~115 per arch)
  • CTS failures that look functionally incorrect

For example, in test/CommonBuiltins/min/half2_fmin.cl the loads are broken down into four 1-byte loads, a vector creation, and a bitcast from v4uchar to v2half (a rough sketch of the pattern follows the list below). This is problematic for a few reasons:

  1. Not all devices support byte addressing, but clspv now requires it.
  2. It relies on downstream drivers to properly coalesce memory accesses. This might be OK, but I don't have easily accessible performance numbers either way.
  3. It muddies the IR. This isn't just an aesthetic concern. There is a binary size increase in these cases due to more instructions getting used.
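
Roughly the shape of the fragmented pattern, written by hand to illustrate the point (the actual test expectations are in SPIR-V, not LLVM IR):

; four 1-byte loads ...
%b0 = load i8, ptr addrspace(1) %p
%p1 = getelementptr inbounds i8, ptr addrspace(1) %p, i32 1
%b1 = load i8, ptr addrspace(1) %p1
%p2 = getelementptr inbounds i8, ptr addrspace(1) %p, i32 2
%b2 = load i8, ptr addrspace(1) %p2
%p3 = getelementptr inbounds i8, ptr addrspace(1) %p, i32 3
%b3 = load i8, ptr addrspace(1) %p3
; ... a vector creation ...
%v0 = insertelement <4 x i8> poison, i8 %b0, i32 0
%v1 = insertelement <4 x i8> %v0, i8 %b1, i32 1
%v2 = insertelement <4 x i8> %v1, i8 %b2, i32 2
%v3 = insertelement <4 x i8> %v2, i8 %b3, i32 3
; ... and a bitcast to the real type, instead of a single <2 x half> load
%val = bitcast <4 x i8> %v3 to <2 x half>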

CTS failures:

  • basic/vstore_local (fails char2 and uchar2)
  • relationals/shuffle_copy
  • relationals/shuffle_function_call
  • relationals/shuffle_built_in
  • relationals/shuffle_built_in_dual_input
  • relationals/shuffle_array_cast

The likely solution is a change in type inference that can find a better type than i8 by looking at the GEP's users (right now it stops at any GEP).
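
As a hand-written illustration of the idea (not code from clspv): even when the GEP itself is typed as i8, its users still carry the information we want.

; the GEP only says "i8", but its user loads a <2 x half>, so inference
; that walks the GEP's users could still recover <2 x half> for the buffer
%p = getelementptr inbounds i8, ptr addrspace(1) %buf, i32 4
%v = load <2 x half>, ptr addrspace(1) %p, align 4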

@alan-baker
Collaborator Author

I have a local hack that undoes this in certain circumstances. That makes the CTS pass, but this is likely uncovering a pre-existing bug: we were already generating the wrong addressing logic, and it is just being exposed by the new code generation.

rjodinchr added a commit to rjodinchr/clspv that referenced this issue Feb 1, 2024
- fix GetIdxsForTyFromOffset when called on unsized types
- rework runOnUpgradeableConstantCasts to be more versatile/general
- add a case in runOnImplicitGEP for implicit cast between
  clspvResourceOrLocal and GEP
- fix runOnAllocaNotAliasing to avoid lowering alloca that do not need
  to be lowered
- Update tests
- Some tests need more work and are marked FAIL

Ref google#1292
@rjodinchr
Collaborator

This change in LLVM is negatively impacting performance on ChromeOS: between a 10% and a 30% regression, depending on the benchmark.

rjodinchr added a commit that referenced this issue Feb 1, 2024
* update LLVM & rework runOnUpgradeableConstantCasts

- fix GetIdxsForTyFromOffset when called on unsized types
- rework runOnUpgradeableConstantCasts to be more versatile/general
- add a case in runOnImplicitGEP for implicit cast between
  clspvResourceOrLocal and GEP
- fix runOnAllocaNotAliasing to avoid lowering alloca that do not need
  to be lowered
- Update tests
- Some tests need more work and are marked FAIL

Ref #1292

* remove debug code and add dce to clean a test
dneto0 added a commit to dneto0/clspv that referenced this issue Feb 1, 2024
The new code is correct.  The constant indexing into an array of arrays
of floats collapses into a constant index of an array of floats.

Ref google#1292
alan-baker pushed a commit that referenced this issue Feb 1, 2024
…cl (#1298)

The new code is correct.  The constant indexing into an array of arrays
of floats collapses into a constant index of an array of floats.

Ref #1292
@alan-baker
Collaborator Author

alan-baker commented Feb 5, 2024

Clspv will generate invalid SPIR-V in some cases due to this issue. Take test/Coherent/frexp.cl for example. When compiled with -cl-native-math, the resulting SPIR-V produces the following validation error:

error: line 78: GLSL.std.450 Frexp: expected operand Exp data type to be a 32-bit int scalar or vector type
  %50 = OpExtInst %float %49 Frexp %45 %48

The data + 1 expression triggers GEP canonicalization, and clspv picks i8 as the underlying type for data. This is a pre-existing issue that is now triggered more frequently.

I plan to update the coherent tests, so here is the source for reference:

kernel void foo(global int *data, global float* x) {
  float y = data[0];
  barrier(CLK_GLOBAL_MEM_FENCE);
  *x = frexp(y, data + 1);
}
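
For reference, the data + 1 argument presumably lowers to something like the following byte-offset GEP (a sketch; the exact IR clspv sees may differ):

; "data + 1" with sizeof(int) == 4, expressed in bytes
%exp = getelementptr inbounds i8, ptr addrspace(1) %data, i32 4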

alan-baker added a commit to alan-baker/clspv that referenced this issue Feb 5, 2024
refs google#1292

* Convert Coherent tests from .cl to .ll
* No longer tests the whole flow and instead is targeted at the
  SPIRVProducer only
  * more robust since it is more specific
alan-baker added a commit that referenced this issue Feb 5, 2024
refs #1292

* Convert Coherent tests from .cl to .ll
* No longer tests the whole flow and instead is targeted at the
  SPIRVProducer only
  * more robust since it is more specific
@alan-baker
Collaborator Author

Thinking more about a general solution, I think clspv needs to distinguish between two different needs for type inference:

  1. Data type of a particular instruction
  2. Data type of the SPIR-V interface

In the first case, type inference can't just look past GEPs, since we need to know how the type is interpreted for address calculations; in the second case, it's clear we need a more holistic view of how the interface is used.
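
A hand-written example of the distinction (names and widths are illustrative, not taken from a real test):

; per-instruction question: the GEP below genuinely does i8 (byte) arithmetic
%g = getelementptr inbounds i8, ptr addrspace(1) %data, i32 4
; interface question: looking across all users, %data is best declared
; as a buffer of i32 in the SPIR-V interface, not a buffer of i8
%x = load i32, ptr addrspace(1) %g, align 4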

alan-baker added a commit to alan-baker/clspv that referenced this issue Feb 6, 2024
refs google#1292

* Rewrite as more targeted tests to reduce potential perturbations
rjodinchr pushed a commit that referenced this issue Feb 6, 2024
refs #1292

* Rewrite as more targeted tests to reduce potential perturbations
rjodinchr added a commit to rjodinchr/clspv that referenced this issue Feb 7, 2024
Since the last update of LLVM introduced the canonicalization of GEPs to
i8, we have a case where the user of a phi node is now an i8 GEP. This
led the inference of the phi type to be lowered to i8 while it could
have used a higher type.

To avoid that, track possible upgrades of GEPs of a PHI and upgrade them
if all the users of the PHI can use the new higher type.

This fixes the performance regression for the ChromeOS workloads.

Ref google#1292
alan-baker added a commit to alan-baker/clspv that referenced this issue Feb 7, 2024
refs google#1292
refs google#1303

* Fixed a bug in remove unused arguments
  * attributes on parameters weren't handled
* rewrite tests to be more targeted at the actual feature
rjodinchr pushed a commit that referenced this issue Feb 8, 2024
refs #1292
refs #1303

* Fixed a bug in remove unused arguments
  * attributes on parameters weren't handled
* rewrite tests to be more targeted at the actual feature
rjodinchr added a commit that referenced this issue Feb 8, 2024
* upgrade GEPs of PHI when possible

Since the last update of LLVM introduced the canonicalization of GEPs to
i8, we have a case where the user of a phi node is now an i8 GEP. This
led the inference of the phi type to be lowered to i8 while it could
have used a higher type.

To avoid that, track possible upgrades of GEPs of a PHI and upgrade them
if all the users of the PHI can use the new higher type.

This fixes the performance regression for the ChromeOS workloads.

Ref #1292

* simplify test

* remove 1292-phi_users.cl

---------

Co-authored-by: Alan Baker <alanbaker@google.com>
alan-baker added a commit to alan-baker/clspv that referenced this issue Feb 8, 2024
ref google#1292

* Update tests to remove XFAILs
rjodinchr pushed a commit that referenced this issue Feb 9, 2024
ref #1292

* Update tests to remove XFAILs
@dneto0
Collaborator

dneto0 commented Feb 9, 2024

#1306 has an interesting line here: https://github.com/google/clspv/pull/1306/files#diff-d82932682c3cce860c8c728b57e2a68e6b04593c1edb206934c340665fb71f29L16

kernel void bar(global int *A, global int *B, int n) { apple(A + 1, B, n); }

That PR changes the argument from A+1 to A+n, where n is non-constant.

This is an interesting distinction from the perspective of a general solution to the type inference problem. (Because the constant multiple of the element size is folded into the constant offset, and eventually has to be disentangled from a member offset at a deeper level of the type hierarchy. The folding loses information that has to be recovered somehow, in general.)

Any general solution will need to have test cases that add a constant to a decayed array in this way.
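
To make the distinction concrete, here is my own sketch of the two shapes (index widths and exact operands are illustrative):

; A + 1: the element size (4 bytes for int) is folded into the constant,
; so the offset is indistinguishable from, say, a field offset in a struct
%p0 = getelementptr inbounds i8, ptr addrspace(1) %A, i32 4

; A + n: the GEP currently keeps its element type, so i32 is recoverable
%p1 = getelementptr inbounds i32, ptr addrspace(1) %A, i32 %n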

@alan-baker
Collaborator Author

#1306 has an interesting line here: https://github.com/google/clspv/pull/1306/files#diff-d82932682c3cce860c8c728b57e2a68e6b04593c1edb206934c340665fb71f29L16

kernel void bar(global int *A, global int *B, int n) { apple(A + 1, B, n); }

That PR changes the argument from A+1 to A+n, where n is non-constant.

This is an interesting distinction from the perspective of a general solution to the type inference problem. (Because the constant multiple of the element size is folded into the constant offset, and eventually has to be disentangled from a member offset at a deeper level of the type hierarchy. The folding loses information that has to be recovered somehow, in general.)

Any general solution will need to have test cases that add a constant to a decayed array in this way.

I expect in the future that the solution applied to that test will also fall over and LLVM will instead produce something akin to:

%mul = mul i32 %n, %size_of_A
%gep = getelementptr i8, ptr addrspace(1) %A, i32 %mul

@BukeBeyond

BukeBeyond commented Mar 16, 2024

At the beginning of this year, we implemented the experimental i8 Canonicalization of GEPs in our project. Initially, this feature simply converted offsets to i8, maintaining the original sizes of loads and stores without imposing a speed penalty. Additionally, we incorporated the fix from Issue #1269 and a custom optimization sequence, particularly running Clspv-opt with lower-addrspacecast and replace-llvm-intrinsics passes prior to Clspv.

The impact has been remarkable! Since these changes, Clspv has achieved unprecedented stability. We've successfully compiled a wide array of algorithms and higher-level language constructs from C++26 and beyond, including custom vectors, templated tensors, lambda chains, recursive filters, and ray tracing, all with almost no crashes from Clspv.

However, the main branches of Clspv and LLVM have seen significant changes recently, which initially prevented us from merging the latest commits without issues. Fortunately, this situation has improved, and we are now able to compile most of our projects using the current and unmodified version of Clspv.

Despite these advances, there is a new challenge: a great number of load operations have become fragmented into bytes, leading to an average 15% performance decrease, even on modern Nvidia hardware. This suggests the impact could be even larger were it not for the mitigating effects of the drivers' compiler optimizations or the hardware's memory latency reduction mechanisms.

As engineers, we should strive to restore the original sizes of these loads and stores to minimize their instruction sequences. This will yield more succinct SPIR-V and faster loading times as well.

Given the talent and dedication in this project, I am confident we will address this issue soon.

@rjodinchr
Collaborator

@BukeBeyond At the moment it feels like all identified regressions have been addressed from the ChromeOS point of view, meaning we do not see regressions on the benchmarks we run, nor on the OpenCL CTS.

If you have small examples that use byte accesses where it is clear they could be coalesced, I'd be happy to take a look at them and try to resolve any remaining regression.

@BukeBeyond

BukeBeyond commented Mar 17, 2024

Fortunately, Romaric fixed this case of load fragmentation (#1318) right away. He was able to do it within minutes, whereas the closer investigation took me a couple of hours. :-)

It was a specific case of i32 loads, which we use extensively for Unified Addressing (and Pointer) simulation under Vulkan. Now our ray tracer is back to full speed, and with a slightly smaller SPIR-V size to boot. :-)

There remains only a handful of cases of load fragmentation on our end, but they will take a bit more time to investigate and simplify.

It seems the problems of i8 GEPs and the upcoming ptradd have been mostly solved. Yet the great benefits of this evolution are already here.

We program very interactively, hundreds of compiles per hour.
When first experimenting with Clspv, we would get a few crashes per hour.
After Romaric's bug fixes last August, Clspv's crash rate dropped to a few a day.
After the canonicalization of GEPs to i8, the crashes reduced to just a few a month!

So things are looking quite good for the future of Clspv!

Happy Saint Patrick's Day! May luck's gentle hand guide ye, and, of far greater worth, may the warmth of good kin and comrades be ever by your side! :-) 🍀🌈 🥇☘️

@BukeBeyond

And here is the latest fragmentation find: #1322.
