Canonicalization of GEPs to i8 #1292
I have a local hack that undoes this in certain circumstances. That makes the CTS pass, but this is likely uncovering a bug where we are already generating the wrong addressing logic, which is just being exposed by the new code generation.
- fix GetIdxsForTyFromOffset when called on unsized types
- rework runOnUpgradeableConstantCasts to be more versatile/general
- add a case in runOnImplicitGEP for implicit cast between clspvResourceOrLocal and GEP
- fix runOnAllocaNotAliasing to avoid lowering allocas that do not need to be lowered
- Update tests - Some tests need more work and are marked FAIL
Ref google#1292
This change in LLVM is negatively impacting performance on ChromeOS: between a 10% and 30% regression depending on the benchmark.
* update LLVM & rework runOnUpgradeableConstantCasts
  - fix GetIdxsForTyFromOffset when called on unsized types
  - rework runOnUpgradeableConstantCasts to be more versatile/general
  - add a case in runOnImplicitGEP for implicit cast between clspvResourceOrLocal and GEP
  - fix runOnAllocaNotAliasing to avoid lowering allocas that do not need to be lowered
  - Update tests - Some tests need more work and are marked FAIL
  Ref #1292
* remove debug code and add dce to clean a test
The new code is correct. The constant indexing into an array of arrays of floats collapses into a constant index of an array of floats. Ref google#1292
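The collapse is just byte-offset arithmetic. As a hedged illustration (the names below are hypothetical, not clspv code), a constant GEP into `[4 x [8 x float]]` addresses the same byte as a single constant index into the flattened `[32 x float]`:

```python
# Hypothetical sketch: a[outer][inner] in [4 x [8 x float]] vs. b[index]
# in the collapsed [32 x float] -- both name the same byte offset.
FLOAT_SIZE = 4   # sizeof(float) in bytes
INNER_LEN = 8    # inner array length in [4 x [8 x float]]

def nested_offset(outer: int, inner: int) -> int:
    """Byte offset of a[outer][inner] in [4 x [8 x float]]."""
    return (outer * INNER_LEN + inner) * FLOAT_SIZE

def flat_offset(index: int) -> int:
    """Byte offset of b[index] in the flattened [32 x float]."""
    return index * FLOAT_SIZE

# a[2][3] and the collapsed b[19] (19 = 2*8 + 3) address the same byte.
print(nested_offset(2, 3) == flat_offset(2 * 8 + 3))  # True
```

So the constant index of the array of floats carries the same address information as the nested GEP; only the type structure is lost.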
Clspv will generate invalid SPIR-V in some cases due to this issue. I plan to update the coherent tests, so here is the source for reference:
refs google#1292
* Convert Coherent tests from .cl to .ll
* No longer tests the whole flow and instead is targeted at the SPIRVProducer only
* more robust since it is more specific
Thinking more about a general solution, I think clspv needs to distinguish between the two needs for type inference:
Type inference can't just look past GEPs in the first case, since we need to know how the type is interpreted for address calculations; but for the second case, it's clear we need a more holistic view of how the interface is used.
refs google#1292 * Rewrite as more targeted tests to reduce potential perturbations
As the last update of LLVM introduced the canonicalization of GEPs to i8, we have a case where the users of a phi node are now i8. This led the inferred type of the phi to be lowered to i8 while it could have had a higher type. To avoid that, track possible upgrades of GEPs of a PHI and upgrade them if all the users of the PHI can use the new higher type. This fixes the performance regression for the ChromeOS workloads. Ref google#1292
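The upgrade condition described above can be sketched roughly as follows (a simplification, not clspv's actual implementation): a PHI lowered to i8 may be retyped to a wider element type only if every GEP user's constant byte offset is expressible in units of that wider type.

```python
# Hedged sketch of the PHI-upgrade check: all byte offsets used by the
# PHI's GEP users must be multiples of the candidate type's size.
def can_upgrade_phi(byte_offsets, candidate_size):
    """True if every user's offset divides evenly by candidate_size."""
    return all(off % candidate_size == 0 for off in byte_offsets)

# Offsets 0, 4, 8 can be retyped as i32 (size 4) indices 0, 1, 2 ...
print(can_upgrade_phi([0, 4, 8], 4))  # True
# ... but an offset of 6 forces the PHI to stay at i8.
print(can_upgrade_phi([0, 4, 6], 4))  # False
```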
refs google#1292 refs google#1303
* Fixed a bug in remove unused arguments
* attributes on parameters weren't handled
* rewrite tests to be more targeted at the actual feature
* upgrade GEPs of PHI when possible
  As the last update of LLVM introduced the canonicalization of GEPs to i8, we have a case where the users of a phi node are now i8. This led the inferred type of the phi to be lowered to i8 while it could have had a higher type. To avoid that, track possible upgrades of GEPs of a PHI and upgrade them if all the users of the PHI can use the new higher type. This fixes the performance regression for the ChromeOS workloads. Ref #1292
* simplify test
* remove 1292-phi_users.cl
Co-authored-by: Alan Baker <alanbaker@google.com>
ref google#1292 * Update tests to remove XFAILs
#1306 has an interesting line here: https://github.com/google/clspv/pull/1306/files#diff-d82932682c3cce860c8c728b57e2a68e6b04593c1edb206934c340665fb71f29L16
That PR changes the argument from A+1 to A+n, where n is non-constant. This is an interesting distinction from the perspective of a general solution to the type inference problem. (Because the constant multiple of the element size is folded into the constant offset, and eventually has to be disentangled from a member offset at a deeper level of the type hierarchy. The folding loses information that has to be recovered somehow, in general.) Any general solution will need to have test cases that do add a constant to a decayed array in this way.
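The disentangling step can be illustrated with a small sketch (hypothetical code mimicking the idea behind clspv's GetIdxsForTyFromOffset, not its actual implementation): given a flat byte offset into a nested array type, recover one index per type level by dividing through the strides.

```python
# Hedged sketch: split a constant byte offset back into per-level array
# indices for a nested array type, innermost stride = element size.
def idxs_from_offset(offset, dims, elem_size):
    """Return ([index per array level], leftover sub-element byte offset)."""
    # Compute strides outermost-first, e.g. dims (4, 8), elem 4 -> [32, 4].
    strides = []
    stride = elem_size
    for d in reversed(dims):
        strides.append(stride)
        stride *= d
    strides.reverse()
    idxs = []
    for s in strides:
        idxs.append(offset // s)
        offset %= s
    return idxs, offset

# Byte offset 76 into [4 x [8 x float]] recovers element [2][3].
print(idxs_from_offset(76, (4, 8), 4))  # ([2, 3], 0)
```

When a non-constant `n` is involved, the folded constant no longer decomposes this cleanly, which is exactly the information loss described above.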
I expect in the future that the solution applied to that test will also fall over, and LLVM will instead produce something akin to:
At the beginning of this year, we implemented the experimental i8 canonicalization of GEPs in our project. Initially, this feature simply converted offsets to i8, maintaining the original sizes of loads and stores without imposing a speed penalty. Additionally, we incorporated the fix from Issue #1269 and a custom optimization sequence, particularly running clspv-opt with the lower-addrspacecast and replace-llvm-intrinsics passes prior to Clspv.

The impact has been remarkable! Since these changes, Clspv has achieved unprecedented stability. We've successfully compiled a wide array of algorithms and higher-level language constructs from C++26, including custom vectors, templated tensors, lambda chains, recursive filters, and ray tracing, all with almost no crashes from Clspv.

However, the main branches of Clspv and LLVM have seen significant changes recently, which initially prevented us from merging the latest commits without issues. Fortunately, this situation has improved, and we are now able to compile most of our projects using the current and unmodified version of Clspv.

Despite these advances, there is a new challenge: a great number of load operations have become fragmented into bytes, leading to an average 15% performance decrease, even on modern Nvidia hardware. This suggests a possibly larger impact were it not for the mitigating effects of the drivers' compiler optimizations or the hardware's memory latency reduction mechanisms.

As engineers, we should strive to restore the original sizes of these loads and stores to minimize their instruction sequences. This will yield more succinct Spirv, and faster loading times as well. Given the talent and dedication in this project, I am confident we will address this issue soon.
@BukeBeyond At the moment it feels like all identified regressions have been addressed from a ChromeOS point of view, meaning that we do not have regressions on the benchmarks we run, nor on the OpenCL-CTS. If you have some small examples using byte accesses where it is clear they could coalesce, I'll be happy to have a look at them and try to solve any potential regression.
Fortunately, Romaric fixed this #1318 case of load fragmentation right away. He was able to do it within minutes, whereas the closer investigation took me a couple of hours. :-) It was a specific case of i32 loads, which we use extensively for Unified Addressing (and Pointer) simulation under Vulkan. Now, our ray tracer is back to full speed, and with a slightly smaller Spirv size to boot. :-)

There remain only a handful of cases of load fragmentation on our end, but they will take a bit more time to investigate and simplify. It seems the problems of i8 GEPs and the upcoming ptradd have been mostly solved, yet the great benefits of this evolution are already here. We program very interactively, hundreds of compiles per hour. So things are looking quite good for the future of Clspv!

Happy Saint Patrick's Day! May luck's gentle hand guide ye, and, of far greater worth, may the warmth of good kin and comrades be ever by your side! :-) 🍀🌈 🥇☘️
And here is the latest fragmentation find: #1322.
LLVM now canonicalizes some (constant) GEPs to perform address calculations in terms of bytes. This is part of a larger change of moving towards `ptradd` instead of `getelementptr`. This will require changes in clspv to keep pace. At the moment I'm seeing different failures:
For example, in `test/CommonBuiltins/min/half2_fmin.cl` the loads are broken down into 4 one-byte loads, a vector creation, and a bitcast from `v4uchar` to `v2half`. This is problematic for a few reasons and leads to CTS failures:
The likely solution is a change in type inference that can find a better type than i8 by looking at the GEP's users (right now it stops at any GEP).
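One possible shape for that inference, sketched very loosely (hypothetical structures, not clspv code): rather than stopping at an i8 GEP, walk the GEP's users and take the widest access type among them as the inferred element type.

```python
# Hedged sketch of user-driven inference: given the access sizes (in
# bytes) of a GEP's users, prefer the widest one over the i8 default.
def infer_elem_size(user_access_sizes, default=1):
    """Widest load/store size among the GEP's users; i8 (1) if none."""
    return max(user_access_sizes, default=default)

# An i8 GEP whose users are i32 loads is inferred as a 4-byte element ...
print(infer_elem_size([4, 4]))  # 4
# ... while a GEP with no informative users stays at i8.
print(infer_elem_size([]))      # 1
```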