Potentially undetected count overflow in do_count() #1325
Comments
I spent the last 15 minutes looking at this, but don't see a clear answer. I see this:
I think your proposal 2. is my (a) and your proposal 3. is my (d)
The question is really "do I feel like a perfectionist today?" and "Do I have the mental energy to do this?" Maybe I should try, and you can review my work?
I already mostly analyzed all of that in my post above, and my conclusion was that the changes I proposed in (2) will fix the problem:
I forgot to analyze the impact of line 1390 after applying my proposed fix:
link-grammar/link-grammar/parse/count.c, line 1390 in c1b15b8
In that case, `leftcount` is clipped. `total` is already clipped from the existing call to `parse_count_clamp()` at the end of the match loop, and `count` is read from a clipped table entry (or a `do_count()` result, which is clipped). So the maximum `total` is `(2^31-1) + ((2^31-1) * (2^31-1)) < 2^62`. Then the multiplication at
link-grammar/link-grammar/parse/count.c, line 1428 in c1b15b8
can add to it at most `((2^31-1) * (2^31-1)) < 2^62`, so the result is still less than `2^63`. (It's very subtle; I hope I didn't miss something...)
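To make those bounds concrete, here is a small stand-alone check (illustrative only: the constants mirror the analysis above, and none of this is count.c code):

```c
/* Sanity-check the claim: with leftcount, count and total all clipped to
 * INT_MAX, the accumulation at line 1390 stays below 2^62, and the extra
 * product added at line 1428 keeps the sum below 2^63, so a signed 64-bit
 * accumulator cannot overflow. */
#include <assert.h>
#include <limits.h>
#include <stdint.h>

int main(void)
{
    const int64_t clip = INT_MAX;                 /* 2^31 - 1 */

    /* Worst case after line 1390: total + leftcount * count */
    const int64_t after_1390 = clip + clip * clip;
    assert(after_1390 < ((int64_t)1 << 62));

    /* Line 1428 adds at most another (2^31-1)^2; stays below INT64_MAX. */
    assert(after_1390 <= INT64_MAX - clip * clip);
    return 0;
}
```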
In any case, I can send a PR for my proposal if it seems fine to you.
Yes, send a pull req. At each location, add comments such as
I said:
The number
I didn't implement that; it seems like just unneeded overhead...
That won't happen. There won't even be a million. However, in the earlier days, I had SQL dicts that had a hundred
In generation mode, we have ~3M disjuncts per word (in the middle of sentences). However, as we know, this causes vast slowness... The speed can be improved (a WIP), but the real solution is to implement the disjunct sampling method you have suggested. However, that will be for a future version (i.e., not for 5.11.0).
While running tests on "amy" with ASAN/UBSAN, I got this:
This just happens to never occur with the other languages and the various corpus-* files, but theoretically it could happen with the current code.
All the calculations are done in signed 64-bit (the storage is in signed 32-bit due to a recent change, but this has no implications here).
It is signed 64-bit for historical reasons, but it may be a good idea to keep it signed, because overflows can then be detected by the compiler's UBSAN checks.
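For illustration (a minimal stand-alone example of mine, not library code): with `-fsanitize=undefined`, signed 64-bit overflow is reported at run time, while unsigned arithmetic silently wraps:

```c
/* Signed overflow is undefined behaviour, so UBSAN instruments and reports
 * it; unsigned overflow is well-defined wrap-around and goes unnoticed. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    volatile int64_t s = INT64_MAX;
    s = s + 1;              /* UBSAN: "signed integer overflow" report */

    volatile uint64_t u = UINT64_MAX;
    u = u + 1;              /* well-defined wrap to 0, nothing reported */

    printf("%lld %llu\n", (long long)s, (unsigned long long)u);
    return 0;
}
```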
The problem arises from the clipping value. It is `INT_MAX`, which is `2^31-1`. Observe the following code:
link-grammar/link-grammar/parse/count.c, lines 1371 to 1381 in c1b15b8
Each of the 4 `do_count()` calls may return `INT_MAX`, and hence `leftcount` (and similarly `rightcount`) can be up to `4*(2^31-1) = 2^33-4`.
Then we have this code:
link-grammar/link-grammar/parse/count.c, line 1390 in c1b15b8
We may get here up to (`total` + `(2^33-4)` * `(2^31-1)`), which may be > `2^63`.
And we also have this:
link-grammar/link-grammar/parse/count.c, line 1428 in c1b15b8
So we may get here (`total` + `(2^33-4)` * `(2^33-4)`), which is much more than `2^63` (and `total` here may already be near or above `2^63`).
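Here is a small stand-alone illustration of that magnitude (my own sketch, not count.c code); the comparison is written as a division so the demonstration itself cannot overflow:

```c
/* Show that an unclamped leftcount/rightcount product exceeds INT64_MAX. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int64_t clip  = INT_MAX;      /* 2^31 - 1 */
    const int64_t lrmax = 4 * clip;     /* 2^33 - 4: unclamped left/rightcount */

    /* lrmax * lrmax would be ~2^66, far beyond what int64_t can hold. */
    if (lrmax > INT64_MAX / lrmax)
        printf("leftcount * rightcount overflows a signed 64-bit total\n");
    return 0;
}
```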
Possible solutions:
(2) Clip `leftcount` and `rightcount` before using them in the multiplications.
(3) In `parse_count_clamp()`, clamp to `2^29-1`.
(2) and (3) will add a slight overhead, but by analyzing the overflow possibilities I found some places in which the efficiency can be improved:
- In `CACHE_COUNT()`, `c` is unnecessarily wide, and `count_t` can be used instead. However (I didn't check), maybe the compiler already does such an optimization.
- The `parse_count_clamp()` call that has the debug printout "OVERFLOW1" is not needed, since the loop can be performed no more than `(2^8 * 2)` times and can accumulate no more than `(2^31-1)` per iteration (max. total ~ `2^40`). The final `parse_count_clamp()` ("OVERFLOW2"), after potentially accumulating up to an additional `2^31-1`, is enough.

For now, I would choose option (2).
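A rough sketch of what (2) could look like; the helper name `clamp_to_int_max` and its placement are my illustrative assumptions, not the actual count.c code:

```c
/* Proposal (2), sketched as a tiny stand-alone demo: clip leftcount and
 * rightcount to INT_MAX before they enter any multiplication, so each
 * product is at most (2^31-1)^2 < 2^62 and the signed 64-bit total can no
 * longer overflow. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static inline int64_t clamp_to_int_max(int64_t c)
{
    return (c > INT_MAX) ? INT_MAX : c;
}

int main(void)
{
    /* Worst case from the analysis: four do_count() results of INT_MAX each. */
    int64_t leftcount  = 4LL * INT_MAX;     /* 2^33 - 4 */
    int64_t rightcount = 4LL * INT_MAX;
    int64_t count      = INT_MAX;
    int64_t total      = INT_MAX;           /* already clamped elsewhere */

    leftcount  = clamp_to_int_max(leftcount);
    rightcount = clamp_to_int_max(rightcount);

    total += leftcount * count;       /* <= (2^31-1) + (2^31-1)^2 < 2^62 */
    total += leftcount * rightcount;  /* still below 2^63: no overflow   */

    printf("worst-case total = %lld\n", (long long)total);
    return 0;
}
```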
@linas,
Please check if my analysis is correct, and I will send a PR (if needed).