Trust Boundary Violation test cases are not exploitable #43
Hey David,
Long time no see :-) I've actually been thinking about this recently as
well. I think we should drop 3 categories of test cases from Benchmark:
1) Trust boundary violations
2) Weak randomness
3) Weak hashing
and then add things like:
a) XXE
b) XPath injection
c) XQuery injection
d) Serialization vulns
I'd like to do this as part of a Benchmark update to more closely align it
with the OWASP Top 10 2017 once that is finalized. I think a change like
this would drop some of the weak/unimportant categories, including the one
you specifically brought up, and add some more important ones that are
missing from our T10 coverage.
What do you think about this idea? Any other suggestions?
Thanks for your input.
-Dave
On Tue, Nov 7, 2017 at 3:11 PM, thornmaker ***@***.***> wrote:
It is my understanding that test cases are to be fully executable and
exploitable. Trust Boundary Violation issues do not appear to meet this
baseline as they are not exploitable. As such, I'm requesting that this
category of issues be removed. Please find below supporting evidence.
According to CWE-501 - Trust Boundary Violation
<https://cwe.mitre.org/data/definitions/501.html> the negative
consequence of a Trust Boundary Violation is that "it becomes easier for
programmers to mistakenly trust unvalidated data". Should a developer
mistakenly trust the unvalidated data in some other part of the application,
then this certainly could lead to an exploitable scenario. However,
"combining trusted and untrusted data in the same data structure" alone is
not something actionable by an attacker and thus not exploitable.
The OWASP website itself has essentially no meaningful information on this
issue.
I could not identify any CVEs associated with Trust Boundary Violations. For
example, a CVE search for such issues
<https://www.cvedetails.com/vulnerability-search.php?f=1&cweid=501>
returns 0 results.
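To make the distinction concrete, here is a minimal sketch of the pattern the Benchmark CWE-501 test cases exercise. The class and method names (`TrustBoundaryDemo`, `handleRequest`) are hypothetical, and a plain `Map` stands in for an `HttpSession`; the point is that the tainted value merely sits next to trusted values, and nothing is exploited unless later code uses it unsafely:

```java
import java.util.HashMap;
import java.util.Map;

public class TrustBoundaryDemo {
    // Stand-in for an HttpSession: one structure holding both kinds of data.
    static final Map<String, Object> session = new HashMap<>();

    static void handleRequest(String untrustedParam) {
        session.put("role", "user");          // trusted value, set by the app
        session.put("theme", untrustedParam); // untrusted value, straight from the client
    }

    public static void main(String[] args) {
        handleRequest("<script>alert(1)</script>");
        // The tainted value just sits in the map; whether this is dangerous
        // depends entirely on what later code does with session.get("theme").
        System.out.println(session.get("theme")); // prints <script>alert(1)</script>
    }
}
```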
1) Trust boundary violations
Seems reasonable to drop this category to me.
2) Weak randomness
3) Weak hashing
These seem like things that tools can and should find. I suggest keeping them in.
+1 on additional OWASP Top Ten alignment.
--
Jim Manico
@manicode
I have no qualms about revamping the categories. More importantly, though, I would like to see more diversity in test cases beyond just [source] + [dataflow pattern] + [sink]. That is getting a bit off-topic for this issue, though. For Trust Boundary Violation issues, it seems there is agreement (at least among those who have commented) that this category should be dropped.
👍 on dropping the trust boundary examples and category. Also, it is not obvious to me that CWE-501 is about preventing unescaped user data from being stored in sessions, which seems to be the focus of the Benchmark examples. The description says that mixing trusted and untrusted data is the issue: if some of the keys are HTML-escaped but others are not, then there is a problem. Specifically on sessions, I tend to think that storing HTML-escaped data there is bad practice: it does not preserve the original data, ties it to a single purpose (HTML), and no longer allows the data to be saved to a DB, for example. Instead, I would prefer to treat session data as a source for the other vulnerabilities, such as (stored?) XSS.
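The escape-at-output alternative described above can be sketched as follows. The class name and `escapeHtml` helper are hypothetical stand-ins (a real application would use a proper encoder such as the OWASP Java Encoder); the idea is to keep the raw value in the session and encode it per sink, rather than storing a pre-escaped, HTML-only copy:

```java
public class EscapeAtOutput {
    // Toy stand-in for a real HTML encoder; covers only &, <, and >.
    static String escapeHtml(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        String raw = "O'Brien <admin>";          // store this, unmodified, in the session
        System.out.println(escapeHtml(raw));     // HTML sink: encode at output time
        System.out.println(raw);                 // DB sink (via prepared statement): use the original
    }
}
```

Storing `raw` and encoding per sink keeps the data usable for every destination, which is exactly why pre-escaping inside the session is the questionable practice here.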
From my understanding, the issue with the Trust Boundary Violation test cases remains?
Yes. That is simply because the project has not released v1.3, which will drop those test cases. So for now, ignore them if you do not care about them.
Thank you for your answer. What are the next milestones for v1.3? I have written to you on Slack as well with further questions.
@davewichers Do you happen to have a timeline for this? Just stumbled over those tests as well with the same questions @DomKoe had. No pressure, mostly curious :)
@bmuskalla - Release v1.3 is basically ready, but going through some QA. My son @JonathonWichers is actually helping me get this done. I hope to have it out in a few months.
@davewichers I assume 1.3 is being worked on in private as I don't see any branches or references to it on
@bmuskalla - That's correct. It's private currently, but I've been thinking about changing that. My idea is to create a v1.3 dev branch and push it out as a preview/release candidate, then periodically update it based on improvements and feedback until it's ready for release, at which point I'd merge it in.
I'm also doing some significant surgery on the Generator, because it's really 3 apps in one: the test case generator, the web app the test cases go into, and the scoring machinery. The Benchmark itself is a subset of the Generator (the web app part, plus the generated test cases, plus the scoring app), but without the generation engine and the configuration/code snippets used to generate the Benchmark. I'd like to release the scoring engine as a separate app. That way, when multiple versions of Benchmark (Java, dotNet, etc.) come out, all of them could use the standalone scoring engine to generate scorecards. (There is an OWASP team working (slowly) on Benchmark dotNet, by the way, in case anyone wants to help them. If so, just let me know.)
Thanks for the overview of the ongoing work, looking forward to it.
That would be awesome for people to try it out early and give feedback before the merge. Just give me a nudge here and I'll give it a try running it against CodeQL.