
fix: remove exception logging during evaluation #347

Merged — 2 commits merged into main on Jul 2, 2024

Conversation

beeme1mr
Member

@beeme1mr beeme1mr commented Jul 1, 2024

This PR

  • remove exception logging during evaluation

Impacts

open-telemetry/opentelemetry-demo#1628

Notes

The OTel demo was seeing overly verbose log messages when they disabled a flag. That's because a FLAG_NOT_FOUND error was raised by the provider, and the SDK was logging all exceptions. I've updated the SDK to no longer log OpenFeatureErrors since providers commonly raise these, which can lead to log spam. It's worth noting that this information can still be accessed via a hook or evaluation details.
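The change described above can be sketched roughly as follows. This is a simplified illustration, not the SDK's actual internals: the class names and the `evaluate` helper here are hypothetical stand-ins for the SDK's `OpenFeatureError` and evaluation-details types, showing how a provider error is folded into the returned details instead of being logged.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

# Hypothetical stand-ins for the SDK's error and details types.
class OpenFeatureError(Exception):
    def __init__(self, error_code: str, error_message: str):
        super().__init__(error_message)
        self.error_code = error_code
        self.error_message = error_message

class FlagNotFoundError(OpenFeatureError):
    def __init__(self, message: str):
        super().__init__("FLAG_NOT_FOUND", message)

@dataclass
class EvaluationDetails:
    flag_key: str
    value: Any
    reason: str = "STATIC"
    error_code: Optional[str] = None
    error_message: Optional[str] = None

def evaluate(flag_key: str, default: Any,
             resolve: Callable[[str], Any]) -> EvaluationDetails:
    """Sketch of the post-change behavior: an OpenFeatureError raised by
    the provider is NOT logged; it is surfaced via the evaluation details
    (and would also be visible to any registered error hooks)."""
    try:
        return EvaluationDetails(flag_key, resolve(flag_key))
    except OpenFeatureError as err:
        # Previously: logger.exception(...) here, causing log spam when
        # providers routinely raised errors like FLAG_NOT_FOUND.
        return EvaluationDetails(
            flag_key,
            default,
            reason="ERROR",
            error_code=err.error_code,
            error_message=err.error_message,
        )

def provider(flag_key: str) -> Any:
    raise FlagNotFoundError(f"flag {flag_key!r} not found")

details = evaluate("disabled-flag", False, provider)
```

Here `details.error_code` is `"FLAG_NOT_FOUND"` and `details.value` falls back to the default, so callers that care about the error can still inspect it without the SDK emitting a log line for every miss.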

Signed-off-by: Michael Beemer <beeme1mr@users.noreply.github.com>

codecov bot commented Jul 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.41%. Comparing base (5c7bd14) to head (c4c3d3f).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #347      +/-   ##
==========================================
- Coverage   97.41%   97.41%   -0.01%     
==========================================
  Files          26       26              
  Lines        1240     1239       -1     
==========================================
- Hits         1208     1207       -1     
  Misses         32       32              
Flag Coverage Δ
unittests 97.41% <ø> (-0.01%) ⬇️


[Resolved review comment on openfeature/client.py]
@dyladan

dyladan commented Jul 2, 2024

Is this affecting only the Python SDK or is it something that should be specified more generally?

@beeme1mr
Member Author

beeme1mr commented Jul 2, 2024

Is this affecting only the Python SDK or is it something that should be specified more generally?

Logging isn't covered in the spec. We could consider general guidance but I would be reluctant to go any deeper than that.

Signed-off-by: Michael Beemer <beeme1mr@users.noreply.github.com>
@beeme1mr beeme1mr merged commit 0ed625f into main Jul 2, 2024
14 checks passed
@beeme1mr beeme1mr deleted the remove-evaluation-logging branch July 2, 2024 15:37
@toddbaert
Member

Is this affecting only the Python SDK or is it something that should be specified more generally?

After approving, I'm actually reconsidering whether we should specify something here. I'm wondering if this is the wrong solution to the problem.

I think we should be logging errors if a flag is not found, and that this change will make debugging harder. I think this is a situation where we'd want log-spam.
