This repository has been archived by the owner on Jul 12, 2024. It is now read-only.
Releases · credo-ai/credoai_lens
v1.1.8
What's Changed
- DSP-474: Make documentation changes based on Oleg's input by @IanAtCredo in #329
- updated evaluator documentation by @IanAtCredo in #330
- Changed installation requirements to exclude Python 3.11 by @IanAtCredo in #332
- Feat/3.7 support by @IanAtCredo in #333
Full Changelog: v1.1.7...v1.1.8
v1.1.7
What's Changed
- Bump rlespinasse/github-slug-action from 4.4.0 to 4.4.1 in /.github/workflows by @dependabot in #321
- fix: Added tolerance to the sum-of-probability check by @fabrizio-credo in #320 (see the sketch after this list)
- Added support for ks score to classification models by @fabrizio-credo in #319
- Quick fixes for bugs relating to validation of string-containing DataFrames and the default value for empty y_prob by @esherman-credo in #323
- DSP-467: Update multi-class definition and criteria in Lens by @fabrizio-credo in #324
- Bump tensorflow from 2.10.0 to 2.11.1 by @credo-nate in #327
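The tolerance fix in #320 addresses floating-point drift when validating that predicted class probabilities sum to 1. Below is a minimal sketch of that style of check, assuming NumPy; the function name and `atol` value are illustrative, not Lens's actual validation API:

```python
import numpy as np

def check_probabilities(y_prob, atol=1e-6):
    """Validate that each row of predicted probabilities sums to ~1.

    A strict equality test (row.sum() == 1.0) rejects valid inputs because of
    floating-point rounding, so the comparison allows a small tolerance.
    """
    row_sums = np.asarray(y_prob).sum(axis=1)
    if not np.allclose(row_sums, 1.0, atol=atol):
        raise ValueError("Predicted probabilities must sum to 1 for each sample.")

# Rows that sum to 0.9999999 due to rounding still pass the check.
check_probabilities([[0.3333333, 0.3333333, 0.3333333]])
```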
Full Changelog: v1.1.6...v1.1.7
v1.1.6
What's Changed
- Fixing broken links to platform integration notebook by @esherman-credo in #315
- Fix/model validation by @esherman-credo in #314
- Feat/striate installation by @IanAtCredo in #316
Full Changelog: v1.1.5...v1.1.6
v1.1.5
Summary
- Bugfixes to validation
- Warnings related to sensitive feature
What's Changed
- Fix/documentation link + version by @fabrizio-credo in #312
- Multiple Validation Fixes for DummyClassifiers by @esherman-credo in #311
- Bugfix/equity by @IanAtCredo in #313
Full Changelog: v1.1.4...v1.1.5
v1.1.4
Summary
- Important bug fixes
- New metric: cumulative gain (see the sketch after this list)
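Cumulative gain measures the share of all positives captured when samples are ranked by model score and only the top fraction is targeted. A minimal NumPy sketch of the curve computation; it illustrates the metric itself, not Lens's implementation:

```python
import numpy as np

def cumulative_gain_curve(y_true, y_score):
    """Return (fraction of samples targeted, fraction of positives captured)."""
    y_true = np.asarray(y_true)
    order = np.argsort(y_score)[::-1]               # highest scores first
    gains = np.cumsum(y_true[order]) / y_true.sum()
    percentages = np.arange(1, len(y_true) + 1) / len(y_true)
    return percentages, gains

# A ranker that puts both positives first reaches gain 1.0 at 50% targeted.
pct, gain = cumulative_gain_curve([0, 1, 0, 1], [0.1, 0.9, 0.2, 0.8])
print(gain)  # [0.5 1.  1.  1. ]
```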
What's Changed
- Main by @IanAtCredo in #303
- Only run validation if input data is actually provided by @esherman-credo in #304
- Correcting version import by @fabrizio-credo in #307
- Add merge workflow for main->develop by @credo-nate in #308
- Fixing validation for Dummy models which don't specify a prediction function by @esherman-credo in #305
- Added confusion matrix for fairness evaluator by @fabrizio-credo in #306
- Feat/nsf chart by @esherman-credo in #309
- Release/1.1.4 by @IanAtCredo in #310
Full Changelog: v1.1.3...v1.1.4
v1.1.3
Summary of changes
- Joblib for parallelization (see the sketch after this list)
- Expanded testing
- Generic functionality to support basic neural network evaluations (e.g., performance and fairness)
- Separated out functionality to connect the Python environment with the RAI platform using Connect
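For reference, joblib's `Parallel`/`delayed` pattern is the mechanism used to run independent evaluators side by side. The sketch below shows the generic pattern with stand-in callables; it is not Lens's internal code:

```python
from joblib import Parallel, delayed

def run_step(step, data):
    """Stand-in for one independent evaluation step."""
    return step(data)

# Hypothetical steps: any callables that do not share mutable state.
steps = [sum, max, min]
data = [3, 1, 4, 1, 5]

# n_jobs=-1 uses all available cores; each step runs in its own worker.
results = Parallel(n_jobs=-1)(delayed(run_step)(s, data) for s in steps)
print(results)  # [14, 5, 1]
```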
What's Changed
- Docs/list metrics expand by @fabrizio-credo in #246
- Docs/update governance notebook by @fabrizio-credo in #236
- Feat/black workflow checker (WIP) by @IanAtCredo in #247
- moved lens to use connect by @IanAtCredo in #232
- Identity verification Update by @amrasekh in #250
- Add joblib support to Lens to enable running evaluators in parallel by @esherman-credo in #251
- Change send_to_governance function to default to overwrite existing evidence by @esherman-credo in #254
- Docs/evaluator pages by @fabrizio-credo in #253
- Modify test_lens to call get_evidence for each test by @esherman-credo in #255
- Add calls to send_to_governance to all test cases by @esherman-credo in #256
- Add functionality to check version of Lens on import by @esherman-credo in #252
- Grammar change in first paragraph of docs by @esherman-credo in #258
- allowed training data to be pulled for fetch_credit_data by @IanAtCredo in #259
- Tests/export json by @esherman-credo in #257
- fixed tag bugs by @IanAtCredo in #263
- Governance set_artifacts expects names, not objects by @esherman-credo in #264
- Create pull_request_template.md by @IanAtCredo in #260
- Docs/format evaluators by @fabrizio-credo in #267
- add connect to doc requirements by @IanAtCredo in #271
- Feat/refactor tests by @fabrizio-credo in #270
- Feat/confusion matrix by @IanAtCredo in #268
- Refactor print_results() function by @esherman-credo in #276
- [SEE PR270 FIRST] Feat/test coverage by @esherman-credo in #272
- Adds support for passing config and assessment plan URL for CI by @credo-nate in #279
- Reformatted confusion matrix by @IanAtCredo in #278
- Test/integration test by @esherman-credo in #269
- Add test-reports workflow for develop and main with badges by @credo-nate in #284
- Docs/experiment evaluators by @esherman-credo in #281
- fix(): restore env variables that were removed during the cleanup by @credo-nate in #285
- Fix secrets and hopefully pip cache by @credo-nate in #288
- updated index to direct new customers by @IanAtCredo in #287
- Feat/multiclass_tests_expansion by @fabrizio-credo in #286
- Feat/prism by @fabrizio-credo in #248
- Bugfixes cleanup by @IanAtCredo in #289
- add command to pipe results to mark failure by @esherman-credo in #293
- create stats functions, moved outcome distribution out of equity eval… by @IanAtCredo in #295
- added sensitive features to data profiler by @IanAtCredo in #294
- Feat/data fairness simplification by @fabrizio-credo in #292
- Ranking fairness refactoring by @amrasekh in #291
- Updated equity to leverage statistic test evidence by @fabrizio-credo in #297
- Feat/image data stable by @esherman-credo in #290
- updated requirements by @IanAtCredo in #299
- added source to all lens outputs by @IanAtCredo in #296
- Release/1.1.3 by @IanAtCredo in #300
- updated version issues by @IanAtCredo in #302
- Only call check_array if predict or predict proba is provided by @esherman-credo in #301
Full Changelog: v1.1.2...v1.1.3
v1.1.2
What's Changed
- Tests/run quickstart by @esherman-credo in #228
- Identity verification assessment by @amrasekh in #222
- Main by @IanAtCredo in #231
- Modify quantization of the ROC and PR curve interpolation helpers to ensure adequately dense interpolation by @esherman-credo in #233 (see the sketch after this list)
- Docs/list metrics by @esherman-credo in #234
- Feat/deepchecks by @esherman-credo in #220
- Identity Verification - tests and validations by @amrasekh in #235
- Fix shallow-copy issue (#238): creating Lens objects with the same pipeline object caused the latter object to overwrite the results from the former by @esherman-credo in #239
- Bugfix/docs equity by @IanAtCredo in #241
- Upgrading RankingFairness Evaluator by @amrasekh in #242
- Modify get_model_info and update_functionality so that model frameworks are full strings; enables use of XGBoost within Credo Models by @esherman-credo in #237
- Release/1.1.2 by @IanAtCredo in #243
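The interpolation change in #233 concerns resampling ROC and PR curves onto a grid dense enough that the resampled curve stays faithful to the original. A generic sketch for the ROC case with NumPy and scikit-learn; the grid size and function name are illustrative, not Lens's helpers:

```python
import numpy as np
from sklearn.metrics import roc_curve

def interpolate_roc(y_true, y_score, n_points=1000):
    """Resample a ROC curve onto a dense, evenly spaced FPR grid.

    A coarse grid flattens sharp corners of the curve; a dense grid keeps
    the interpolated curve close to the original.
    """
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fpr_grid = np.linspace(0.0, 1.0, n_points)
    tpr_grid = np.interp(fpr_grid, fpr, tpr)  # fpr is sorted ascending
    return fpr_grid, tpr_grid

fpr_grid, tpr_grid = interpolate_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```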
Full Changelog: v1.1.1...v1.1.2
v1.1.1
Small fix to profiler evidence name
v1.1.0
What's Changed
- Feat/governance update by @IanAtCredo in #221
- Feat/contextual tagging system by @fabrizio-credo in #224
- Fixing function to find all evaluator subclasses by @fabrizio-credo in #225
- Release/1.1.0 by @IanAtCredo in #226
Full Changelog: v1.0.1...v1.1.0
v1.0.1
Note: IDs have been removed from the pipeline specification. Each pipeline list item should be either an evaluator or a tuple of (evaluator, metadata dictionary).
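A minimal sketch of the new pipeline shape. The evaluator class below is a hypothetical placeholder that only illustrates the list structure; in practice each entry would be a Lens evaluator instance:

```python
class ExampleEvaluator:
    """Hypothetical placeholder; real pipelines hold Lens evaluator instances."""

    def __init__(self, name):
        self.name = name

performance = ExampleEvaluator("performance")
fairness = ExampleEvaluator("fairness")

# No IDs anymore: each entry is either a bare evaluator,
# or a tuple of (evaluator, metadata dictionary).
pipeline = [
    performance,
    (fairness, {"tags": {"risk": "fairness"}}),
]
```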
What's Changed
- Assessment data validation fix by @fabrizio-credo in #204
- Feat/gini coefficient by @esherman-credo in #205
- Bugfix/json by @IanAtCredo in #207
- quick import fix by @IanAtCredo in #208
- Feat/model profiler by @fabrizio-credo in #202
- FEAT/shap_evaluator by @fabrizio-credo in #201
- Feat/feature drift by @fabrizio-credo in #211
- Model zoo test freeze by @esherman-credo in #209
- Bug/210/shap error multi class by @fabrizio-credo in #215
- made 'name' attribute implicitly defined, added PipelineStep class, r… by @IanAtCredo in #214
- Feat/vocalink requirements matching by @fabrizio-credo in #216
- Bugfix/generator by @IanAtCredo in #217
- Updated metrics doc by @amrasekh in #219
- Release/1.0.1 by @IanAtCredo in #218
Full Changelog: v1.0.0...v1.0.1