There is currently no easy way to evaluate a trained model. There should be some kind of interface for this, e.g.
```python
from saber import Saber

sb = Saber()
sb.load('path/to/some/model')
sb.evaluate('/path/to/some/dataset/to/evaluate')
```
and/or
```
(saber) $ python -m saber.cli.test --pretrained_model path/to/pretrained/model --dataset_folder path/to/datasets/to/evaluate/on
```
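For reference, here is a minimal sketch of what such a `saber.cli.test` entry point could look like, assuming the proposed `evaluate()` method is added to `Saber` (the flag names are taken from the example above; everything else is hypothetical):

```python
# Hypothetical sketch of saber/cli/test.py. Assumes Saber grows the
# evaluate() method proposed above; only the flag names come from this issue.
import argparse

from saber.saber import Saber


def main():
    parser = argparse.ArgumentParser(description='Evaluate a pretrained Saber model.')
    parser.add_argument('--pretrained_model', required=True,
                        help='Path to (or name of) the pretrained model to load.')
    parser.add_argument('--dataset_folder', required=True,
                        help='Path to the dataset(s) to evaluate on.')
    args = parser.parse_args()

    sb = Saber()
    sb.load(args.pretrained_model)
    sb.evaluate(args.dataset_folder)


if __name__ == '__main__':
    main()
```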
Here is a hack that works for the time being and can serve as inspiration:
```python
from saber.saber import Saber
from saber.metrics import Metrics
from saber import constants

# Set the special-token constants expected by the pretrained model.
constants.UNK = '<UNK>'
constants.PAD = '<PAD>'

# Load a pretrained model and the dataset to evaluate it on.
sb = Saber()
sb.load('/home/john/dev/response/pretrained_models/CALBC_100K_blacklisted')
sb.load_dataset('/home/john/dev/response/datasets/train_on_BC4CHEMD_test_on_BC5CDR')
# Use right-boundary matching criteria when scoring entities.
sb.config.criteria = 'right'

# Take the first element returned by prepare_data_for_training() as the
# evaluation data.
evaluation_data = sb.model.prepare_data_for_training()[0]
print(sb.datasets[-1].idx_to_tag)

# Compute and print performance scores on the test partition.
metric = Metrics(sb.config, sb.model, evaluation_data,
                 sb.datasets[-1].idx_to_tag, './', model_idx=0)
test_scores = metric._evaluate(evaluation_data, partition='test')
metric.print_performance_scores(test_scores, title='test')
```
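To make this reusable, the hack could be folded into the proposed interface. A minimal sketch, assuming it lands as a method on `Saber` (untested; the method signature and return value are assumptions, and the body is lifted from the hack above):

```python
from saber.metrics import Metrics


def evaluate(self, dataset_folder, partition='test'):
    """Evaluates the currently loaded model on `dataset_folder` and returns the scores."""
    self.load_dataset(dataset_folder)

    # As in the hack above, take the first element returned by
    # prepare_data_for_training() as the evaluation data.
    evaluation_data = self.model.prepare_data_for_training()[0]

    metric = Metrics(self.config, self.model, evaluation_data,
                     self.datasets[-1].idx_to_tag, './', model_idx=0)
    scores = metric._evaluate(evaluation_data, partition=partition)
    metric.print_performance_scores(scores, title=partition)

    return scores
```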