Hello all,
I was following this example and adapting this approach to an implicit-feedback recommender system. My original dataset contains only positive user–item interactions, so I randomly subsampled the remaining items to serve as negative instances for each user. Thus, each ELWC instance contains `user_id` and `user_features` in the Context, and a list of `item_id` and `label` in the Examples List (e.g., `label = [1, 1, 0, 0, 0]`).
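For concreteness, the per-user negative subsampling described above can be sketched in plain Python (the function and variable names here are hypothetical illustrations, not part of TF-Ranking):

```python
import random

def sample_negatives(positive_items, all_items, n_negatives, seed=None):
    """Randomly subsample items the user has NOT interacted with,
    to serve as negative examples (label 0) in that user's list."""
    rng = random.Random(seed)
    # Candidate negatives = catalog minus the user's positives.
    candidates = sorted(set(all_items) - set(positive_items))
    return rng.sample(candidates, min(n_negatives, len(candidates)))

# Hypothetical user with positives {1, 2} in a catalog of items 0..9.
positives = [1, 2]
negatives = sample_negatives(positives, range(10), n_negatives=3, seed=0)

# One ELWC's Examples List: item ids plus aligned labels.
item_ids = positives + negatives
labels = [1] * len(positives) + [0] * len(negatives)
# labels == [1, 1, 0, 0, 0], matching the example above.
```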
My main question: when a metric such as `tfr.keras.metrics.RecallMetric(topn=TOP_K, name='Recall@k')` is computed on the validation set, is it 1) scoring and ranking only the items inside each user's Examples List, or 2) scoring and ranking all `item_id`s that exist in the entire train and test set?
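To make interpretation 1) concrete, per-list evaluation would amount to something like the following plain-Python sketch of recall@k, computed only over the items inside a single Examples List. This is just an illustration of the semantics being asked about (with hypothetical scores), not TF-Ranking's actual implementation:

```python
def recall_at_k(labels, scores, k):
    """Recall@k within one example list: the fraction of that list's
    positive items that land in the top-k by predicted score."""
    # Rank the list's items by descending score.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top_k = set(order[:k])
    positives = [i for i, y in enumerate(labels) if y == 1]
    if not positives:
        return 0.0
    hits = sum(1 for i in positives if i in top_k)
    return hits / len(positives)

# One user's list: labels [1, 1, 0, 0, 0], hypothetical model scores.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.2, 0.7, 0.1, 0.3]
print(recall_at_k(labels, scores, k=2))  # 0.5: one of two positives in top-2
```

Under interpretation 2), by contrast, the ranking at evaluation time would be over the full item catalog rather than the handful of items packed into each ELWC.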
Thanks