Hi, thanks for your great work.
I am confused about the robustness-to-noisy-interactions experiment in this paper. The paper states: "Towards this end, we contaminate the training set by adding a certain proportion of adversarial examples (i.e., 5%, 10%, 15%, 20% negative user-item interactions), while keeping the testing set unchanged."
I tried to sample from interactions that don't appear in train.txt or test.txt, but I didn't find much difference between LightGCN and SGL. I wonder whether it is proper to generate adversarial examples in this way (see the sketch below).
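For concreteness, here is a minimal sketch of the sampling I tried, assuming the LightGCN-style format for train.txt and test.txt (each line: a user id followed by its item ids, space-separated, with contiguous integer ids); the file paths, noise ratio, and output name are placeholders, not the authors' script:

```python
import random
from collections import defaultdict

# Placeholder paths and ratio; adjust to your dataset.
TRAIN_FILE = "train.txt"
TEST_FILE = "test.txt"
NOISE_RATIO = 0.10  # 5%, 10%, 15%, or 20% of the training interactions


def load_interactions(path):
    """Read a LightGCN-style file: each line is 'user item item ...'."""
    user_items = defaultdict(set)
    with open(path) as f:
        for line in f:
            ids = line.strip().split()
            if len(ids) < 2:
                continue
            user, items = int(ids[0]), [int(i) for i in ids[1:]]
            user_items[user].update(items)
    return user_items


train = load_interactions(TRAIN_FILE)
test = load_interactions(TEST_FILE)

n_items = 1 + max(max(items) for items in list(train.values()) + list(test.values()))
n_train = sum(len(items) for items in train.values())
n_noise = int(NOISE_RATIO * n_train)

# Sample user-item pairs that appear in neither train.txt nor test.txt
# and inject them into the training set as false-positive interactions.
users = list(train.keys())
noisy = {u: set(items) for u, items in train.items()}
added = 0
while added < n_noise:
    u = random.choice(users)
    i = random.randrange(n_items)
    if i in noisy[u] or i in test.get(u, set()):
        continue  # skip pairs already observed (or already injected)
    noisy[u].add(i)
    added += 1

with open("train_noisy.txt", "w") as f:
    for u in sorted(noisy):
        f.write(" ".join(map(str, [u] + sorted(noisy[u]))) + "\n")
```

The sampled pairs are treated as adversarial (false-positive) interactions and written out as a contaminated training file, while the test set is left untouched.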
Looking forward to your reply.
I have uploaded the code (./add_noise.py) that generates the contaminated training data; see the guidance here.