
Dueling - Average #1

Open
msiegenthaler opened this issue Mar 5, 2019 · 1 comment

msiegenthaler commented Mar 5, 2019
Hi

This is not really an issue, but since I'm doing pretty much the same thing as you did in this repo while studying reinforcement learning, I ran into something I was unsure about at first but then concluded is not correct:

In your dueling implementation you use `return val + adv - val.mean()`. By doing that you are subtracting an average taken across all samples in the batch instead of per sample. I did the same and the code still works and trains, but I think it should be `torch.mean(adv, 1, keepdim=True)` (or equivalently `adv.mean(1, keepdim=True)`), i.e. the per-sample mean of the advantages.

My network trains a bit better with the new approach, although it does not make that much of a difference.
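For concreteness, the dueling aggregation is Q(s, a) = V(s) + A(s, a) - (1/|A|) Σ_a' A(s, a'), where the mean is over actions for each individual state. Here is a minimal sketch of what I mean; the layer names and shapes are my own assumptions for illustration, not your actual code:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, in_features: int, n_actions: int):
        super().__init__()
        self.value = nn.Linear(in_features, 1)              # V(s): shape (batch, 1)
        self.advantage = nn.Linear(in_features, n_actions)  # A(s, a): shape (batch, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        val = self.value(x)       # (batch, 1)
        adv = self.advantage(x)   # (batch, n_actions)
        # Buggy version: val + adv - val.mean()
        # val.mean() reduces over every element of the batch, so each sample's
        # Q-values depend on the other samples in the batch.
        # Fixed version: subtract the per-sample mean of the advantages.
        # dim=1 is the action dimension; keepdim=True keeps shape (batch, 1)
        # so it broadcasts against adv.
        return val + adv - adv.mean(dim=1, keepdim=True)
```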

I'd like you to thank you a lot for putting up this repo including the detailed training analysis, it helps me improve my own implementation and my knowledge. Great work!

tqjxlm (Owner) commented Mar 7, 2019

Thanks for the note. You are right: the subtracted value should be the mean of the advantages, so `val.mean()` is nonsense. Theoretically it is a serious issue. I'm not sure why it does not matter much here; there must be other limitations in the implementation. I'll fix it later.
