
About the evaluation metrics and evaluation protocol. #7

Open
AndrewChiyz opened this issue Feb 24, 2021 · 0 comments

@AndrewChiyz

Hi, thank you for sharing the DwNet code.

I have a few questions about the evaluation protocol in your paper. Could you please provide more details on the evaluation process for the test set of the Fashion dataset, and on how the quantitative results in Table 1 were obtained?

To me, a reasonable evaluation process might look like this. Suppose there are 100 videos in the test set of the Fashion dataset. For cross-video motion transfer, we take the first frame of one video as the source frame and each of the remaining 99 videos as the driving video, so each source yields 99 generated videos and there are 9,900 synthesized videos in total. For self-imitation (intra-video synthesis), the driving video is the same as the source video, which yields another 100 generated videos. The evaluation metrics would then be computed over these 10,000 videos. Is this a reasonable evaluation process, or could you clarify the exact protocol used in your paper? Also, are the metrics computed per generated video or over all generated frames?
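For concreteness, here is a minimal sketch of the pairing scheme I have in mind (the video IDs and counts are placeholders taken from my example above, not your actual evaluation code):

```python
# Sketch of the assumed evaluation pairing, not the actual DwNet eval script.
from itertools import product

# Hypothetical list of 100 test videos from the Fashion dataset.
test_videos = [f"video_{i:03d}" for i in range(100)]

# Cross-video transfer: first frame of `src` as source, `drv` as driving video.
cross_pairs = [(src, drv) for src, drv in product(test_videos, repeat=2) if src != drv]
assert len(cross_pairs) == 100 * 99  # 9,900 synthesized videos

# Self-imitation: the driving video is the source video itself.
self_pairs = [(v, v) for v in test_videos]
assert len(self_pairs) == 100

all_pairs = cross_pairs + self_pairs  # 10,000 generated videos in total

# Metrics could then be averaged either per generated video or over all
# generated frames -- this is exactly the point I would like clarified.
```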

Thank you!
