Discrepancy Between Evaluation Metrics and Paper Results After Running Provided Model #2

Open
chenhaoqcdyq opened this issue Oct 11, 2024 · 0 comments

chenhaoqcdyq commented Oct 11, 2024

[Attached images: motioncraft evaluation results, gt (ground truth) evaluation results, results reported in the paper]

Dear Author,

First of all, I would like to thank you for open-sourcing your code. Your work is excellent and truly worth learning from.

However, I encountered a minor issue while running your code. After downloading the model file you provided and running the evaluation code, I found that the metrics I obtained do not match those reported in your paper, so I'm wondering whether something went wrong with the model I downloaded.

In the attached images, Figure 1 shows the results from running the model you provided, while Figure 2 shows the ground-truth evaluation results. Both differ from the numbers reported in your paper, which makes me wonder whether the model download was the problem. Additionally, I'm puzzled as to why the trained model's results are higher than the ground truth (GT).

Thank you for your time and assistance!
