The performance of the model I reproduced does not meet the standards outlined in the paper. #14
The large performance gap is indeed confusing; below is some information that may help you. LLaMA version: https://huggingface.co/meta-llama/Llama-2-7b-hf or https://huggingface.co/meta-llama/Llama-2-7b. Besides, I haven't encountered the RuntimeError you reported before. My GPU, NVIDIA driver, and CUDA versions are: A100, 470, 11.7.
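For anyone comparing setups, a quick way to confirm the local environment against the one reported above (a minimal check using only standard PyTorch calls):

```python
# Minimal environment check to compare against the reported setup:
# A100, NVIDIA driver 470, CUDA 11.7.
import torch

print(torch.__version__)          # PyTorch build
print(torch.version.cuda)         # CUDA version PyTorch was built against
print(torch.cuda.is_available())  # True if a usable GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA A100-SXM4-80GB"
```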
I reproduced the model again in the environment you mentioned (A100, NVIDIA driver 470, CUDA 11.7), and I downloaded the Llama-2-7b-hf model from the link you provided: https://huggingface.co/meta-llama/Llama-2-7b-hf. However, the performance I got still did not match the paper. The table below is the comparison:
Do you have any idea what might be causing this?
It seems this gap has been reduced a bit. You can try adjusting the historical window (from 5 to 12); this parameter has an impact on the best performance.
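For context, the historical window in ERC pipelines typically controls how many preceding utterances are included in each prompt. A rough sketch of the idea (the function and names here are hypothetical, not the repo's actual code):

```python
# Only the last `window` utterances before the target are kept as context;
# a larger window gives the model more conversational history per prompt.
def build_context(utterances, target_index, window=12):
    start = max(0, target_index - window)
    return utterances[start:target_index]
```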
The historical window was already set to 12 in both of my previous reproductions.
These are my hyperparameter settings for the reproduction. What should I modify to improve the performance?
I reproduced the Main Result (LoRA + InstructERC based on Llama2), and the performance I got did not match the paper. The table below is the comparison:
Compared to the original code, I made only the following modifications:
data_percent: 1/64 -> 1
set the LLaMA2 MODELPATH to my local model path; the Llama2 version I used is Llama-2-7b-chat-hf (see the sketch after this list)
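For illustration, pointing the pipeline at a local Llama-2-7b-chat-hf checkpoint might look like the sketch below; the path is a placeholder, and in the repo this value is set via the training script rather than inline Python:

```python
# Hypothetical sketch: loading a local copy of the HF checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODELPATH = "/path/to/Llama-2-7b-chat-hf"  # placeholder local path
tokenizer = AutoTokenizer.from_pretrained(MODELPATH)
model = AutoModelForCausalLM.from_pretrained(MODELPATH)
```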
While running the code, I encountered an error: RuntimeError: probability tensor contains either inf, nan or element < 0.
To work around it, I added some code to the Llama2 model file:
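The exact snippet is not shown in this thread. For readers hitting the same error, the usual cause is non-finite values reaching the sampler, and a common workaround is to sanitize the logits before softmax/multinomial. A minimal sketch in PyTorch, not the author's actual patch:

```python
# Hypothetical reconstruction of the kind of patch described above (the
# original snippet is missing from the thread): replace non-finite logits
# with finite values so torch.multinomial never sees inf/nan.
import torch

def sanitize_logits(logits: torch.Tensor) -> torch.Tensor:
    return torch.nan_to_num(logits, nan=0.0, posinf=1e4, neginf=-1e4)
```

Note that this masks the symptom rather than the cause; non-finite logits often point to fp16 overflow or a checkpoint/config mismatch, which could itself be related to the performance gap.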
What else should I modify to reach the performance mentioned in the paper?