inference speed is slow #6
I would like to ask you to help me check whether there is a problem with the code below. I am using the predict.py file in eWaSR and added code to compute FPS, running inference on the GPU, but the measured average is only 22.7 FPS. Thanks.

import time
...
def predict(args):
    ...
Hey @tlxnulixuexi ,
Sorry for the late reply. The way you measure FPS is not correct: you need to synchronize the GPU before reading the timer. I pushed the slightly modified benchmarking code used for the paper to the repository. The benchmark should print the latency and FPS of each model, and it will also visualize the density of the latency measurements for each prediction. Let me know if this works for you.
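As an illustration of the kind of synchronized measurement such a benchmark performs, here is a minimal, self-contained sketch. The `benchmark` helper, the input shape, and the stand-in `Conv2d` model are assumptions for illustration, not eWaSR's actual benchmarking script:

```python
import time
import torch

def benchmark(model, input_shape=(1, 3, 384, 512), warmup=10, iters=100):
    """Measure mean per-frame latency (ms) and FPS of a model.

    Hypothetical helper: syncs CUDA around the timed region so the timer
    measures kernel execution, not just the asynchronous kernel launch.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):           # warm-up: cuDNN autotuning, allocator caches
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()      # make sure warm-up kernels have finished
        latencies = []
        for _ in range(iters):
            t0 = time.perf_counter()
            model(x)
            if device == "cuda":
                torch.cuda.synchronize()  # wait for the async kernel before stopping the clock
            latencies.append(time.perf_counter() - t0)
    mean_ms = 1000 * sum(latencies) / len(latencies)
    return mean_ms, 1000.0 / mean_ms

# Stand-in model; in practice you would pass the loaded eWaSR network instead.
model = torch.nn.Conv2d(3, 8, 3, padding=1)
ms, fps = benchmark(model, iters=20)
print(f"latency: {ms:.2f} ms, FPS: {fps:.1f}")
```

Without the `torch.cuda.synchronize()` calls, the loop would mostly time how fast kernels are *queued*, which is why naive measurements can report misleading FPS numbers.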
Hello @tersekmatija,
Thanks @tlxnulixuexi ,
Best,
Hello Matija Teršek,
The weights shouldn't affect the speed. If you want, you can load your weights onto the models here.
Depends on the loading/pre-processing speed, but it should not.
Short article that should explain it here: https://www.speechmatics.com/company/articles-and-news/timing-operations-in-pytorch.
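The article's core point can be shown in a few lines: PyTorch launches GPU kernels asynchronously, so a naive timer can stop before the work has actually finished. A minimal sketch (the matrix size is arbitrary; on a CPU-only machine both timings behave the same):

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(2048, 2048, device=device)

# Naive timing: on the GPU this mostly measures the kernel *launch*,
# because the matmul call returns before the computation is done.
t0 = time.perf_counter()
y = x @ x
naive = time.perf_counter() - t0

# Correct timing: synchronize so the kernel has actually completed
# before the clock is stopped.
t0 = time.perf_counter()
y = x @ x
if device == "cuda":
    torch.cuda.synchronize()
synced = time.perf_counter() - t0

print(f"naive: {naive * 1e3:.3f} ms, synchronized: {synced * 1e3:.3f} ms")
```

On a GPU the "naive" number is typically far smaller than the synchronized one, which is exactly how an unsynchronized benchmark ends up over- or under-reporting FPS.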
Hello, @tersekmatija,
I'm sorry to bother you.
I have some questions that I'd like to ask you.
When I train the eWaSR model on the LaRS dataset, training is very fast, but inference is very slow. Compared with WaSR, eWaSR's inference speed shows no big advantage, which is inconsistent with your paper, where there is roughly a ten-fold gap in speed. What could be the reason?