> I don't care how fast your bit crunched model can spit out ****** tokens.

> Predictable. We practiced accuracy-driven development where our internal testing infrastructure validates code and model commits on Whisper accuracy evaluation benchmarks comprising librispeech (~2.6k short audio clips, ~5 hours total) and earnings22 (~120 long audio clips, ~120 hours total) datasets. Results of periodic testing are published here. This approach enables us to detect and mitigate quality-of-inference (more on this below) regressions due to code changes in WhisperKit as well as performance and functional regressions from lower levels of the software stack. This helps us improve time-to-detect and time-to-fix most issues with best-effort. Taking it a step further, we offer customer-level SLAs to detect and fix all issues within a maximum time period for specific model and device versions to developers or enterprises.
The above quote from Argmax is spot on; we need to do exactly this.
I want a CI job added that runs every Whisper variant and quantization format against the librispeech benchmark, so accuracy regressions are caught per commit rather than after release.
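The metric such a CI job would gate on is word error rate (WER): Levenshtein edit distance between the reference transcript and the model's hypothesis, normalized by reference length. Below is a minimal, dependency-free sketch of that check; the function names, the threshold, and the idea of failing the job when WER exceeds a per-model baseline are illustrative assumptions, not an existing harness in this repo.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Single-row dynamic-programming edit distance over words.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag = row[0]        # d[i-1][j-1]
        row[0] = i
        for j, h in enumerate(hyp, 1):
            prev_above = row[j]   # d[i-1][j]
            row[j] = min(
                prev_above + 1,               # deletion
                row[j - 1] + 1,               # insertion
                prev_diag + (r != h),         # substitution / match
            )
            prev_diag = prev_above
    return row[len(hyp)] / max(len(ref), 1)


def check_regression(pairs, threshold: float) -> bool:
    """Hypothetical CI gate: True if average WER over (ref, hyp) pairs
    stays at or below the given per-model baseline threshold."""
    avg = sum(wer(r, h) for r, h in pairs) / len(pairs)
    return avg <= threshold
```

In a real CI job, `pairs` would come from transcribing the ~2.6k librispeech clips with each model variant, and `threshold` would be pinned per variant so quantization-induced accuracy drops fail the build.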