
[BUG] OAI doc recommends parameter max_completion_tokens over max_tokens. Support it alongside max_tokens. #256

Open
4 tasks done
Originalimoc opened this issue Nov 29, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@Originalimoc

Originalimoc commented Nov 29, 2024

OS

Linux

GPU Library

CUDA 12.x

Python version

3.12

Describe the bug

Some clients have already switched to max_completion_tokens.
Fixed by changing common/sampling.py line 28 to: "validation_alias=AliasChoices("max_tokens", "max_length", "max_completion_tokens"),"

Acknowledgements

  • I have looked for similar issues before submitting this one.
  • I have read the disclaimer, and this issue is related to a code bug. If I have a question, I will use the Discord server.
  • I understand that the developers have lives and my issue will be answered when possible.
  • I understand the developers of this program are human, and I will ask my questions politely.
@Originalimoc Originalimoc added the bug Something isn't working label Nov 29, 2024
Projects
None yet
Development

No branches or pull requests

1 participant