
[Bug]: Support for GPT4o-mini or gpt4- models #657

Open
oussamaJmaaa opened this issue Oct 17, 2024 · 0 comments
oussamaJmaaa commented Oct 17, 2024

Hello, I'm having trouble using the gpt-4o-mini model with LangChain or OpenAI alongside GPTCache.

Any idea how to solve it? Also, could you please share your OpenAI and GPTCache versions?

Here is my current setup:

from langchain.llms import OpenAI
from gptcache.adapter.langchain_models import LangChainLLMs

# Wrap the LangChain completion-style LLM with GPTCache's adapter
llm = LangChainLLMs(llm=OpenAI(model="gpt-4o-mini", temperature=0))

When I run my QA chain, I get this error:
"This is a chat model and not supported in the v1/completions endpoint"
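That error makes sense if the wrapper is hitting the legacy completions endpoint: gpt-4o-mini is a chat model and only works with /v1/chat/completions, while `langchain.llms.OpenAI` targets /v1/completions. A minimal sketch of the routing distinction (`endpoint_for` is a hypothetical helper, not part of either library):

```python
# Hypothetical helper illustrating the endpoint split behind the error:
# chat models must call /v1/chat/completions, while langchain.llms.OpenAI
# targets the legacy /v1/completions endpoint.
CHAT_MODEL_PREFIXES = ("gpt-4", "gpt-3.5-turbo")

def endpoint_for(model: str) -> str:
    """Return the OpenAI endpoint a model name belongs to (sketch)."""
    if model.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    return "/v1/completions"

print(endpoint_for("gpt-4o-mini"))       # chat model
print(endpoint_for("text-davinci-003"))  # legacy completion model
```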

I tried upgrading the openai package to 1.51.2 and then encountered this error:
"module 'openai' has no attribute 'api_base'. Did you mean: 'api_type'?"

Any help would be greatly appreciated! Thanks in advance.
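For what it's worth, the second error suggests GPTCache's OpenAI adapter still references module-level attributes such as `openai.api_base`, which the openai 1.x client removed. A possible workaround (assuming a pre-1.0 pin is acceptable for your setup) is:

```shell
# openai 1.x removed module-level attributes like openai.api_base;
# pinning below 1.0 keeps them available for GPTCache's adapter.
pip install "openai<1.0"
```

GPTCache also exposes a `LangChainChat` adapter (`gptcache.adapter.langchain_models.LangChainChat`) that pairs with LangChain's `ChatOpenAI`; that may be the right wrapper for chat models like gpt-4o-mini, though I haven't confirmed it here.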
