fix: the OpenAI compatible interface returns incorrect choices when the 'n' parameter is not supported #1153

Open · wants to merge 1 commit into base: master
67 changes: 66 additions & 1 deletion camel/agents/chat_agent.py
@@ -913,7 +913,7 @@ def _structure_output_with_function(
             num_tokens,
         )

-    def _step_model_response(
+    def _get_model_response(
         self,
         openai_messages: List[OpenAIMessage],
         num_tokens: int,
@@ -936,6 +936,71 @@ def _step_model_response(
         output_messages, finish_reasons, usage_dict, response_id = (
             self.handle_stream_response(response, num_tokens)
         )
+
+        return (
+            response,
+            output_messages,
+            finish_reasons,
+            usage_dict,
+            response_id,
+        )
+
+    def _step_model_response(
+        self,
+        openai_messages: List[OpenAIMessage],
+        num_tokens: int,
+    ) -> tuple[
+        Union[ChatCompletion, Stream],
+        List[BaseMessage],
+        List[str],
+        Dict[str, int],
+        str,
+    ]:
+        (
+            response,
+            output_messages,
+            finish_reasons,
+            usage_dict,
+            response_id,
+        ) = self._get_model_response(openai_messages, num_tokens)
+
+        # Some OpenAI compatible interfaces do not support the 'n'
+        # parameter. Call the API repeatedly until the expected number
+        # of completion choices has accumulated.
+        expected_completion_choices = self.model_backend.model_config_dict.get(
+            'n', 1
+        )
Comment on lines +969 to +971

Member:

I think we also need to check whether the model supports n natively; if it is natively supported, then we shouldn't handle the generation ourselves.

Collaborator (Author):

If the model exposes an attribute indicating whether it supports n, that attribute can be used to determine native support.

In addition, for an OAI-compatible interface, confirming support for the n parameter may only be possible at deployment time.
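The check the reviewer is asking for could be sketched roughly like this (the `supports_n_natively` flag and the strategy labels are hypothetical, not part of the CAMEL codebase):

```python
def resolve_choice_strategy(model_config_dict, supports_n_natively=False):
    """Decide how to obtain 'n' completion choices from a backend.

    `supports_n_natively` is a hypothetical flag: real CAMEL model
    backends may expose this capability differently, or not at all.
    """
    expected = model_config_dict.get('n', 1)
    if expected <= 1:
        # A single choice never needs special handling.
        return ('single_call', 1)
    if supports_n_natively:
        # The backend honours 'n' itself: one call returns all choices.
        return ('native', expected)
    # OpenAI-compatible servers that ignore 'n' return one choice per
    # call, so fall back to calling repeatedly and accumulating.
    return ('accumulate', expected)
```

A backend flagged as native would skip the accumulation loop entirely, which addresses the reviewer's concern about generating the choices ourselves when the server already handles `n`.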

+        # Cap the number of attempts at the expected number of
+        # completion choices to limit the number of model calls.
+        max_attempts = expected_completion_choices
+        while (
+            len(output_messages) < expected_completion_choices
+            and max_attempts > 0
+        ):
+            try:
+                logger.warning(
+                    f"{self.role_type}[{self.role_name}] expected "
+                    f"{expected_completion_choices} completion choices, "
+                    f"but only {len(output_messages)} were returned; "
+                    f"calling the model again until "
+                    f"{expected_completion_choices} choices accumulate."
+                )
+                (
+                    new_response,
+                    new_output_messages,
+                    new_finish_reasons,
+                    new_usage_dict,
+                    new_response_id,
+                ) = self._get_model_response(openai_messages, num_tokens)
+
+                # Append the new output messages to the accumulated ones
+                output_messages.extend(new_output_messages)
+                finish_reasons.extend(new_finish_reasons)
+                for k, v in new_usage_dict.items():
+                    usage_dict[k] = (usage_dict.get(k, 0) or 0) + (
+                        v if v is not None else 0
+                    )
+                response_id = new_response_id
+            finally:
+                max_attempts -= 1
+
         return (
             response,
             output_messages,
(remainder of diff not shown)
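Stripped of the agent plumbing, the retry-and-accumulate behaviour this PR adds can be sketched as a small standalone loop (`accumulate_choices` and the callables passed to it are illustrative names, not CAMEL APIs):

```python
def accumulate_choices(call_model, expected_choices):
    """Call `call_model()` until `expected_choices` messages have been
    collected, capping the number of calls at `expected_choices`
    (mirroring the PR's max_attempts guard)."""
    messages = []
    max_attempts = expected_choices
    while len(messages) < expected_choices and max_attempts > 0:
        try:
            # Each call may return fewer choices than requested when
            # the server ignores the 'n' parameter.
            messages.extend(call_model())
        finally:
            # Decrement even if the call raised, so a persistently
            # failing backend cannot loop forever.
            max_attempts -= 1
    return messages
```

A server that ignores `'n'` and always returns one choice requires `expected_choices` calls; a backend that returns nothing gives up after `expected_choices` attempts instead of spinning indefinitely.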