I suggest adopting litellm for the LLM API layer. For example, litellm lets me call a local Ollama model with minimal effort. Here is my calling code snippet:
```python
while retry_count < self.max_retries:
    try:
        response = completion(
            model=config.LLM_MODLE,
            messages=[{"role": "user", "content": prompt}],
            api_base=self.api_url,
        )
        # Extract the generated text
        content = response['choices'][0]['message']['content'].strip()
        logging.info(f"Generated content:\n {content}")
        # Post-process the generated content
        processed_content = self._post_process_single_article(content)
        return processed_content
    except (json.JSONDecodeError, KeyError) as e:
        retry_count += 1
        last_error = str(e)
        logging.warning(f"Generation attempt {retry_count} failed: {last_error}")
        if retry_count >= self.max_retries:
            logging.error(f"Reached maximum retry attempts ({self.max_retries}), generation failed")
            raise Exception(f"Failed to generate podcast content, last error: {last_error}")
```
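For the fragment above to run, the module needs the imports it references and the loop state must be initialized. A minimal sketch of that surrounding context; the class name, default retry budget, and initializations are my assumptions, everything else is taken from the snippet:

```python
import json
import logging

from litellm import completion

import config  # project config module; see the settings below


class PodcastGenerator:
    """Hypothetical wrapper holding the attributes the snippet relies on."""

    def __init__(self, max_retries: int = 3):
        self.max_retries = max_retries     # assumed default retry budget
        self.api_url = config.LLM_API_URL  # local Ollama endpoint from the config

    def generate(self, prompt: str) -> str:
        retry_count = 0    # must start below max_retries so the loop runs
        last_error = None  # records the most recent failure for the final error message
        # ... retry loop from the snippet above ...
```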
Only one import is needed:

```python
from litellm import completion
```
The config file looks like this:

```python
# LLM API settings
LLM_API_URL = "http://localhost:11434"
LLM_API_TOKEN = ""
LLM_MODLE = "ollama/gemma2:latest"
```
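Putting the pieces together, here is a minimal standalone sketch of the same call path, assuming a local Ollama server on its default port with the `gemma2` model already pulled (`ollama pull gemma2`):

```python
from litellm import completion

# Assumed values mirroring the config above.
LLM_API_URL = "http://localhost:11434"
LLM_MODLE = "ollama/gemma2:latest"  # the "ollama/" prefix selects litellm's Ollama provider

response = completion(
    model=LLM_MODLE,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    api_base=LLM_API_URL,
)
print(response["choices"][0]["message"]["content"].strip())
```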