feat(provider): Integrate OpenAI Responses API with native web search and code interpreter support #5953
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request upgrades the OpenAI provider by integrating the latest Responses API, which unlocks powerful native tools such as web search and the code interpreter. The changes include the core logic for API interaction, message conversion, and tool handling, alongside the necessary configuration updates and UI localizations. This enhancement lets users tap into advanced OpenAI capabilities directly, offering more robust and better-integrated functionality.
Hey - I've found 1 issue
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="691-692" />
<code_context>
output=completion_tokens,
)
+ def _extract_responses_usage(self, usage: ResponseUsage) -> TokenUsage:
+ cached = usage.input_tokens_details.cached_tokens or 0
+ input_tokens = usage.input_tokens or 0
+ output_tokens = usage.output_tokens or 0
</code_context>
<issue_to_address>
**issue (bug_risk):** Potential AttributeError if usage.input_tokens_details is None when extracting Responses API token usage.
In `_extract_responses_usage`, `cached = usage.input_tokens_details.cached_tokens or 0` assumes `input_tokens_details` is always set. Some Responses API payloads may have `input_tokens_details` as `None` or omit it, which would raise an AttributeError. Consider guarding the access, e.g.:
```python
details = getattr(usage, "input_tokens_details", None) or {}
cached = getattr(details, "cached_tokens", 0) or 0
```
or equivalently check for `None` before reading `cached_tokens`.
</issue_to_address>
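A minimal sketch of the guarded extraction suggested above, using hypothetical stand-in dataclasses in place of the openai SDK's `ResponseUsage` types (the field names mirror those used in the comment; only the guard logic is the point here):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class InputTokensDetails:
    # Stand-in for the SDK's input token details object.
    cached_tokens: Optional[int] = None


@dataclass
class ResponseUsage:
    # Stand-in for openai's ResponseUsage; any field may be None.
    input_tokens: Optional[int] = None
    output_tokens: Optional[int] = None
    input_tokens_details: Optional[InputTokensDetails] = None


def extract_responses_usage(usage: ResponseUsage) -> dict:
    # Guard against input_tokens_details being None or absent
    # before reaching for cached_tokens.
    details = getattr(usage, "input_tokens_details", None)
    cached = (getattr(details, "cached_tokens", 0) or 0) if details else 0
    return {
        "input": usage.input_tokens or 0,
        "output": usage.output_tokens or 0,
        "cached": cached,
    }


# A payload that omits the details no longer raises AttributeError:
print(extract_responses_usage(ResponseUsage(input_tokens=10, output_tokens=3)))
# → {'input': 10, 'output': 3, 'cached': 0}
```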
Code Review
This PR integrates OpenAI's new Responses API and adds support for the native web search and code interpreter tools. The core logic changes in openai_source.py are well structured and properly handle the new API endpoint, the message formats, and both streaming and non-streaming requests. The new features are also reflected in the configuration and UI. The new test cases are comprehensive and cover a wide range of scenarios.

I have a few suggestions to improve maintainability and consistency. Specifically, I've flagged code duplication in the default configuration file, a way to improve type checking in the response-parsing logic, and a style issue in the new test code.

Overall, this is a great feature enhancement.
astrbot/core/config/default.py
```python
"use_responses_api": False,
"oa_native_web_search": False,
"oa_native_code_interpreter": False,
```
These three new configuration options are repeated for nearly every OpenAI-compatible provider in this file (about 20 times), causing significant code duplication and making future changes difficult. To improve maintainability, consider defining these options in a shared dictionary and merging it into each provider's configuration. That centralizes the default values and makes the configuration easier to maintain. For example:

```python
OPENAI_COMPATIBLE_DEFAULTS = {
    "use_responses_api": False,
    "oa_native_web_search": False,
    "oa_native_code_interpreter": False,
}

# When defining a provider's configuration:
"OpenAI": {
    ...,
    **OPENAI_COMPATIBLE_DEFAULTS,
    "custom_headers": {},
},
```

This suggestion applies to all similar blocks in this file.
```python
else:
    item_type = getattr(item, "type", "")
    if item_type == "function_call" and tools is not None:
        try:
            args = json.loads(item.arguments)
        except json.JSONDecodeError:
            logger.error(
                "Responses API function_call arguments is not valid JSON: %s",
                item.arguments,
            )
            raise Exception(
                f"Responses API function_call arguments is not valid JSON: {item.arguments}"
            )
        tool_args.append(args)
        tool_names.append(item.name)
        tool_ids.append(item.call_id)
```
For consistency with how other response item types (such as ResponseOutputMessage) are handled, and for better type safety, consider using isinstance to check for the FunctionCall type.

This requires adding the following import at the top of the file:

```python
from openai.types.responses.function_call import FunctionCall
```

```python
elif isinstance(item, FunctionCall) and tools is not None:
    try:
        args = json.loads(item.arguments)
    except json.JSONDecodeError:
        logger.error(
            "Responses API function_call arguments is not valid JSON: %s",
            item.arguments,
        )
        raise Exception(
            f"Responses API function_call arguments is not valid JSON: {item.arguments}"
        )
    tool_args.append(args)
    tool_names.append(item.name)
    tool_ids.append(item.call_id)
```
tests/test_openai_source.py
```python
import asyncio

asyncio.run(provider.terminate())
```
According to the PEP 8 style guide, import statements should be placed at the top of the file. Importing asyncio inside a finally block is unconventional and makes the module's dependencies less clear. Please move import asyncio to the top of the file. This comment also applies to the import in the test_native_tools_force_responses_mode_and_override_function_tools test function.

```diff
-import asyncio
-asyncio.run(provider.terminate())
+asyncio.run(provider.terminate())
```
References
- According to PEP 8, import statements should be placed at the top of the file, after any module docstrings and before any module-level global variables and constants.
The code interpreter output rendering doesn't look good; I'll fix it tomorrow.
Adds support for OpenAI's new Responses API and the native web search / code interpreter tools.
Modifications / 改动点
There are 5 core changed files:
The functionality can be summarized in three parts:
Responses API support in the OpenAI provider
A new use_responses_api switch in openai_source.py distinguishes the request paths:
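As an illustration of how such a switch can route requests, here is a hypothetical sketch (not AstrBot's actual code; the function and parameter names are invented). The key visible difference is that the Responses API takes an `input` payload where Chat Completions takes `messages`:

```python
from typing import Any


def build_request(
    use_responses_api: bool, model: str, messages: list[dict]
) -> tuple[str, dict[str, Any]]:
    """Pick an endpoint and shape the payload accordingly (illustrative only)."""
    if use_responses_api:
        # The Responses API accepts chat-style message dicts under `input`.
        return "responses", {"model": model, "input": messages}
    # Classic Chat Completions path keeps the `messages` key.
    return "chat.completions", {"model": model, "messages": messages}


endpoint, payload = build_request(True, "gpt-4o", [{"role": "user", "content": "hi"}])
# endpoint == "responses"; the payload uses the `input` key
```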
Native OpenAI tool support
Two new capability switches were added:
When enabled, requests are forced through the Responses API and use OpenAI's native tools instead of AstrBot's original function-tool chain.
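A hypothetical sketch of how the two capability switches could map to native tool specs in the request. The tool type strings follow OpenAI's public Responses API documentation (e.g. `web_search_preview`); treat the exact names as assumptions, since they may differ across SDK versions, and the helper itself is illustrative, not AstrBot's code:

```python
def build_native_tools(web_search: bool, code_interpreter: bool) -> list[dict]:
    """Map capability switches to Responses API native tool specs (sketch)."""
    tools: list[dict] = []
    if web_search:
        # Native web search tool (assumed type name).
        tools.append({"type": "web_search_preview"})
    if code_interpreter:
        # Native code interpreter; the Responses API runs it in a container.
        tools.append({"type": "code_interpreter", "container": {"type": "auto"}})
    return tools


# Both switches on yields both tool specs; both off yields an empty list,
# leaving room for AstrBot's regular function tools instead.
```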
Configuration options and tests filled in
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
I have ensured that no new dependencies are introduced, or, if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
Summary by Sourcery
Add Responses API support to the OpenAI-based provider stack and wire it into the existing query/streaming pipeline, including conversion between chat-style messages and Responses inputs/outputs.
New Features:
Enhancements:
Tests: