
Conversation

@sxjeru
Contributor

@sxjeru sxjeru commented Oct 19, 2025

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 👷 build
  • ⚡️ perf
  • ✅ test
  • 📝 docs
  • 🔨 chore

🔀 Description of Change

  • Support toggling the Responses API for New API and AiHubMix
  • Support creating custom newapi providers
  • Support toggling the Responses API on custom openai and newapi providers

📝 Additional Information

Summary by Sourcery

Add switchable Responses API mode support for New API and AiHubMix providers by replacing hardcoded payload handling with configurable whitelist settings, updating factory logic to respect user toggles, extending provider configs and UI, and adjusting tests accordingly.

Enhancements:

  • Replace hardcoded handlePayload logic in newapi provider with configurable useResponseModels whitelist
  • Enhance openaiCompatibleFactory to respect payload.apiMode switch and provider-level useResponseModels settings
  • Extend CreateNewProvider UI to automatically enable supportResponsesApi for openai and router custom providers
  • Add supportResponsesApi flag to aihubmix and newapi model provider configurations
  • Include 'New API' router type in custom provider SDK options
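The `useResponseModels` whitelist mentioned above accepts both strings and regular expressions. A minimal sketch of such a matcher follows; the helper name `matchesResponseModels` and prefix-matching for string entries are assumptions for illustration, not the exact implementation in `openaiCompatibleFactory`:

```typescript
// Rough sketch of a useResponseModels whitelist check. String entries are
// assumed to be model-id prefixes; RegExp entries are tested directly.
type ResponseModelMatcher = string | RegExp;

const matchesResponseModels = (
  model: string,
  whitelist: ResponseModelMatcher[] = [],
): boolean =>
  whitelist.some((entry) =>
    typeof entry === 'string' ? model.startsWith(entry) : entry.test(model),
  );

// With a hypothetical whitelist like ['gpt-', /^o\d/], OpenAI models match
// while Gemini models routed through New API do not:
console.log(matchesResponseModels('gpt-4o', ['gpt-', /^o\d/])); // true
console.log(matchesResponseModels('gemini-2.5-flash', ['gpt-', /^o\d/])); // false
```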

Documentation:

  • Refine locale description of Responses API to specify support for OpenAI models only

Tests:

  • Update tests to assert useResponseModels configuration in routers instead of handlePayload

Chores:

  • Add type casts for openai instance in stt and tts route handlers

@vercel

vercel bot commented Oct 19, 2025

@sxjeru is attempting to deploy a commit to the LobeHub Community Team on Vercel.

A member of the Team first needs to authorize it.

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Oct 19, 2025
@lobehubbot
Member

👍 @sxjeru

Thank you for raising your pull request and contributing to our community.
Please make sure you have followed our contributing guidelines. We will review it as soon as possible.
If you encounter any problems, please feel free to connect with us.

@gru-agent
Contributor

gru-agent bot commented Oct 19, 2025

TestGru Assignment

Summary

| Link | CommitId | Status | Reason |
| --- | --- | --- | --- |
| Detail | f7ffab5 | ✅ Finished | |

History Assignment

Files

| File | Pull Request |
| --- | --- |
| `packages/model-runtime/src/core/openaiCompatibleFactory/index.ts` | ❌ Failed (I failed to setup the environment.) |

Tip

You can @gru-agent and leave your feedback. TestGru will make adjustments based on your input

@dosubot dosubot bot added the Model Provider Model provider related label Oct 19, 2025
@sxjeru sxjeru changed the title 🔨 chore: New API 🔨 chore: New API support switch Responses API mode Oct 19, 2025
@codecov

codecov bot commented Oct 19, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 81.86%. Comparing base (2e2b9c4) to head (aeca6d5).

Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #9776      +/-   ##
==========================================
- Coverage   83.66%   81.86%   -1.80%
==========================================
  Files         950      807     -143
  Lines       65889    54414   -11475
  Branches     8037     5009    -3028
==========================================
- Hits        55126    44547   -10579
+ Misses      10763     9867     -896
```
| Flag | Coverage Δ |
| --- | --- |
| app | 78.85% <100.00%> (+<0.01%) ⬆️ |
| database | 98.41% <ø> (ø) |
| packages/agent-runtime | 98.37% <ø> (ø) |
| packages/context-engine | 93.94% <ø> (ø) |
| packages/electron-server-ipc | 93.76% <ø> (ø) |
| packages/file-loaders | 92.21% <ø> (ø) |
| packages/model-bank | 100.00% <ø> (ø) |
| packages/model-runtime | ? |
| packages/prompts | 77.29% <ø> (ø) |
| packages/python-interpreter | 96.50% <ø> (ø) |
| packages/utils | 94.50% <ø> (ø) |
| packages/web-crawler | 97.07% <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Components | Coverage Δ |
| --- | --- |
| Store | 74.41% <ø> (ø) |
| Services | 61.47% <ø> (ø) |
| Server | 77.38% <ø> (ø) |
| Libs | 35.68% <ø> (ø) |
| Utils | 81.81% <ø> (-1.22%) ⬇️ |

@sourcery-ai
Contributor

sourcery-ai bot commented Oct 19, 2025

Reviewer's Guide

This PR overhauls the Responses API mode support by removing the legacy handlePayload, introducing a payload.apiMode switch with a useResponseModels whitelist in the OpenAI-compatible factory, and propagating supportResponsesApi flags through UI and provider configurations, alongside updating tests, routes, and localization.

Entity relationship diagram for custom provider SDK options

```mermaid
erDiagram
    CUSTOM_PROVIDER_SDK_OPTIONS {
        label string
        value string
    }
    CUSTOM_PROVIDER_SDK_OPTIONS ||--o| ProviderSettings : "sets sdkType"
    ProviderSettings {
        sdkType string
        supportResponsesApi boolean
    }
```

Class diagram for OpenAI-compatible runtime payload processing

```mermaid
classDiagram
    class OpenAICompatibleRuntime {
        +createOpenAICompatibleRuntime<T>()
        -_options
        +chat(payload)
    }
    class ChatPayload {
        +model: string
        +messages: Message[]
        +temperature: number
        +apiMode: string
    }
    OpenAICompatibleRuntime --> ChatPayload: processes
    class ChatCompletionOptions {
        +useResponse: boolean
        +useResponseModels: Array<string | RegExp>
    }
    OpenAICompatibleRuntime --> ChatCompletionOptions: uses
    ChatPayload <|-- ProcessedPayload
    class ProcessedPayload {
        +apiMode: 'responses' | undefined
    }
```

Class diagram for provider configuration changes

```mermaid
classDiagram
    class ModelProviderCard {
        +settings: ProviderSettings
        +url: string
    }
    class ProviderSettings {
        +sdkType: string
        +showModelFetcher: boolean
        +supportResponsesApi: boolean
    }
    ModelProviderCard --> ProviderSettings
    class CreateAiProviderParams {
        +settings: ProviderSettings
        +name: string
        +id: string
    }
    CreateAiProviderParams --> ProviderSettings
```
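As a rough illustration of the settings change shown in the diagram above, a provider config enabling the toggle might look like this. Field names follow the diagram; the actual files under `src/config/modelProviders` may contain additional fields:

```typescript
// Hypothetical excerpt of a provider settings object; only the fields from
// the diagram above are shown.
const newApiSettings = {
  sdkType: 'router', // New API reuses the 'router' SDK type
  showModelFetcher: true,
  supportResponsesApi: true, // newly enabled by this PR
};
```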

File-Level Changes

**Refactor OpenAI-compatible factory to use apiMode switch and whitelist**
  • Remove old shouldUseResponses logic
  • Read payload.apiMode to determine switch state
  • Apply or remove apiMode based on the useResponseModels whitelist
  • Add detailed logging for switch ON/OFF and whitelist checks
  Files: `packages/model-runtime/src/core/openaiCompatibleFactory/index.ts`

**Replace handlePayload with useResponseModels in the NewAPI provider**
  • Delete the handlePayload implementation and its import
  • Inject a useResponseModels array in params.chatCompletion
  • Update the routers test to expect useResponseModels instead of handlePayload
  Files: `packages/model-runtime/src/providers/newapi/index.ts`, `packages/model-runtime/src/providers/newapi/index.test.ts`

**Enable supportResponsesApi in the custom provider creation UI**
  • Detect the openai or router sdkType
  • Add supportResponsesApi: true to the final provider settings
  Files: `src/app/[variants]/(main)/settings/provider/features/CreateNewProvider/index.tsx`

**Add a New API option to the custom provider SDK list**
  • Append { label: 'New API', value: 'router' } to the SDK options
  Files: `src/app/[variants]/(main)/settings/provider/features/customProviderSdkOptions.ts`

**Activate Responses API support in provider configs**
  • Set supportResponsesApi: true for AiHubMix
  • Set supportResponsesApi: true for NewAPI
  Files: `src/config/modelProviders/aihubmix.ts`, `src/config/modelProviders/newapi.ts`

**Apply type-casting fixes in backend routes**
  • Cast openaiOrErrResponse as any in the STT route
  • Cast openaiOrErrResponse as any in the TTS route
  Files: `src/app/(backend)/webapi/stt/openai/route.ts`, `src/app/(backend)/webapi/tts/openai/route.ts`

**Update localization for the Responses API description**
  • Add the '(仅 OpenAI 模型支持)' ("OpenAI models only") qualifier to responsesApi.desc
  Files: `src/locales/default/modelProvider.ts`
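The factory-side branching described in the first change above can be sketched as follows. This is a simplified model: the helper name `processApiMode` and the payload shape are assumptions, and the real factory interleaves this logic with logging and other payload handling:

```typescript
// Simplified sketch of the apiMode switch + whitelist branching.
interface ChatPayload {
  model: string;
  apiMode?: 'responses';
  [key: string]: unknown;
}

const processApiMode = (
  payload: ChatPayload,
  useResponseModels: (string | RegExp)[] = [],
): ChatPayload => {
  // Switch OFF: the caller did not request the Responses API.
  if (payload.apiMode !== 'responses') return payload;

  // Assumed matching semantics: prefix match for strings, test() for RegExp.
  const whitelisted = useResponseModels.some((entry) =>
    typeof entry === 'string'
      ? payload.model.startsWith(entry)
      : entry.test(payload.model),
  );

  // Switch ON and model whitelisted: keep apiMode 'responses'.
  if (whitelisted) return payload;

  // Switch ON but model not whitelisted: fall back to Chat Completions.
  // Remove the key entirely (as the Sourcery review suggests) so downstream
  // consumers never see an `apiMode` property at all.
  const processed: ChatPayload = { ...payload };
  delete processed.apiMode;
  return processed;
};
```

This is also what prevents the Gemini-through-NewAPI misrouting: a non-whitelisted model such as `gemini-2.5-flash` has its `apiMode` stripped and stays on the Chat Completions path.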

Possibly linked issues

  • #[Request] Can 'Use Responses API spec' for OpenAI provider be moved to model level: The PR implements a model-level whitelist for the Responses API, addressing the issue's request to prevent conflicts when enabling it for specific models.
  • #When configuring an AI provider with newapi selected, models whose IDs match official Gemini model IDs (e.g. Gemini 2.5 Flash-Lite) are automatically sent in Gemini format via xxx/v1beta/: The PR fixes the bug by ensuring NewAPI does not apply the OpenAI Responses API mode to Gemini models, preventing incorrect routing to the Gemini API's v1beta endpoint.
  • ♻️ refactor: refactor with new market url #123: The PR enables the Responses API for custom and built-in New API/AiHubMix providers, including gpt- models, which is a prerequisite for the advanced configurations requested in the issue.


Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Consider extracting the new payload processing (userApiMode + whitelist logic) into a standalone helper function to simplify createOpenAICompatibleRuntime and improve readability.
  • Add targeted unit tests for the new apiMode switch logic (ON/OFF + whitelist behavior) in createOpenAICompatibleRuntime to validate all branching scenarios.
  • The CUSTOM_PROVIDER_SDK_OPTIONS entry using value 'router' for New API could be renamed to a more explicit key (e.g. 'newapi') to avoid confusion with other router types.

## Individual Comments

### Comment 1
<location> `packages/model-runtime/src/core/openaiCompatibleFactory/index.ts:238` </location>
<code_context>
+            // Keep apiMode: 'responses'
+          } else {
+            log('using Chat Completions API mode (switch ON but model not in whitelist)');
+            processedPayload = { ...payload, apiMode: undefined } as any;
+          }
         } else {
</code_context>

<issue_to_address>
**suggestion:** Setting apiMode to undefined may not fully remove the property.

Assigning apiMode: undefined keeps the key in the object. If consumers require the property to be absent, use a method to remove it from the payload.

Suggested implementation:

```typescript
            processedPayload = { ...payload } as any;
            delete processedPayload.apiMode;

```
</issue_to_address>

### Comment 2
<location> `src/app/(backend)/webapi/stt/openai/route.ts:43` </location>
<code_context>

-  const res = await createOpenaiAudioTranscriptions({ openai: openaiOrErrResponse, payload });
+  const res = await createOpenaiAudioTranscriptions({
+    openai: openaiOrErrResponse as any,
+    payload,
+  });
</code_context>

<issue_to_address>
**suggestion:** Using 'as any' for openai type may hide type errors.

Refining the type or validating the object's shape before casting will help maintain type safety and prevent hidden errors.

Suggested implementation:

```typescript
  // if resOrOpenAI is a Response, it means there is an error,just return it
  if (openaiOrErrResponse instanceof Response) return openaiOrErrResponse;

  // Validate openaiOrErrResponse shape before passing
  function isOpenAIClient(obj: unknown): obj is OpenAIClient {
    return (
      typeof obj === "object" &&
      obj !== null &&
      // Add checks for required properties/methods of OpenAIClient
      typeof (obj as OpenAIClient).createTranscription === "function"
    );
  }

  if (!isOpenAIClient(openaiOrErrResponse)) {
    return new Response(
      JSON.stringify({ error: "Invalid OpenAI client object" }),
      { status: 400 }
    );
  }

  const res = await createOpenaiAudioTranscriptions({
    openai: openaiOrErrResponse,
    payload,
  });

  return new Response(JSON.stringify(res), {
    headers: {

```

- You may need to define or import the `OpenAIClient` type/interface if it does not already exist in your codebase. Adjust the property/method checks in `isOpenAIClient` to match the actual shape of your OpenAI client.
- If `createTranscription` is not the correct method/property, update the type guard accordingly.
</issue_to_address>

### Comment 3
<location> `src/app/(backend)/webapi/tts/openai/route.ts:37` </location>
<code_context>
   if (openaiOrErrResponse instanceof Response) return openaiOrErrResponse;

-  return await createOpenaiAudioSpeech({ openai: openaiOrErrResponse, payload });
+  return await createOpenaiAudioSpeech({ openai: openaiOrErrResponse as any, payload });
 };
</code_context>

<issue_to_address>
**suggestion:** Casting openaiOrErrResponse to 'any' may reduce type safety.

Consider validating or asserting the type of openaiOrErrResponse before passing it to createOpenaiAudioSpeech to maintain type safety.

Suggested implementation:

```typescript
  // if resOrOpenAI is a Response, it means there is an error,just return it
  if (openaiOrErrResponse instanceof Response) return openaiOrErrResponse;

  // Type guard to check if openaiOrErrResponse is a valid OpenAI instance
  function isOpenAIInstance(obj: unknown): obj is OpenAI {
    // Adjust this check based on your OpenAI class/interface
    return typeof obj === "object" && obj !== null && "audio" in obj;
  }

  if (!isOpenAIInstance(openaiOrErrResponse)) {
    return new Response("Invalid OpenAI instance", { status: 500 });
  }

  return await createOpenaiAudioSpeech({ openai: openaiOrErrResponse, payload });
};

```

- You may need to import or define the `OpenAI` type/interface at the top of the file if it's not already present.
- Adjust the `isOpenAIInstance` logic to match the actual shape of your OpenAI instance.
</issue_to_address>


@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Nov 1, 2025

Labels

Model Provider (Model provider related), size:XL (This PR changes 500-999 lines, ignoring generated files)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants