
Conversation

@KevinZhao

@KevinZhao KevinZhao commented Oct 22, 2025

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 👷 build
  • ⚡️ perf
  • ✅ test
  • 📝 docs
  • 🔨 chore

🔀 Description of Change

Added comprehensive support for Azure OpenAI GPT-5 series models:

Model Bank & Pricing (packages/model-bank/src/aiModels/azure.ts):

  • GPT-5 Pro: $15/$120 per million tokens (input/output)
  • GPT-5 Codex: $1.25/$10 per million tokens with cache support
  • GPT-5: $1.25/$10 per million tokens with cache support
  • GPT-5 Mini: $0.25/$2 per million tokens with cache support
  • GPT-5 Nano: $0.05/$0.4 per million tokens with cache support
  • GPT-5 Chat: $1.25/$10 per million tokens with cache support
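To make the pricing entries above concrete, here is an illustrative model-bank card for GPT-5 Mini. The field names follow the reviewer's class diagram later in this thread; the exact shape in `azure.ts` may differ:

```typescript
// Illustrative only: mirrors the pricing.units structure described in the
// review, not copied verbatim from packages/model-bank/src/aiModels/azure.ts.
const gpt5Mini = {
  config: { deploymentName: 'gpt-5-mini' },
  contextWindowTokens: 400_000,
  displayName: 'GPT-5 Mini',
  id: 'gpt-5-mini',
  pricing: {
    units: [
      { name: 'textInput', rate: 0.25, strategy: 'fixed', unit: 'millionTokens' },
      { name: 'textInput_cacheRead', rate: 0.025, strategy: 'fixed', unit: 'millionTokens' },
      { name: 'textOutput', rate: 2, strategy: 'fixed', unit: 'millionTokens' },
    ],
  },
  type: 'chat',
};
```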

API Constants (packages/const/src/models.ts):

  • Added GPT-5 Pro, GPT-5 Codex to responses API models list
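Sketched roughly, the constants change is just new IDs added to a set (surrounding entries abridged; the lookup helper is hypothetical, for illustration only):

```typescript
// Abridged sketch of responsesAPIModels in packages/const/src/models.ts;
// the full set contains more entries than shown here.
const responsesAPIModels = new Set([
  'computer-use-preview-2025-03-11',
  'gpt-5-codex',
  'gpt-5-pro',
  'gpt-5-pro-2025-10-06',
]);

// Hypothetical helper to show how the set would be consumed.
const usesResponsesAPI = (model: string) => responsesAPIModels.has(model);
```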

Runtime Provider (packages/model-runtime/src/providers/azureOpenai/index.ts):

  • Updated system message role mapping for GPT-5 models
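As a rough sketch of that role-mapping change (the function name and the contents of `systemToUserModels` are assumptions, not the actual code in `index.ts`):

```typescript
// Hedged sketch: the set membership and function name are illustrative;
// only the branching mirrors the described change.
const systemToUserModels = new Set(['o1-mini', 'o1-preview']);

function mapSystemRole(model: string, role: string): string {
  if (role !== 'system') return role;
  // GPT-5 models now join o1/o3 in having the 'system' role remapped
  if (model.includes('o1') || model.includes('o3') || model.includes('gpt-5')) {
    return [...systemToUserModels].some((sub) => model.includes(sub))
      ? 'user'
      : 'developer';
  }
  return 'system';
}
```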

Note: per reviewer feedback, the model provider configuration was removed from src/config/modelProviders/azure.ts.

📝 Additional Information

The GPT-5 series models are defined in the model bank with complete specifications but are not exposed in the default provider configuration yet. They can be enabled once Azure officially releases these models.

Pricing aligns with OpenAI official rates and includes cache read pricing where applicable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@vercel

vercel bot commented Oct 22, 2025

Someone is attempting to deploy a commit to the LobeHub Community Team on Vercel.

A member of the Team first needs to authorize it.

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Oct 22, 2025
@sourcery-ai
Contributor

sourcery-ai bot commented Oct 22, 2025

Reviewer's Guide

This PR enriches Azure GPT-5 series support by embedding detailed pricing schemes into model definitions, registering the new GPT-5 variants in the provider config and constants list, and extending system-role mapping logic to encompass all GPT-5 models.

Class diagram for new Azure GPT-5 series model cards with pricing

classDiagram
class AIChatModelCard {
  abilities
  config
  contextWindowTokens
  description
  displayName
  enabled
  id
  maxOutput
  pricing
  releasedAt
  type
}
class Pricing {
  units
}
class PricingUnit {
  name
  rate
  strategy
  unit
}
AIChatModelCard --> Pricing
Pricing --> PricingUnit
AIChatModelCard "1" --o "1" Pricing
Pricing "1" --o "*" PricingUnit

class GPT5_Pro {
  abilities: functionCall, reasoning, structuredOutput, vision
  config: deploymentName = 'gpt-5-pro'
  contextWindowTokens: 400_000
  pricing: textInput $15, textOutput $120 per million tokens
}
class GPT5_Codex {
  abilities: functionCall, structuredOutput
  config: deploymentName = 'gpt-5-codex'
  contextWindowTokens: 400_000
  pricing: textInput $1.25, textOutput $10, textInput_cacheRead $0.125 per million tokens
}
class GPT5 {
  abilities: functionCall, reasoning, structuredOutput, vision
  config: deploymentName = 'gpt-5'
  contextWindowTokens: 400_000
  pricing: textInput $1.25, textOutput $10, textInput_cacheRead $0.125 per million tokens
}
class GPT5_Mini {
  abilities: functionCall, reasoning, structuredOutput, vision
  config: deploymentName = 'gpt-5-mini'
  contextWindowTokens: 400_000
  pricing: textInput $0.25, textOutput $2, textInput_cacheRead $0.025 per million tokens
}
class GPT5_Nano {
  abilities: functionCall, reasoning, structuredOutput, vision
  config: deploymentName = 'gpt-5-nano'
  contextWindowTokens: 400_000
  pricing: textInput $0.05, textOutput $0.4, textInput_cacheRead $0.005 per million tokens
}
class GPT5_Chat {
  abilities: vision
  config: deploymentName = 'gpt-5-chat'
  contextWindowTokens: 128_000
  pricing: textInput $1.25, textOutput $10, textInput_cacheRead $0.125 per million tokens
}
AIChatModelCard <|-- GPT5_Pro
AIChatModelCard <|-- GPT5_Codex
AIChatModelCard <|-- GPT5
AIChatModelCard <|-- GPT5_Mini
AIChatModelCard <|-- GPT5_Nano
AIChatModelCard <|-- GPT5_Chat

Class diagram for updated system-role mapping logic in LobeAzureOpenAI

classDiagram
class LobeAzureOpenAI {
  +sendMessage(message, model)
}
LobeAzureOpenAI : sendMessage(message, model)
LobeAzureOpenAI : // system role conversion logic
LobeAzureOpenAI : // now includes GPT-5 models

File-Level Changes

Change Details Files
Integrate pricing configurations for new GPT-5 series chat models
  • Added pricing.units structure with input/output rates for GPT-5 Pro, Codex, GPT-5, Mini, Nano and Chat
  • Included cache read pricing entries for applicable models
  • Set fixed millionTokens strategy and rates matching OpenAI official tiers
packages/model-bank/src/aiModels/azure.ts
Register GPT-5 series models in Azure provider config
  • Appended entries for GPT-5 Pro, Codex, GPT-5, Mini, Nano and Chat to Azure.chatModels
  • Configured abilities, contextWindowTokens, deploymentName and maxOutput for each new model
src/config/modelProviders/azure.ts
Update constant model list with GPT-5 identifiers
  • Inserted 'gpt-5-codex', 'gpt-5-pro', and 'gpt-5-pro-2025-10-06' into the responsesAPIModels set
packages/const/src/models.ts
Adjust system role mapping to include GPT-5 models
  • Extended system-to-user role conversion logic to detect 'gpt-5' in model names
packages/model-runtime/src/providers/azureOpenai/index.ts


@gru-agent
Contributor

gru-agent bot commented Oct 22, 2025

TestGru Assignment

Summary

Link CommitId Status Reason
Detail a3888b2 ✅ Finished

History Assignment

Files

File Pull Request
packages/model-runtime/src/providers/azureOpenai/index.ts ❌ Failed (I failed to setup the environment.)

Tip

You can @gru-agent and leave your feedback. TestGru will make adjustments based on your input

@lobehubbot
Member

👍 @KevinZhao

Thank you for raising your pull request and contributing to our community.
Please make sure you have followed our contributing guidelines. We will review it as soon as possible.
If you encounter any problems, please feel free to reach out to us.

@dosubot dosubot bot added the Model Provider Model provider related label Oct 22, 2025
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • The GPT-5 Chat entry in modelProviders/azure.ts is missing an enabled: true flag—please add it for consistency with the other models.
  • Several GPT-5 pricing blocks repeat identical cacheRead rates—consider extracting a shared unit definition or confirming the rates are consistent across all variants.
  • The system role mapping uses model.includes('gpt-5'), which may match unintended IDs—consider matching against a specific list or exact patterns for each GPT-5 variant.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The GPT-5 Chat entry in modelProviders/azure.ts is missing an `enabled: true` flag—please add it for consistency with the other models.
- Several GPT-5 pricing blocks repeat identical cacheRead rates—consider extracting a shared unit definition or confirming the rates are consistent across all variants.
- The system role mapping uses `model.includes('gpt-5')`, which may match unintended IDs—consider matching against a specific list or exact patterns for each GPT-5 variant.

## Individual Comments

### Comment 1
<location> `packages/const/src/models.ts:38-40` </location>
<code_context>
   'computer-use-preview-2025-03-11',
+  'gpt-5-codex',
+  'gpt-5-pro',
+  'gpt-5-pro-2025-10-06',
 ]);

</code_context>

<issue_to_address>
**suggestion:** Review the need for both 'gpt-5-pro' and 'gpt-5-pro-2025-10-06' in the model set.

If these are separate models, please clarify their distinctions; if not, consider merging to prevent confusion.

```suggestion
  'gpt-5-codex',
  // 'gpt-5-pro' is the general release, while 'gpt-5-pro-2025-10-06' is a dated preview version.
  'gpt-5-pro',
  'gpt-5-pro-2025-10-06',
```
</issue_to_address>

### Comment 2
<location> `packages/model-runtime/src/providers/azureOpenai/index.ts:54` </location>
<code_context>
       role:
         // Convert 'system' role to 'user' or 'developer' based on the model
-        (model.includes('o1') || model.includes('o3')) && message.role === 'system'
+        (model.includes('o1') || model.includes('o3') || model.includes('gpt-5')) &&
+        message.role === 'system'
           ? [...systemToUserModels].some((sub) => model.includes(sub))
</code_context>

<issue_to_address>
**issue:** Check for unintended matches with 'model.includes('gpt-5')'.

This approach may unintentionally match variants like 'gpt-5-codex' or 'gpt-5-mini'. Use a stricter comparison if you only want the base 'gpt-5' model.
</issue_to_address>
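One way to tighten the check along the lines this comment suggests (the helper name is hypothetical):

```typescript
// Matches only the base 'gpt-5' model, with an optional dated suffix,
// so variants like 'gpt-5-codex' or 'gpt-5-mini' are not caught.
const isBaseGpt5 = (model: string) => /^gpt-5(-\d{4}-\d{2}-\d{2})?$/.test(model);
```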



Removed all GPT-5 series model configurations from Azure provider:
- GPT-5 Pro
- GPT-5 Codex
- GPT-5
- GPT-5 Mini
- GPT-5 Nano
- GPT-5 Chat

These models are not yet officially released by Azure OpenAI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@codecov

codecov bot commented Oct 27, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.55%. Comparing base (bbc0379) to head (6dd59bc).
⚠️ Report is 72 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #9833      +/-   ##
==========================================
- Coverage   84.55%   84.55%   -0.01%     
==========================================
  Files         943      944       +1     
  Lines       64465    64508      +43     
  Branches     7604     7901     +297     
==========================================
+ Hits        54508    54542      +34     
- Misses       9957     9966       +9     
Flag Coverage Δ
app 80.21% <ø> (-0.01%) ⬇️
database 98.51% <ø> (ø)
packages/agent-runtime 99.63% <ø> (+<0.01%) ⬆️
packages/context-engine 93.51% <ø> (-0.03%) ⬇️
packages/electron-server-ipc 93.76% <ø> (ø)
packages/file-loaders 92.21% <ø> (ø)
packages/model-bank 100.00% <ø> (ø)
packages/model-runtime 92.16% <100.00%> (+0.01%) ⬆️
packages/prompts 77.21% <ø> (ø)
packages/python-interpreter 96.50% <ø> (ø)
packages/utils 94.50% <ø> (ø)
packages/web-crawler 97.07% <ø> (ø)

Flags with carried forward coverage won't be shown.

Components Coverage Δ
Store 74.88% <ø> (+0.01%) ⬆️
Services 61.64% <ø> (-0.07%) ⬇️
Server 77.39% <ø> (ø)
Libs 50.82% <ø> (ø)
Utils 75.00% <ø> (ø)
