✨ Handle llm error messages from workflow #948
base: main
Conversation
98cb688 to 1a11306 (Compare)
Walkthrough

This pull request introduces structured LLM error tracking and handling across the extension. It adds new LLMError type definitions and a createLLMError factory in the shared types, updates the VSCode extension with a new test command and error initialization handling, classifies workflow errors into specific LLM error categories, and extends both the Analysis and Resolutions UI pages to display and dismiss LLM errors. The .gitignore file is also updated with additional VSCode and Eclipse artifact patterns.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

The changes span 9 files with a mix of simple additions (types, .gitignore patterns, initialization) and moderate complexity (error classification logic, state management, filtered UI rendering with callbacks). The changes follow a consistent pattern of threading LLM error support through the codebase, requiring an understanding of the error flow from backend through state management to UI display. While individually straightforward, the interconnected nature of the changes across multiple components necessitates careful review.

Possibly related PRs
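For orientation, here is a minimal sketch of the shared error types this walkthrough refers to, reconstructed from references elsewhere in this review (LLMError at types.ts 159-164, createLLMError at 220-266). Field names beyond message, error, and timestamp, as well as the exact message wording, are assumptions, not the actual source:

// Sketch only: reconstructed from review references, not the actual source.
export interface LLMError {
  type: string;      // error category (assumed field)
  message: string;   // user-friendly message shown in UI alerts
  error?: string;    // optional raw provider error, shown under "Technical details"
  timestamp: string; // ISO string; doubles as the dismissal/React key
}

// Factory mirroring createConfigError; these method names appear in the review diffs.
export const createLLMError = {
  llmTimeout: (): LLMError => ({
    type: "llm-timeout",
    message: "The LLM request timed out. Please try again.",
    timestamp: new Date().toISOString(),
  }),
  llmRateLimit: (): LLMError => ({
    type: "llm-rate-limit",
    message: "The LLM provider rate-limited the request. Wait and retry.",
    timestamp: new Date().toISOString(),
  }),
  llmRequestFailed: (error: string): LLMError => ({
    type: "llm-request-failed",
    message: "Failed to get a response from the LLM.",
    error,
    timestamp: new Date().toISOString(),
  }),
  // llmContextLimit() and llmResponseParseFailed(error) follow the same shape.
};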
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (5)
vscode/core/package.json (1)
186-191: Consider hiding the test command in production.

This test command is currently visible in the command palette. For testing/development commands, consider adding a visibility constraint in the menus.commandPalette section:

  { "command": "konveyor.testLLMError", "when": "never" }

Or document whether this command is intentionally available to users for testing purposes.
vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)
346-348: Consider extracting the hardcoded prefix to a constant.

The prefix "Failed to get llm response" is used in two places (lines 346 and 348). Consider extracting it to a constant at the module level to ensure consistency and ease future updates:

  const LLM_ERROR_PREFIX = "Failed to get llm response - ";

Then use it:

  if (errorMessage.includes(LLM_ERROR_PREFIX.replace(" - ", ""))) {
    const actualError = errorMessage.replace(LLM_ERROR_PREFIX, "");
    // ...
  }

webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)
76-115: LGTM! Well-structured error display with good UX.

The LLM error rendering follows the existing pattern for config errors and includes thoughtful touches like:
- Danger variant to distinguish severity
- Dismissible alerts with close buttons
- Collapsible technical details to avoid overwhelming users
Using error.timestamp as the key should be safe, since timestamps are generated via new Date().toISOString() and collisions are extremely unlikely in practice.

Consider extracting the inline styles to constants or CSS classes for better maintainability:

  const styles = {
    detailsSummary: { cursor: "pointer" },
    errorPre: {
      fontSize: "12px",
      whiteSpace: "pre-wrap" as const,
      marginTop: "8px",
      padding: "8px",
      backgroundColor: "#f5f5f5",
      borderRadius: "4px",
    },
  };

  // Then use: style={styles.errorPre}

vscode/core/src/commands.ts (2)
229-253: Consider adding the initialization error to the llmErrors array.

The try-catch around workflow initialization is excellent for resilience and properly handles cleanup. However, for consistency with the new error tracking system, consider also adding the error to llmErrors:

  state.mutateData((draft) => {
    draft.isFetchingSolution = false;
    draft.solutionState = "failedOnSending";
    draft.chatMessages.push({
      messageToken: `m${Date.now()}`,
      kind: ChatMessageType.String,
      value: { message: `Workflow initialization failed: ${errorMessage}` },
      timestamp: new Date().toISOString(),
    });
+   draft.llmErrors.push(createLLMError.workflowInitializationFailed(errorMessage));
  });

This would make the error dismissible in the UI and provide consistent error tracking across all LLM-related failures.
982-1037: Useful development command. Consider marking it as internal.

The testLLMError command is valuable for development and for testing the error display UI. However, consider adding a comment or configuration guard to prevent accidental use in production:

  // Test command for simulating LLM errors (development/testing only)
  // This command should be removed or guarded before production release
  [`${EXTENSION_NAME}.testLLMError`]: async () => {
    // Optional: Add guard
    // if (!isDevelopmentMode()) {
    //   vscode.window.showWarningMessage("This command is only available in development mode");
    //   return;
    // }

Alternatively, register this command conditionally based on an environment variable or development mode flag.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- .gitignore (1 hunks)
- shared/src/types/types.ts (3 hunks)
- vscode/core/package.json (1 hunks)
- vscode/core/src/commands.ts (6 hunks)
- vscode/core/src/extension.ts (1 hunks)
- vscode/core/src/utilities/ModifiedFiles/processMessage.ts (2 hunks)
- vscode/core/src/webviewMessageHandler.ts (2 hunks)
- webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (3 hunks)
- webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (2 hunks)
- webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (5 hunks)
- webview-ui/src/context/ExtensionStateContext.tsx (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-30T18:00:23.969Z
Learnt from: djzager
PR: konveyor/editor-extensions#885
File: vscode/src/commands.ts:18-18
Timestamp: 2025-09-30T18:00:23.969Z
Learning: In the konveyor/editor-extensions repository, the commands konveyor.configureLabelSelector and konveyor.configureSourcesTargets were previously removed along with their handlers. Any PR that removes their definitions from vscode/package.json is cleanup work, not adding new functionality.
Applied to files:
vscode/core/package.json
🧬 Code graph analysis (5)
vscode/core/src/commands.ts (3)
- vscode/core/src/client/tracer.ts (1): logger (60-65)
- vscode/core/src/utilities/constants.ts (1): EXTENSION_NAME (18-18)
- shared/src/types/types.ts (1): createLLMError (220-266)
vscode/core/src/webviewMessageHandler.ts (1)
- vscode/core/src/extensionState.ts (1): ExtensionState (19-53)
vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)
- shared/src/types/types.ts (1): createLLMError (220-266)
webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (1)
- shared/src/types/types.ts (1): LLMError (159-164)
webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)
- shared/src/types/types.ts (2): ConfigError (166-170), LLMError (159-164)
🔇 Additional comments (17)
.gitignore (1)
20-21: LGTM! Standard IDE artifacts correctly excluded.

The added patterns appropriately exclude VSCode test artifacts and Eclipse metadata from version control.
vscode/core/src/extension.ts (1)
92-92: LGTM! Consistent initialization.

The llmErrors field is properly initialized as an empty array, following the same pattern as configErrors and other array fields in the state.

vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)
353-372: Error categorization relies on fragile string matching.

The error categorization logic uses string matching on error messages, which is brittle and may break if the upstream workflow changes error message formats. The presence of many variations for context-limit errors (lines 358-362) suggests the error message format is already inconsistent.
Consider requesting the workflow layer to emit structured error types instead of relying on string parsing, or document the expected error message formats that must be maintained for compatibility.
As a user, would you like me to search for documentation on the expected error message formats from the @editor-extensions/agenticworkflow module?
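To illustrate the suggestion above, here is a hypothetical structured error payload the workflow layer could emit instead of prefixed strings. These names are illustrative only and are not the current @editor-extensions/agenticworkflow API:

// Hypothetical structured payload; NOT the current workflow API.
type WorkflowLLMError =
  | { kind: "timeout" }
  | { kind: "rate-limit" }
  | { kind: "context-limit" }
  | { kind: "parse-failed"; raw: string }
  | { kind: "request-failed"; raw: string };

// Consumers could then switch on a stable discriminant instead of substring matching:
function describe(err: WorkflowLLMError): string {
  switch (err.kind) {
    case "timeout": return "LLM request timed out";
    case "rate-limit": return "LLM provider rate limit hit";
    case "context-limit": return "Model context window exceeded";
    case "parse-failed": return `Could not parse LLM response: ${err.raw}`;
    case "request-failed": return `LLM request failed: ${err.raw}`;
  }
}

vscode/core/src/webviewMessageHandler.ts (1)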
262-266: LGTM! Clear and correct implementation.

The CLEAR_LLM_ERRORS action handler properly clears the llmErrors array using the state mutation pattern consistent with the rest of the codebase.
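For reference, the handler plausibly reduces to something like the following sketch, assuming the state.mutateData pattern quoted elsewhere in this review; the actual surrounding switch is not shown in the diff:

// Sketch, assuming the mutateData pattern quoted elsewhere in this review.
// ExtensionState comes from vscode/core/src/extensionState.ts (19-53).
function handleClearLLMErrors(state: ExtensionState): void {
  state.mutateData((draft) => {
    draft.llmErrors = []; // reset structured LLM errors; UI alerts disappear
  });
}

webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (2)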
241-241: Verify isToast usage inside PageSection.

The AlertGroup has isToast={true} but is rendered inside a PageSection. The isToast prop typically positions alerts as floating toasts at the page level, which may not work correctly when constrained within a section. Consider removing isToast or moving the AlertGroup to a higher level in the component tree if toast-style positioning is desired.
239-278: LLM error alerts implemented correctly.

The error alert rendering logic properly filters dismissed errors and provides good UX with collapsible technical details. The dismissal state is local to the component, meaning dismissed errors will reappear after navigation; this appears intentional, ensuring users don't permanently miss important errors.
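A minimal sketch of the timestamp-keyed local dismissal pattern described above; the hook name and import path are assumptions for illustration:

// Illustrative sketch; hook name and import path are assumptions.
import { useState } from "react";
import type { LLMError } from "@editor-extensions/shared"; // import path assumed

function useDismissableErrors(errors: LLMError[]) {
  // Local state: dismissed errors reappear after navigation, as noted above.
  const [dismissed, setDismissed] = useState<Set<string>>(new Set());
  const visible = errors.filter((e) => !dismissed.has(e.timestamp));
  const dismiss = (timestamp: string) =>
    setDismissed((prev) => new Set([...prev, timestamp]));
  return { visible, dismiss };
}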
webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (1)
76-76: LGTM! Clean LLM error handling integration.

The LLM error handling is properly integrated with defensive defaults, consistent dismissal patterns, and clean state management. The filtered errors and dismiss callback are correctly passed to ConfigAlerts.

Also applies to: 91-91, 218-226
shared/src/types/types.ts (3)
129-129: LGTM! Consistent with existing error tracking.

The llmErrors field is well-placed alongside configErrors and follows the same pattern for centralized error management.
150-164: LGTM! Well-structured error type system.

The LLMError interface follows the established ConfigError pattern while adding a timestamp field, which is essential for uniqueness and user dismissal tracking. The error types comprehensively cover common LLM failure scenarios.
220-266: LGTM! Clean factory pattern with actionable messages.

The createLLMError factory mirrors the createConfigError pattern effectively. Each method provides user-friendly messages with clear guidance. Consistent use of new Date().toISOString() ensures timestamp uniformity across all error types.

webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (3)
2-10: LGTM! Appropriate imports for dismissible alerts.

The addition of AlertActionCloseButton and the type-only import of LLMError are necessary for the new functionality.
13-31: LGTM! Well-designed props with sensible defaults.

The optional props with default values allow backward compatibility. Using the timestamp as the dismissal identifier is appropriate given its uniqueness.
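For context, the prop surface implied by this comment and by the AnalysisPage diff later in this review (llmErrors and onDismissLLMError appear there verbatim) would look roughly like the following; the optionality and defaults are assumptions:

// Assumed prop shape; llmErrors/onDismissLLMError appear in the AnalysisPage diff.
interface ConfigAlertsProps {
  configErrors: ConfigError[];
  llmErrors?: LLMError[]; // assumed to default to [] for backward compatibility
  onDismissLLMError?: (timestamp: string) => void; // dismissal keyed by timestamp
}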
33-37: LGTM! Alert visibility logic properly updated.

The shouldShowAlerts condition correctly includes LLM errors while preserving existing checks.

vscode/core/src/commands.ts (4)
18-24: LGTM! Necessary import for error handling.

The createLLMError import is required for the workflow initialization error handling and the test command.
211-211: LGTM! Proper cleanup of previous errors.

Clearing llmErrors when starting a new solution prevents stale errors from confusing users and is consistent with the clearing of chatMessages above.
299-303: LGTM! Centralized error handling approach.

The comment clearly explains that error handling is delegated to the workflow message processor, which prevents duplicate state mutations and centralizes error-handling logic.
329-345: LGTM! Comprehensive error handling and cleanup.

The error handling properly extracts error messages, adds them to the chat for user visibility, and ensures state cleanup to prevent stuck "fetching" states.
  activeDecorators: {},
  solutionServerConnected: false,
  isWaitingForUserInteraction: false,
  llmErrors: [],
Missing array validation for llmErrors.
The llmErrors field is added to the default state but is not included in the array validation logic in getInitialState (lines 37-49) and handleMessage (lines 73-85). This inconsistency could cause runtime errors if llmErrors is received as undefined or in an invalid format.
Apply this diff to add array validation:
return {
...defaultState,
...windowData,
ruleSets: Array.isArray(windowData.ruleSets) ? windowData.ruleSets : [],
enhancedIncidents: Array.isArray(windowData.enhancedIncidents)
? windowData.enhancedIncidents
: [],
chatMessages: Array.isArray(windowData.chatMessages) ? windowData.chatMessages : [],
configErrors: Array.isArray(windowData.configErrors) ? windowData.configErrors : [],
profiles: Array.isArray(windowData.profiles) ? windowData.profiles : [],
+ llmErrors: Array.isArray(windowData.llmErrors) ? windowData.llmErrors : [],
activeDecorators: windowData.activeDecorators || {},
isWaitingForUserInteraction: windowData.isWaitingForUserInteraction || false,
};And in the handleMessage handler around line 80:
const safeData: ExtensionData = {
...defaultState,
...event.data,
ruleSets: Array.isArray(event.data.ruleSets) ? event.data.ruleSets : [],
enhancedIncidents: Array.isArray(event.data.enhancedIncidents)
? event.data.enhancedIncidents
: [],
chatMessages: Array.isArray(event.data.chatMessages) ? event.data.chatMessages : [],
configErrors: Array.isArray(event.data.configErrors) ? event.data.configErrors : [],
profiles: Array.isArray(event.data.profiles) ? event.data.profiles : [],
+ llmErrors: Array.isArray(event.data.llmErrors) ? event.data.llmErrors : [],
activeDecorators: event.data.activeDecorators || {},
isWaitingForUserInteraction: event.data.isWaitingForUserInteraction || false,
};

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
return {
  ...defaultState,
  ...windowData,
  ruleSets: Array.isArray(windowData.ruleSets) ? windowData.ruleSets : [],
  enhancedIncidents: Array.isArray(windowData.enhancedIncidents)
    ? windowData.enhancedIncidents
    : [],
  chatMessages: Array.isArray(windowData.chatMessages) ? windowData.chatMessages : [],
  configErrors: Array.isArray(windowData.configErrors) ? windowData.configErrors : [],
  profiles: Array.isArray(windowData.profiles) ? windowData.profiles : [],
  llmErrors: Array.isArray(windowData.llmErrors) ? windowData.llmErrors : [],
  activeDecorators: windowData.activeDecorators || {},
  isWaitingForUserInteraction: windowData.isWaitingForUserInteraction || false,
};
const safeData: ExtensionData = {
  ...defaultState,
  ...event.data,
  ruleSets: Array.isArray(event.data.ruleSets) ? event.data.ruleSets : [],
  enhancedIncidents: Array.isArray(event.data.enhancedIncidents)
    ? event.data.enhancedIncidents
    : [],
  chatMessages: Array.isArray(event.data.chatMessages) ? event.data.chatMessages : [],
  configErrors: Array.isArray(event.data.configErrors) ? event.data.configErrors : [],
  profiles: Array.isArray(event.data.profiles) ? event.data.profiles : [],
  llmErrors: Array.isArray(event.data.llmErrors) ? event.data.llmErrors : [],
  activeDecorators: event.data.activeDecorators || {},
  isWaitingForUserInteraction: event.data.isWaitingForUserInteraction || false,
};
🤖 Prompt for AI Agents
In webview-ui/src/context/ExtensionStateContext.tsx around line 27 and in the
getInitialState block (lines ~37-49) and handleMessage handler (around lines
~73-85), the default state includes llmErrors: [] but there is no validation to
ensure incoming state messages contain llmErrors as an array; add validation
similar to other array fields: when building initial state validate that
Array.isArray(llmErrors) ? llmErrors : [] and in handleMessage when merging
incoming payload ensure llmErrors is checked and set to [] if missing or not an
array, so runtime code always sees a valid array.
Signed-off-by: Ian Bolton <[email protected]>
Signed-off-by: Ian Bolton <[email protected]>
1a11306 to cb3d50e (Compare)
Actionable comments posted: 0
🧹 Nitpick comments (9)
vscode/core/package.json (1)
186-191: Hide the dev-only “Test LLM Error” command from the Command Palette.

This looks like a developer tool. Suggest hiding it from the palette to avoid user confusion. Add it to contributes.menus.commandPalette with when: "never":

  "commandPalette": [
    {
      "command": "konveyor.expandSingleIssue",
      "when": "never"
    },
    {
      "command": "konveyor.loadRuleSets",
      "when": "never"
+   },
+   {
+     "command": "konveyor.testLLMError",
+     "when": "never"
    }
  ]
339-391: Make LLM error parsing robust, preserve details, cap list, and sanitize
- Parsing: Use startsWith on the known prefix; current replace() after includes() may strip text mid-string.
- Diagnostics: Pass actualError as .error for timeout/rate-limit/context-limit too (helps debugging).
- Bound growth: Cap draft.llmErrors length to avoid unbounded memory.
- Privacy: Sanitize secrets before storing/logging raw provider errors.
- if (errorMessage.includes("Failed to get llm response")) {
-   // Extract the actual error message after the prefix
-   const actualError = errorMessage.replace("Failed to get llm response - ", "");
+ const llmPrefix = "Failed to get llm response - ";
+ if (errorMessage.startsWith(llmPrefix)) {
+   // Extract the actual error message after the exact prefix
+   const rawActual = errorMessage.slice(llmPrefix.length);
+   const sanitize = (s: string) =>
+     s
+       // redact common secrets/tokens
+       .replace(/(sk-[A-Za-z0-9]{20,})/g, "<redacted>")
+       .replace(/(Bearer\s+[A-Za-z0-9._-]{10,})/gi, "Bearer <redacted>")
+       .replace(/api[_-]?key\s*[:=]\s*["']?[^"'\s]+/gi, "api_key=<redacted>");
+   const actualError = sanitize(rawActual);
    const lowerError = actualError.toLowerCase();
    // Categorize the LLM error based on the actual error content
    let llmError;
    if (lowerError.includes("timeout") || lowerError.includes("timed out")) {
-     llmError = createLLMError.llmTimeout();
+     llmError = createLLMError.llmTimeout();
+     llmError.error = actualError;
    } else if (lowerError.includes("rate limit") || lowerError.includes("429")) {
-     llmError = createLLMError.llmRateLimit();
+     llmError = createLLMError.llmRateLimit();
+     llmError.error = actualError;
    } else if (
      lowerError.includes("context length") ||
      lowerError.includes("token limit") ||
      lowerError.includes("context_length_exceeded") ||
      lowerError.includes("max_tokens")
    ) {
-     llmError = createLLMError.llmContextLimit();
+     llmError = createLLMError.llmContextLimit();
+     llmError.error = actualError;
    } else if (
      lowerError.includes("parse") ||
      lowerError.includes("json") ||
      lowerError.includes("invalid response")
    ) {
      llmError = createLLMError.llmResponseParseFailed(actualError);
    } else {
      llmError = createLLMError.llmRequestFailed(actualError);
    }
-   state.mutateData((draft) => {
-     draft.llmErrors.push(llmError);
-   });
+   state.mutateData((draft) => {
+     draft.llmErrors.push(llmError);
+     // keep last 20
+     if (draft.llmErrors.length > 20) {
+       draft.llmErrors = draft.llmErrors.slice(-20);
+     }
+   });
136-147: Unify dismissal across pages.

Dismissal is local to this page; AnalysisPage tracks its own set. Consider lifting dismissal to a shared context (or extension state) so that dismissing in one view hides the error in the other.
239-278: Avoid duplicated LLM alert rendering logic.

ConfigAlerts already renders LLM errors. To reduce duplication, extract a small LLMErrorAlerts component, as sketched below, and reuse it here and in ConfigAlerts/AnalysisPage.
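A possible shape for such a shared component, assuming the PatternFly Alert API used elsewhere in this PR; the component name, props, and import path are suggestions, not existing code:

// Suggested shared component; name, props, and import path are illustrative.
import React from "react";
import { Alert, AlertActionCloseButton, AlertGroup } from "@patternfly/react-core";
import type { LLMError } from "@editor-extensions/shared"; // import path assumed

interface LLMErrorAlertsProps {
  errors: LLMError[];
  onDismiss: (timestamp: string) => void;
}

export const LLMErrorAlerts: React.FC<LLMErrorAlertsProps> = ({ errors, onDismiss }) => (
  <AlertGroup>
    {errors.map((err) => (
      <Alert
        key={err.timestamp}
        variant="danger"
        title={err.message}
        actionClose={<AlertActionCloseButton onClose={() => onDismiss(err.timestamp)} />}
      >
        {err.error && (
          <details>
            <summary>Technical details</summary>
            <pre>{err.error}</pre>
          </details>
        )}
      </Alert>
    ))}
  </AlertGroup>
);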
webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (1)
216-225: Fix minor lint: wrap single-argument arrow parameters in parentheses.

Prettier/ESLint suggests parentheses. No behavior change.

- llmErrors={rawLLMErrors.filter(error => !dismissedLLMErrors.has(error.timestamp))}
+ llmErrors={rawLLMErrors.filter((error) => !dismissedLLMErrors.has(error.timestamp))}
  ...
- onDismissLLMError={(timestamp) => {
-   setDismissedLLMErrors(prev => new Set([...prev, timestamp]));
- }}
+ onDismissLLMError={(timestamp) => {
+   setDismissedLLMErrors((prev) => new Set([...prev, timestamp]));
+ }}

webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)
76-115: LGTM: clear LLM error surfacing with optional close + details.

Nice, self-contained and typed. As a small a11y polish, consider adding an aria-label to the details summary (“Technical details”) and ensure the Alert titles are unique per error when multiple appear.
vscode/core/src/commands.ts (3)
229-253: Also surface the init failure via llmErrors.

Add a structured LLM error alongside the chat message so UI alerts display it consistently.
  } catch (initError) {
    logger.error("Failed to initialize workflow", initError);
    const errorMessage = initError instanceof Error ? initError.message : String(initError);
    state.mutateData((draft) => {
      draft.isFetchingSolution = false;
      draft.solutionState = "failedOnSending";
+     // Structured LLM error for UI
+     draft.llmErrors.push(createLLMError.workflowInitializationFailed(errorMessage));
      draft.chatMessages.push({
        messageToken: `m${Date.now()}`,
        kind: ChatMessageType.String,
        value: { message: `Workflow initialization failed: ${errorMessage}` },
        timestamp: new Date().toISOString(),
      });
    });
329-340: Add a structured LLM “unknown” error on run() failure.

The catch currently adds a chat message only. Also push an llmUnknownError so alerts surface it.
- const errorMessage = err instanceof Error ? err.message : String(err);
+ const errorMessage = err instanceof Error ? err.message : String(err);
  // Ensure isFetchingSolution is reset on any error
  state.mutateData((draft) => {
    // Add to chat messages for visibility
    draft.chatMessages.push({
      messageToken: `m${Date.now()}`,
      kind: ChatMessageType.String,
      value: { message: `Error: ${errorMessage}` },
      timestamp: new Date().toISOString(),
    });
+   draft.llmErrors.push(createLLMError.llmUnknownError(errorMessage));
982-1037: Gate the test command behind a debug flag.

Prevent accidental use in production. Check konveyor.debug.webview (already defined in package.json) before proceeding.
  [`${EXTENSION_NAME}.testLLMError`]: async () => {
+   const debugEnabled = workspace.getConfiguration(EXTENSION_NAME).get<boolean>("debug.webview");
+   if (!debugEnabled) {
+     window.showWarningMessage("Test LLM Error is only available when 'konveyor.debug.webview' is enabled.");
+     return;
+   }
    const errorType = await vscode.window.showQuickPick(

Consider also adding the command to contributes.menus.commandPalette with when: "never" (see the package.json comment) to hide it from users.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- .gitignore (1 hunks)
- shared/src/types/types.ts (3 hunks)
- vscode/core/package.json (1 hunks)
- vscode/core/src/commands.ts (6 hunks)
- vscode/core/src/extension.ts (1 hunks)
- vscode/core/src/utilities/ModifiedFiles/processMessage.ts (2 hunks)
- webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (3 hunks)
- webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (2 hunks)
- webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (5 hunks)
- webview-ui/src/context/ExtensionStateContext.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- webview-ui/src/context/ExtensionStateContext.tsx
- .gitignore
- shared/src/types/types.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-30T18:00:23.969Z
Learnt from: djzager
PR: konveyor/editor-extensions#885
File: vscode/src/commands.ts:18-18
Timestamp: 2025-09-30T18:00:23.969Z
Learning: In the konveyor/editor-extensions repository, the commands konveyor.configureLabelSelector and konveyor.configureSourcesTargets were previously removed along with their handlers. Any PR that removes their definitions from vscode/package.json is cleanup work, not adding new functionality.
Applied to files:
vscode/core/package.json
🧬 Code graph analysis (4)
vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)
- shared/src/types/types.ts (1): createLLMError (220-266)
webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)
- shared/src/types/types.ts (2): ConfigError (166-170), LLMError (159-164)
vscode/core/src/commands.ts (3)
- vscode/core/src/client/tracer.ts (1): logger (60-65)
- vscode/core/src/utilities/constants.ts (1): EXTENSION_NAME (18-18)
- shared/src/types/types.ts (1): createLLMError (220-266)
webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (1)
- shared/src/types/types.ts (1): LLMError (159-164)
🪛 GitHub Check: Lint
webview-ui/src/components/AnalysisPage/AnalysisPage.tsx
[warning] 224-224: Replace prev with (prev)
[warning] 218-218: Replace error with (error)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build (linux)
- GitHub Check: Build (macos)
- GitHub Check: Build (windows)
🔇 Additional comments (4)
vscode/core/package.json (1)
190-190: Verify the codicon name renders.

Please confirm "$(testing-error-icon)" is a valid Codicon and renders across the VS Code versions targeted by engines.vscode "^1.93.0". If not, pick a known icon (e.g., "$(beaker)" or "$(error)").
vscode/core/src/extension.ts (1)
92-93: LGTM: initialize llmErrors in state.

Good default initialization; aligns with the new UI consumers.
vscode/core/src/commands.ts (2)
211-213: LGTM: clear stale LLM errors at start.

Prevents old errors from leaking into new runs.
298-303: Comment clarity is good.

Delegating workflow errors to processMessage is the right direction.
  },

  // Test command for simulating LLM errors (development only)
  [`${EXTENSION_NAME}.testLLMError`]: async () => {
did you mean to keep this?
Summary by CodeRabbit
New Features
Chores