
Conversation

@ibolton336 (Member) commented Oct 23, 2025

Summary by CodeRabbit

  • New Features

    • Implemented comprehensive LLM error tracking system supporting timeout, rate limits, context limits, parse failures, and request failures
    • Added dismissible error alerts in Analysis and Resolutions pages to display LLM errors
    • Added test command to simulate LLM error scenarios
  • Chores

    • Extended gitignore patterns for VSCode and Eclipse artifacts

@ibolton336 requested a review from a team as a code owner on October 23, 2025 16:25
@ibolton336 force-pushed the add-llm-error-support branch from 98cb688 to 1a11306 on October 23, 2025 16:25
@coderabbitai (bot) commented Oct 23, 2025

Walkthrough

This pull request introduces structured LLM error tracking and handling across the extension. It adds new LLMError type definitions and a createLLMError factory in the shared types, updates the VSCode extension with a new test command and initialization error handling, classifies workflow errors into specific LLM error categories, and extends both the Analysis and Resolutions UI pages to display and dismiss LLM errors. The .gitignore file is also updated with additional VSCode and Eclipse artifact patterns.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

The changes span 9 files with a mix of simple additions (types, .gitignore patterns, initialization) and moderate complexity (error classification logic, state management, filtered UI rendering with callbacks). The changes follow a consistent pattern of threading LLM error support through the codebase, requiring understanding of the error flow from backend through state management to UI display. While individually straightforward, the interconnected nature of the changes across multiple components necessitates careful review.

Suggested reviewers

  • djzager
  • pranavgaikwad
  • fabianvf

Pre-merge checks

❌ Failed checks (2 warnings)

  • Description Check: ⚠️ Warning
    Explanation: No pull request description was provided by the author. The description field is empty, which means there is no explanation of the changes, their motivation, implementation details, or testing approach. While the provided template focuses on title formatting conventions rather than specifying required description sections, a completely empty description is insufficient for understanding the scope and rationale of this relatively complex change that affects error handling across the extension.
    Resolution: The author should add a meaningful pull request description explaining what changes were made, why they were necessary, and any relevant context for reviewers. For a feature of this complexity, the description should cover the error types introduced, how errors flow through the system (workflow → state → UI), the new test command, and any testing that was performed to validate the implementation.
  • Docstring Coverage: ⚠️ Warning
    Explanation: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%.
    Resolution: Run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

  • Title Check: ✅ Passed
    Explanation: The PR title ":sparkles: Handle llm error messages from workflow" clearly and directly summarizes the main change in this pull request. The changeset introduces comprehensive LLM error handling across multiple layers: new error types and factories, command handlers, workflow message processing, and UI components for displaying and dismissing errors. The title accurately captures this core objective and uses the correct non-breaking feature prefix. The phrasing is specific enough that a reviewer scanning history would immediately understand the PR adds LLM error message handling functionality.


@coderabbitai (bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
vscode/core/package.json (1)

186-191: Consider hiding test command in production.

This test command is currently visible in the command palette. For testing/development commands, consider adding a visibility constraint in the menus.commandPalette section:

{
  "command": "konveyor.testLLMError",
  "when": "never"
}

Or document if this command is intentionally available to users for testing purposes.

vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)

346-348: Consider extracting hardcoded prefix to a constant.

The prefix "Failed to get llm response" is used in two places (lines 346 and 348). Consider extracting it to a constant at the module level to ensure consistency and ease future updates:

const LLM_ERROR_PREFIX = "Failed to get llm response - ";

Then use it:

if (errorMessage.includes(LLM_ERROR_PREFIX.replace(" - ", ""))) {
  const actualError = errorMessage.replace(LLM_ERROR_PREFIX, "");
  // ...
}
webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)

76-115: LGTM! Well-structured error display with good UX.

The LLM error rendering follows the existing pattern for config errors and includes thoughtful touches like:

  • Danger variant to distinguish severity
  • Dismissible alerts with close buttons
  • Collapsible technical details to avoid overwhelming users

Using error.timestamp as the key should be safe since timestamps are generated via new Date().toISOString() and collisions are extremely unlikely in practice.
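If collision-proof keys are ever a concern, a composite key is a cheap alternative. A minimal sketch (the LLMError field names follow this review; the shared import path is an assumption):

import * as React from "react";
import { Alert } from "@patternfly/react-core";
// Import path for the shared types is assumed, not taken from the PR.
import type { LLMError } from "@editor-extensions/shared";

// Sketch only: combine the timestamp with the array index so that two
// errors created in the same millisecond still get distinct React keys.
export function LLMErrorAlerts({ llmErrors }: { llmErrors: LLMError[] }) {
  return (
    <>
      {llmErrors.map((error, index) => (
        <Alert key={`${error.timestamp}-${index}`} variant="danger" title={error.message} />
      ))}
    </>
  );
}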

Consider extracting the inline styles to constants or CSS classes for better maintainability:

const styles = {
  detailsSummary: { cursor: "pointer" },
  errorPre: {
    fontSize: "12px",
    whiteSpace: "pre-wrap" as const,
    marginTop: "8px",
    padding: "8px",
    backgroundColor: "#f5f5f5",
    borderRadius: "4px",
  },
};

// Then use: style={styles.errorPre}
vscode/core/src/commands.ts (2)

229-253: Consider adding the initialization error to llmErrors array.

The try-catch around workflow initialization is excellent for resilience and properly handles cleanup. However, for consistency with the new error tracking system, consider also adding the error to llmErrors:

 state.mutateData((draft) => {
   draft.isFetchingSolution = false;
   draft.solutionState = "failedOnSending";
   draft.chatMessages.push({
     messageToken: `m${Date.now()}`,
     kind: ChatMessageType.String,
     value: { message: `Workflow initialization failed: ${errorMessage}` },
     timestamp: new Date().toISOString(),
   });
+  draft.llmErrors.push(createLLMError.workflowInitializationFailed(errorMessage));
 });

This would make the error dismissible in the UI and provide consistent error tracking across all LLM-related failures.


982-1037: Useful development command. Consider marking as internal.

The testLLMError command is valuable for development and testing the error display UI. However, consider adding a comment or configuration guard to prevent accidental use in production:

// Test command for simulating LLM errors (development/testing only)
// This command should be removed or guarded before production release
[`${EXTENSION_NAME}.testLLMError`]: async () => {
  // Optional: Add guard
  // if (!isDevelopmentMode()) {
  //   vscode.window.showWarningMessage("This command is only available in development mode");
  //   return;
  // }

Alternatively, register this command conditionally based on an environment variable or development mode flag.
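A minimal sketch of such a guard using the standard ExtensionContext.extensionMode API (the helper function name here is illustrative, not from this PR):

import * as vscode from "vscode";

// Sketch: register the test command only when the extension host runs
// in development mode, so it never ships to users in production builds.
export function registerTestLLMErrorCommand(context: vscode.ExtensionContext) {
  if (context.extensionMode !== vscode.ExtensionMode.Development) {
    return; // skip test-only commands outside development mode
  }
  context.subscriptions.push(
    vscode.commands.registerCommand("konveyor.testLLMError", async () => {
      // ...simulate an LLM error for UI testing
    }),
  );
}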

📜 Review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 711c2bb and 1a11306.

📒 Files selected for processing (11)
  • .gitignore (1 hunks)
  • shared/src/types/types.ts (3 hunks)
  • vscode/core/package.json (1 hunks)
  • vscode/core/src/commands.ts (6 hunks)
  • vscode/core/src/extension.ts (1 hunks)
  • vscode/core/src/utilities/ModifiedFiles/processMessage.ts (2 hunks)
  • vscode/core/src/webviewMessageHandler.ts (2 hunks)
  • webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (3 hunks)
  • webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (2 hunks)
  • webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (5 hunks)
  • webview-ui/src/context/ExtensionStateContext.tsx (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-30T18:00:23.969Z
Learnt from: djzager
PR: konveyor/editor-extensions#885
File: vscode/src/commands.ts:18-18
Timestamp: 2025-09-30T18:00:23.969Z
Learning: In the konveyor/editor-extensions repository, the commands konveyor.configureLabelSelector and konveyor.configureSourcesTargets were previously removed along with their handlers. Any PR that removes their definitions from vscode/package.json is cleanup work, not adding new functionality.

Applied to files:

  • vscode/core/package.json
🧬 Code graph analysis (5)
vscode/core/src/commands.ts (3)
vscode/core/src/client/tracer.ts (1)
  • logger (60-65)
vscode/core/src/utilities/constants.ts (1)
  • EXTENSION_NAME (18-18)
shared/src/types/types.ts (1)
  • createLLMError (220-266)
vscode/core/src/webviewMessageHandler.ts (1)
vscode/core/src/extensionState.ts (1)
  • ExtensionState (19-53)
vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)
shared/src/types/types.ts (1)
  • createLLMError (220-266)
webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (1)
shared/src/types/types.ts (1)
  • LLMError (159-164)
webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)
shared/src/types/types.ts (2)
  • ConfigError (166-170)
  • LLMError (159-164)
🔇 Additional comments (17)
.gitignore (1)

20-21: LGTM! Standard IDE artifacts correctly excluded.

The added patterns appropriately exclude VSCode test artifacts and Eclipse metadata from version control.

vscode/core/src/extension.ts (1)

92-92: LGTM! Consistent initialization.

The llmErrors field is properly initialized as an empty array, following the same pattern as configErrors and other array fields in the state.

vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)

353-372: Error categorization relies on fragile string matching.

The error categorization logic uses string matching on error messages, which is brittle and may break if the upstream workflow changes error message formats. The presence of many variations for context limit errors (lines 358-362) suggests the error message format is already inconsistent.

Consider requesting the workflow layer to emit structured error types instead of relying on string parsing, or document the expected error message formats that must be maintained for compatibility.

Would you like me to search for documentation on the expected error message formats from the @editor-extensions/agentic workflow module?
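For illustration, a structured contract from the workflow layer might look like the following sketch (the names are hypothetical, not the actual @editor-extensions/agentic API):

// Hypothetical discriminated union the workflow could emit instead of a
// formatted string; the extension would then categorize with an
// exhaustive switch rather than substring matching.
type WorkflowLLMError =
  | { kind: "timeout"; timeoutMs: number }
  | { kind: "rate-limit"; retryAfterSec?: number }
  | { kind: "context-limit"; tokensRequested: number; tokensMax: number }
  | { kind: "parse-failure"; raw: string }
  | { kind: "request-failed"; status?: number; message: string };

function describe(err: WorkflowLLMError): string {
  switch (err.kind) {
    case "timeout":
      return `LLM request timed out after ${err.timeoutMs} ms`;
    case "rate-limit":
      return "LLM provider rate limit hit";
    case "context-limit":
      return `Context limit exceeded (${err.tokensRequested}/${err.tokensMax} tokens)`;
    case "parse-failure":
      return "Could not parse LLM response";
    case "request-failed":
      return err.message;
  }
}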

vscode/core/src/webviewMessageHandler.ts (1)

262-266: LGTM! Clear and correct implementation.

The CLEAR_LLM_ERRORS action handler properly clears the llmErrors array using the state mutation pattern consistent with the rest of the codebase.

webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (2)

241-241: Verify isToast usage inside PageSection.

The AlertGroup has isToast={true} but is rendered inside a PageSection. The isToast prop typically positions alerts as floating toasts at the page level, which may not work correctly when constrained within a section. Consider removing isToast or moving the AlertGroup to a higher level in the component tree if toast-style positioning is desired.


239-278: LLM error alerts implemented correctly.

The error alert rendering logic properly filters dismissed errors and provides a good UX with collapsible technical details. The dismissal state is local to the component, meaning dismissed errors will reappear after navigation, which appears intentional for ensuring users don't permanently miss important errors.

webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (1)

76-76: LGTM! Clean LLM error handling integration.

The LLM error handling is properly integrated with defensive defaults, consistent dismissal patterns, and clean state management. The filtered errors and dismiss callback are correctly passed to ConfigAlerts.

Also applies to: 91-91, 218-226

shared/src/types/types.ts (3)

129-129: LGTM! Consistent with existing error tracking.

The llmErrors field is well-placed alongside configErrors and follows the same pattern for centralized error management.


150-164: LGTM! Well-structured error type system.

The LLMError interface follows the established ConfigError pattern while adding a timestamp field, which is essential for uniqueness and user dismissal tracking. The error types comprehensively cover common LLM failure scenarios.


220-266: LGTM! Clean factory pattern with actionable messages.

The createLLMError factory mirrors the createConfigError pattern effectively. Each method provides user-friendly messages with clear guidance. Consistent use of new Date().toISOString() ensures timestamp uniformity across all error types.
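For readers without the diff open, the factory shape being praised is roughly the following (a sketch; the real implementation lives in shared/src/types/types.ts, and the field names are inferred from this review rather than copied from the source):

// Sketch of the createLLMError factory pattern; two methods shown,
// the others (llmRateLimit, llmContextLimit, etc.) are elided.
interface LLMError {
  type: string;
  message: string;
  error?: string;
  timestamp: string;
}

const createLLMError = {
  llmTimeout: (): LLMError => ({
    type: "llm-timeout",
    message: "The LLM request timed out. Try again or increase the timeout.",
    timestamp: new Date().toISOString(),
  }),
  llmRequestFailed: (error: string): LLMError => ({
    type: "llm-request-failed",
    message: "The LLM request failed. See technical details below.",
    error,
    timestamp: new Date().toISOString(),
  }),
};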

webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (3)

2-10: LGTM! Appropriate imports for dismissible alerts.

The addition of AlertActionCloseButton and type-only import of LLMError are necessary for the new functionality.


13-31: LGTM! Well-designed props with sensible defaults.

The optional props with default values allow backward compatibility. Using timestamp as the dismissal identifier is appropriate given its uniqueness.


33-37: LGTM! Alert visibility logic properly updated.

The shouldShowAlerts condition correctly includes LLM errors while preserving existing checks.

vscode/core/src/commands.ts (4)

18-24: LGTM! Necessary import for error handling.

The createLLMError import is required for the workflow initialization error handling and test command.


211-211: LGTM! Proper cleanup of previous errors.

Clearing llmErrors when starting a new solution prevents stale errors from confusing users and is consistent with the clearing of chatMessages above.


299-303: LGTM! Centralized error handling approach.

The comment clearly explains that error handling is delegated to the workflow message processor, which prevents duplicate state mutations and centralizes error handling logic.


329-345: LGTM! Comprehensive error handling and cleanup.

The error handling properly extracts error messages, adds them to chat for user visibility, and ensures state cleanup to prevent stuck "fetching" states.

activeDecorators: {},
solutionServerConnected: false,
isWaitingForUserInteraction: false,
llmErrors: [],
@coderabbitai (bot) commented on webview-ui/src/context/ExtensionStateContext.tsx

⚠️ Potential issue | 🟠 Major

Missing array validation for llmErrors.

The llmErrors field is added to the default state but is not included in the array validation logic in getInitialState (lines 37-49) and handleMessage (lines 73-85). This inconsistency could cause runtime errors if llmErrors is received as undefined or in an invalid format.

Apply this diff to add array validation:

       return {
         ...defaultState,
         ...windowData,
         ruleSets: Array.isArray(windowData.ruleSets) ? windowData.ruleSets : [],
         enhancedIncidents: Array.isArray(windowData.enhancedIncidents)
           ? windowData.enhancedIncidents
           : [],
         chatMessages: Array.isArray(windowData.chatMessages) ? windowData.chatMessages : [],
         configErrors: Array.isArray(windowData.configErrors) ? windowData.configErrors : [],
         profiles: Array.isArray(windowData.profiles) ? windowData.profiles : [],
+        llmErrors: Array.isArray(windowData.llmErrors) ? windowData.llmErrors : [],
         activeDecorators: windowData.activeDecorators || {},
         isWaitingForUserInteraction: windowData.isWaitingForUserInteraction || false,
       };

And in the handleMessage handler around line 80:

       const safeData: ExtensionData = {
         ...defaultState,
         ...event.data,
         ruleSets: Array.isArray(event.data.ruleSets) ? event.data.ruleSets : [],
         enhancedIncidents: Array.isArray(event.data.enhancedIncidents)
           ? event.data.enhancedIncidents
           : [],
         chatMessages: Array.isArray(event.data.chatMessages) ? event.data.chatMessages : [],
         configErrors: Array.isArray(event.data.configErrors) ? event.data.configErrors : [],
         profiles: Array.isArray(event.data.profiles) ? event.data.profiles : [],
+        llmErrors: Array.isArray(event.data.llmErrors) ? event.data.llmErrors : [],
         activeDecorators: event.data.activeDecorators || {},
         isWaitingForUserInteraction: event.data.isWaitingForUserInteraction || false,
       };

@ibolton336 force-pushed the add-llm-error-support branch from 1a11306 to cb3d50e on October 23, 2025 17:44
@coderabbitai (bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (9)
vscode/core/package.json (1)

186-191: Hide dev-only “Test LLM Error” from Command Palette

This looks like a developer tool. Suggest hiding it from the palette to avoid user confusion. Add it to contributes.menus.commandPalette with when: "never".

       "commandPalette": [
         {
           "command": "konveyor.expandSingleIssue",
           "when": "never"
         },
         {
           "command": "konveyor.loadRuleSets",
           "when": "never"
+        },
+        {
+          "command": "konveyor.testLLMError",
+          "when": "never"
         }
       ]
vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)

339-391: Make LLM error parsing robust, preserve details, cap list, and sanitize

  • Parsing: Use startsWith on the known prefix; current replace() after includes() may strip text mid-string.
  • Diagnostics: Pass actualError as .error for timeout/rate-limit/context-limit too (helps debugging).
  • Bound growth: Cap draft.llmErrors length to avoid unbounded memory.
  • Privacy: Sanitize secrets before storing/logging raw provider errors.
-      if (errorMessage.includes("Failed to get llm response")) {
-        // Extract the actual error message after the prefix
-        const actualError = errorMessage.replace("Failed to get llm response - ", "");
+      const llmPrefix = "Failed to get llm response - ";
+      if (errorMessage.startsWith(llmPrefix)) {
+        // Extract the actual error message after the exact prefix
+        const rawActual = errorMessage.slice(llmPrefix.length);
+        const sanitize = (s: string) =>
+          s
+            // redact common secrets/tokens
+            .replace(/(sk-[A-Za-z0-9]{20,})/g, "<redacted>")
+            .replace(/(Bearer\s+[A-Za-z0-9._-]{10,})/gi, "Bearer <redacted>")
+            .replace(/api[_-]?key\s*[:=]\s*["']?[^"'\s]+/gi, "api_key=<redacted>");
+        const actualError = sanitize(rawActual);
         const lowerError = actualError.toLowerCase();

         // Categorize the LLM error based on the actual error content
         let llmError;
         if (lowerError.includes("timeout") || lowerError.includes("timed out")) {
-          llmError = createLLMError.llmTimeout();
+          llmError = createLLMError.llmTimeout();
+          llmError.error = actualError;
         } else if (lowerError.includes("rate limit") || lowerError.includes("429")) {
-          llmError = createLLMError.llmRateLimit();
+          llmError = createLLMError.llmRateLimit();
+          llmError.error = actualError;
         } else if (
           lowerError.includes("context length") ||
           lowerError.includes("token limit") ||
           lowerError.includes("context_length_exceeded") ||
           lowerError.includes("max_tokens")
         ) {
-          llmError = createLLMError.llmContextLimit();
+          llmError = createLLMError.llmContextLimit();
+          llmError.error = actualError;
         } else if (
           lowerError.includes("parse") ||
           lowerError.includes("json") ||
           lowerError.includes("invalid response")
         ) {
           llmError = createLLMError.llmResponseParseFailed(actualError);
         } else {
           llmError = createLLMError.llmRequestFailed(actualError);
         }

-        state.mutateData((draft) => {
-          draft.llmErrors.push(llmError);
-        });
+        state.mutateData((draft) => {
+          draft.llmErrors.push(llmError);
+          // keep last 20
+          if (draft.llmErrors.length > 20) {
+            draft.llmErrors = draft.llmErrors.slice(-20);
+          }
+        });
webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (2)

136-147: Unify dismissal across pages

Dismissal is local to this page; AnalysisPage tracks its own set. Consider lifting dismissal to shared context (or extension state) so dismissing in one view hides in the other.
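A minimal sketch of lifting the dismissal set into shared React context (the provider and hook names are hypothetical; keyed by error.timestamp as elsewhere in this PR):

import React, { createContext, useCallback, useContext, useState } from "react";

// Hypothetical shared context so dismissing an error in one page hides
// it everywhere in the webview.
const DismissedLLMErrorsContext = createContext<{
  dismissed: Set<string>;
  dismiss: (timestamp: string) => void;
} | null>(null);

export function DismissedLLMErrorsProvider({ children }: { children: React.ReactNode }) {
  const [dismissed, setDismissed] = useState<Set<string>>(new Set());
  const dismiss = useCallback(
    (timestamp: string) => setDismissed((prev) => new Set([...prev, timestamp])),
    [],
  );
  return (
    <DismissedLLMErrorsContext.Provider value={{ dismissed, dismiss }}>
      {children}
    </DismissedLLMErrorsContext.Provider>
  );
}

export function useDismissedLLMErrors() {
  const ctx = useContext(DismissedLLMErrorsContext);
  if (!ctx) {
    throw new Error("useDismissedLLMErrors must be used within DismissedLLMErrorsProvider");
  }
  return ctx;
}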


239-278: Avoid duplicated LLM alert rendering logic

ConfigAlerts already renders LLM errors. To reduce duplication, extract a small LLMErrorAlerts component and reuse it here and in ConfigAlerts/AnalysisPage.

webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (1)

216-225: Fix minor lint: wrap single-arg arrow params in parentheses

Prettier/ESLint suggests parentheses. No behavior change.

-              llmErrors={rawLLMErrors.filter(error => !dismissedLLMErrors.has(error.timestamp))}
+              llmErrors={rawLLMErrors.filter((error) => !dismissedLLMErrors.has(error.timestamp))}
...
-              onDismissLLMError={(timestamp) => {
-                setDismissedLLMErrors(prev => new Set([...prev, timestamp]));
-              }}
+              onDismissLLMError={(timestamp) => {
+                setDismissedLLMErrors((prev) => new Set([...prev, timestamp]));
+              }}
webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)

76-115: LGTM: clear LLM error surfacing with optional close + details

Nice, self-contained and typed. As a small a11y polish, consider adding aria-label to the details summary (“Technical details”) and ensure the Alert titles are unique per error when multiple appear.
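A small sketch of that polish (the shared import path is an assumption; error fields as used in this review):

import * as React from "react";
import type { LLMError } from "@editor-extensions/shared";

// Sketch: derive an aria-label from the error message so multiple
// collapsed "Technical details" disclosures are distinguishable to
// screen-reader users.
export function ErrorTechnicalDetails({ error }: { error: LLMError }) {
  return (
    <details>
      <summary aria-label={`Technical details: ${error.message}`} style={{ cursor: "pointer" }}>
        Technical details
      </summary>
      <pre>{error.error}</pre>
    </details>
  );
}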

vscode/core/src/commands.ts (3)

229-253: Also surface init failure via llmErrors

Add a structured LLM error alongside the chat message so UI alerts display it consistently.

         } catch (initError) {
           logger.error("Failed to initialize workflow", initError);
           const errorMessage = initError instanceof Error ? initError.message : String(initError);

           state.mutateData((draft) => {
             draft.isFetchingSolution = false;
             draft.solutionState = "failedOnSending";
+            // Structured LLM error for UI
+            draft.llmErrors.push(createLLMError.workflowInitializationFailed(errorMessage));
             draft.chatMessages.push({
               messageToken: `m${Date.now()}`,
               kind: ChatMessageType.String,
               value: { message: `Workflow initialization failed: ${errorMessage}` },
               timestamp: new Date().toISOString(),
             });
           });

329-340: Add structured LLM “unknown” error on run() failure

The catch currently adds a chat message only. Also push an llmUnknownError so alerts surface it.

-          const errorMessage = err instanceof Error ? err.message : String(err);
+          const errorMessage = err instanceof Error ? err.message : String(err);

           // Ensure isFetchingSolution is reset on any error
           state.mutateData((draft) => {
             // Add to chat messages for visibility
             draft.chatMessages.push({
               messageToken: `m${Date.now()}`,
               kind: ChatMessageType.String,
               value: { message: `Error: ${errorMessage}` },
               timestamp: new Date().toISOString(),
             });
+            draft.llmErrors.push(createLLMError.llmUnknownError(errorMessage));

982-1037: Gate the test command behind a debug flag

Prevent accidental use in production. Check konveyor.debug.webview (already defined in package.json) before proceeding.

     [`${EXTENSION_NAME}.testLLMError`]: async () => {
+      const debugEnabled = workspace.getConfiguration(EXTENSION_NAME).get<boolean>("debug.webview");
+      if (!debugEnabled) {
+        window.showWarningMessage("Test LLM Error is only available when 'konveyor.debug.webview' is enabled.");
+        return;
+      }
       const errorType = await vscode.window.showQuickPick(

Consider also adding the command to contributes.menus.commandPalette with when: "never" (see package.json comment) to hide it from users.

📜 Review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a11306 and cb3d50e.

📒 Files selected for processing (10)
  • .gitignore (1 hunks)
  • shared/src/types/types.ts (3 hunks)
  • vscode/core/package.json (1 hunks)
  • vscode/core/src/commands.ts (6 hunks)
  • vscode/core/src/extension.ts (1 hunks)
  • vscode/core/src/utilities/ModifiedFiles/processMessage.ts (2 hunks)
  • webview-ui/src/components/AnalysisPage/AnalysisPage.tsx (3 hunks)
  • webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (2 hunks)
  • webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (5 hunks)
  • webview-ui/src/context/ExtensionStateContext.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • webview-ui/src/context/ExtensionStateContext.tsx
  • .gitignore
  • shared/src/types/types.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-30T18:00:23.969Z
Learnt from: djzager
PR: konveyor/editor-extensions#885
File: vscode/src/commands.ts:18-18
Timestamp: 2025-09-30T18:00:23.969Z
Learning: In the konveyor/editor-extensions repository, the commands konveyor.configureLabelSelector and konveyor.configureSourcesTargets were previously removed along with their handlers. Any PR that removes their definitions from vscode/package.json is cleanup work, not adding new functionality.

Applied to files:

  • vscode/core/package.json
🧬 Code graph analysis (4)
vscode/core/src/utilities/ModifiedFiles/processMessage.ts (1)
shared/src/types/types.ts (1)
  • createLLMError (220-266)
webview-ui/src/components/AnalysisPage/ConfigAlerts.tsx (1)
shared/src/types/types.ts (2)
  • ConfigError (166-170)
  • LLMError (159-164)
vscode/core/src/commands.ts (3)
vscode/core/src/client/tracer.ts (1)
  • logger (60-65)
vscode/core/src/utilities/constants.ts (1)
  • EXTENSION_NAME (18-18)
shared/src/types/types.ts (1)
  • createLLMError (220-266)
webview-ui/src/components/ResolutionsPage/ResolutionsPage.tsx (1)
shared/src/types/types.ts (1)
  • LLMError (159-164)
🪛 GitHub Check: Lint
webview-ui/src/components/AnalysisPage/AnalysisPage.tsx

[warning] 224-224: Replace prev with (prev)

[warning] 218-218: Replace error with (error)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build (linux)
  • GitHub Check: Build (macos)
  • GitHub Check: Build (windows)
🔇 Additional comments (4)
vscode/core/package.json (1)

190-190: Verify codicon name renders

Please confirm "$(testing-error-icon)" is a valid Codicon and renders across VS Code versions targeted by engines.vscode "^1.93.0". If not, pick a known icon (e.g., "$(beaker)" or "$(error)").

vscode/core/src/extension.ts (1)

92-93: LGTM: initialize llmErrors in state

Good default initialization; aligns with new UI consumers.

vscode/core/src/commands.ts (2)

211-213: LGTM: clear stale LLM errors at start

Prevents old errors leaking into new runs.
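The approved pattern, roughly (a sketch; state.mutateData and the draft fields as used elsewhere in this PR):

// Reset per-run state before requesting a new solution so alerts and
// chat from the previous run don't linger.
state.mutateData((draft) => {
  draft.chatMessages = [];
  draft.llmErrors = [];
});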


298-303: Comment clarity is good

Delegating workflow errors to processMessage is the right direction.

},

// Test command for simulating LLM errors (development only)
[`${EXTENSION_NAME}.testLLMError`]: async () => {
A contributor commented:
did you mean to keep this?
