diff --git a/assets/images/analytics/build-comparison-charts.webp b/assets/images/analytics/build-comparison-charts.webp
new file mode 100644
index 00000000..4c1a2503
Binary files /dev/null and b/assets/images/analytics/build-comparison-charts.webp differ
diff --git a/assets/images/analytics/build-comparison-empty-state.webp b/assets/images/analytics/build-comparison-empty-state.webp
new file mode 100644
index 00000000..f3831d1f
Binary files /dev/null and b/assets/images/analytics/build-comparison-empty-state.webp differ
diff --git a/assets/images/analytics/build-comparison-select-build.webp b/assets/images/analytics/build-comparison-select-build.webp
new file mode 100644
index 00000000..12fdd290
Binary files /dev/null and b/assets/images/analytics/build-comparison-select-build.webp differ
diff --git a/assets/images/analytics/build-comparison-summary.webp b/assets/images/analytics/build-comparison-summary.webp
new file mode 100644
index 00000000..c238543d
Binary files /dev/null and b/assets/images/analytics/build-comparison-summary.webp differ
diff --git a/assets/images/analytics/build-comparison-table.webp b/assets/images/analytics/build-comparison-table.webp
new file mode 100644
index 00000000..99cee199
Binary files /dev/null and b/assets/images/analytics/build-comparison-table.webp differ
diff --git a/assets/images/analytics/build-insights-page-2-tab-1-insights.png b/assets/images/analytics/build-insights-page-2-tab-1-insights.png
deleted file mode 100644
index fe357926..00000000
Binary files a/assets/images/analytics/build-insights-page-2-tab-1-insights.png and /dev/null differ
diff --git a/assets/images/analytics/build-insights-page-2-tab-1-insights.webp b/assets/images/analytics/build-insights-page-2-tab-1-insights.webp
new file mode 100644
index 00000000..6357ee8a
Binary files /dev/null and b/assets/images/analytics/build-insights-page-2-tab-1-insights.webp differ
diff --git a/assets/images/analytics/build-insights-page-2-tab-2-tests.png b/assets/images/analytics/build-insights-page-2-tab-2-tests.png
deleted file mode 100644
index 875af07a..00000000
Binary files a/assets/images/analytics/build-insights-page-2-tab-2-tests.png and /dev/null differ
diff --git a/assets/images/analytics/build-insights-page-2-tab-2-tests.webp b/assets/images/analytics/build-insights-page-2-tab-2-tests.webp
new file mode 100644
index 00000000..bf658bf5
Binary files /dev/null and b/assets/images/analytics/build-insights-page-2-tab-2-tests.webp differ
diff --git a/docs/analytics-build-comparison.md b/docs/analytics-build-comparison.md
new file mode 100644
index 00000000..dae7d6aa
--- /dev/null
+++ b/docs/analytics-build-comparison.md
@@ -0,0 +1,216 @@
+---
+id: analytics-build-comparison
+title: Build Comparison - Compare Test Builds and Track Regressions
+sidebar_label: Build Comparison
+description: Compare two builds side by side to identify new failures, fixed tests, and stability changes across your test suite
+keywords:
+ - analytics
+ - build comparison
+ - build compare
+ - regression detection
+ - test stability
+ - build diff
+ - test observability
+url: https://www.lambdatest.com/support/docs/analytics-build-comparison/
+site_name: LambdaTest
+slug: analytics-build-comparison/
+---
+
+
+
+---
+
+import NewTag from '../src/component/newTag';
+
+## Overview
+
+Build Comparison lets you compare two builds side by side to see at a glance what changed: which tests started failing, which were fixed, and which remain stable. Use it to validate releases, debug regressions, and track test stability.
+
+## Accessing Build Comparison
+
+1. Navigate to **Insights** → **Build Insights**
+2. Click on any build to open the **Build Details** page
+3. Select the **Compare** tab
+
+## Selecting Builds to Compare
+
+When you first open the Compare tab, you'll see an empty state prompting you to select a build for comparison.
+
+
+
+Click **Select build to compare** to open the build selection dialog.
+
+### Build Selection Dialog
+
+
+
+The dialog provides options to find builds:
+
+| Option | Description |
+|--------|-------------|
+| **Past runs of same build** | Shows previous executions of the current build (default) |
+| **All Builds** | Shows all builds across your account for cross-build comparison |
+| **Search** | Search bar to find builds by name |
+
+Each build in the list displays:
+- **Build name** - Full build identifier
+- **Duration** - Total execution time (e.g., 52m 53s)
+- **Test count** - Number of tests executed
+- **Timestamp** - Execution date and time
+- **Tag** - Associated project tag (e.g., atxSmoke)
+- **Results summary** - Quick pass/fail/other counts (🟢 passed, 🔴 failed, ⚫ other)
+
+Select a build and click **Compare Builds** to run the comparison. The selected build becomes the **Compare** build, while the current build you navigated from becomes the **Base** build.
+
+:::tip
+For release validation, select your last stable production build as **Base** and the release candidate as **Compare**.
+:::
+
+---
+
+## Key Comparison Metrics
+
+
+
+:::info Understanding Failed Statuses
+The following statuses are considered **failed statuses**: **Failed**, **Error**, **Lambda Error**, **Idle Timeout**, and **Queue Timeout**. Change detection is based on whether a test transitions to or from these statuses.
+:::
+
+| Metric | Description | When to Act |
+|--------|-------------|-------------|
+| **New Failures** | Tests not failing in Base but failing in Compare (see details below). | 🚨 Investigate immediately before release - these are regressions |
+| **Pass Rate** | Percentage of passed tests, with the delta (↑ or ↓) from Base. | Set release gates (e.g., "Release only if >95%") |
+| **Fixed** | Tests that failed in Base but passed in Compare. | Verify fixes are genuine, not flaky behavior |
+| **No Change** | Tests with same non-passing status in both builds. | Review for persistent infrastructure issues |
+| **Additional Tests** | New tests in Compare not present in Base. | Confirm new features have test coverage |
+| **Dropped Tests** | Tests in Base but missing from Compare. | ⚠️ Investigate if not intentionally removed |
+
+### Understanding New Failures
+
+The **New Failures** metric includes two scenarios:
+
+| Scenario | Description | Label in Table |
+|----------|-------------|----------------|
+| **Regression** | Test existed in Base with a non-failed status but has a failed status in Compare | New Failure |
+| **New test failing** | Test did not exist in Base but has a failed status in Compare | New Failure (Additional) |
+
+Both scenarios are counted together in the **New Failures** metric shown in the summary cards and charts. In the Test Instances table, tests that didn't exist in Base are labeled **New Failure (Additional)** so you can distinguish regressions in existing tests from failures in newly added tests.
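+
+If it helps to reason about change detection programmatically, the sketch below is a rough, illustrative model of the rules described above. It is not LambdaTest code: the function name, status strings, and exact labels are simplified assumptions.
+
+```typescript
+// Illustrative model of change detection between a Base and a Compare build.
+// Status names and labels are simplified; the product's exact label set may differ.
+type TestStatus =
+  | "passed" | "skipped"
+  | "failed" | "error" | "lambda error" | "idle timeout" | "queue timeout";
+
+// The failed-status set described in the note above.
+const FAILED_STATUSES = new Set<TestStatus>([
+  "failed", "error", "lambda error", "idle timeout", "queue timeout",
+]);
+
+function classifyChange(
+  base: TestStatus | undefined,    // undefined = test not present in Base
+  compare: TestStatus | undefined, // undefined = test not present in Compare
+): string {
+  if (compare === undefined) return "Dropped";
+  const compareFailed = FAILED_STATUSES.has(compare);
+  if (base === undefined) return compareFailed ? "New Failure (Additional)" : "Additional";
+  const baseFailed = FAILED_STATUSES.has(base);
+  if (!baseFailed && compareFailed) return "New Failure"; // regression
+  if (baseFailed && compare === "passed") return "Fixed";
+  if (baseFailed && compareFailed) return "Consistent Failure";
+  if (base === "passed" && compare === "passed") return "Stable";
+  return "Status Change"; // e.g., skipped in Base, passed in Compare
+}
+```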
+
+---
+
+## Results Comparison Chart
+
+
+
+The horizontal bar chart compares test counts by status between builds:
+- **Purple bar**: Base build
+- **Orange bar**: Compare build
+
+If the orange bar is longer for Failed/Error statuses, more tests are failing in the Compare build.
+
+## Status Changes Chart
+
+The donut chart categorizes tests by how their status changed:
+
+| Category | Description | Action |
+|----------|-------------|--------|
+| **New Failures** | Non-failed → Failed (includes New Failure (Additional)) | Prioritize - check recent code changes |
+| **Fixed Instances** | Failed → Passed | Verify fix is stable, not flaky |
+| **Stable Instances** | Passed → Passed | No action - reliable tests ✅ |
+| **Consistent Failures** | Failed in both builds | Triage - document or fix before release |
+
+---
+
+## Test Instances Comparison Table
+
+
+
+| Column | Description | Use Case |
+|--------|-------------|----------|
+| **Test Instances** | Test name, spec file, platform, browser | Click to view detailed logs and recordings |
+| **Base** | Status and duration in Base build | Reference point for comparison |
+| **Compare** | Status and duration in Compare build | Identify status changes at a glance |
+| **Duration Change** | Time difference (+slower, -faster) | Flag tests with >30% increase for performance review |
+| **Change Type** | Stable, Status Change, Fixed, New Failure (Additional), etc. | Filter to focus on specific change categories |
+
+### Filtering Options
+
+| Filter | Description |
+|--------|-------------|
+| **All** | Filter by change type (shows all change types by default) |
+| **Search** | Find tests by name or spec file |
+| **OS** | Filter by operating system |
+| **Browser** | Filter by browser type |
+| **Test Tags** | Filter by custom tags |
+
+:::tip
+Use filters to isolate platform-specific issues. If failures occur only on a specific browser or OS, that narrows the investigation and helps you prioritize the fix.
+:::
+
+---
+
+## Common Use Cases
+
+### Pre-Release Validation
+Compare your last stable build (Base) with the release candidate (Compare). Proceed only if **New Failures = 0** and pass rate meets standards.
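+
+As a rough illustration, a CI pipeline could gate a release on these numbers. The sketch below assumes you have recorded the comparison summary in your own tooling; the `ComparisonSummary` shape and the `shouldRelease` function are hypothetical and not a LambdaTest API.
+
+```typescript
+// Hypothetical release gate over a manually captured comparison summary.
+interface ComparisonSummary {
+  newFailures: number;  // regressions plus newly added failing tests
+  passRate: number;     // pass rate of the Compare build, in percent
+  droppedTests: number; // tests present in Base but missing in Compare
+}
+
+function shouldRelease(summary: ComparisonSummary, minPassRate = 95): boolean {
+  if (summary.newFailures > 0) return false;        // any regression blocks the release
+  if (summary.passRate < minPassRate) return false; // enforce the pass-rate gate
+  if (summary.droppedTests > 0) {
+    console.warn("Dropped tests detected; confirm they were removed intentionally.");
+  }
+  return true;
+}
+
+// Example: a release candidate that introduced two regressions is blocked.
+console.log(shouldRelease({ newFailures: 2, passRate: 97.5, droppedTests: 0 })); // false
+```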
+
+### Debugging a Broken Build
+Compare the last passing build (Base) with the failing build (Compare). Review **New Failures** and use filters to isolate platform-specific issues.
+
+### Measuring Stabilization Progress
+Compare the sprint-start build (Base) with the latest build (Compare). Use **Fixed** count and reduced **Consistent Failures** to demonstrate progress.
+
+### Environment Comparison
+Compare production build (Base) with staging build (Compare) to identify environment-specific failures.
+
+### Cross-Browser Compatibility
+Compare Chrome build (Base) with Firefox/Safari builds (Compare) to catch browser-specific issues.
+
+---
+
+## Best Practices
+
+1. **Compare similar test suites** - Comparing different test sets leads to misleading Additional/Dropped counts.
+2. **Investigate New Failures immediately** - These are potential regressions.
+3. **Verify Fixed tests** - Run them multiple times to confirm stability.
+4. **Monitor Duration Changes** - Increases of more than 20-30% may indicate performance issues.
+5. **Document Consistent Failures** - Maintain a list of known, accepted failures.
+6. **Establish comparison baselines** - Define standard comparison points (last production release, previous nightly, sprint-start).
+
+---
+
+## FAQ
+
+**Can I compare builds from different projects?**
+Yes, but for meaningful results, compare builds with similar test suites.
+
+**Why are tests showing as "Dropped"?**
+Tests may have been skipped in the configuration, failed to execute, or been removed from the suite.
+
+**How is Pass Rate calculated?**
+`(Passed Tests / Total Tests) × 100`. The delta shows the change from Base.
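+For example, if 180 of 200 tests passed, the pass rate is (180 / 200) × 100 = 90%; if the Base build's pass rate was 95%, the delta shown is a 5 percentage point drop.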
+
+**How far back can I compare?**
+Any two builds within your data retention period.
diff --git a/docs/analytics-build-insights.md b/docs/analytics-build-insights.md
index 9c419402..18a18bd1 100644
--- a/docs/analytics-build-insights.md
+++ b/docs/analytics-build-insights.md
@@ -49,6 +49,8 @@ Build Insights is your build-level health dashboard. It shows how stable each bu
With Build Insights, you can view all your unique builds in a centralized list, then drill down into individual build details to explore comprehensive metrics and test-level insights. The feature is designed to be intuitive and accessible, whether you're a QA engineer analyzing test results or a team lead tracking overall build health.
+Build Insights also supports a **Unique Instances** view, which consolidates retry runs by grouping tests by name and environment (browser + OS + device + resolution) and showing only the final run's result, giving you cleaner, more accurate reporting.
+
## Build Insights Flow
Build Insights organizes your test data into two main views:
@@ -138,7 +140,7 @@ Use filters to narrow analysis to exactly the slice you care about (for example,
Use the **Insights** tab to understand the overall health and performance of the selected build before you dive into individual tests.
-
+
### Key Metrics Summary
@@ -198,7 +200,24 @@ Each metric points you directly to tests that need attention (for example, focus
Use the **Tests** tab when you are ready to debug at the individual test level.
-
+### Show Unique Instances Toggle
+
+The **Show Unique Instances** toggle consolidates retry runs to give you a cleaner view of your test results.
+
+**How it works:**
+
+- When **ON**: Tests with the same **test name + environment** (browser + OS + device + resolution) are grouped into a single instance. Only the **final run** of each instance is counted in reporting, eliminating noise from intermediate retry attempts (see the sketch below).
+- When **OFF**: All individual test executions are shown, including every retry attempt.
+
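+As a rough illustration of the grouping described above, the sketch below consolidates runs by test name + environment and keeps only the final run of each group. It is illustrative only; the `TestRun` shape and `consolidate` function are hypothetical and not how the product is implemented.
+
+```typescript
+// Group runs by test name + environment and keep only the latest (final) run.
+interface TestRun {
+  name: string;
+  browser: string;
+  os: string;
+  device: string;
+  resolution: string;
+  startedAt: number; // epoch millis; a later value means a later retry
+  status: "passed" | "failed";
+}
+
+function consolidate(runs: TestRun[]): TestRun[] {
+  const latest = new Map<string, TestRun>();
+  for (const run of runs) {
+    const key = [run.name, run.browser, run.os, run.device, run.resolution].join("|");
+    const current = latest.get(key);
+    if (!current || run.startedAt > current.startedAt) {
+      latest.set(key, run); // only the final run of each unique instance is kept
+    }
+  }
+  return Array.from(latest.values());
+}
+```
+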
+:::note Processing Time
+Retry run consolidation requires a small amount of processing time after test execution completes. If you've just finished a build, wait a moment before toggling on Unique Instances to ensure all data is consolidated.
+:::
+
+:::tip Build Comparison
+Want to compare two builds side by side? Use the **Compare** tab to identify new failures, fixed tests, and stability changes between any two builds. This is especially useful for release validation and regression detection. Learn more in the [Build Comparison](/support/docs/analytics-build-comparison/) documentation.
+:::
+
+
### Search Functionality
@@ -287,8 +306,9 @@ This approach ensures that Build Insights can provide you with meaningful histor
## Best Practices
1. **Check builds early and often**: Start your day on the Build Insights page to spot risky builds before they block releases.
-2. **Filter with intent**: Use filters to answer specific questions (for example, "Are failures only on Windows?") instead of browsing everything at once.
+2. **Filter with intent**: Use filters to answer specific questions (for example, "Are failures only on Windows?") instead of browsing everything at once.
3. **Trust history, not one run**: Use Result History, Duration History, and the test History column to judge stability over time, not just a single execution.
-4. **Share context, not just failures**: When sharing a build, also mention which metrics you looked at (for example, "pass rate dropped from 98% to 90% in the last 3 runs").
+4. **Share context, not just failures**: When sharing a build, also mention which metrics you looked at (for example, "pass rate dropped from 98% to 90% in the last 3 runs").
5. **Standardize build names**: Maintain common build names so histories stay meaningful and easy to compare across days and weeks.
+6. **Use Unique Instances for accurate reporting**: Toggle on "Show Unique Instances" to consolidate retry runs and see the true pass/fail state of each test-environment combination, especially when your pipeline uses automatic retries.
diff --git a/sidebars.js b/sidebars.js
index 5cf54f7b..54193022 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -3917,6 +3917,7 @@ module.exports = {
"analytics-test-insights",
"analytics-modules-test-intelligence-flaky-test-analytics",
"analytics-build-insights",
+ "analytics-build-comparison",
"analytics-smart-tags-test-intelligence",
"analytics-test-failure-classification",
"analytics-ai-root-cause-analysis",