---
id: analytics-build-comparison
title: Build Comparison - Compare Test Builds and Track Regressions
sidebar_label: Build Comparison
description: Compare two builds side by side to identify new failures, fixed tests, and stability changes across your test suite
keywords:
  - analytics
  - build comparison
  - build compare
  - regression detection
  - test stability
  - build diff
  - test observability
url: https://www.lambdatest.com/support/docs/analytics-build-comparison/
site_name: LambdaTest
slug: analytics-build-comparison/
---

<script type="application/ld+json"
  dangerouslySetInnerHTML={{ __html: JSON.stringify({
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [{
      "@type": "ListItem",
      "position": 1,
      "name": "Home",
      "item": "https://www.lambdatest.com"
    },{
      "@type": "ListItem",
      "position": 2,
      "name": "Support",
      "item": "https://www.lambdatest.com/support/docs/"
    },{
      "@type": "ListItem",
      "position": 3,
      "name": "Build Comparison",
      "item": "https://www.lambdatest.com/support/docs/analytics-build-comparison/"
    }]
  })
}}
></script>

---

import NewTag from '../src/component/newTag';

## Overview

Build Comparison lets you compare two builds side by side and instantly see what changed: which tests started failing, which were fixed, and which remain stable. Use it to validate releases, debug regressions, and track test stability.

## Accessing Build Comparison

1. Navigate to **Insights** → **Build Insights**
2. Click on any build to open the **Build Details** page
3. Select the **Compare** tab

## Selecting Builds to Compare

When you first open the Compare tab, you'll see an empty state prompting you to select a build for comparison.

<img loading="lazy" src={require('../assets/images/analytics/build-comparison-empty-state.webp').default} alt="Build Comparison - Empty State" className="doc_img"/>

Click **Select build to compare** to open the build selection dialog.

### Build Selection Dialog

<img loading="lazy" src={require('../assets/images/analytics/build-comparison-select-build.webp').default} alt="Build Comparison - Select Build Dialog" className="doc_img"/>

The dialog provides options to find builds:

| Option | Description |
|--------|-------------|
| **Past runs of same build** | Shows previous executions of the current build (default) |
| **All Builds** | Shows all builds across your account for cross-build comparison |
| **Search** | Search bar to find builds by name |

Each build in the list displays:
- **Build name** - Full build identifier
- **Duration** - Total execution time (e.g., 52m 53s)
- **Test count** - Number of tests executed
- **Timestamp** - Execution date and time
- **Tag** - Associated project tag (e.g., atxSmoke)
- **Results summary** - Quick pass/fail/other counts (🟢 passed, 🔴 failed, ⚫ other)

Select a build and click **Compare Builds** to run the comparison. The selected build becomes the **Compare** build, while the current build you navigated from becomes the **Base** build.

:::tip
For release validation, use your last stable production build as **Base** and the release candidate as **Compare**.
:::

---

## Key Comparison Metrics

<img loading="lazy" src={require('../assets/images/analytics/build-comparison-summary.webp').default} alt="Build Comparison - Key Metrics Summary" className="doc_img"/>

:::info Understanding Failed Statuses
The following statuses are considered **failed statuses**: **Failed**, **Error**, **Lambda Error**, **Idle Timeout**, and **Queue Timeout**. Change detection is based on whether a test transitions to or from these statuses.
:::

| Metric | Description | When to Act |
|--------|-------------|-------------|
| **New Failures** | Tests not failing in Base but failing in Compare (see details below) | 🚨 Investigate immediately before release - these are regressions |
| **Pass Rate** | Percentage of passed tests with delta (↗ or ↘) from Base | Set release gates (e.g., "Release only if >95%") |
| **Fixed** | Tests that failed in Base but passed in Compare | Verify fixes are genuine, not flaky behavior |
| **No Change** | Tests with same non-passing status in both builds | Review for persistent infrastructure issues |
| **Additional Tests** | New tests in Compare not present in Base | Confirm new features have test coverage |
| **Dropped Tests** | Tests in Base but missing from Compare | ⚠️ Investigate if not intentionally removed |

### Understanding New Failures

The **New Failures** metric includes two scenarios:

| Scenario | Description | Label in Table |
|----------|-------------|----------------|
| **Regression** | Test existed in Base with a non-failed status but has a failed status in Compare | New Failure |
| **New test failing** | Test did not exist in Base but has a failed status in Compare | New Failure (Additional) |

Both scenarios are counted together in the **New Failures** metric shown in the summary cards and charts. In the Test Instances table, tests that did not exist in Base are labeled **New Failure (Additional)**, so you can distinguish regressions in existing tests from failures in newly added tests.
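
Assuming test results are keyed by a stable test identifier and use the failed-status list above, a minimal sketch of this classification could look like the following (the types and names are illustrative, not a LambdaTest API):

```typescript
// Statuses treated as "failed" for change detection (see the note above).
const FAILED_STATUSES = new Set([
  "failed", "error", "lambda_error", "idle_timeout", "queue_timeout",
]);

type Status = string;
// Hypothetical shape: each build's results keyed by a stable test identifier.
type BuildResults = Map<string, Status>;

function isFailed(status: Status): boolean {
  return FAILED_STATUSES.has(status);
}

// Classify a failing Compare-build test as a regression or a newly added failing test.
function classifyNewFailure(
  testId: string,
  base: BuildResults,
  compare: BuildResults
): "New Failure" | "New Failure (Additional)" | null {
  const compareStatus = compare.get(testId);
  if (compareStatus === undefined || !isFailed(compareStatus)) return null;

  const baseStatus = base.get(testId);
  if (baseStatus === undefined) return "New Failure (Additional)"; // test is new in Compare
  if (!isFailed(baseStatus)) return "New Failure";                 // regression in an existing test
  return null; // failed in both builds -> a consistent failure, not a new one
}
```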

---

## Results Comparison Chart

<img loading="lazy" src={require('../assets/images/analytics/build-comparison-charts.webp').default} alt="Build Comparison - Results Comparison and Status Changes Charts" className="doc_img"/>

The horizontal bar chart compares test counts by status between builds:
- **Purple bar**: Base build
- **Orange bar**: Compare build

If the orange bar is longer for Failed/Error statuses, more tests are failing in the Compare build.

## Status Changes Chart

The donut chart categorizes tests by how their status changed:

| Category | Description | Action |
|----------|-------------|--------|
| **New Failures** | Non-failed → Failed (includes New Failure (Additional)) | Prioritize - check recent code changes |
| **Fixed Instances** | Failed → Passed | Verify fix is stable, not flaky |
| **Stable Instances** | Passed → Passed | No action - reliable tests ✓ |
| **Consistent Failures** | Failed in both builds | Triage - document or fix before release |
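
Continuing the sketch above (reusing `Status` and `isFailed`), these categories for tests present in both builds can be derived from the pair of statuses; this is an illustrative approximation, not the product's exact logic:

```typescript
type ChangeCategory =
  | "New Failure"
  | "Fixed Instance"
  | "Stable Instance"
  | "Consistent Failure"
  | "Other Status Change";

// Categorize a test that exists in both builds by how its status changed.
function categorizeStatusChange(baseStatus: Status, compareStatus: Status): ChangeCategory {
  const baseFailed = isFailed(baseStatus);
  const compareFailed = isFailed(compareStatus);

  if (!baseFailed && compareFailed) return "New Failure";       // regression
  if (baseFailed && compareStatus === "passed") return "Fixed Instance";
  if (baseStatus === "passed" && compareStatus === "passed") return "Stable Instance";
  if (baseFailed && compareFailed) return "Consistent Failure";
  return "Other Status Change";                                 // e.g., skipped -> passed
}
```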

---

## Test Instances Comparison Table

<img loading="lazy" src={require('../assets/images/analytics/build-comparison-table.webp').default} alt="Build Comparison - Test Instances Comparison Table" className="doc_img"/>

| Column | Description | Use Case |
|--------|-------------|----------|
| **Test Instances** | Test name, spec file, platform, browser | Click to view detailed logs and recordings |
| **Base** | Status and duration in Base build | Reference point for comparison |
| **Compare** | Status and duration in Compare build | Identify status changes at a glance |
| **Duration Change** | Time difference (+slower, -faster) | Flag tests with >30% increase for performance review |
| **Change Type** | Stable, Status Change, Fixed, New Failure (Additional), etc. | Filter to focus on specific change categories |
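
The **Duration Change** column makes it easy to spot slowdowns. As a rough sketch of such a check (the 30% threshold is a suggested starting point, not a product default):

```typescript
// Flag tests whose duration grew by more than a chosen threshold (e.g., 30%).
function isDurationRegression(
  baseMs: number,
  compareMs: number,
  thresholdPct = 30
): boolean {
  if (baseMs <= 0) return false; // nothing meaningful to compare against
  const changePct = ((compareMs - baseMs) / baseMs) * 100;
  return changePct > thresholdPct;
}

// Example: a test that went from 40s in Base to 56s in Compare is a +40% increase, so it gets flagged.
console.log(isDurationRegression(40_000, 56_000)); // true
```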

### Filtering Options

| Filter | Description |
|--------|-------------|
| **All** | Filter by change type; the default, **All**, shows every change type |
| **Search** | Find tests by name or spec file |
| **OS** | Filter by operating system |
| **Browser** | Filter by browser type |
| **Test Tags** | Filter by custom tags |

:::tip
Use filters to isolate platform-specific issues. If failures occur only on a specific browser or OS, you can scope and prioritize the fix accordingly.
:::

---

## Common Use Cases

### Pre-Release Validation
Compare your last stable build (Base) with the release candidate (Compare). Proceed only if **New Failures = 0** and the pass rate meets your release threshold.
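
A hypothetical release-gate check built on these two numbers might look like the sketch below; the summary values would come from your own reporting pipeline, not from a LambdaTest API call:

```typescript
// Hypothetical comparison summary assembled by your own tooling.
interface ComparisonSummary {
  newFailures: number;
  passRatePct: number;
}

// Block the release if there are any new failures or the pass rate
// falls below the agreed threshold (95% here as an example).
function canRelease(summary: ComparisonSummary, minPassRatePct = 95): boolean {
  return summary.newFailures === 0 && summary.passRatePct >= minPassRatePct;
}

console.log(canRelease({ newFailures: 0, passRatePct: 97.3 })); // true
console.log(canRelease({ newFailures: 2, passRatePct: 96.0 })); // false
```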

### Debugging a Broken Build
Compare the last passing build (Base) with the failing build (Compare). Review **New Failures** and use filters to isolate platform-specific issues.

### Measuring Stabilization Progress
Compare the sprint-start build (Base) with the latest build (Compare). Use the **Fixed** count and the reduction in **Consistent Failures** to demonstrate progress.

### Environment Comparison
Compare the production build (Base) with the staging build (Compare) to identify environment-specific failures.

### Cross-Browser Compatibility
Compare a Chrome build (Base) with Firefox or Safari builds (Compare) to catch browser-specific issues.

---

## Best Practices

1. **Compare similar test suites** - Comparing different test sets leads to misleading Additional/Dropped counts.
2. **Investigate New Failures immediately** - These are potential regressions.
3. **Verify Fixed tests** - Run them multiple times to confirm stability.
4. **Monitor Duration Changes** - Increases of more than 20-30% may indicate performance issues.
5. **Document Consistent Failures** - Maintain a list of known, accepted failures.
6. **Establish comparison baselines** - Define standard comparison points (last production release, previous nightly, sprint-start).

---

## FAQ

**Can I compare builds from different projects?**
Yes, but for meaningful results, compare builds with similar test suites.

**Why are tests showing as "Dropped"?**
Tests may have been skipped by configuration, failed to execute, or been removed from the suite.

**How is Pass Rate calculated?**
`(Passed Tests / Total Tests) × 100`. The delta shows the change from Base.
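
For example, if Base passed 180 of 200 tests (90%) and Compare passed 190 of 200 (95%), Compare shows a 95% pass rate with a delta of +5 percentage points. The same arithmetic as a tiny illustrative snippet:

```typescript
// Worked example with illustrative numbers.
const passRate = (passed: number, total: number) => (passed / total) * 100;

const basePassRate = passRate(180, 200);      // 90
const comparePassRate = passRate(190, 200);   // 95
const delta = comparePassRate - basePassRate; // +5 percentage points
console.log({ basePassRate, comparePassRate, delta });
```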

**How far back can I compare?**
Any two builds within your data retention period.