Replace PNG images with WebP format in the Build Insights documentation and add an explanation of the Unique Instances feature. Updated related documentation for clarity and accuracy.
docs/analytics-build-insights.md (20 additions, 4 deletions)
@@ -49,6 +49,8 @@ Build Insights is your build-level health dashboard. It shows how stable each bu
With Build Insights, you can view all your unique builds in a centralized list, then drill down into individual build details to explore comprehensive metrics and test-level insights. The feature is designed to be intuitive and accessible, whether you're a QA engineer analyzing test results or a team lead tracking overall build health.
Build Insights also supports a **Unique Instances** view, which consolidates retry runs by grouping tests by name and environment (browser + OS + device + resolution) and showing only the final run result, for cleaner, more accurate reporting.
## Build Insights Flow
Build Insights organizes your test data into two main views:
@@ -138,7 +140,7 @@ Use filters to narrow analysis to exactly the slice you care about (for example,
Use the **Insights** tab to understand the overall health and performance of the selected build before you dive into individual tests.
@@ -198,11 +200,24 @@ Each metric points you directly to tests that need attention (for example, focus
Use the **Tests** tab when you are ready to debug at the individual test level.
### Show Unique Instances Toggle
The **Show Unique Instances** toggle consolidates retry runs to give you a cleaner view of your test results.
**How it works:**
- When **ON**: Tests are grouped by **test name + environment** (browser + OS + device + resolution) into a single instance. Only the **final run** of each instance is counted in reporting, eliminating noise from intermediate retry attempts.
- When **OFF**: All individual test executions are shown, including every retry attempt.
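The consolidation behavior described above can be sketched in a few lines of Python. The `consolidate_unique_instances` function and the record fields below are illustrative assumptions for this sketch, not the product's actual API or schema:

```python
def consolidate_unique_instances(runs):
    """Group test runs by (test name, environment) and keep only the
    final run of each group -- a sketch of what the "Show Unique
    Instances" toggle does. `runs` is assumed ordered oldest to newest."""
    instances = {}  # insertion-ordered in Python 3.7+
    for run in runs:
        key = (run["name"], run["browser"], run["os"],
               run["device"], run["resolution"])
        instances[key] = run  # later retries overwrite earlier attempts
    return list(instances.values())

runs = [
    {"name": "login", "browser": "chrome", "os": "win11",
     "device": "desktop", "resolution": "1920x1080", "status": "failed"},
    {"name": "login", "browser": "chrome", "os": "win11",
     "device": "desktop", "resolution": "1920x1080", "status": "passed"},
    {"name": "login", "browser": "firefox", "os": "macos",
     "device": "desktop", "resolution": "1440x900", "status": "passed"},
]

consolidated = consolidate_unique_instances(runs)
# Two unique instances remain; the chrome/win11 pair collapses to its
# final (passed) retry, so no intermediate failure is reported.
```

With the toggle off, all three executions above would appear individually, including the failed first attempt.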
:::note Processing Time
Retry run consolidation requires a small amount of processing time after test execution completes. If you've just finished a build, wait a moment before toggling on Unique Instances to ensure all data is consolidated.
:::
:::tip Build Comparison
Want to compare two builds side by side? Use the **Compare** tab to identify new failures, fixed tests, and stability changes between any two builds. This is especially useful for release validation and regression detection. Learn more in the [Build Comparison](/support/docs/analytics-build-comparison/) documentation.
@@ -291,8 +306,9 @@ This approach ensures that Build Insights can provide you with meaningful histor
## Best Practices
1. **Check builds early and often**: Start your day on the Build Insights page to spot risky builds before they block releases.
2. **Filter with intent**: Use filters to answer specific questions (for example, "Are failures only on Windows?") instead of browsing everything at once.
3. **Trust history, not one run**: Use Result History, Duration History, and the test History column to judge stability over time, not just a single execution.
4. **Share context, not just failures**: When sharing a build, also mention which metrics you looked at (for example, "pass rate dropped from 98% to 90% in the last 3 runs").
5. **Standardize build names**: Maintain common build names so histories stay meaningful and easy to compare across days and weeks.
6. **Use Unique Instances for accurate reporting**: Toggle on "Show Unique Instances" to consolidate retry runs and see the true pass/fail state of each test-environment combination, especially when your pipeline uses automatic retries.