# Store and show request body in UI #327
## Conversation

**gschier** commented on Dec 22, 2025
- Create separate blobs.sqlite database for large body storage
- Add body_chunks table with body_id, chunk_index, and data columns
- Implement BlobManager and BlobContext with core CRUD methods
- Add BlobManagerExt trait for app_handle.blobs() access
- Support streaming writes with configurable chunk sizes
- Add TeeReader to fork body streams while sending
- Wrap request body with TeeReader in execute_transaction
- Buffer and write 1MB chunks to blob DB with body_id format {response_id}.request (sketched below)
- Support both Bytes and Stream body types
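To make the chunked storage scheme concrete, here is a minimal sketch assuming direct rusqlite access; table and column names follow the description above, while the helper itself is hypothetical (the actual BlobManager/BlobContext API in this PR wraps this behind a pool).

```rust
use rusqlite::{params, Connection, Result};

/// Minimal sketch: split a body into fixed-size chunks and insert them into
/// the body_chunks table keyed by (body_id, chunk_index).
fn write_body_chunks(
    conn: &Connection,
    body_id: &str,     // e.g. "{response_id}.request", per the description
    body: &[u8],
    chunk_size: usize, // e.g. 1 MiB
) -> Result<()> {
    for (i, chunk) in body.chunks(chunk_size).enumerate() {
        conn.execute(
            "INSERT INTO body_chunks (body_id, chunk_index, data) VALUES (?1, ?2, ?3)",
            params![body_id, i as i64, chunk],
        )?;
    }
    Ok(())
}
```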
---

@claude review this please

---

@claude review

---

@claude review

---
**Claude** finished @gschier's task in 1m 46s

## Code Review Complete

I've reviewed this PR, which adds the ability to store and display request bodies in the UI. Overall, this is a solid implementation with good architecture. Here are my findings:

### 🎯 Architecture & Design

**Strengths:**
### 🐛 Potential Issues

#### 1. Race Condition Risk (Medium Priority)

Location: `src-tauri/src/http_request.rs:594-601` (the final update after streaming completes).

```rust
// At line 594-601, final update might race with UI queries
app_handle.with_tx(|tx| {
    debug!("Updating final body length {total_bytes}");
    if let Ok(mut response) = tx.get_http_response(&response_id) {
        response.request_content_length = Some(total_bytes as i32);
        tx.update_http_response_if_id(&response, update_source)?;
    }
    Ok(())
})?;
```

Recommendation: Consider using optimistic locking, or ensure the UI properly handles the case where this final update lands after the UI has already read (or deleted) the response.
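A hedged sketch of one way to narrow this window: collapse the read-modify-write into a single guarded UPDATE so a deleted response is simply a no-op. The table and column names below are assumptions for illustration, and the real code goes through `with_tx` rather than raw SQL.

```rust
use rusqlite::{params, Connection, Result};

/// Sketch only: write the final request body length with a single guarded
/// UPDATE instead of get-then-update. If the response row was deleted in the
/// meantime, zero rows match and the write is a harmless no-op.
fn set_final_request_length(
    conn: &Connection,
    response_id: &str,
    total_bytes: u64,
) -> Result<bool> {
    let rows = conn.execute(
        "UPDATE http_responses SET request_content_length = ?1 WHERE id = ?2",
        params![total_bytes as i64, response_id],
    )?;
    Ok(rows > 0) // false: the response no longer exists, so nothing was updated
}
```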
#### 2. Database Lock Contention (Low-Medium Priority)

Location: `BlobManager::connect`.

```rust
pub fn connect(&self) -> BlobContext {
    let conn = self
        .pool
        .lock()
        .expect("Failed to gain lock on blob DB") // ⚠️ Panics on poisoned lock
        .get()
        .expect("Failed to get blob DB connection from pool");
    BlobContext { conn }
}
```

Recommendation: Use a fallible variant that returns a `Result` instead of panicking on a poisoned lock or an exhausted pool.
#### 3. Missing Cleanup on Error Paths (Low Priority)

If the request fails after writing some chunks to the blob DB (e.g., a network error), the partial chunks aren't cleaned up. This could lead to orphaned blob data over time.

Recommendation: Add cleanup logic in error handlers to remove partial chunk data when requests fail.

#### 4. Integer Overflow Concern (Low Priority)

Location: the content length is stored as an `i32`:

```rust
r.request_content_length = Some(data.len() as i32);
```

Recommendation: Consider using `i64`, since bodies larger than ~2 GB would overflow an `i32`.

#### 5. Potential Memory Issue in Frontend (Low Priority)

Location: the entire request body is loaded into memory and decoded at once:

```ts
const body = new Uint8Array(data);
const bodyText = new TextDecoder('utf-8', { fatal: false }).decode(body);
```

Recommendation: The UI could avoid decoding very large bodies in full, for example by showing a truncated preview.

### 💡 Code Quality Observations

**Good Practices:**
**Minor Issues:**

### 🔒 Security Considerations

### 🧪 Testing

**Well Tested:**

**Could Be Improved:**

### 📊 Performance Notes
### ✅ Summary

This is a well-implemented feature with good architecture and testing. The main concerns are around edge cases (race conditions, error cleanup, large body handling) rather than fundamental design issues.

Recommended actions before merge:

The current implementation is functional and safe for most use cases, but the above improvements would make it more robust for production use.

---
## Code Review: PR #327 - Store and Show Request Body in UI

### Overview

This PR adds functionality to capture, store, and display HTTP request bodies. The implementation introduces a blob storage system, a TeeReader for streaming capture, and UI components. Overall, the architecture is solid with good separation of concerns, but there are several critical issues that need addressing.

### Critical Issues (Must Fix Before Merge)

#### 1. Unbounded Channel Growth - Memory Exhaustion Risk

Location: src-tauri/src/http_request.rs:368, 338

Issue: Unbounded channels can grow indefinitely if the database writer is slower than the HTTP client, leading to memory exhaustion on large request bodies.

Fix: Use bounded channels with backpressure instead of `unbounded_channel()` (see the sketch after the summary below).

#### 2. Excessive Database Updates - Performance Issue

Location: src-tauri/src/http_request.rs:492-495

Issue: Database updates occur for every 8 KB chunk read. For a 100 MB response, this is ~12,800 DB updates, causing severe performance degradation.

Fix: Throttle updates to every 1 MB or a similar interval.

#### 3. Integer Overflow on Large Bodies

Location: src-tauri/src/http_request.rs:535, 597

Issue: `i32` max is ~2.1 GB. Bodies larger than this will silently overflow.

Fix: Use `i64` and update the database schema.

#### 4. No Size Limits - DoS/Storage Exhaustion Risk

Issue: No validation prevents storing arbitrarily large request bodies, leading to potential disk exhaustion.

Fix: Add a configurable max size (e.g., 100 MB default).

#### 5. Silent Error Handling in Blob Writing

Location: src-tauri/src/http_request.rs:390

Issue: Errors are only logged. Users see requests as successful even when body storage fails, leading to data loss.

Fix: Store the error state in the response or surface it to the UI.

### High Priority Issues

#### 6. Task Leak on Early Errors

Location: src-tauri/src/http_request.rs:379-392

Issue: If `execute_transaction` fails early, the spawned `write_stream_chunks_to_db` task continues waiting indefinitely.

Fix: Add a cancellation mechanism using oneshot channels or a `CancellationToken`.

#### 7. Orphaned Ephemeral Response Files

Location: src-tauri/src/http_request.rs:419-423

Issue: Ephemeral responses generate UUID-based file paths that are never cleaned up.

Fix: Track ephemeral files and clean them up on shutdown or periodically.

#### 8. Orphaned Chunks on Partial Failures

Issue: If chunk writing fails mid-stream, partial data remains in the database.

Fix: Add cleanup on error.

#### 9. Memory Inefficiency - Double Buffering

Location: src-tauri/src/http_request.rs:362

Issue: A full copy of the request body is created in memory; a 100 MB request means 200 MB of memory usage.

Fix: Use `Bytes::slice()` or consume and reconstruct.

#### 10. Race Condition in Stream Completion Update

Location: src-tauri/src/http_request.rs:594-601

Issue: Silent failure if the response was deleted. Should log or error.

### Positive Aspects
### Suggestions for Future Improvements

### Test Coverage Needed
### Summary

This is a well-architected feature with solid fundamentals, but the critical issues around resource management, performance, and error handling must be addressed before merge. The unbounded channels and excessive DB updates are particularly concerning for production use.

Recommendation: Address the 5 critical issues, then consider the high-priority improvements in a follow-up PR.

Great work on the overall design! The TeeReader implementation is particularly clean.
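As a concrete illustration of critical issue 1's bounded-channel recommendation, here is a minimal, self-contained sketch; the capacity and chunk sizes are placeholders rather than values from the PR.

```rust
use bytes::Bytes;
use tokio::sync::mpsc;

// Sketch of bounded-channel backpressure: the send side awaits when the
// writer falls behind, so in-flight copies of the body are capped instead of
// growing without bound.
#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<Bytes>(32); // at most 32 chunks queued

    let writer = tokio::spawn(async move {
        while let Some(chunk) = rx.recv().await {
            // Stand-in for the (slower) blob-DB write.
            tokio::time::sleep(std::time::Duration::from_millis(5)).await;
            println!("wrote {} bytes", chunk.len());
        }
    });

    for _ in 0..100 {
        // Once 32 chunks are queued, this await pauses the producer.
        tx.send(Bytes::from(vec![0u8; 1024 * 1024])).await.unwrap();
    }
    drop(tx);
    writer.await.unwrap();
}
```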
---

## Pull Request Review: Store and Show Request Body in UI

### Summary

This PR implements request body storage and visualization in the UI. The implementation uses the blob storage system to capture request bodies as they're sent, stores them in chunks, and displays them in a new UI component with performance safeguards for large bodies.

### Code Quality & Architecture ✅

**Strengths:**
**Areas for improvement:**

#### 1. Potential Backpressure Deadlock in TeeReader (Critical)

Location: the backpressure handling in the TeeReader's poll path.

```rust
Err(mpsc::error::TrySendError::Full(_)) => {
    cx.waker().wake_by_ref();
    return Poll::Pending;
}
```

When the channel is full, this wakes the waker immediately and returns `Poll::Pending`, so the task is polled again right away and spins instead of waiting for the receiver to free capacity.

Recommendation: Consider using a send path that registers the waker with the channel so the task is only woken once capacity becomes available (see the sketch below).
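One possible shape for that, shown as a hedged sketch: `tokio_util::sync::PollSender` reserves capacity and parks the task until the receiver frees a slot. The helper name and its integration point are hypothetical; the real TeeReader would do this inside its `poll_read`.

```rust
use std::task::{Context, Poll};

use bytes::Bytes;
use tokio_util::sync::PollSender;

/// Hypothetical helper showing the waker-aware send path: poll_reserve
/// registers the task's waker with the channel, so returning Pending here
/// parks the task until a slot frees up rather than spinning.
fn poll_forward_chunk(
    tee: &mut PollSender<Bytes>,
    chunk: Bytes,
    cx: &mut Context<'_>,
) -> Poll<()> {
    match tee.poll_reserve(cx) {
        Poll::Ready(Ok(())) => {
            // A slot is reserved, so this send cannot fail with "full".
            let _ = tee.send_item(chunk);
            Poll::Ready(())
        }
        // Receiver dropped: stop mirroring chunks but keep the request going.
        Poll::Ready(Err(_)) => Poll::Ready(()),
        // No capacity yet; the waker is already registered by poll_reserve.
        Poll::Pending => Poll::Pending,
    }
}
```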
#### 2. Error Handling in Stream Chunk Writing

Location: the spawned blob writer task.

```rust
tauri::async_runtime::spawn(async move {
    if let Err(e) = write_stream_chunks_to_db(...).await {
        error!("Error writing stream chunks to DB: {}", e);
    };
});
```

Errors in the background task are only logged. If blob insertion fails, the request still looks successful while the stored body is incomplete.

Recommendation: Consider propagating errors back to the main response or setting an error flag on the response to indicate partial data capture.
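A small self-contained sketch of that propagation idea, using a oneshot channel; the writer function and error strings are stand-ins, not the PR's actual APIs.

```rust
use tokio::sync::oneshot;

/// Hypothetical stand-in for the real blob writer; fails here to exercise the path.
async fn write_stream_chunks_to_db() -> Result<(), String> {
    Err("blob insert failed".into())
}

// Sketch: the spawned writer reports its outcome over a oneshot channel, so
// the request path can mark the response as having an incomplete body instead
// of silently succeeding.
#[tokio::main]
async fn main() {
    let (done_tx, done_rx) = oneshot::channel::<Result<(), String>>();

    tokio::spawn(async move {
        let result = write_stream_chunks_to_db().await;
        let _ = done_tx.send(result);
    });

    // Later, when finalizing the response:
    match done_rx.await {
        Ok(Ok(())) => println!("request body fully captured"),
        Ok(Err(e)) => eprintln!("request body capture incomplete: {e}"),
        Err(_) => eprintln!("writer task dropped before reporting"),
    }
}
```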
#### 3. Unbounded Memory in Buffer Accumulation

Location: the chunk buffering loop.

```rust
let mut buffer = Vec::with_capacity(REQUEST_BODY_CHUNK_SIZE);
// ...
buffer.extend_from_slice(&data);
```

While the buffer has an initial capacity, if many small chunks arrive rapidly before reaching the flush threshold, the buffer can grow past the intended chunk size before it is flushed.

Recommendation: Add a maximum buffer size check or flush timeout to prevent memory issues with pathological input patterns (a flush-by-size sketch follows).
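A sketch of the flush-by-size idea; the helper name and signature are hypothetical. It accumulates incoming bytes but never holds more than one chunk's worth before handing a chunk to the writer.

```rust
/// Hypothetical helper: accumulate incoming data but flush every time a full
/// chunk is available, so at most one chunk stays buffered regardless of how
/// the incoming reads are sized.
fn push_and_flush(
    buffer: &mut Vec<u8>,
    data: &[u8],
    chunk_size: usize,
    mut write_chunk: impl FnMut(&[u8]) -> std::io::Result<()>,
) -> std::io::Result<()> {
    buffer.extend_from_slice(data);
    while buffer.len() >= chunk_size {
        // Split off everything beyond one chunk, write the chunk, keep the rest.
        let rest = buffer.split_off(chunk_size);
        write_chunk(buffer.as_slice())?;
        *buffer = rest;
    }
    Ok(())
}
```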
#### 4. Race Condition in Final Size Update

Location: the final size update happens in a background task after streaming completes.

```rust
if let Ok(mut response) = tx.get_http_response(&response_id) {
    response.request_content_length = Some(total_bytes as i32);
    tx.update_http_response_if_id(&response, update_source)?;
}
```

There's a potential race where the UI might read the response before this update completes, showing incomplete size information.

Recommendation: Consider using atomic updates or ensuring this completes before marking the response as closed.

### Performance Considerations
---
## Pull Request Review: Store and Show Request Body in UI

### Summary

This PR implements a significant feature that captures and stores HTTP request bodies in a separate blob database. The implementation is well-architected with proper separation of concerns.

### Code Quality & Architecture ✅

**Strengths:**
**Suggestions:**

### Potential Bugs & Issues

---
## PR Review: Store and show request body in UI

### Overview

This PR adds functionality to capture and display HTTP request bodies in the UI, implementing a robust streaming architecture with blob storage. Overall, this is a well-structured implementation with good attention to performance and user experience.

### Code Quality & Best Practices ✅

**Strengths:**
**Areas for improvement:**

### Potential Bugs & Issues

---
## Pull Request Review: Store and Show Request Body in UI

### Overview

This PR implements functionality to capture, store, and display HTTP request bodies in the UI. The implementation involves significant changes across both the Rust backend (Tauri) and TypeScript frontend, introducing a new blob storage system for chunked body data.

### Code Quality & Architecture

**✅ Strengths**