feat(spv): add checkpoints and start sync from last checkpoint (#165,… #186
base: v0.41-dev
Conversation
…ay#165, dashpay#166)

This commit addresses two related issues:

1. dashpay#166: Add additional checkpoints
   - Added checkpoint at block 2,350,000 (Jan 2025)
   - Created fetch_checkpoints.sh script to generate new checkpoints
   - Script supports both RPC and block explorer APIs
2. dashpay#165: Header sync should always start at the last checkpoint
   - Modified initialization to ALWAYS use the latest checkpoint by default
   - Previously only used checkpoints if start_from_height was set
   - Now falls back to genesis only if no checkpoints exist
   - Skips ~2.35M headers on mainnet initial sync

Benefits:
- Faster initial sync (skip downloading/validating millions of headers)
- Reduced bandwidth usage (~90 MB saved on mainnet)
- Easy checkpoint maintenance via shell script
Walkthrough

Adds a checkpoint-fetching shell script, inserts a mainnet checkpoint at height 2350000, changes startup to always initialize from checkpoints, and updates the CoinJoin derivation path (adds hardened index 4) with corresponding test updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Script as fetch_checkpoints.sh
    participant RPC as RPC Node
    participant Explorer as Block Explorer
    participant Output as Rust Snippet
    User->>Script: run with network + heights (+ optional RPC creds)
    alt RPC creds provided
        Script->>RPC: getblockchaininfo / getblockhash / getblock
        RPC-->>Script: block JSON
    else no RPC creds
        Script->>Explorer: /status -> /block/<height> -> /block/<hash>
        Explorer-->>Script: block JSON
    end
    Script->>Script: format_checkpoint (normalize fields)
    Script->>Output: emit Rust-formatted checkpoint fragment
```

```mermaid
sequenceDiagram
    participant Bootstrap
    participant CheckpointMgr
    participant Chain
    participant Storage
    Bootstrap->>CheckpointMgr: load network checkpoints
    CheckpointMgr->>CheckpointMgr: select checkpoint (last or best <= start_height)
    CheckpointMgr->>Chain: construct BlockHeader from checkpoint
    Chain->>Chain: validate header hash
    alt header valid
        Chain->>Storage: init chain state from checkpoint
        Storage->>Storage: persist checkpoint-derived state
        Storage-->>Bootstrap: ready for sync
    else invalid / none
        Bootstrap->>Bootstrap: fall back to genesis initialization
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (3)
contrib/fetch_checkpoints.sh (1)
40-52: Harden get_current_height against invalid or non-numeric responses

`get_current_height` assumes that `.result.blocks` / `.info.blocks` is always present and numeric. If the RPC/explorer returns `"null"`, an error object, or HTML, `current_height` can become empty or non-numeric, making later numeric comparisons like `[ "$height" -gt "$current_height" ]` fragile and potentially misleading.

It would be safer to validate and fail early when height parsing fails, e.g.:

```sh
get_current_height() {
    if [ -n "$RPC_URL" ] && [ -n "$RPC_USER" ] && [ -n "$RPC_PASS" ]; then
        height=$(curl -s -u "$RPC_USER:$RPC_PASS" \
            -H "Content-Type: application/json" \
            -d '{"jsonrpc":"2.0","id":"checkpoint","method":"getblockchaininfo","params":[]}' \
            "$RPC_URL" | jq -r '.result.blocks // empty')
    else
        height=$(curl -s "$EXPLORER_URL/status?q=getInfo" | jq -r '.info.blocks // empty')
    fi
    if ! printf '%s\n' "$height" | grep -Eq '^[0-9]+$'; then
        echo "Error: Failed to get numeric blockchain height" >&2
        return 1
    fi
    echo "$height"
}
```

This keeps later logic simple and avoids subtle failures when the backend misbehaves.
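As a quick standalone illustration of the validation idiom suggested here, the sketch below runs the same `grep -Eq '^[0-9]+$'` check against a few representative backend responses; the `is_numeric` helper name is ours, not the script's:

```shell
#!/bin/sh
# Hypothetical sketch: reject empty or non-numeric height strings before
# they reach arithmetic comparisons like [ "$height" -gt "$current_height" ].
is_numeric() {
    printf '%s\n' "$1" | grep -Eq '^[0-9]+$'
}

# Representative values: a real height, an empty response, jq's "null",
# and an HTML error page.
for h in 2350000 "" null "<html>"; do
    if is_numeric "$h"; then
        echo "ok: $h"
    else
        echo "rejected: '$h'"
    fi
done
```

Only the first value passes; everything else is rejected before it can poison a later `-gt` comparison.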
dash-spv/src/chain/checkpoints.rs (1)

580-605: Update the ignored masternode-list test to reflect the new highest ML checkpoint

`test_masternode_list_checkpoint` (currently `#[ignore]`) still asserts that the last checkpoint with a masternode list is at height `1900000`, but `mainnet_checkpoints()` now includes a newer ML checkpoint at `2350000`. Even though the test is ignored, it will be misleading for future maintenance and will fail if re-enabled. Consider either:

- Updating the expected height to `2350000`, or
- Making the test assert that `last_checkpoint_having_masternode_list()` returns the max height over all checkpoints with `masternode_list_name.is_some()` (so it doesn't need manual updates when adding future ML checkpoints).

Example minimal change:

```diff
- assert_eq!(ml_checkpoint.expect("Should have ML checkpoint").height, 1900000);
+ assert_eq!(ml_checkpoint.expect("Should have ML checkpoint").height, 2350000);
```

dash-spv/src/client/lifecycle.rs (1)
360-383: Hard-coded header version may prevent checkpoint bootstrapping from ever activating

When constructing the checkpoint-derived `BlockHeader`, the version is hard-coded:

```rust
let checkpoint_header = BlockHeader {
    version: Version::from_consensus(536870912), // 0x20000000
    // ...
};
```

You then immediately verify:

```rust
let calculated_hash = checkpoint_header.block_hash();
if calculated_hash != checkpoint.block_hash {
    tracing::warn!("Checkpoint header hash mismatch...");
    // falls back to genesis path
}
```

If the actual block at `checkpoint.height` used a different version (or if `Target::to_compact_lossy()` doesn't round-trip the original compact bits exactly), this check will fail and you'll silently fall back to genesis, effectively disabling the "start from checkpoint" optimization for that network/height.

To make this robust:

- Ideally, include the real `version` in `Checkpoint` (and in `fetch_checkpoints.sh` output) and use it here instead of a constant.
- Alternatively, if adding the version is not feasible immediately, consider logging a stronger error and/or gating on network so you can detect when checkpoints are misconfigured, rather than silently degrading to genesis.

This will help ensure that the new checkpoint-based bootstrap actually triggers in production rather than quietly falling back.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- contrib/fetch_checkpoints.sh (1 hunks)
- dash-spv/src/chain/checkpoints.rs (1 hunks)
- dash-spv/src/client/lifecycle.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
Applied to files:
- dash-spv/src/client/lifecycle.rs
- dash-spv/src/chain/checkpoints.rs
🧬 Code graph analysis (1)
dash-spv/src/client/lifecycle.rs (2)
dash-spv/src/chain/checkpoints.rs (3)
- mainnet_checkpoints (261-360)
- testnet_checkpoints (363-414)
- new (85-101)

dash-spv/src/storage/disk/manager.rs (1)

- new (79-129)
🔇 Additional comments (3)
dash-spv/src/chain/checkpoints.rs (1)
347-358: New mainnet checkpoint entry looks structurally consistent; please double-check chain data

The added checkpoint for height `2350000` matches the existing pattern (hash/prev hash/timestamp/bits/chain_work/merkle_root/nonce/ML string), and will correctly be picked as the latest checkpoint for mainnet sync.

Given this is hardcoded consensus data, it's worth re-verifying all fields (especially `block_hash`, `prev_blockhash`, `bits`, `chain_work`, and `ML2350000__70232`) against an authoritative source or the output of `contrib/fetch_checkpoints.sh` before release.

dash-spv/src/client/lifecycle.rs (2)
319-337: Checkpoint selection logic aligns with the "start from latest checkpoint" goal

Using network-specific checkpoint lists and defaulting to `last_checkpoint()` (or `best_checkpoint_at_or_before_height(start_from_height)`) nicely enforces "always start from the latest checkpoint when available", with a clean fallback to genesis for networks without checkpoints.

If, in future, you want to respect wallet creation time or other heuristics, you could reuse `CheckpointManager::get_sync_checkpoint` here instead of duplicating selection logic, but the current behavior is consistent with the PR's stated objective.
384-421: Checkpoint-based chain state init and storage wiring look sound; just verify headers_len semantics

The flow of:

- calling `chain_state.init_from_checkpoint(checkpoint.height, checkpoint_header, self.config.network)`,
- cloning `chain_state` for storage,
- persisting via `store_chain_state`, and
- updating the sync manager cache with `update_chain_state_cache(true, checkpoint.height, headers_len)`

is consistent with the intended "virtual base height + no pre-stored headers" model and correctly avoids storing the checkpoint header itself.

Given `headers_len` is derived from `chain_state_for_storage.headers.len()`, please confirm that this matches what `SequentialSyncManager::update_chain_state_cache` expects (e.g., whether it treats this as "number of stored headers" vs. "tip height") so that stats/UI and sync behavior don't get out of sync when bootstrapping from a checkpoint.
```sh
# # Fetch using RPC (requires running Dash Core node)
# RPC_URL=http://localhost:9998 RPC_USER=user RPC_PASS=pass ./fetch_checkpoints.sh mainnet 2400000

set -e
```
🧩 Analysis chain
set -e makes per-height error handling ineffective; failures will abort the script instead of continuing
With `set -e` enabled, a non-zero exit from `get_block_rpc` / `get_block_explorer` inside a command substitution:

```sh
block_data=$(get_block_rpc "$height")
```

will cause the whole script to exit before the subsequent `$?` check, so the `// Error: Failed to fetch block...` / `continue` path is never reached. This can happen e.g. when the explorer/RPC returns invalid JSON or no blockHash.
Consider restructuring the loop so failures are handled inside an `if !` guard (where `set -e` is suppressed) instead of relying on `$?` after command substitution, for example:

```diff
- if [ -n "$RPC_URL" ] && [ -n "$RPC_USER" ] && [ -n "$RPC_PASS" ]; then
-   block_data=$(get_block_rpc "$height")
- else
-   block_data=$(get_block_explorer "$height")
- fi
-
- if [ $? -ne 0 ] || [ -z "$block_data" ]; then
-   echo "// Error: Failed to fetch block at height $height"
-   continue
- fi
+ if [ -n "$RPC_URL" ] && [ -n "$RPC_USER" ] && [ -n "$RPC_PASS" ]; then
+   if ! block_data=$(get_block_rpc "$height"); then
+     echo "// Error: Failed to fetch block at height $height"
+     continue
+   fi
+ else
+   if ! block_data=$(get_block_explorer "$height"); then
+     echo "// Error: Failed to fetch block at height $height"
+     continue
+   fi
+ fi
+
+ if [ -z "$block_data" ]; then
+   echo "// Error: Empty block response at height $height"
+   continue
+ fi
```

This preserves fail-fast behavior for unrecoverable cases while allowing per-height failures to be skipped as intended.
Also applies to: 173-196
I'll verify the review comment by examining the actual file and testing the shell behavior claims.
🏁 Script executed:
```sh
#!/bin/bash
# View the contrib/fetch_checkpoints.sh file to see the actual code
cat contrib/fetch_checkpoints.sh
```

Length of output: 6138
🏁 Script executed:
```sh
#!/bin/bash
# Test set -e behavior with command substitution and $? check
# This will help verify the review comment's claim
cat > /tmp/test_set_e.sh << 'EOF'
#!/bin/sh
set -e
# Test 1: command substitution + $? check under set -e
echo "=== Test 1: set -e with command substitution ==="
output=$(false 2>/dev/null)
result=$?
echo "After 'output=\$(false)', \$? = $result"
echo "Script continued (this line would not print if set -e caused exit)"
# Test 2: In a loop with continue
echo ""
echo "=== Test 2: Loop with command substitution and continue ==="
for i in 1 2 3; do
    output=$([ $i -eq 2 ] && false || echo "value_$i")
    if [ $? -ne 0 ] || [ -z "$output" ]; then
        echo "Skipping iteration $i"
        continue
    fi
    echo "Processed: $output"
done
echo "Loop completed"
EOF

sh /tmp/test_set_e.sh 2>&1 || echo "[Script exited with code: $?]"
```

Length of output: 77
The review comment is accurate and well-founded.
The shell test confirms the core claim: under set -e, when output=$(false) fails inside command substitution, the script exits immediately rather than allowing the subsequent $? check. Your test script never reached the lines after the failing command substitution, exiting with code 1 instead.
In the actual code at lines 186–189, both get_block_rpc() and get_block_explorer() return exit code 1 on errors (e.g., when block hash is null or missing). Under set -e, these failures inside command substitution will abort the entire script, never reaching the continue at line 193. This defeats the intended per-height error recovery.
The proposed restructuring using if ! block_data=$(...) is the correct approach, as it suppresses set -e within the guard and allows proper per-height error handling.
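To complement the failing case above, here is a minimal runnable sketch of the guarded pattern, with a hypothetical `fake_fetch` standing in for `get_block_rpc`/`get_block_explorer`; it shows the loop surviving a per-height failure even under `set -e`:

```shell
#!/bin/sh
set -e

# fake_fetch is a hypothetical stand-in for get_block_rpc/get_block_explorer:
# it fails for height 2 and succeeds otherwise.
fake_fetch() {
    if [ "$1" -eq 2 ]; then
        return 1
    fi
    echo "block_$1"
}

for height in 1 2 3; do
    # The `if !` guard suppresses errexit for the command substitution,
    # so a failed fetch skips this height instead of aborting the script.
    if ! block_data=$(fake_fetch "$height"); then
        echo "// Error: Failed to fetch block at height $height"
        continue
    fi
    echo "Processed: $block_data"
done
echo "Loop completed"
```

Running this reports the failure at height 2 but still processes heights 1 and 3 and reaches "Loop completed".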
🤖 Prompt for AI Agents
In contrib/fetch_checkpoints.sh around lines 186-189 (affecting behavior up to
line 193), failures from get_block_rpc() and get_block_explorer() inside command
substitutions are causing the script to exit due to set -e, preventing the
intended per-height continue; change each risky substitution to a guarded form
like: use if ! var=$(command); then log the error and continue; fi so the
command failure is handled locally and the loop proceeds. Ensure you apply this
pattern to both places where command substitution can fail so the continue at
line 193 is reachable.
…) The CoinJoin derivation path was missing the feature index (4') in account_type.rs, resulting in `m/9'/coin_type'/account'` instead of the correct DIP9 path `m/9'/coin_type'/4'/account'`.

Changes:
- Added feature index 4' to the CoinJoin derivation path in AccountType
- Updated tests to expect the correct path format
- All CoinJoin tests passing (14/14)

Fixes dashpay#176 (derivation path portion)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- key-wallet/src/account/account_type.rs (1 hunks)
- key-wallet/src/tests/account_tests.rs (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
📚 Learning: 2025-06-15T15:31:44.136Z
Learnt from: QuantumExplorer
Repo: dashpay/rust-dashcore PR: 74
File: key-wallet/src/address.rs:30-40
Timestamp: 2025-06-15T15:31:44.136Z
Learning: In the key-wallet crate, QuantumExplorer prefers keeping wildcard arms that default unknown Network variants to testnet prefixes rather than using exhaustive matches or panics. This fallback behavior is intentional.
Applied to files:
key-wallet/src/tests/account_tests.rs
🧬 Code graph analysis (2)
key-wallet/src/tests/account_tests.rs (2)
key-wallet/src/account/account_type.rs (2)
- derivation_path (215-395)
- index (126-155)

key-wallet/src/account/mod.rs (2)

- derivation_path (127-129)
- index (117-119)
key-wallet/src/account/account_type.rs (1)
key-wallet/src/bip32.rs (7)
- from (81-83), from (226-228), from (776-786), from (790-805), from (984-989), from (1201-1203)
- from_hardened_idx (622-630)
🔇 Additional comments (2)
key-wallet/src/tests/account_tests.rs (2)
132-133: Test correctly updated to match implementation.

The test expectations have been properly updated to reflect the new CoinJoin derivation path format `m/9'/1'/4'/index'`. The test is internally consistent with the implementation changes in `account_type.rs`.

However, the underlying concerns about this being a breaking change and potentially out-of-scope for this PR still apply (see comments on `account_type.rs`).
500-505: Test data correctly updated.

The expected derivation path in the uniqueness test has been properly updated to `m/9'/1'/4'/0'`, maintaining consistency with the CoinJoin path changes throughout the codebase.
```diff
 Self::CoinJoin {
     index,
 } => {
-    // m/9'/coin_type'/account'
+    // m/9'/coin_type'/4'/account' (DIP9 Feature 4 = CoinJoin)
     Ok(DerivationPath::from(vec![
         ChildNumber::from_hardened_idx(9).map_err(crate::error::Error::Bip32)?,
         ChildNumber::from_hardened_idx(coin_type)
             .map_err(crate::error::Error::Bip32)?,
+        ChildNumber::from_hardened_idx(4).map_err(crate::error::Error::Bip32)?, // Feature index for CoinJoin
         ChildNumber::from_hardened_idx(*index).map_err(crate::error::Error::Bip32)?,
     ]))
 }
```
🧩 Analysis chain
This change appears unrelated to the PR objectives and introduces a breaking change.
The PR objectives state this PR is about "add checkpoints and start sync from last checkpoint", but this code modifies the CoinJoin derivation path by inserting a new hardened index (4) into the path, changing it from m/9'/coin_type'/index' to m/9'/coin_type'/4'/index'.
Critical concerns:
- Scope inconsistency: This wallet derivation path change is unrelated to checkpoint synchronization. It should likely be in a separate PR with appropriate release notes.
- Breaking change: Existing CoinJoin accounts will have different derivation paths after this change. Users with existing CoinJoin funds at the old path `m/9'/coin_type'/index'` will not be able to access them using the new path. This requires:
  - Migration logic to support both old and new paths
  - Clear communication to users about the change
  - Potentially a wallet upgrade/migration tool
Please verify:
- Is this change intentional for this PR, or was it accidentally included?
- Does DIP9 Feature 4 specification require this path structure?
- What is the migration strategy for existing CoinJoin users?
🌐 Web query:
DIP9 Feature 4 CoinJoin derivation path specification
💡 Result:
Summary — DIP9 Feature 4 (CoinJoin) derivation path
- Feature root per DIP9: m / purpose' / coin_type' / feature' / ... (purpose = 9'). [1]
- CoinJoin is registered as feature index 4'. The CoinJoin account layout used in Dash docs: account 0 external = 0'/0/* and account 0 change = 0'/1/*. Example first Account 0 external key: m/9'/<coin_type>'/4'/0'/0/0. [1][2]
References
- DIP-0009 — Feature Derivation Paths (purpose = 9'). [1]
- Registered Features (Feature 4 = CoinJoin; example path and account subpaths). [2]
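To make the path layout above concrete, the sketch below builds the old and corrected path strings side by side; the coin_type value 5 (Dash mainnet per SLIP-44) and account index 0 are illustrative assumptions, not values taken from this PR:

```shell
#!/bin/sh
# Illustrative values: SLIP-44 registers coin_type 5 for Dash mainnet;
# account index 0 is arbitrary.
coin_type=5
account=0

# Old (incorrect) layout: m / purpose' / coin_type' / account'
old_path="m/9'/${coin_type}'/${account}'"

# DIP9 layout: m / purpose' / coin_type' / feature' / account',
# with feature index 4' registered for CoinJoin.
new_path="m/9'/${coin_type}'/4'/${account}'"

echo "old: $old_path"
echo "new: $new_path"
```

The only difference is the inserted hardened feature index `4'`, which is exactly what the diff in `account_type.rs` adds.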
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for migration-related code for CoinJoin accounts
rg -n -C5 -i "migration|migrate|coinJoin" key-wallet/src/account/ --type=rs
# Search for old derivation path patterns
rg -n "9.*coin_type.*index" key-wallet/src/ --type=rs -C3
# Check for version upgrade or compatibility code
rg -n -i "version|upgrade|compatibility" key-wallet/src/account/ --type=rs -C3
```

Length of output: 80
🏁 Script executed:
```sh
#!/bin/bash
# Check git history to see if there are related changes
git log --oneline -20 -- key-wallet/src/account/account_type.rs

# Check for TODO or FIXME comments related to this change
rg -n "TODO|FIXME" key-wallet/src/account/account_type.rs -C2
```

Length of output: 206
---
🏁 Script executed:
```shell
#!/bin/bash
# Check if this commit is in current branch or already merged
git log --oneline --all | grep "correct CoinJoin derivation path"
# Check for PR #176 references in commits
git log --oneline | head -30
# Look for any migration or backward compatibility handling
git log -p --all -S "CoinJoin" -- key-wallet/src/account/account_type.rs | head -100
```

Length of output: 2114
🏁 Script executed:
```sh
#!/bin/bash
# Search for documentation about the derivation path change
find . -name "*.md" -o -name "*.txt" -o -name "*.rst" | xargs grep -l "CoinJoin\|derivation.*path" 2>/dev/null | head -10

# Search for any account recovery or migration code
fd -t f -e rs | xargs grep -l "recovery\|migration\|backward" 2>/dev/null | head -10
```

Length of output: 638
---
🏁 Script executed:
```shell
#!/bin/bash
# Check README for migration or breaking changes documentation
grep -n -i "coinJoin\|migration\|breaking\|upgrade" ./key-wallet/README.md
# Check implementation summary
grep -n -i "coinJoin\|derivation\|dip9" ./key-wallet/IMPLEMENTATION_SUMMARY.md
# Check TODOS for any related items
grep -n -i "coinJoin\|migration\|derivation" ./key-wallet/TODOS.md ./key-wallet-manager/TODO.md
```

Length of output: 1413
🏁 Script executed:
#!/bin/bash
# Search for any account path recovery or support for both old/new paths
rg -n "hardened_idx.*9" . --type=rust -C3
# Look for any conditional logic handling multiple path versions
rg -n "CoinJoin" . --type=rust -B2 -A2 | head -50
```

Length of output: 9143
DIP9 Fix Verified: CoinJoin derivation path is correct, but migration handling is critical.
The CoinJoin derivation path `m/9'/coin_type'/4'/index'` is correct per the DIP9 specification (Feature 4 = CoinJoin). However, this fix creates a breaking change requiring immediate attention:
Critical concerns:
- No migration code exists: Users with CoinJoin accounts on the old (incorrect) path will lose access to their funds. The codebase contains no backward-compatibility logic or migration helper to bridge old and new paths.
- Example code outdated: `key-wallet/examples/basic_usage.rs` (lines 75-81) shows the old incorrect derivation path and needs updating to reflect the corrected path.
- Migration strategy required: Either implement code to support both paths during transition, or provide clear user-facing documentation and migration tooling.
Actions needed:
- Add migration logic or dual-path support for existing accounts
- Update example code to reflect correct path
- Document migration steps for users with existing CoinJoin accounts
🤖 Prompt for AI Agents
In key-wallet/src/account/account_type.rs around lines 246 to 257 the CoinJoin
derivation path was corrected to m/9'/coin_type'/4'/index' but there is no
migration or backward-compatibility handling; add dual-path support and a
migration flow: implement lookup that tries the new path first then falls back
to the old (pre-fix) path, mark found accounts and provide a one-time migration
routine that re-derives and re-saves accounts to the new path (or exposes a CLI
migration command), update key-wallet/examples/basic_usage.rs (lines ~75-81) to
use the corrected path, and add a clear migration entry in the README and
CHANGELOG describing steps users must take and how to run the migration tool.
…ssing (dashpay#164)

Optimized header synchronization by requesting the next batch of headers immediately after validation, before processing the current batch. This allows network I/O to overlap with CPU-intensive header processing.

Changes:
- Moved the header request from the end of processing to the start
- Network fetch of the next batch now overlaps with validation/storage of the current batch
- Removed the duplicate request to avoid sending multiple getheaders messages

Expected performance improvement:
- Reduces sync time by overlapping network round-trips with processing
- Particularly beneficial for high-latency connections
- No additional memory overhead (same batch sizes)

All header sync tests passing (46/46)

Fixes dashpay#164
Actionable comments posted: 0
🧹 Nitpick comments (1)
dash-spv/src/sync/headers_with_reorg.rs (1)
317-327: Consider the timing of next batch requests relative to validation and storage.

The pipelining optimization requests the next batch before the current batch completes checkpoint validation (lines 336-346) and storage (lines 356-375). If either of these operations fails, the next batch has already been requested, which could lead to:
- Receiving headers that don't connect to the actual persisted tip
- Wasted network bandwidth
- Potential sync state confusion when the pre-requested batch arrives
While this optimization improves sync speed by overlapping I/O with processing, consider whether the early-request strategy could cause issues in error scenarios. Alternatively, you might add tracking to handle/ignore the next batch if the current batch processing fails.
One approach would be to move the next batch request after storage succeeds:
```diff
 // Store Headers in Bulk: Single atomic database operation
 // Extract precomputed hashes from cached headers to avoid redundant X11 in storage
 let precomputed_hashes: Vec<BlockHash> =
     cached_headers.iter().map(|ch| ch.block_hash()).collect();

 // Use the internal storage method if available (DiskStorageManager optimization)
 if let Some(disk_storage) =
     storage.as_any_mut().downcast_mut::<crate::storage::disk::DiskStorageManager>()
 {
     disk_storage
         .store_headers_internal(&headers, Some(&precomputed_hashes))
         .await
         .map_err(|e| SyncError::Storage(format!("Failed to store headers batch: {}", e)))?;
 } else {
     // Fallback to standard store_headers for other storage backends
     storage
         .store_headers(&headers)
         .await
         .map_err(|e| SyncError::Storage(format!("Failed to store headers batch: {}", e)))?;
 }
+
+// Request next batch after successful storage (pipelined for performance)
+if self.syncing_headers && !cached_headers.is_empty() {
+    let last_cached = cached_headers.last().unwrap();
+    let next_hash = last_cached.block_hash();
+    tracing::debug!(
+        "🚀 Requesting next batch starting from {} after storing current batch",
+        next_hash
+    );
+    self.request_headers(network, Some(next_hash)).await?;
+}
```

This maintains much of the pipelining benefit while ensuring the request only goes out after successful validation and storage.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- dash-spv/src/sync/headers_with_reorg.rs (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: DCG-Claude
Repo: dashpay/rust-dashcore PR: 0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
🔇 Additional comments (1)
dash-spv/src/sync/headers_with_reorg.rs (1)
404-406: Clear documentation of the optimization.

The comments effectively explain the pipelining change and help future maintainers understand why the sequential request was removed.
xdustinface left a comment:
Looks good, just a nit and a question.
```sh
done

echo ""
echo "// NOTE: Update masternode_list_name values if the blocks have DML (Deterministic Masternode List)"
```
Is there something missing in the script?
```rust
// OPTIMIZATION (#164): Next batch request moved earlier (line 317-327) for pipelining
// This overlaps network I/O with header processing to improve sync speed
// Original sequential request removed to avoid duplicate requests
```
Probably not needed? And the line numbers will be off soon anyway i guess :)
@ANAVHEOBA thanks for your contributions! Super cool. If you don't mind, can you either email me pasta at dash.org or ping me in #dev-talk on the community discord? I'd like to get to know you a bit more. https://discordapp.com/invite/PXbUxJB Otherwise, would you please open a separate PR with your change in this commit: d70c1ff -- the changes are too logically distinct to be bundled in this PR. LMK if you need help with that. Thanks again! looking forward to chatting with you.
```sh
# Fetch checkpoint data from Dash network
#
# This script fetches block information at specific heights to create checkpoints
# for the rust-dashcore SPV client.
#
# Usage:
#   # Fetch specific heights using block explorer
#   ./fetch_checkpoints.sh mainnet 2400000 2450000
#
#   # Fetch using RPC (requires running Dash Core node)
#   RPC_URL=http://localhost:9998 RPC_USER=user RPC_PASS=pass ./fetch_checkpoints.sh mainnet 2400000
```
If it works; this may be sufficient, however, I would prefer our helper scripting be done in python or some other non-bash language
HashEngineering left a comment:
recommend removing changes related to CoinJoin derivation paths.
```diff
 index,
 } => {
-    // m/9'/coin_type'/account'
+    // m/9'/coin_type'/4'/account' (DIP9 Feature 4 = CoinJoin)
```
I agree that these changes should be in separate PR: d70c1ff.
CoinJoin paths are created in 3 places in the code and this one change will not be sufficient.
It saying the link has expired
That's strange, the link appears to work for me. Maybe try it again or, use https://discord.gg/x8e5ZRJvtg
Just joined
Looking forward to get to know you
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.
… #166)
This commit addresses two related issues:
- #166: Add additional checkpoints
- #165: Header sync should always start at the last checkpoint