2 changes: 1 addition & 1 deletion TEST_SUMMARY.md
@@ -80,7 +80,7 @@ I have successfully implemented comprehensive unit tests for several critical da
- State management and persistence
- Concurrent operations and thread safety
- Edge cases and error scenarios
- Performance considerations
- dash-spc/src considerations
⚠️ Potential issue | 🟡 Minor

Fix typo in the documentation.

The text reads "dash-spc/src considerations" but should likely be "dash-spv/src considerations" to match the project naming convention.

Apply this diff:

-   - dash-spc/src considerations
+   - dash-spv/src considerations
🤖 Prompt for AI Agents
In TEST_SUMMARY.md around line 83, the entry "dash-spc/src considerations" is a
typo; replace "dash-spc/src" with "dash-spv/src" so it matches the project
naming convention, and scan adjacent lines for any other occurrences to update
them for consistency.


2. **Test Quality**: All tests follow best practices:
- Clear test names describing what is being tested
5 changes: 5 additions & 0 deletions dash-spv/Cargo.toml
@@ -59,12 +59,17 @@ hickory-resolver = "0.25"
log = "0.4"

[dev-dependencies]
criterion = { version = "0.8.1", features = ["async_tokio"] }
tempfile = "3.0"
tokio-test = "0.4"
env_logger = "0.10"
hex = "0.4"
test-case = "3.3"

[[bench]]
name = "storage"
harness = false

[[bin]]
name = "dash-spv"
path = "src/main.rs"
93 changes: 93 additions & 0 deletions dash-spv/benches/storage.rs
@@ -0,0 +1,93 @@
use std::time::Duration;

use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use dash_spv::{
    storage::{DiskStorageManager, StorageManager},
    Hash,
};
use dashcore::{block::Version, BlockHash, CompactTarget, Header};
use tempfile::TempDir;
use tokio::runtime::Builder;

fn create_test_header(height: u32) -> Header {
    Header {
        version: Version::from_consensus(1),
        prev_blockhash: BlockHash::all_zeros(),
        merkle_root: dashcore_hashes::sha256d::Hash::all_zeros().into(),
        time: height,
        bits: CompactTarget::from_consensus(0x207fffff),
        nonce: height,
    }
}
Comment on lines +12 to +21

🛠️ Refactor suggestion | 🟠 Major

Extract duplicated test helper to a shared module.

The create_test_header function is identical to the one in dash-spv/tests/segmented_storage_test.rs (lines 14-23). This duplication should be eliminated by extracting it to a shared test utilities module that both the tests and benchmarks can import.

Consider creating a dash-spv/tests/common/mod.rs or similar:

// In dash-spv/tests/common/mod.rs
use dashcore::{block::Version, BlockHash, CompactTarget, Header};

pub fn create_test_header(height: u32) -> Header {
    Header {
        version: Version::from_consensus(1),
        prev_blockhash: BlockHash::all_zeros(),
        merkle_root: dashcore_hashes::sha256d::Hash::all_zeros().into(),
        time: height,
        bits: CompactTarget::from_consensus(0x207fffff),
        nonce: height,
    }
}

Then import it in both files:

// In benches and tests
mod common;
use common::create_test_header;
🤖 Prompt for AI Agents
In dash-spv/benches/storage.rs around lines 12 to 21, the create_test_header
helper is duplicated elsewhere; extract it to a shared test utilities module
(e.g. dash-spv/tests/common/mod.rs) containing the create_test_header function,
then replace the local function with an import: add mod common; and use
common::create_test_header; in both dash-spv/benches/storage.rs and
dash-spv/tests/segmented_storage_test.rs so both files reuse the single
implementation.


fn bench_disk_storage(c: &mut Criterion) {
    const CHUNK_SIZE: u32 = 13_000;
    const NUM_ELEMENTS: u32 = CHUNK_SIZE * 20;

    let rt = Builder::new_multi_thread().worker_threads(4).enable_all().build().unwrap();

    let headers = (0..NUM_ELEMENTS).map(create_test_header).collect::<Vec<Header>>();

    c.bench_function("storage/disk/store", |b| {
        b.to_async(&rt).iter_batched(
            || async {
                DiskStorageManager::new(TempDir::new().unwrap().path().to_path_buf()).await.unwrap()
            },
            |a| async {
                let mut storage = a.await;

                for chunk in headers.chunks(CHUNK_SIZE as usize) {
                    storage.store_headers(chunk).await.unwrap();
                }
            },
            BatchSize::SmallInput,
        )
    });
Comment on lines +31 to +45

⚠️ Potential issue | 🟡 Minor

Benchmark measures initialization overhead along with storage operations.

The storage/disk/store benchmark creates a new TempDir and DiskStorageManager for each iteration (line 34). This means the benchmark is measuring both initialization costs and storage operation costs, which may not provide an accurate measurement of pure storage performance.

If the intent is to benchmark only storage operations, consider setting up the storage manager once outside the benchmark loop, or document that this benchmark intentionally measures initialization + storage.
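A minimal sketch of the first option, building the manager once outside the timed closure (names and calls are taken from the benchmark above; the tokio::sync::Mutex is an assumption of this sketch, present only so the async block can take a mutable borrow through a shared capture — its uncontended lock cost should be negligible next to disk I/O):

// Sketch: construct the storage manager once, outside the timed section,
// so each iteration measures only store_headers. Trade-off: every
// iteration after the first writes into an already-populated store,
// i.e. this measures warm rather than cold writes.
let temp_dir = TempDir::new().unwrap();
let storage = tokio::sync::Mutex::new(rt.block_on(async {
    DiskStorageManager::new(temp_dir.path().to_path_buf()).await.unwrap()
}));

c.bench_function("storage/disk/store", |b| {
    b.to_async(&rt).iter(|| async {
        // Uncontended lock; gives the async block mutable access.
        let mut storage = storage.lock().await;
        for chunk in headers.chunks(CHUNK_SIZE as usize) {
            storage.store_headers(chunk).await.unwrap();
        }
    })
});

If cold-store performance is the quantity of interest, keeping the per-iteration setup and documenting that choice is the better option.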

🤖 Prompt for AI Agents
In dash-spv/benches/storage.rs around lines 31 to 45 the benchmark currently
creates a new TempDir and DiskStorageManager inside the per-iteration setup,
which causes the benchmark to measure initialization overhead together with
storage operations; move the TempDir::new() and DiskStorageManager::new(...)
call out of the per-iteration closure and initialize the storage manager once
before calling c.bench_function so each iteration only runs the store_headers
loop (or, if you intend to measure init+store, add a clear comment above the
benchmark documenting that initialization is measured).


    let temp_dir = TempDir::new().unwrap();

    let mut storage = rt.block_on(async {
        let mut storage = DiskStorageManager::new(temp_dir.path().to_path_buf()).await.unwrap();

        for chunk in headers.chunks(CHUNK_SIZE as usize) {
            storage.store_headers(chunk).await.unwrap();
        }

        storage
    });

    c.bench_function("storage/disk/get", |b| {
        b.to_async(&rt).iter_batched(
            || rand::random::<u32>() % NUM_ELEMENTS,
            async |height| {
                let _ = storage.get_header(height).await.unwrap();
            },
            BatchSize::SmallInput,
        )
    });

    c.bench_function("storage/disk/reverse_index", |b| {
        b.to_async(&rt).iter_batched(
            || {
                let height = rand::random::<u32>() % NUM_ELEMENTS;
                headers[height as usize].block_hash()
            },
            async |hash| {
                let _ = storage.get_header_height_by_hash(&hash).await.unwrap();
            },
            BatchSize::SmallInput,
        )
    });
Comment on lines +59 to +80

⚠️ Potential issue | 🟡 Minor

Use deterministic inputs for reproducible benchmarks.

The benchmarks use rand::random() to generate heights (line 61) and hashes (lines 72-73), making the benchmark inputs non-deterministic. This violates the coding guideline to keep test vectors deterministic and may reduce benchmark reproducibility across runs.

As per coding guidelines, consider using a seeded RNG or a deterministic sequence:

+use rand::{rngs::StdRng, Rng, SeedableRng};
+
 fn bench_disk_storage(c: &mut Criterion) {
     const CHUNK_SIZE: u32 = 13_000;
     const NUM_ELEMENTS: u32 = CHUNK_SIZE * 20;
+    const SEED: u64 = 42;
 
     let rt = Builder::new_multi_thread().worker_threads(4).enable_all().build().unwrap();
 
     let headers = (0..NUM_ELEMENTS).map(create_test_header).collect::<Vec<Header>>();
+    let mut rng = StdRng::seed_from_u64(SEED);
 
     // ... storage setup ...
 
     c.bench_function("storage/disk/get", |b| {
         b.to_async(&rt).iter_batched(
-            || rand::random::<u32>() % NUM_ELEMENTS,
+            || rng.gen::<u32>() % NUM_ELEMENTS,
             async |height| {
                 let _ = storage.get_header(height).await.unwrap();
             },
             BatchSize::SmallInput,
         )
     });

Committable suggestion skipped: line range outside the PR's diff.


    rt.block_on(async {
        storage.shutdown().await;
    });
}

criterion_group!(
    name = disk_storage;
    config = Criterion::default()
        .sample_size(10)
        .warm_up_time(Duration::from_secs(1));
    targets = bench_disk_storage);
criterion_main!(disk_storage);
44 changes: 1 addition & 43 deletions dash-spv/tests/segmented_storage_test.rs
@@ -7,7 +7,7 @@ use dashcore::pow::CompactTarget;
use dashcore::BlockHash;
use dashcore_hashes::Hash;
use std::sync::Arc;
use std::time::{Duration, Instant};
use std::time::Duration;
use tempfile::TempDir;
use tokio::time::sleep;

@@ -383,45 +383,3 @@ async fn test_filter_header_persistence() {
        assert_eq!(loaded[3], create_test_filter_header(50_001));
    }
}

#[tokio::test]
async fn test_performance_improvement() {
    let temp_dir = TempDir::new().unwrap();
    let mut storage = DiskStorageManager::new(temp_dir.path().to_path_buf()).await.unwrap();

    // Store a large number of headers
    let headers: Vec<BlockHeader> = (0..200_000).map(create_test_header).collect();

    let start = Instant::now();
    for chunk in headers.chunks(10_000) {
        storage.store_headers(chunk).await.unwrap();
    }
    let store_time = start.elapsed();

    println!("Stored 200,000 headers in {:?}", store_time);

    // Test random access performance
    let start = Instant::now();
    for _ in 0..1000 {
        let height = rand::random::<u32>() % 200_000;
        let _ = storage.get_header(height).await.unwrap();
    }
    let access_time = start.elapsed();

    println!("1000 random accesses in {:?}", access_time);
    assert!(access_time < Duration::from_secs(1), "Random access should be fast");

    // Test reverse index performance
    let start = Instant::now();
    for _ in 0..1000 {
        let height = rand::random::<u32>() % 200_000;
        let hash = headers[height as usize].block_hash();
        let _ = storage.get_header_height_by_hash(&hash).await.unwrap();
    }
    let lookup_time = start.elapsed();

    println!("1000 hash lookups in {:?}", lookup_time);
    assert!(lookup_time < Duration::from_secs(1), "Hash lookups should be fast");

    storage.shutdown().await;
}