Refactors the randomness service to be able to process pending commitments #303
Conversation
Deploying happychain with Cloudflare Pages

| | |
| --- | --- |
| Latest commit | 1dfe4da |
| Status | ✅ Deploy successful! |
| Preview URL | https://32dfccb3.happychain.pages.dev |
| Branch Preview URL | https://gabriel-pending-commitments.happychain.pages.dev |
HAPPY-257 Create OnNewBlock hook
This hook will allow users to execute code on every new block without having to place that code inside the transaction collector or use another web3 client. In addition, after implementing this hook, we will have to move the randomness service's database-pruning function into it, and mark as expired any randomness whose commitment has been submitted but not revealed before its expiry.
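As a rough illustration, such a hook could be a callback registry driven by the block watcher. Everything below (the hook name, the `Block` shape, the signatures) is an assumption for the sketch, not the actual HappyChain API:

```ts
// Hypothetical sketch of an OnNewBlock hook; names and shapes are assumed.
type Block = { number: bigint; timestamp: bigint }
type OnNewBlockCallback = (block: Block) => Promise<void>

class OnNewBlockHook {
    private readonly callbacks: OnNewBlockCallback[] = []

    register(callback: OnNewBlockCallback): void {
        this.callbacks.push(callback)
    }

    // Invoked by the block watcher for every new block; callbacks run in
    // registration order so ordering-sensitive work stays deterministic.
    async dispatch(block: Block): Promise<void> {
        for (const callback of this.callbacks) {
            await callback(block)
        }
    }
}

// e.g. the randomness service could register its pruning and expiry logic:
// onNewBlock.register(async (block) => randomnessService.pruneExpired(block.number))
```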
HAPPY-255 Process all pending commitments to be revealed, not just the one from the last block
Commitments should have a field indicating the status of the submitted commitment. For example:
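One possible shape for such a status field (a sketch only; these values are assumed, not taken from the issue):

```ts
// Hypothetical status values for a submitted commitment (names assumed).
enum CommitmentStatus {
    Pending = "pending",     // commitment created but not yet submitted on-chain
    Committed = "committed", // commitment submitted, reveal still outstanding
    Revealed = "revealed",   // reveal transaction confirmed
    Expired = "expired",     // reveal window passed without a reveal
}
```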
```ts
async start(): Promise<void> {
    // Load all persisted randomnesses into the in-memory map on startup.
    const randomnessesDb = (await db.selectFrom("randomnesses").selectAll().execute()).map(this.rowToEntity)
    for (const randomness of randomnessesDb) {
        this.map.set(randomness.timestamp, randomness)
    }
}
```
if the db is large, won't this load a huge amount of values into memory? or do I misunderstand?
Yes, that would happen. However, this database is going to be small because we only save the randomness from the last two minutes, so it will contain approximately 60 rows.
for a long-running process, I see the database is pruned, but not this map yet, correct?
This will actually grow pretty large: we need to register commitments 12 hours in advance to make sure the sequencer is unable to reorg the chain before posting the data to DA. We might be able to lower that number.
But even so, for 12 hours' worth of commitments, with 3 commitments per second (assuming the faster future block times) and a memory footprint of 500 bytes per Randomness object (probably too high), we have 12 × 60 × 60 × 3 × 500 = 64,800,000 bytes ≈ 65 MB, so I think this should be quite manageable. That's also ~130k rows, which I think is well within manageable bounds.
I wonder about the performance of pruning, however, since it needs to run the predicate on all randomnesses (both in memory and in the DB).
I don't think the DB will be smart enough to do this on its own, because it would need to maintain a sorted index of the rows and then find the cutoff point based on the condition. Note that we might also need to maintain a sorted list of timestamps in memory to do this (these will need to be block numbers in the future, by the way, as there could be multiple blocks per timestamp).
The alternative is to get a set of timestamps (block numbers) and delete those specifically, or even prune blocks one by one as they expire. Pruning would not happen while we are offline, but we can prune once when we load the DB. See the sketch after the list below.
Let's add issues to:
- change the primary identification of a randomness to be the block number instead of the timestamp
- handle the performance challenges of pruning when there is a large number of randomnesses
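To make the prune-by-block-number alternative concrete, here is a minimal sketch; the store shape and the names (`pruneUpTo`, `cutoffBlock`) are assumptions for illustration, not the service's actual code:

```ts
// Hypothetical sketch: prune expired randomnesses by block number instead of
// running a predicate over every row. Names are illustrative only.

interface Randomness {
    block: bigint
    // ... commitment fields elided
}

class RandomnessStore {
    // Keyed by block number. A JavaScript Map iterates in insertion order,
    // so inserting blocks as they arrive keeps the oldest entries first.
    private readonly map = new Map<bigint, Randomness>()

    set(block: bigint, randomness: Randomness): void {
        this.map.set(block, randomness)
    }

    // Walk from the oldest entry and stop at the first non-expired block,
    // so each call does O(expired) work instead of scanning everything.
    pruneUpTo(cutoffBlock: bigint): bigint[] {
        const pruned: bigint[] = []
        for (const block of this.map.keys()) {
            if (block > cutoffBlock) break
            this.map.delete(block)
            pruned.push(block)
        }
        return pruned
    }
}

// The same cutoff can then be applied to the DB in a single range delete, e.g.
// await db.deleteFrom("randomnesses").where("block", "<=", cutoffBlock).execute()
```

This avoids the full-scan predicate: the in-memory side only touches entries that are actually expired, and the DB side becomes one range delete on the block column.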
I've already updated this to work with blocks instead of timestamps in this PR: #318.
Regarding the performance issues, I created this issue: https://linear.app/happychain/issue/HAPPY-305/handle-the-performance-challenges-of-pruning-when-there-is-a-large
not-reed left a comment:
nice and clear, I like it
norswap left a comment:
The only thing left here is to discuss the DB scenario; I think we figure that out and open an issue to handle it later. Let's merge once that is done.

Linked Issues
Description
Refactors the randomness service to be able to process pending commitments instead of only the last commitment
- `Randomness` entity with clear state transitions and business logic
- `CommitmentManager` with `RandomnessRepository` for better data management
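As a rough sketch of what "clear state transitions" could look like on such an entity (names, fields, and the transition table are assumptions based only on this description, not on the PR diff):

```ts
// Hypothetical sketch of a Randomness entity with explicit state transitions.

type Status = "pending" | "committed" | "revealed" | "expired"

// Legal transitions; anything not listed here is rejected.
const transitions: Record<Status, Status[]> = {
    pending: ["committed"],
    committed: ["revealed", "expired"],
    revealed: [],
    expired: [],
}

class Randomness {
    constructor(
        readonly block: bigint,
        private _status: Status = "pending",
    ) {}

    get status(): Status {
        return this._status
    }

    // Central guard: every state change goes through here, so illegal jumps
    // (e.g. revealing a commitment that was never submitted) throw early.
    private transitionTo(next: Status): void {
        if (!transitions[this._status].includes(next))
            throw new Error(`invalid transition: ${this._status} -> ${next}`)
        this._status = next
    }

    markCommitted(): void { this.transitionTo("committed") }
    markRevealed(): void { this.transitionTo("revealed") }
    markExpired(): void { this.transitionTo("expired") }
}
```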
Checklist
Basics
Correctness
- C1. Builds and passes tests
- C2. The code is properly parameterized & compatible with different environments
- C3. I have manually tested my changes & connected features
  - Local environment: tested commitment and reveal flows, including error scenarios and state transitions
- C4. I have performed a thorough self-review of my code
Architecture & Documentation