Conversation

@GabrielMartinezRodriguez (Contributor) commented Dec 10, 2024

Linked Issues

Description

Refactors the randomness service so it can process all pending commitments instead of only the most recent one

  • Introduces a Randomness entity with clear state transitions and business logic
  • Replaces CommitmentManager with RandomnessRepository for better data management
  • Adds status tracking for both commitment and reveal transactions
  • Implements proper error handling for failed transactions
  • Adds transaction status change hooks to update randomness states
  • Updates database schema to track both commitment and reveal transaction states
  • Adds new Makefile commands for starting the service and running migrations

Checklist

Basics

  • B1. I have applied the proper label & proper branch name
  • B2. This PR is not so big that it should be split & addresses only one concern
  • B3. The PR targets the lowest branch it can (ideally master)

Correctness

  • C1. Builds and passes tests

  • C2. The code is properly parameterized & compatible with different environments

  • C3. I have manually tested my changes & connected features

  • Local environment

  • Tested commitment and reveal flows, including error scenarios and state transitions

  • C4. I have performed a thorough self-review of my code

Architecture & Documentation

  • D1. I made it easy to reason locally about the code
  • D2. All public-facing APIs & meaningful internal APIs are properly documented
  • D3. If appropriate, the general architecture of the code is documented


cloudflare-workers-and-pages bot commented Dec 10, 2024

Deploying happychain with Cloudflare Pages

Latest commit: 1dfe4da
Status: ✅  Deploy successful!
Preview URL: https://32dfccb3.happychain.pages.dev
Branch Preview URL: https://gabriel-pending-commitments.happychain.pages.dev



linear bot commented Dec 10, 2024

HAPPY-257 Create OnNewBlock hook

This hook will allow users to execute code on every new block without having to place that code inside the transaction collector or use another web3 client.

In addition, after implementing this hook, we will have to move the randomness service's database-pruning function into it, and mark as expired any randomness whose commitment was submitted but never revealed before its expiry.
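The hook described in HAPPY-257 could look roughly like this. The names (`BlockHookRegistry`, `onNewBlock`, `dispatch`) are hypothetical, chosen only to illustrate the shape of the API.

```typescript
// Hypothetical sketch of an OnNewBlock hook registry, per HAPPY-257.
// All names here are illustrative assumptions, not the actual API.
type Block = { number: number; timestamp: number }
type BlockHook = (block: Block) => void

class BlockHookRegistry {
    private hooks: BlockHook[] = []

    // Users register code to run on every new block, without touching
    // the transaction collector or spinning up another web3 client.
    onNewBlock(hook: BlockHook): void {
        this.hooks.push(hook)
    }

    // Called by the block watcher whenever a new block arrives.
    dispatch(block: Block): void {
        for (const hook of this.hooks) hook(block)
    }
}
```

With such a registry, the randomness service's pruning and expiry logic would simply be registered as one more hook, e.g. `registry.onNewBlock((block) => pruneExpiredRandomness(block))`.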


linear bot commented Dec 10, 2024

HAPPY-255 Process all pending commitments to be revealed, not just the one from the last block

Commitments should have a field indicating the status of the submitted commitment. For example:

  • Commitment Submitted
  • Commitment Successfully Executed
    ... etc.

@GabrielMartinezRodriguez GabrielMartinezRodriguez marked this pull request as ready for review December 10, 2024 16:15
@GabrielMartinezRodriguez GabrielMartinezRodriguez changed the title from "Refactor randomness service to use repository pattern" to "Refactors the randomness service to be able to process pending commitments" Dec 10, 2024
    async start(): Promise<void> {
        const randomnessesDb = (await db.selectFrom("randomnesses").selectAll().execute()).map(this.rowToEntity)
        for (const randomness of randomnessesDb) {
            this.map.set(randomness.timestamp, randomness)
Contributor
If the db is large, won't this load a huge amount of values into memory? Or do I misunderstand?

Contributor Author
Yes, it would. However, this database will stay small because we only keep the randomness of the last two minutes, so it will contain approximately 60 rows.

Contributor
For a long-running process, I see the database is pruned, but not this map yet, correct?

Collaborator

@norswap norswap commented Jan 14, 2025
This will actually grow pretty large — we need to register commitments 12 hours in advance to make sure the sequencer is unable to reorg the chain before posting the data to DA. We might be able to lower that number.

But even so, for 12 hours worth of commitments, with 3 commitments per second (assuming the faster future block times) and a memory footprint of 500 bytes per Randomness object (probably too high) we have 12*60*60*3*500 bytes = ~64 megabytes, so I think this should be quite manageable. It's ~130k rows, which is also well within manageable boundaries I think.

I wonder about the performance of pruning however, since it needs to run the predicate on all randomnesses (both in memory and in the DB).

I don't think the DB will be smart enough to do this, because it would need to maintain a sorted index of the transactions, then find the cutoff point based on the condition. Note that we might also need to maintain a sorted list of timestamps (this will need to be block numbers in the future btw, as there could be multiple blocks per timestamp) in-memory to do this.

The alternative is to get a set of timestamps (block numbers) and delete these specifically, or even prune blocks one by one on expiry. Pruning would not happen when we are offline, but we can prune once when we load the DB.

Let's add issues to:

  • change the primary identification of a randomness to be the block number instead of the timestamp
  • handle the performance challenges of pruning when there is a large number of randomness
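The keyed-deletion alternative mentioned above could be sketched as follows. This is a sketch under the assumption that entries are keyed by block number and that a fixed expiry window applies; the function name and parameters are illustrative.

```typescript
// Sketch of keyed pruning: collect the expired block numbers and delete
// those keys specifically, instead of running a predicate over every row.
// Names and the expiry policy are illustrative assumptions.
function pruneExpired(
    map: Map<number, unknown>,
    latestBlock: number,
    expiryBlocks: number,
): number[] {
    const cutoff = latestBlock - expiryBlocks
    // Collect keys first so we don't mutate the map while iterating it.
    const expired = [...map.keys()].filter((blockNumber) => blockNumber <= cutoff)
    for (const blockNumber of expired) map.delete(blockNumber)
    // The same keys could then be removed from the DB in a single statement,
    // e.g. a DELETE ... WHERE blockNumber IN (...) over the `expired` list.
    return expired
}
```

Running this once at startup (when loading the DB) also covers the case where the service was offline past some entries' expiry, as suggested above.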

Contributor Author
I've already updated this to work with blocks instead of timestamps in this PR: #318.

Regarding the performance issues, I created this issue: https://linear.app/happychain/issue/HAPPY-305/handle-the-performance-challenges-of-pruning-when-there-is-a-large

@GabrielMartinezRodriguez GabrielMartinezRodriguez added the reviewing-1 Ready for, or undergoing first-line review label Dec 11, 2024
Contributor

@not-reed not-reed left a comment

Nice and clear, I like it.

Collaborator

@norswap norswap left a comment

The only thing left here is to discuss the DB scenario; I think we can figure it out and open an issue to handle it later. Let's merge once that is done.

@norswap norswap added merge-after-changes Can be merged after requested changes are made and removed reviewing-2 Ready for, or undergoing final review labels Jan 31, 2025
@GabrielMartinezRodriguez GabrielMartinezRodriguez mentioned this pull request Feb 3, 2025
@GabrielMartinezRodriguez GabrielMartinezRodriguez merged commit 10ce5f2 into master Feb 4, 2025
3 checks passed
@GabrielMartinezRodriguez GabrielMartinezRodriguez deleted the gabriel/pending-commitments branch February 4, 2025 10:59
This was referenced Feb 12, 2025