Conversation

@real-or-random
Collaborator

Alternative to #16 that just improves the text a bit.

A middle way between this and #16 will be to make the limit a parameter. That is, the algorithms enforce some limit, but we leave the actual value unspecified. The more I think about this, the more I like the idea. It frees us from picking a value, but we still tell implementors how to handle the limit. The only drawback of this approach is that we'll have to pick some value in our tests in order to instantiate the algorithm. I guess, that's fine, and we can really pick whatever we want, even much lower than 2^16-1?!
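Concretely, I mean something like this rough sketch (all names here are illustrative, not draft text):

```python
# Rough sketch, not draft text: the algorithm enforces a limit that
# the spec leaves abstract; only the test suite picks a concrete value.
def aggregate(pms, max_signatures):
    # Enforce the caller-chosen limit before doing any work.
    if len(pms) > max_signatures:
        raise ValueError("number of signatures exceeds the limit")
    # ... actual aggregation as in the current draft ...

# The tests instantiate the parameter with an arbitrary value,
# which could be far below 2^16-1:
TEST_MAX_SIGNATURES = 100
```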

What do you think?

@fjahr
Contributor

fjahr commented Jan 4, 2026

I’ve come back to this particular question a couple of times now and never really arrived at a satisfying answer. I have given it another shot and hope this helps us get to a final version of the draft.

I spent some time looking at related BIPs to see if there are any comparable cases to reference. I found the following two the most relevant:

1. BIP327 doesn’t give a max number of participants in the specification but uses u = 2^32 in code.
2. BIP340 says the following about message size in the specification but has no limit in the reference code: "The signature scheme specified in this BIP accepts byte strings of arbitrary size as input messages. It is understood that implementations may reject messages which are too large in their environment or application context, e.g., messages which exceed predefined buffers or would otherwise cause resource exhaustion."

So, the options discussed so far are:

1. Keep the limit as is (with text improvements)
2. Just remove the limit
3. Parameterize the limit as part of the API

Option 3 seems like a comfortable fix for all problems, but on the other hand it doesn’t feel very “BIP-like”, and it seems odd that implementers would have to pass a constant at every call if they want to follow the spec closely. I also think it raises the chance that implementers misunderstand the importance of this functionality, which could lead to some kind of “consensus” failure in some protocols. Unlikely, but not impossible.

Option 2 suffers from the latter problem as well. It is easy to do, but having only a warning in the text may lead people to miss the importance of agreeing on a limit as part of a protocol, making this a footgun that causes problems for implementations somewhere down the line. Also unlikely, but not impossible.

For the reasons above I (very slightly) favor leaving the spec as is with some improvements (Option 1). In addition to the changes suggested here, I would also suggest resolving the TODO by changing the max value to a much higher number (probably 2^32) that doesn’t have a restrictive effect in the real world. This is similar to what BIP327 does, and I think still having an explicit value to reference is better than having no value at all. The spec should then also explain that the value may be lowered in an implementation if factors like hardware limitations or consensus goals make this necessary, similar to BIP340 and the updated text suggested here.

On the test vector coverage we would lose: it’s a bit unfortunate, but on the flip side it would also not be great to have an explicit test vector for the conservative limit when we expect that future protocol implementations might set their own, higher limit. They could then no longer use those test vectors and would have to document why not all of the test vectors work in their code. So losing those test vectors isn’t a pure downside in my book.

That being said, I may have convinced myself that this is the best option, but only by a hair, and I would be happy to go along with the other suggestions too. But it would be great if we could agree on something and leave this behind :)

cc @jonasnick

@fjahr
Contributor

fjahr commented Jan 10, 2026

> For the reasons above I (very slightly) favor leaving the spec as is with some improvements (Option 1). In addition to the changes suggested here, I would also suggest resolving the TODO by changing the max value to a much higher number (probably 2^32) that doesn’t have a restrictive effect in the real world.

I opened a PR for this just to make my suggestion more explicit: #22

@jonasnick
Contributor

After revisiting the current approach, I am coming to the conclusion that I don't like it. Saying

> Having a maximum value is supposed to prevent integer overflows.

is missing a couple of logical steps to make sense. The maximum by itself doesn't prevent any integer overflows. If an implementation used 16-bit integers, it would definitely run into integer overflows even with the current limit. Similarly, if we increase the limit to 2^32, then 32-bit integers would lead to (potentially catastrophic) overflows in Verify:

> `Let r_i = aggsig[i⋅32:(i+1)⋅32]`
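To make this concrete, here's a hypothetical illustration (plain Python simulating 32-bit unsigned arithmetic; not spec code) of how the slice offsets go wrong:

```python
# Hypothetical illustration (not spec code): simulate the 32-bit
# unsigned computation of the slice offset i*32 from Verify.
U32 = 2**32

def offset_u32(i):
    return (i * 32) % U32   # a uint32 multiplication wraps like this

i = 2**27                   # far below a limit of 2^32 on u
print(offset_u32(i))        # 0  -- wrapped around instead of 2^32
print(offset_u32(i + 1))    # 32 -- so r_i is read from r_0's bytes
```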

I don't have a strong opinion on the right approach, but @real-or-random was pretty successful in convincing me that limits like this don't have a place in purely cryptographic BIPs like this. If I recall correctly, the limit is an artefact of using a language with limited integer types for the specification. However, we can always use (and formally reason about) bigints just as well. Even when the spec mandates a limit, the spec typically doesn't have a test vector for it.

Implementations in the real world will often have some kind of limit. Thus, in order to ensure protocol compliance, specs that make use of half-agg as a primitive should specify a limit. Maybe the best way to communicate this is in fact the parameterized approach that @real-or-random suggests (although I'm not entirely sure what he means). Additionally (shower thought), we could suggest specific tests (e.g., runs with maximum-size arrays) that higher-level specs should provide in the form of test vectors or suggest to implementors.
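Something along these lines (a rough sketch with hypothetical names):

```python
# Rough sketch of a limit test a higher-level spec could suggest.
# MAX_SIGS and aggregate are hypothetical names for that spec's
# instantiated limit and its aggregation function.
def test_limit(aggregate, pms, MAX_SIGS):
    # Aggregating exactly the maximum number of inputs must succeed ...
    assert aggregate(pms[:MAX_SIGS]) is not None
    # ... and a single additional input must be rejected.
    try:
        aggregate(pms[:MAX_SIGS + 1])
        assert False, "limit was not enforced"
    except ValueError:
        pass
```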

@fjahr
Contributor

fjahr commented Jan 15, 2026

> If I recall correctly, the limit is an artefact of using a language with limited integer types for the specification. However, we can always use (and formally reason about) bigints just as well.

Ok, sure, I hadn't considered this as an argument.

> Maybe the best way to communicate this is in fact the parameterized approach that @real-or-random suggests (although I'm not entirely sure what he means).

I am interpreting it as changing the API in the specification like this (adding the max parameter to the function signatures and then checking it against the other params):

```diff
-VerifyAggregate(aggsig, pm_aggd_0..u-1)
+VerifyAggregate(aggsig, pm_aggd_0..u-1, max_sigs)
-IncAggregate(aggsig, pm_aggd_0..v-1, pms_to_agg_0..u-1)
+IncAggregate(aggsig, pm_aggd_0..v-1, pms_to_agg_0..u-1, max_sigs)
-Aggregate(pms_0..u-1)
+Aggregate(pms_0..u-1, max_sigs)
```
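The check inside the algorithms would then presumably look something like this (my guess, not draft text):

```python
# My guess at the corresponding check inside the algorithms; not draft text.
def VerifyAggregate(aggsig, pm_aggd, max_sigs):
    u = len(pm_aggd)
    # Reject inputs exceeding the caller-supplied limit.
    if u > max_sigs:
        return False  # or raise an error; the draft would have to decide
    # ... rest of verification as currently specified ...
```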

But maybe I am misunderstanding it; please let me know if this is what you had in mind @real-or-random. I am still not a huge fan of it because it seems too verbose to pass a constant as a param everywhere, and I think that adds a potential source of confusion. Also, I guess we would still need to give max_sigs a type; would it be a variable-size integer then? But I am happy to make a draft of this if you both prefer it.

> limits like this don't have a place in purely cryptographic BIPs like this

Taking the bigint argument above into account, I think I favor just removing the limit from the spec altogether and having strong hints in the text, i.e. #16.
