
Conversation

@saagarjha
Contributor

It is very easy to convert a constexpr into a tensor but difficult to go in the reverse direction, and the current behavior makes certain patterns hard to support without ceremony. While this is technically a compatibility break, I expect it to be hit rarely, because many operations (like assignment) decay constexprs to tensors anyway.
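
For illustration, here is a minimal sketch of the asymmetry (a hypothetical kernel, not code from this PR): constexpr-to-tensor happens almost incidentally through ordinary operations, while the reverse direction is not expressible at all.

import triton
import triton.language as tl

@triton.jit
def decay_example(out_ptr):
    # constexpr -> tensor is easy: arithmetic with a tensor operand
    # decays the constexpr into a tensor value.
    c: tl.constexpr = 3
    t = c + tl.zeros((4,), dtype=tl.int32)
    # tensor -> constexpr has no counterpart: the value of t exists only
    # at runtime, so it can never become a compile-time constant again.
    tl.store(out_ptr + tl.arange(0, 4), t)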

New contributor declaration

  • I am not making a trivial change, such as fixing a typo in a comment.

  • I have written a PR description following these
    rules.

  • I have run pre-commit run --from-ref origin/main --to-ref HEAD.

  • Select one of the following.

    • I have added tests.
      • /test for lit tests
      • /unittest for C++ tests
      • /python/test for end-to-end tests
    • This PR does not need a test because FILL THIS IN.
  • Select one of the following.

    • I have not added any lit tests.
    • The lit tests I have added follow these best practices,
      including the "tests should be minimal" section. (Usually running Python code
      and using the instructions it generates is not minimal.)

@saagarjha saagarjha requested a review from ptillet as a code owner November 17, 2025 23:00
@lezcano lezcano requested a review from peterbell10 November 18, 2025 01:32
@saagarjha
Contributor Author

(The test failures are intentional, by the way; I could have fixed them, but I wanted to show what would break. Consider it an RFC; my view is that it's fine to require explicit tensor conversion in cases like these.)

@peterbell10
Contributor

So you would have to spell out:

@triton.jit
def compare(x, y):
    if x < y:
        return tl.to_tensor(-1)
    elif x == y:
        return tl.to_tensor(0)

tbh that does look quite clunky.

I wonder if it would be reasonable to take all return statements and try to find a common type. In this case we would see the constexpr mismatch and implicitly promote to tensor.
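
Concretely, if that unification were in place, the example above could presumably be written without the ceremony (hypothetical sketch):

@triton.jit
def compare(x, y):
    # The constexpr literals disagree across the return paths, so the
    # common type would be a tensor and the promotion would be inserted
    # automatically.
    if x < y:
        return -1
    elif x == y:
        return 0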

@saagarjha
Contributor Author

Sure, that seems reasonable to me. If you do mean to use a constexpr and accidentally hit this, you will usually find out very quickly that the promotion happened and can go back and fix it.

@saagarjha
Contributor Author

OK, so the poison test is actually interesting. With the logic from above (which I have implemented as widening), most cases work, but in the test that checks for poison the only return statement we see is a constexpr on a dead path, so we take that as the function's return type. Even though the path is dynamically unreachable, we "know" the return type and can "inflate" a constexpr from the type alone. I think this is somewhat analogous to UB in C++ that can cause dead code to appear to run. So I am leaning towards "this test only really makes sense if the return type is explicitly a tensor".
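
Roughly the shape of the situation, as a hypothetical reconstruction (not the actual test):

@triton.jit
def poison_example(x):
    # The only return statement yields a constexpr literal. If this
    # branch is never taken at runtime, the fall-through path produces a
    # poison return value, yet the function's return type is still
    # inferred from this dead constexpr return.
    if x > 10:
        return 1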

@FindDefinition

A workaround is to store the constexpr value in an aggregate object. Also, we don't need the inline constexpr annotation a: tl.constexpr = some_jit_constexpr_fn() if we use an aggregate object to store the constexpr.

@peterbell10
Contributor

I agree; the purpose of poison is to allow LLVM to assume we don't hit that code path, just like UB in C++. So I think it's fine to return a constexpr if it's the only valid code path. For the purposes of the test, you can change x to be a tensor so we still exercise the poison return path.
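
That is, something along these lines (hypothetical sketch of the suggested change):

@triton.jit
def poison_example(x):
    # Returning a tensor expression instead of a constexpr literal keeps
    # the inferred return type a tensor, while the fall-through path
    # still exercises the poison return.
    if x > 10:
        return x + 1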

@saagarjha
Contributor Author

@FindDefinition right, this is what we are doing currently: https://github.com/sublinear-systems/triton-utils/blob/5bc103b715affd3607aa6c328653020830e971d1/utils.py#L66. We'd like to not :)

@peterbell10 Unfortunately, unification is actually kind of complicated. Figuring out the return types is not trivial: you have to visit the return statements and look for a difference, by which point you may already have emitted constexpr returns. I tried walking the function again with a specialized visitor to find them, but then you need to evaluate the return expressions, which works in the simple cases I was trying but in general could be any arbitrary expression.
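
As a toy illustration outside of Triton's code generator (hypothetical names throughout): an AST pass can find the return statements easily enough, but it only sees syntax; the type of each return value is only knowable by evaluating the expression, which can be arbitrary code.

import ast
import inspect
import textwrap

def collect_return_exprs(fn):
    # Collect the source text of every return expression in fn. This is
    # all a purely syntactic pass can give you; it says nothing about
    # whether each expression is a constexpr or a tensor.
    tree = ast.parse(textwrap.dedent(inspect.getsource(fn)))
    return [ast.unparse(node.value)
            for node in ast.walk(tree)
            if isinstance(node, ast.Return) and node.value is not None]

def example(x, y):
    if x < y:
        return -1
    return some_helper(x)  # placeholder call; its type is unknown statically

print(collect_return_exprs(example))  # the return expressions, as text only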

peterbell10 added a commit that referenced this pull request Nov 20, 2025
Closes #8749

Instead of eagerly promoting constexpr returns to tensor, this
now defers the decision until we have processed the entire function and
can look at all return values.

If all return paths return the same constexpr value, we return a
constexpr, but if the values differ we promote to tensor automatically.
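
In user-facing terms, the behavior described above would look roughly like this (sketch with hypothetical function names):

@triton.jit
def same_value(cond):
    # Every return path yields the same constexpr value, so the result
    # stays a constexpr.
    if cond:
        return 4
    return 4

@triton.jit
def differing_values(cond):
    # The constexpr values differ across the paths, so the result is
    # promoted to a tensor automatically.
    if cond:
        return 1
    return 0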