Conversation

@pytorchbot
Collaborator

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #15617 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/353/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/353/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/352/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/353/orig
Differential Revision: D86340340
@diff-train-skip-merge

ssjia added 2 commits November 5, 2025 15:08
…p debug mode usage

Pull Request resolved: #15616

Title says it all!
ghstack-source-id: 321218516
@exported-using-ghexport

Differential Revision: [D86340342](https://our.internmc.facebook.com/intern/diff/D86340342/)
Pull Request resolved: #15617

## Context

The SDPA custom op accepts the `input_pos` (i.e., cache position) argument as a symbolic integer. The value of the symbolic integer is obtained by selecting the first element of a cache position input tensor and converting it to a symint via `local_scalar_dense`.
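For reference, the pattern looks roughly like this in eager mode (a minimal sketch with illustrative names; at export time the indexing typically lowers to `aten.select.int` and `.item()` to `aten._local_scalar_dense`):

```python
import torch

# Minimal sketch of the pattern (illustrative names). At export time,
# cache_pos[0] typically lowers to aten.select.int and .item() to
# aten._local_scalar_dense, yielding a SymInt for input_pos.
def get_input_pos(cache_pos: torch.Tensor) -> int:
    return cache_pos[0].item()

cache_pos = torch.tensor([7], dtype=torch.int64)
print(get_input_pos(cache_pos))  # 7
```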

Currently, ET-VK handles this in a hacky manner:

1. The `select` + `local_scalar_dense` op pattern is removed, and the cache position tensor is passed directly into the custom SDPA ops.
2. Single-element tensors whose users are all `select` + `local_scalar_dense` are interpreted as symints instead of tensors.

Unfortunately, this technique does not work for the HuggingFace implementation of transformer models, since the cache position input tensor does not hold just a single element; it is expected to be a vector of integers, one for each cache position that will be updated.
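Concretely, a HuggingFace-style forward pass supplies cache positions along these lines (a hedged sketch; variable names are illustrative):

```python
import torch

# HuggingFace-style cache positions: one entry per token written to
# the KV cache this step, rather than a single element.
past_len, seq_len = 16, 4
cache_pos = torch.arange(past_len, past_len + seq_len)
print(cache_pos)  # tensor([16, 17, 18, 19])

# The scalar position still comes from the first element, so the
# select + local_scalar_dense pattern remains in the graph:
input_pos = cache_pos[0].item()  # 16
```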

## Changes

Introduce a custom op that captures the `select` + `local_scalar_dense` op pattern, which is the proper way to handle it.
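A minimal sketch of such a fusion pass is shown below. The op name `my_ns.select_as_symint` and the pass structure are illustrative assumptions, not the names actually added by this PR; only the op schema is defined here, so the rewritten graph would still need a backend implementation to execute.

```python
import torch
from torch.fx import GraphModule, Node
from torch.library import Library

# Define a schema for a hypothetical custom op standing in for the one
# added by this PR (illustrative namespace and name).
_lib = Library("my_ns", "DEF")
_lib.define("select_as_symint(Tensor self, int dim, int index) -> Scalar")

def fuse_select_local_scalar_dense(gm: GraphModule) -> GraphModule:
    # Find _local_scalar_dense nodes fed by select.int and replace the
    # pair with a single call to the custom op.
    for node in list(gm.graph.nodes):
        if node.target != torch.ops.aten._local_scalar_dense.default:
            continue
        (sel,) = node.args
        if not (
            isinstance(sel, Node)
            and sel.op == "call_function"
            and sel.target == torch.ops.aten.select.int
        ):
            continue
        src, dim, idx = sel.args
        with gm.graph.inserting_before(node):
            fused = gm.graph.call_function(
                torch.ops.my_ns.select_as_symint.default, (src, dim, idx)
            )
        node.replace_all_uses_with(fused)
        gm.graph.erase_node(node)
        if not sel.users:
            gm.graph.erase_node(sel)
    gm.graph.eliminate_dead_code()
    gm.recompile()
    return gm
```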

Note that a custom op is needed because it must access the staging buffer data of the input tensor, whereas `select` would typically be executed via a compute shader. This is because the `input_pos` value is needed to configure the sizes of the attention weight tensors participating in the custom SDPA op, so the value must be set before any command buffers are dispatched.

As a consequence of this change, the previous handling of `select` + `local_scalar_dense` can also be removed.
ghstack-source-id: 321218518
@exported-using-ghexport

Differential Revision: [D86340340](https://our.internmc.facebook.com/intern/diff/D86340340/)
@pytorchbot pytorchbot requested a review from SS-JIA as a code owner November 6, 2025 17:41
@pytorch-bot

pytorch-bot bot commented Nov 6, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15644

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (6 Unrelated Failures)

As of commit 2568629 with merge base 2b02316:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Nov 6, 2025
Base automatically changed from gh/SS-JIA/352/orig to main November 6, 2025 19:24
@SS-JIA SS-JIA merged commit e938fea into main Nov 6, 2025
154 of 166 checks passed
@SS-JIA SS-JIA deleted the gh/SS-JIA/353/orig branch November 6, 2025 19:25
abhinaykukkadapu pushed a commit to abhinaykukkadapu/executorch that referenced this pull request Nov 6, 2025