Commit d408343
[main][bugfix] Change seq_lens in dummy attn_metadata to max_query_len (vllm-project#4097)
### What this PR does / why we need it?
Currently, we set `seq_lens` in the dummy attn_metadata to `max_model_len` to obtain the maximum attention workspace during capturing. However, keeping it fixed at `max_model_len` causes dummy_run to execute a long attention during actual inference. For example, if there is a single request with `seq_lens` of [8] but `max_model_len` is 131072, the whole process is slowed down by dummy_run because it executes a fake long-sequence attention. Therefore, we instead set it to `max_query_len`, which is also consistent with the vLLM GPU implementation.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main: vllm-project/vllm@83f478b

---------
Signed-off-by: Angazenn <[email protected]>
Signed-off-by: luolun <[email protected]>
1 parent 9e5b118 commit d408343

File tree

1 file changed (+1 −1)

vllm_ascend/worker/model_runner_v1.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -2824,7 +2824,7 @@ def _build_dummy_attn_metadata(

         attn_metadata = {}

-        seq_lens = self.model_config.max_model_len
+        seq_lens = max_query_len
         self.seq_lens_np[:num_reqs] = seq_lens
         self.seq_lens_np[num_reqs:] = 0
```
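The effect of the change can be sketched in isolation: the preallocated `seq_lens_np` buffer has its active slots filled with a padding value and the rest zeroed, and the fix swaps that padding value from `max_model_len` to `max_query_len`. This is a minimal standalone illustration, not the actual runner code; the buffer size and the concrete values of `num_reqs` and `max_query_len` below are assumptions for the example.

```python
import numpy as np

# Assumed sizes for illustration only; in model_runner_v1.py these come
# from the runner's configuration.
max_num_reqs = 8       # capacity of the preallocated seq-lens buffer
num_reqs = 3           # number of dummy requests in this dummy_run
max_query_len = 16     # new padding value (previously max_model_len, e.g. 131072)

seq_lens_np = np.zeros(max_num_reqs, dtype=np.int32)

# Mirrors the patched lines: active request slots get max_query_len,
# unused trailing slots are zeroed so no attention runs over them.
seq_lens = max_query_len
seq_lens_np[:num_reqs] = seq_lens
seq_lens_np[num_reqs:] = 0

print(seq_lens_np.tolist())  # [16, 16, 16, 0, 0, 0, 0, 0]
```

With the old behavior, each active slot would instead hold `max_model_len` (e.g. 131072), making the dummy attention kernel iterate over a fake 131072-token sequence on every dummy_run.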