Commit 824ed19

[Doc] add qwen3 reranker tutorial
Signed-off-by: wangyongjun <wangyongjun7@huawei.com>
1 parent 4750d45 commit 824ed19

File tree

2 files changed: +170 -0 lines changed

docs/source/tutorials/index.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -7,6 +7,7 @@ single_npu
 single_npu_multimodal
 single_npu_audio
 single_npu_qwen3_embedding
+single_npu_qwen3_reranker
 single_npu_qwen3_quantization
 multi_npu_qwen3_next
 multi_npu
```

docs/source/tutorials/single_npu_qwen3_reranker.md

Lines changed: 169 additions & 0 deletions
# Single NPU (Qwen3-Reranker-8B)

The Qwen3 Reranker model series is the latest generation of the Qwen family, specifically designed for text ranking tasks. Building upon the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This guide describes how to run the model with vLLM Ascend. Note that only vLLM Ascend v0.9.2rc1 and later support this model.

## Run Docker Container

Using the Qwen3-Reranker-8B model as an example, first run the docker container with the following command:

```bash
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
    --name vllm-ascend \
    --shm-size=1g \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -v /root/.cache:/root/.cache \
    -p 8000:8000 \
    -it $IMAGE bash
```
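
Once inside the container, you can optionally confirm that the NPU device and the driver mounts above are working before continuing:

```bash
# Should report the driver version and list the mounted NPU (davinci0)
npu-smi info
```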

Set up environment variables:

```bash
# Load the model from ModelScope to speed up the download
export VLLM_USE_MODELSCOPE=True

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference

```bash
vllm serve Qwen/Qwen3-Reranker-8B --task score --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
```
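
Loading the 8B weights can take a few minutes. One way to wait until the server is up, using vLLM's health endpoint, is:

```bash
# Poll until the OpenAI-compatible server reports healthy
until curl -sf http://127.0.0.1:8000/health > /dev/null; do
    sleep 5
done
echo "server is ready"
```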

Once the server is started, you can send requests as in the following examples.
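
For a quick smoke test, you can also call the rerank endpoint directly with `curl`. This minimal sketch skips the query and document templates used in the full example below, so its scores will be less reliable:

```bash
curl -s http://127.0.0.1:8000/v1/rerank \
    -H "Content-Type: application/json" \
    -d '{
          "model": "Qwen/Qwen3-Reranker-8B",
          "query": "What is the capital of China?",
          "documents": ["The capital of China is Beijing."]
        }'
```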

### requests demo: formatting the query and documents
```python
import requests

url = "http://127.0.0.1:8000/v1/rerank"
MODEL_NAME = "Qwen/Qwen3-Reranker-8B"

# Use the query_template and document_template to format the query and
# documents; this gives better reranking results.

prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"

query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
document_template = "<Document>: {doc}{suffix}"

instruction = (
    "Given a web search query, retrieve relevant passages that answer the query"
)

query = "What is the capital of China?"

documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

documents = [
    document_template.format(doc=doc, suffix=suffix) for doc in documents
]

response = requests.post(
    url,
    json={
        "model": MODEL_NAME,
        "query": query_template.format(prefix=prefix, instruction=instruction, query=query),
        "documents": documents,
    },
).json()

print(response)
```

If you run this script successfully, you will see the full rerank response printed to the console, similar to this:

```bash
{'id': 'rerank-e856a17c954047a3a40f73d5ec43dbc6', 'model': 'Qwen/Qwen3-Reranker-8B', 'usage': {'total_tokens': 193}, 'results': [{'index': 0, 'document': {'text': '<Document>: The capital of China is Beijing.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n', 'multi_modal': None}, 'relevance_score': 0.9944348335266113}, {'index': 1, 'document': {'text': '<Document>: Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n', 'multi_modal': None}, 'relevance_score': 6.700084327349032e-07}]}
```
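
Given the response structure above, ranking the documents by score is straightforward; the field names below are taken from the sample output:

```python
# Sort results by relevance_score, highest first
ranked = sorted(response["results"], key=lambda r: r["relevance_score"], reverse=True)
for result in ranked:
    text = result["document"]["text"]
    print(f'{result["relevance_score"]:.6f}  {text[:60]}')
```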

### Offline Inference
```python
from vllm import LLM

model_name = "Qwen/Qwen3-Reranker-8B"

# What is the difference between the official original version and one
# that has been converted into a sequence classification model?
# Qwen3-Reranker is a language model that performs reranking using the
# logits of the "no" and "yes" tokens.
# The original version has to compute logits over all 151,669 vocabulary
# tokens, which makes this method extremely inefficient, not to mention
# incompatible with the vLLM score API.
# A method for converting the original model into a sequence classification
# model was proposed. See: https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3
# Models converted offline using this method are not only more efficient
# and compatible with the vLLM score API, but also allow more concise init
# parameters, for example:
# model = LLM(model="Qwen/Qwen3-Reranker-8B", task="score")

# If you want to load the official original version, the init parameters are
# as follows.

model = LLM(
    model=model_name,
    task="score",
    hf_overrides={
        "architectures": ["Qwen3ForSequenceClassification"],
        "classifier_from_token": ["no", "yes"],
        "is_original_qwen3_reranker": True,
    },
)

# Why hf_overrides are needed for the official original version:
# vLLM converts it to Qwen3ForSequenceClassification at load time for
# better performance.
# - First, `"architectures": ["Qwen3ForSequenceClassification"]` manually
#   routes the model to Qwen3ForSequenceClassification.
# - Then, `"classifier_from_token": ["no", "yes"]` extracts the vectors
#   corresponding to the "no" and "yes" tokens from lm_head.
# - Third, these two vectors are converted into a single classifier vector;
#   this conversion logic is enabled by `"is_original_qwen3_reranker": True`.

# Use the query_template and document_template to format the query and
# documents; this gives better reranking results.

prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"

query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
document_template = "<Document>: {doc}{suffix}"

if __name__ == "__main__":
    instruction = (
        "Given a web search query, retrieve relevant passages that answer the query"
    )

    query = "What is the capital of China?"

    documents = [
        "The capital of China is Beijing.",
        "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
    ]

    documents = [document_template.format(doc=doc, suffix=suffix) for doc in documents]

    outputs = model.score(
        query_template.format(prefix=prefix, instruction=instruction, query=query),
        documents,
    )

    print([output.outputs[0].score for output in outputs])
```

If you run this script successfully, you will see a list of scores printed to the console, similar to this:

```bash
[0.9943699240684509, 6.876250040477316e-07]
```
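
As the comments in the script note, a checkpoint that has already been converted to a sequence classification model (see the linked discussion) can be loaded without any `hf_overrides`. A minimal sketch, assuming a hypothetical local directory `./Qwen3-Reranker-8B-seq-cls` holding such a converted checkpoint:

```python
from vllm import LLM

# Hypothetical path to an offline-converted checkpoint; conversion per
# https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3
model = LLM(model="./Qwen3-Reranker-8B-seq-cls", task="score")

# The same query/document templates as above still apply for best results.
outputs = model.score("What is the capital of China?",
                      ["The capital of China is Beijing."])
```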
