
@vlserov commented Oct 28, 2025

Motivation

Integrate LoRA kernels into SGLang to support LoRA adapters on Ascend devices.

Modifications

Integrated a LoRA backend that uses vLLM kernels to perform the SGMV operations instead of the Triton implementation.
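
For context, SGMV (segmented grouped matrix-vector multiplication) applies a per-segment LoRA weight to a packed batch, so requests using different adapters can be served together. The review summary below names the NPU kernels (torch.ops.npu.sgmv_shrink and torch.ops.npu.sgmv_expand); the snippet here is only a PyTorch reference for the semantics of the shrink step, with illustrative argument names and shapes, not the actual kernel API:

```python
import torch

def sgmv_shrink_reference(
    x: torch.Tensor,               # (total_tokens, hidden_dim), packed batch
    lora_a: torch.Tensor,          # (num_adapters, rank, hidden_dim), stacked A matrices
    seg_indptr: torch.Tensor,      # (num_segments + 1,), token offset of each segment
    weight_indices: torch.Tensor,  # (num_segments,), adapter id per segment
) -> torch.Tensor:
    """Reference semantics of the SGMV 'shrink' step: project each segment
    of the packed batch down to its adapter's LoRA rank. The NPU kernel
    fuses this work; the Python loop is for illustration only."""
    out = x.new_zeros(x.shape[0], lora_a.shape[1])
    for s in range(weight_indices.numel()):
        start, end = seg_indptr[s].item(), seg_indptr[s + 1].item()
        out[start:end] = x[start:end] @ lora_a[weight_indices[s]].T
    return out
```

The matching "expand" step multiplies the rank-sized activations by each adapter's B matrix and accumulates the result into the base model's output.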

Accuracy Tests

| No LoRA enabled | Qwen3-32B | Qwen3-30B-A3B |
| --- | --- | --- |
| Accuracy | 0.848 | 0.904 |
| Latency | 270.254 s | 148.327 s |
| Throughput | 1303.601 t/s | 1027.614 t/s |

| LoRA enabled | Qwen3-32B | Qwen3-30B-A3B |
| --- | --- | --- |
| Accuracy | 0.852 | 0.889 |
| Latency | 357.629 s | 196.121 s |
| Throughput | 1003.687 t/s | 791.195 t/s |

Benchmarking and Profiling

The LoRA adapter must match the served model. For example, the following adapters can be used with Qwen3-32B:

  • flyfishxu/DeepNews-LoRA-Qwen3-32B
  • nicoboss/Qwen3-32B-Uncensored-Lora

```bash
python3 -m sglang.launch_server --model-path Qwen3-32B \
    --enable-lora \
    --lora-paths lora0=flyfishxu/DeepNews-LoRA-Qwen3-32B \
    --max-loras-per-batch 1
```
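
A request can then select the adapter by the name registered via --lora-paths (lora0 above). This is a minimal sketch against SGLang's native /generate endpoint, assuming the server above is listening on the default port 30000:

```python
# Minimal sketch: select the adapter registered as "lora0" above.
# Assumes the default server port 30000.
import requests

resp = requests.post(
    "http://127.0.0.1:30000/generate",
    json={
        "text": "Summarize today's top headline.",
        "lora_path": "lora0",  # adapter name from --lora-paths
        "sampling_params": {"max_new_tokens": 64},
    },
)
print(resp.json())
```

Per the review summary below, 'ascend' was also added to LORA_BACKEND_CHOICES, so the new backend should be selectable through the corresponding server argument.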

Checklist

@gemini-code-assist (Contributor)
Summary of Changes

Hello @vlserov, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the hardware compatibility of the SGLang framework by introducing native LoRA support for Ascend devices. It integrates a dedicated Ascend LoRA backend that leverages specialized NPU kernels for optimized performance. The changes also include a structural improvement to how LoRA backends are registered and managed, enhancing the system's extensibility. This enables users to efficiently fine-tune and deploy models with LoRA adapters on Ascend hardware, broadening the framework's utility across different computing environments.

Highlights

  • Ascend LoRA Backend Integration: A new LoRA backend, AscendLoRABackend, has been added to support LoRA adapters on Ascend devices. This backend utilizes NPU-specific kernels (torch.ops.npu.sgmv_shrink and torch.ops.npu.sgmv_expand) for efficient matrix operations in run_lora_a_sgemm, run_lora_b_sgemm, run_qkv_lora, and run_gate_up_lora.
  • LoRA Backend Registry Refactoring: The mechanism for managing LoRA backends has been refactored. A new file lora_registry.py introduces a centralized registry (LORA_SUPPORTED_BACKENDS) and a decorator (register_lora_backend) for registering different LoRA backends; a minimal sketch of this pattern appears after this list. The get_backend_from_name function has been moved to this new registry, and the old function in base_backend.py has been removed.
  • Enhanced LoRA Layer Compatibility: Modifications in lora/layers.py ensure that LoRALinear layers correctly expose their base layer's weight attribute. Additionally, LoRAQKVLinear now pre-computes output_offset_cpu and passes it to the run_qkv_lora method, which is crucial for the new Ascend backend's operations.
  • Flexible CUDA Graph Initialization: The init_cuda_graph_batch_info method in lora_manager.py and cuda_graph_runner.py has been updated to accept a device argument. This change allows for more flexible and device-agnostic initialization of CUDA graph batch information, accommodating different hardware platforms like Ascend.
  • Updated LoRA Backend Choices and NPU Kernel: The LORA_BACKEND_CHOICES in server_args.py now includes 'ascend', making the new backend selectable by users. The SGL_KERNEL_NPU_TAG in the CI/CD script npu_ci_install_dependency.sh has been updated to a newer version ('20251023'), indicating a dependency update for NPU kernels.
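
As a reference for the registry refactoring described in the second highlight, here is a minimal sketch of the decorator-based pattern. The names (LORA_SUPPORTED_BACKENDS, register_lora_backend, get_backend_from_name, AscendLoRABackend) follow the summary above, but the exact contents of lora_registry.py are an assumption:

```python
# Minimal sketch of the registry pattern summarized above; the actual
# lora_registry.py may differ in details.
LORA_SUPPORTED_BACKENDS: dict[str, type] = {}

def register_lora_backend(name: str):
    """Class decorator that maps a backend name to its implementation."""
    def decorator(cls):
        LORA_SUPPORTED_BACKENDS[name] = cls
        return cls
    return decorator

def get_backend_from_name(name: str) -> type:
    """Resolve a backend class from its registered name (e.g. 'ascend')."""
    if name not in LORA_SUPPORTED_BACKENDS:
        raise ValueError(f"Unsupported LoRA backend: {name}")
    return LORA_SUPPORTED_BACKENDS[name]

@register_lora_backend("ascend")
class AscendLoRABackend:
    ...  # NPU-specific run_lora_a_sgemm / run_lora_b_sgemm, etc.
```

New backends become selectable simply by registering under a new name, which is what exposes 'ascend' in LORA_BACKEND_CHOICES.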

gemini-code-assist bot left a comment
Code Review

This pull request introduces a new LoRA backend for Ascend NPUs, which is a valuable addition. The refactoring of the backend selection mechanism into a registry pattern is a clean and extensible design choice. My review primarily focuses on the new Ascend backend implementation. I have identified a critical bug in the run_gate_up_lora method that needs to be addressed. Additionally, I've provided several suggestions for improving memory efficiency and ensuring consistency within the new backend code.

@ping1jing2 marked this pull request as draft October 28, 2025
@vlserov changed the title from "LoRA: adding Ascend LoRA backend with using kernels from sgl_kernel_npu" to "[Ascend] LoRA: adding Ascend LoRA backend with using kernels from sgl_kernel_npu" Oct 29, 2025
@vlserov marked this pull request as ready for review October 29, 2025