
Conversation


@Anexdeus Anexdeus commented Dec 16, 2025

Purpose

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) Any necessary documentation updates, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft in the Google Doc.

@github-actions commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which covers a small but essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@Anexdeus Anexdeus (Author) commented Dec 16, 2025

Hello,
I am here to help with the development of the feature "Adding LoRA on visual blocks of multimodal models".
We track the progress of this work in our department and really hope this feature lands in the main branch as soon as possible.

@Anexdeus Anexdeus changed the title added abstract methods to the base class added ProcessingInfoMixin for WenVL series Dec 20, 2025
@Anexdeus Anexdeus changed the title added ProcessingInfoMixin for WenVL series added ProcessingInfoMixin for QwenVL series Dec 20, 2025
@Anexdeus Anexdeus changed the title added ProcessingInfoMixin for QwenVL series Added abstract methods to the base class Dec 20, 2025
@Anexdeus Anexdeus requested a review from jeejeelee as a code owner December 20, 2025 14:22
@Anexdeus Anexdeus changed the title Added abstract methods to the base class Extended SupportsMultiModal Dec 20, 2025
tower_model="visual.",
)

def get_num_mm_encoder_tokens(
Owner commented Dec 20, 2025
@DarkLight1337 @Isotr0py @prashanth058 To avoid everyone changing things back and forth, what do you think - should the get_num_mm_encoder_tokens, get_num_mm_connector_tokens be implemented in processinfo or in model? I personally lean towards making them functions of model.

@prashanth058 prashanth058 commented Dec 20, 2025

Agree. Since these methods are used for building LoRA mappings during execution rather than preprocessing, it makes sense for get_num_mm_encoder_tokens and get_num_mm_connector_tokens to live on the model.

get_allowed_mm_limits should probably continue to live on the processinfo though?

@Anexdeus Anexdeus (Author) commented Dec 20, 2025

Sure! I have removed get_allowed_mm_limits().
I attempted to rewrite the logic in the model_runner files using that function, but it appears this issue has already been resolved.

Same, I also prefer implementing them on the model.
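To illustrate the consensus above, here is a minimal, hypothetical sketch (not vLLM's actual implementation) of what declaring get_num_mm_encoder_tokens and get_num_mm_connector_tokens as abstract methods on the model interface might look like; the class name SupportsMultiModal and the method names come from this thread, while the parameter names and the toy token arithmetic in the concrete subclass are invented for the example:

```python
# Hypothetical sketch based on the review discussion: the per-modality token
# counts used to build LoRA mappings at execution time are declared on the
# model interface rather than on the processing-info class.
from abc import ABC, abstractmethod


class SupportsMultiModal(ABC):
    """Interface a multimodal model would implement (names from the PR thread)."""

    @abstractmethod
    def get_num_mm_encoder_tokens(self, num_image_tokens: int) -> int:
        """Number of tokens produced by the vision encoder for one item."""

    @abstractmethod
    def get_num_mm_connector_tokens(self, num_encoder_tokens: int) -> int:
        """Number of tokens emitted by the connector for the encoder output."""


class ToyQwenVL(SupportsMultiModal):
    # Invented toy behavior: the encoder is token-preserving, and a 2x2
    # spatial merge in the connector reduces the token count by a factor of 4.
    def get_num_mm_encoder_tokens(self, num_image_tokens: int) -> int:
        return num_image_tokens

    def get_num_mm_connector_tokens(self, num_encoder_tokens: int) -> int:
        return num_encoder_tokens // 4


model = ToyQwenVL()
print(model.get_num_mm_encoder_tokens(1024))    # -> 1024
print(model.get_num_mm_connector_tokens(1024))  # -> 256
```

Keeping these as model methods means the model runner can query token counts directly from the loaded model during execution, without reaching back into the preprocessing layer.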

@B-201 B-201 merged commit a3a8fc1 into jeejeelee:mlm-full-lora-support Dec 21, 2025