
Conversation

@tmm1 tmm1 commented Jul 31, 2023

I noticed this while reviewing the code: when using `--fp16`, it still sets `torch_dtype=torch.float32`.

Is this by design, or an oversight?

Is it intentionally set to a different value than `compute_dtype`?

@tongyx361

I think it's a typo, because there is another `torch.float32` at the end of the if-else logic.
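
For illustration, here is a minimal sketch of the dtype selection being discussed, assuming the common Hugging Face pattern where the model load dtype should follow the `--fp16`/`--bf16` flags; the function name and signature are hypothetical and the repository's actual code may be structured differently:

```python
# Sketch only: hypothetical helper, not the repository's actual code.
import torch

def select_torch_dtype(fp16: bool, bf16: bool) -> torch.dtype:
    """Pick the model load dtype so it matches the requested compute dtype."""
    if bf16:
        return torch.bfloat16
    if fp16:
        # The comments above suggest this branch was accidentally
        # falling through to torch.float32.
        return torch.float16
    return torch.float32
```

Keeping `torch_dtype` aligned with `compute_dtype` avoids loading the weights in an unnecessarily wide fp32 representation when half precision was requested.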
