forked from huggingface/diffusers
Merge changes #199
Merged
* Update __init__.py
* add consisid
* update consisid
* update consisid
* make style
* make_style
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* add doc
* make style
* Rename consisid .md to consisid.md
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update demo.ipynb
* Update pipeline_consisid.py
* make fix-copies
* Update docs/source/en/using-diffusers/consisid.md Co-authored-by: Steven Liu <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: Steven Liu <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/using-diffusers/consisid.md Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/using-diffusers/consisid.md Co-authored-by: Steven Liu <[email protected]>
* update doc & pipeline code
* fix typo
* make style
* update example
* Update docs/source/en/using-diffusers/consisid.md Co-authored-by: Steven Liu <[email protected]>
* update example
* update example
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py Co-authored-by: hlky <[email protected]>
* update
* add test and update
* remove some changes from docs
* refactor
* fix
* undo changes to examples
* remove save/load and fuse methods
* update
* link hf-doc-img & make test extremely small
* update
* add lora
* fix test
* update
* update
* change expected_diff_max to 0.4
* fix typo
* fix link
* fix typo
* update docs
* update
* remove consisid lora tests

Co-authored-by: hlky <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: Aryan <[email protected]>
Set the rest of the blocks to requires_grad=False.
Signed-off-by: sunxunle <[email protected]>
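The freezing pattern this commit describes (turning off gradients for every block except the ones being trained) can be sketched without torch. `Param` below is a hypothetical stand-in for `torch.nn.Parameter`, and `named_params` mirrors what `model.named_parameters()` would yield; the real code operates on actual modules.

```python
# Minimal sketch, assuming a model exposes (name, param) pairs like torch's
# named_parameters(). "Param" is a hypothetical stand-in for torch.nn.Parameter.
class Param:
    def __init__(self):
        self.requires_grad = True

def freeze_except(named_params, trainable_prefixes):
    """Leave requires_grad=True only on params whose name starts with one of
    the trainable prefixes; freeze everything else."""
    for name, param in named_params:
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)

params = {"unet.block1.w": Param(), "unet.block2.w": Param(), "lora.up.w": Param()}
freeze_except(params.items(), trainable_prefixes=("lora.",))
```

With torch, the loop body would be the same one-liner over `model.named_parameters()`.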
* bugfix for NPU not supporting float64
* is_mps is_npu

Co-authored-by: 白超 <[email protected]>
Co-authored-by: hlky <[email protected]>
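The guard this fix describes (falling back from float64 on accelerators that lack it) can be sketched as a small dtype-selection helper. The device strings and the fallback policy below are illustrative assumptions, not the exact diffusers logic.

```python
# Sketch of a dtype guard: some accelerators (e.g. NPU, MPS) do not support
# float64, so fall back to float32 there. Dtypes are plain strings here to
# keep the sketch torch-free.
def safe_float_dtype(device: str) -> str:
    is_mps = device.startswith("mps")
    is_npu = device.startswith("npu")
    return "float32" if (is_mps or is_npu) else "float64"
```

In real code the same predicate would pick between `torch.float32` and `torch.float64` before building tensors.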
Change the license year from 2024 to 2025.
* enable dreambooth_lora on other devices
* enable xpu
* check cuda device before empty cache
* fix comment
* import free_memory

Signed-off-by: jiqing-feng <[email protected]>
Remove the FP32 Wrapper Co-authored-by: Linoy Tsaban <[email protected]>
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <[email protected]>
* Update tests/pipelines/controlnet/test_controlnet.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <[email protected]>
* Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <[email protected]>
* Update tests/pipelines/controlnet/test_controlnet.py Co-authored-by: hlky <[email protected]>
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* bug fix
* Apply suggestions from code review

Co-authored-by: hlky <[email protected]>
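The device-agnostic cache clearing these commits work toward can be sketched as a dispatch table keyed on device type, with a garbage-collection pass first. The no-op lambdas are hypothetical stand-ins for backend calls such as `torch.cuda.empty_cache`.

```python
# Sketch of a backend-agnostic free_memory helper: collect Python garbage,
# then dispatch to the right cache-emptying call for the device type.
# The lambdas stand in for torch.cuda/xpu/mps empty_cache functions.
import gc

_EMPTY_CACHE = {
    "cuda": lambda: None,  # stand-in for torch.cuda.empty_cache
    "xpu": lambda: None,   # stand-in for torch.xpu.empty_cache
    "mps": lambda: None,   # stand-in for torch.mps.empty_cache
}

def free_memory(device_type: str) -> bool:
    gc.collect()
    fn = _EMPTY_CACHE.get(device_type)
    if fn is None:
        return False  # e.g. plain CPU: nothing to empty
    fn()
    return True
```

Checking the device type before calling `empty_cache` (as one commit title says) avoids raising on hosts without that accelerator.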
* add * style
* update
* update
* make style
* remove dynamo disable
* add coauthor Co-Authored-By: Dhruv Nair <[email protected]>
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review Co-authored-by: Steven Liu <[email protected]>
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review Co-authored-by: Dhruv Nair <[email protected]>
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
* add note about memory consumption on tesla CI runner for failing test

Co-authored-by: Dhruv Nair <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
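The `_skip_layerwise_casting_patterns` idea above (precision-sensitive modules such as norms keep their original dtype while everything else is cast to fp8) can be sketched as a name-pattern filter. The glob-style pattern format below is an assumption for illustration; the actual diffusers matching may differ.

```python
# Sketch of pattern-based skipping for layerwise casting: a module is cast
# to low precision only if its name matches none of the skip patterns.
# fnmatch-style globs are an assumed pattern format here.
from fnmatch import fnmatch

def should_cast(module_name: str, skip_patterns) -> bool:
    return not any(fnmatch(module_name, pat) for pat in skip_patterns)
```

A layerwise-casting pass would call this for each submodule name and leave matching modules (e.g. anything containing `norm`) in full precision.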
improve error message
…tils.py (#10624) add onnxruntime-migraphx to import_utils.py Co-authored-by: Sayak Paul <[email protected]>
* fixes * fixes * fixes * updates
fix image path in para attention docs
* uv * feedback
Fix mixed-up variables.
* Add IP-Adapter example to Flux docs
* Apply suggestions from code review Co-authored-by: Sayak Paul <[email protected]>

Co-authored-by: Sayak Paul <[email protected]>
We already set the UNet to requires_grad=False at line 506. Co-authored-by: Aryan <[email protected]>
…0631)
* feat: add a lora extraction script.
* updates
* add pipeline_stable_diffusion_xl_attentive_eraser
* add pipeline_stable_diffusion_xl_attentive_eraser_make_style
* make style and add example output
* update Docs Co-authored-by: Other Contributor <[email protected]>
* add Oral Co-authored-by: Other Contributor <[email protected]>
* update_review Co-authored-by: Other Contributor <[email protected]>
* update_review_ms Co-authored-by: Other Contributor <[email protected]>

Co-authored-by: Other Contributor <[email protected]>
* NPU adaptation for Sana

Co-authored-by: J石页 <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Sigmoid scheduler in scheduling_ddpm.py docs
* create a script to train vae
* update main.py
* update train_autoencoderkl.py
* update train_autoencoderkl.py
* add a check of --pretrained_model_name_or_path and --model_config_name_or_path
* remove the comment, remove diffusers from requirements.txt, add validation_image note
* update autoencoderkl.py
* quality

Co-authored-by: Sayak Paul <[email protected]>
* add community pipeline for semantic guidance for flux
* fix imports in community pipeline for semantic guidance for flux
* Update examples/community/pipeline_flux_semantic_guidance.py Co-authored-by: hlky <[email protected]>
* fix community pipeline for semantic guidance for flux

Co-authored-by: Linoy Tsaban <[email protected]>
Co-authored-by: hlky <[email protected]>
* [training] Convert to ImageFolder script * make
#10663) controlnet union XL: make control_image immutable. When this argument is passed a list, __call__ modifies its contents; since the list is passed by reference, the caller's list gets modified unexpectedly. Make a copy at method entry so this does not happen. Co-authored-by: Teriks <[email protected]>
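The bug described in this fix can be shown in miniature: a function that mutates a list argument also mutates the caller's list, because Python passes object references. Copying at function entry (as the fix does for `control_image`) removes the surprise. The function names below are illustrative, not the pipeline's.

```python
# A function that writes into its list argument mutates the caller's list.
def preprocess_buggy(images):
    for i in range(len(images)):
        images[i] = images[i] * 2  # in-place: caller's list changes too

# Copying at entry keeps the caller's list untouched.
def preprocess_fixed(images):
    images = list(images)  # shallow copy at method entry
    for i in range(len(images)):
        images[i] = images[i] * 2
    return images

a = [1, 2, 3]
preprocess_buggy(a)        # a is now [2, 4, 6]
b = [1, 2, 3]
out = preprocess_fixed(b)  # b stays [1, 2, 3]
```

A shallow copy suffices when elements are only replaced; deeply nested mutation would need `copy.deepcopy`.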
Co-authored-by: Giuseppe Catalano <[email protected]>
* start pyramid attention broadcast
* add coauthor Co-Authored-By: Xuanlei Zhao <[email protected]>
* update
* make style
* update
* make style
* add docs
* add tests
* update
* Update docs/source/en/api/pipelines/cogvideox.md Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/api/pipelines/cogvideox.md Co-authored-by: Steven Liu <[email protected]>
* Pyramid Attention Broadcast rewrite + introduce hooks (#9826)
* rewrite implementation with hooks
* make style
* update
* merge pyramid-attention-rewrite-2
* make style
* remove changes from latte transformer
* revert docs changes
* better debug message
* add todos for future
* update tests
* make style
* cleanup
* fix
* improve log message; fix latte test
* refactor
* update
* update
* update
* revert changes to tests
* update docs
* update tests
* Apply suggestions from code review Co-authored-by: Steven Liu <[email protected]>
* update
* fix flux test
* reorder
* refactor
* make fix-copies
* update docs
* fixes
* more fixes
* make style
* update tests
* update code example
* make fix-copies
* refactor based on reviews
* use maybe_free_model_hooks
* CacheMixin
* make style
* update
* add current_timestep property; update docs
* make fix-copies
* update
* improve tests
* try circular import fix
* apply suggestions from review
* address review comments
* Apply suggestions from code review
* refactor hook implementation
* add test suite for hooks
* PAB Refactor (#10667)
* update
* update
* update Co-authored-by: DN6 <[email protected]>
* update
* fix remove hook behaviour

Co-authored-by: Xuanlei Zhao <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: DN6 <[email protected]>
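The core idea behind Pyramid Attention Broadcast (recompute attention only every few denoising steps and reuse the cached output in between, applied via hooks) can be sketched as a small caching wrapper. The interval and the wrapped function below are illustrative, not the diffusers hook API.

```python
# Illustrative cache: recompute an expensive function every `interval` calls
# and return the cached result otherwise, mimicking how a PAB hook reuses
# attention outputs across nearby timesteps.
class BroadcastCache:
    def __init__(self, fn, interval: int):
        self.fn = fn
        self.interval = interval
        self.calls = 0
        self.cached = None

    def __call__(self, *args):
        if self.calls % self.interval == 0:
            self.cached = self.fn(*args)  # full recompute
        self.calls += 1
        return self.cached  # intermediate calls reuse the cache

computed = []
def attn(x):          # stand-in for an attention block's forward
    computed.append(x)
    return x * 10

cache = BroadcastCache(attn, interval=3)
outs = [cache(t) for t in range(6)]  # recomputes only at t=0 and t=3
```

The real implementation attaches such logic as a forward hook on attention modules, which is why the rewrite above centers on a hook system and a `CacheMixin`.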
…de (#10600)
* fix: refer to use_framewise_encoding on AutoencoderKLHunyuanVideo._encode
* fix: comment about tile_sample_min_num_frames

Co-authored-by: Aryan <[email protected]>
* update
* remove unused fn
* apply suggestions based on review
* update + cleanup 🧹
* more cleanup 🧹
* make fix-copies
* update test
…_max_memory` (#10669)
* conditionally check if compute capability is met.
* log info.
* fix condition.
* updates
* updates
* updates
* updates
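The conditional check named in this commit (only enable a feature when the GPU's compute capability meets a minimum) reduces to a tuple comparison, since Python compares tuples element-wise. The `(8, 0)` threshold is illustrative; the `capability` argument mirrors what `torch.cuda.get_device_capability()` returns, e.g. `(8, 6)` for an Ampere card.

```python
# Sketch of a compute-capability gate: tuples compare lexicographically,
# so (8, 6) >= (8, 0) and (7, 5) < (8, 0).
def meets_compute_capability(capability, minimum=(8, 0)) -> bool:
    return tuple(capability) >= tuple(minimum)
```

Calling code would fetch the capability once per device and skip (and log, per the commit) when the check fails.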