vuln fixes oct vul fixes 2 #4504
Conversation
- GHSA-887c-mr87-cxwp: Upgrade torch from 2.7.1 to 2.8.0 in all affected environments
- GHSA-36rr-ww3j-vrjv: Upgrade keras from 3.11.0 to 3.11.3 in the tensorflow environment
- GHSA-4xh5-x5gv-qwph: Upgrade pip to the latest secure version across all environments

Environments fixed:
- automl environments (ai-ml-automl-*)
- fine-tuning environments (acft-*)
- general ML environments (sklearn, lightgbm, tensorflow)
- vision processing environments
- pytorch environments

All fixes maintain backward compatibility while resolving critical security issues.
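For illustration, a patched conda spec for one of these environments might look roughly like the sketch below. The environment name, channels, and Python pin are placeholders; only the torch, keras, and pip changes come from the list above.

```yaml
# Hypothetical excerpt of a patched environment spec; everything except the
# pinned package versions is a placeholder, not the actual asset definition.
name: example-tensorflow-gpu
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip                 # upgraded to the latest patched release (GHSA-4xh5-x5gv-qwph)
  - pip:
      - torch==2.8.0    # GHSA-887c-mr87-cxwp (was 2.7.1)
      - keras==3.11.3   # GHSA-36rr-ww3j-vrjv (was 3.11.0)
```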
- PyTorch 2.8.0 requires Python 3.10 or higher
- Updated all AutoML environments using Python 3.9 to Python 3.10
- This resolves the conda solver error: 'nothing provides __cuda needed by pytorch-2.8.0'

Environments updated:
- ai-ml-automl
- ai-ml-automl-dnn
- ai-ml-automl-dnn-forecasting-gpu
- ai-ml-automl-dnn-gpu
- ai-ml-automl-dnn-text-gpu
- ai-ml-automl-dnn-vision-gpu
- ai-ml-automl-gpu
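A minimal sketch of what the Python bump could look like in one of these AutoML conda specs, assuming a typical layout (the name and the rest of the dependency list are placeholders):

```yaml
# Illustrative only: the real spec carries many more pinned packages.
name: ai-ml-automl-gpu
channels:
  - conda-forge
dependencies:
  - python=3.10   # bumped from 3.9; per this PR, PyTorch 2.8.0 needs Python >=3.10
  - pip:
      - torch==2.8.0
```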
- torchvision 0.22.1 depends on torch==2.7.1
- torchvision 0.23.0 is compatible with torch==2.8.0
- This resolves pip dependency conflicts during installation

Fixed environments:
- acpt-grpo
- ai-ml-automl-dnn-text-gpu
- ai-ml-automl-dnn-forecasting-gpu
- ai-ml-automl-dnn-vision-gpu
- ai-ml-automl-dnn-gpu
- automl-dnn-vision-gpu
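A sketch of the lockstep pin pair, assuming the packages are installed through the pip section of a conda spec (this is an excerpt; surrounding packages are omitted):

```yaml
dependencies:
  - pip:
      - torch==2.8.0
      - torchvision==0.23.0   # 0.22.1 pins torch==2.7.1, so torchvision must move with torch
```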
Specialized fixes for environments with complex dependency conflicts:

1. ai-ml-automl-dnn-text-gpu:
   - Downgrade transformers from 4.53.0 to 4.48.0 (azureml-automl-dnn-nlp requirement)
   - Use torch==2.2.2 + torchvision==0.17.2 (azureml-automl-dnn-nlp requirement)
   - Downgrade urllib3 from 2.5.0 to 1.26.18 (azureml-automl-runtime requirement)

2. ai-ml-automl-gpu:
   - Downgrade urllib3 from 2.5.0 to 1.26.18 (azureml-automl-runtime requirement)
   - Keep pip security upgrade

These environments require older package versions due to azureml-automl-runtime and azureml-automl-dnn-nlp compatibility constraints. The security vulnerabilities in torch and transformers will need to be addressed through runtime updates rather than package upgrades.
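For the constrained environment, the pins described above could be expressed roughly as follows; this is a hypothetical excerpt, and everything not named in the list above is omitted or assumed.

```yaml
# Hypothetical excerpt for ai-ml-automl-dnn-text-gpu
dependencies:
  - pip:
      - transformers==4.48.0   # azureml-automl-dnn-nlp requirement (down from 4.53.0)
      - torch==2.2.2           # azureml-automl-dnn-nlp requirement
      - torchvision==0.17.2
      - urllib3==1.26.18       # azureml-automl-runtime requirement (down from 2.5.0)
```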
Test Results for assets-test

12 tests   8 ✅   2h 15m 25s ⏱️

For more details on these failures, see this check.

Results for commit b258973.

♻️ This comment has been updated with latest results.
These updates do not seem correct:

- use no-cache installs, as the images will otherwise end up with >8 GB layers when installing torch and related packages
- do not reinstall torch if you are based on the aifx image; it defeats the purpose of using that base image
- do not upgrade pip, especially in random environments that don't even exist in the image
- if you have to update packages like urllib3 or requests, you have a bigger problem with your dependencies
- if you create a new conda environment while using the aifx image, there is absolutely no reason to use the aifx image; start from a light one from nvcr or one of the training base images (see the sketch after this list)
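To illustrate the last point: once an asset creates its own conda environment and pins torch there, the aifx base image adds nothing, so a lighter parent image plus a self-contained spec along these lines is enough. All names and versions below are placeholders.

```yaml
# Placeholder spec: the environment installs its own torch, so it does not
# need a base image that already ships one.
name: custom-training-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip:
      - torch==2.8.0   # provided by this env, not inherited from the base image
```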
This pull request has been marked as stale because it has been inactive for 14 days.
This pull request has been automatically closed due to inactivity.
No description provided.