Describe the bug
When finetuning Qwen2.5-14B with ZeRO-2 + optimizer offload on 4x A100 40GB cards, I get a GPU OOM error.
To Reproduce
Config file:
{
  "train_batch_size": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": false
    }
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 2e-5,
      "betas": [0.9, 0.999],
      "eps": 1e-8,
      "weight_decay": 0.01
    }
  },
  "gradient_accumulation_steps": 1,
  "gradient_clipping": 1.0,
  "zero_allow_untested_optimizer": true,
  "wall_clock_breakdown": true
}
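For context, here is a back-of-envelope estimate of the per-GPU model-state memory this config implies. This is a rough sketch only: it ignores activations, allocator overhead, and communication buffers, and it assumes ~14e9 parameters for Qwen2.5-14B.

# Rough per-GPU memory estimate for ZeRO-2 + optimizer offload (sketch).
n_params = 14e9   # assumed approximate parameter count of Qwen2.5-14B
n_gpus = 4
GiB = 1024 ** 3

weights_gpu = n_params * 2 / GiB          # bf16 weights, fully replicated under ZeRO-2
grads_gpu = n_params * 2 / n_gpus / GiB   # bf16 gradients, partitioned across ranks
optim_cpu = n_params * 12 / n_gpus / GiB  # fp32 master weights + Adam moments, offloaded to CPU

print(f"weights/GPU ~{weights_gpu:.1f} GiB, grads/GPU ~{grads_gpu:.1f} GiB, "
      f"optimizer states on CPU per rank ~{optim_cpu:.1f} GiB")
# ~26 GiB of weights plus ~6.5 GiB of gradient partitions leaves only a few GiB
# of headroom on a 39.39 GiB card before activations for micro-batch 2
# (train_batch_size 8 / 4 GPUs) are even counted.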
Running script:
finetune_llama.py from the following PR (note: replace its config file with the one above):
deepspeedai/DeepSpeedExamples#982
Launch command
deepspeed --num_gpus=4 finetune_llama.py --model_name Qwen/Qwen2.5-14B --output_dir output --lr 2e-5 --batch_size 8 --deepspeed_config zo_config.json --num_train_epochs 1
Error I get:
[rank3]: return self._call_impl(*args, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/raid/miniforge3/envs/gma/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[rank3]: return forward_call(*args, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/raid/miniforge3/envs/gma/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 229, in forward
[rank3]: hidden_states = self.input_layernorm(hidden_states)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/raid/miniforge3/envs/gma/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank3]: return self._call_impl(*args, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/raid/miniforge3/envs/gma/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[rank3]: return forward_call(*args, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/raid/miniforge3/envs/gma/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 197, in forward
[rank3]: variance = hidden_states.pow(2).mean(-1, keepdim=True)
[rank3]: ^^^^^^^^^^^^^^^^^^^^
[rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 3 has a total capacity of 39.39 GiB of which 2.38 MiB is free.
Including non-PyTorch memory, this process has 39.35 GiB memory in use. Of the allocated memory 38.04 GiB is allocated by PyTorch, and 27.70 MiB is reserved by PyTorch but unallocated.
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank1]:[W811 02:32:16.088089371 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=57, addr=[::ffff:127.0.0.1]:40020, remote=[::ffff:127.0.0.1]:29500): Connection reset by peer
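Note that the traceback reports only 27.70 MiB reserved-but-unallocated, so fragmentation looks minor and the GPU appears genuinely full. The allocator setting suggested by the error message itself is still worth trying; a minimal sketch, assuming it is set before the first torch/CUDA import (e.g., at the very top of finetune_llama.py):

# Mitigation suggested by the OOM message itself (a sketch, not a confirmed fix).
# Must run before torch initializes CUDA so the allocator picks it up.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")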
Expected behavior
4x A100 40GB + ZeRO offload should allow finetuning a 14B model.
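DeepSpeed ships a model-states estimator that can sanity-check this expectation without launching training. A minimal sketch, assuming the estimate_zero2_model_states_mem_needs_all_cold signature from the DeepSpeed docs and ~14e9 parameters; it covers model and optimizer states only, not activations:

from deepspeed.runtime.zero.stage_1_and_2 import (
    estimate_zero2_model_states_mem_needs_all_cold,
)

# Prints estimated per-GPU and per-CPU memory for ZeRO-2, with and without
# optimizer offload; total_params is an assumption (~14e9 for Qwen2.5-14B).
estimate_zero2_model_states_mem_needs_all_cold(
    total_params=int(14e9), num_gpus_per_node=4, num_nodes=1
)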
ds_report output
[2025-08-11 05:01:36,409] [INFO] [real_accelerator.py:260:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-11 05:01:39,096] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
dc ..................... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.4.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
/raid/miniforge3/envs/gma/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlvsym'
/raid/miniforge3/envs/gma/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlopen'
/raid/miniforge3/envs/gma/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlclose'
/raid/miniforge3/envs/gma/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlerror'
/raid/miniforge3/envs/gma/compiler_compat/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.8
[WARNING] using untested triton version (3.4.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/raid/miniforge3/envs/gma/lib/python3.11/site-packages/torch']
torch version .................... 2.8.0+cu128
deepspeed install path ........... ['/raid/bduser/gma/DeepSpeed/deepspeed']
deepspeed info ................... 0.17.5+f897b673, f897b673, master
torch cuda version ............... 12.8
torch hip version ................ None
nvcc version ..................... 12.8
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 503.84 GB
Screenshots
N/A
System info (please complete the following information):
- OS: Ubuntu 20.04.6 LTS
- 8x NVIDIA A100-SXM4-40GB (4 of which are used in this report)
- N/A
- Python version: 3.11.0
Launcher context
Launched with the deepspeed launcher (see command above)
Docker context
N/A, not running in Docker
Additional context
N/A