Description
Prerequisite
- I have searched Issues and Discussions but cannot get the expected help.
- I have read the FAQ documentation but cannot get the expected help.
- The bug has not been fixed in the latest version (main) or latest version (0.x).
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmagic
Environment
torch==2.1.2
pytorch-cuda=12.*
mmcv==2.1.0
mmengine==0.10.4
mmagic==1.2.0
Reproduces the problem - code sample
import os

from mmagic.apis import MMagicInferencer
from mmengine import mkdir_or_exist

# Input video and output path for the restored result.
video = "../data/video/barry-1.mp4"
result_out_dir = "./results/barry-1_basicvsrpp.mp4"
mkdir_or_exist(os.path.dirname(result_out_dir))

# Build the inferencer for the BasicVSR model and run it on the video.
editor = MMagicInferencer('basicvsr')
results = editor.infer(video=video, result_out_dir=result_out_dir)

Reproduces the problem - command or script
Run the Python script above.
Reproduces the problem - error message
OutOfMemoryError: CUDA out of memory. Tried to allocate 3.73 GiB. GPU 0 has a total capacty of 14.58 GiB of which 3.24 GiB is free. Including non-PyTorch memory, this process has 11.33 GiB memory in use. Of the allocated memory 11.22 GiB is allocated by PyTorch, and 3.35 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
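The error message itself points at the allocator setting max_split_size_mb. For completeness, a minimal sketch of how that hint can be applied (the value 128 is an illustrative assumption, not something from this report, and it only mitigates fragmentation; it will not help if the model genuinely needs more than the available memory):

import os

# Must be set before torch initializes CUDA for the first time.
# max_split_size_mb:128 is an illustrative value, not a tested one.
os.environ.setdefault('PYTORCH_CUDA_ALLOC_CONF', 'max_split_size_mb:128')

from mmagic.apis import MMagicInferencer  # noqa: E402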
Additional information
I'm getting an OOM error when running the basicvsr model through the MMagicInferencer. Is there a setting in the config or model that reduces memory usage so I can run this with the MMagicInferencer?
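For reference, one possible direction (unverified assumption on my side): BasicVSR is recurrent and propagates over the whole sequence at once, so peak memory grows with video length. The video restoration inferencer appears to expose a max_seq_len extra parameter that splits the sequence into shorter chunks. A sketch, assuming MMagicInferencer forwards extra_parameters to the underlying inferencer and that a chunk length of 10 frames is acceptable for quality:

from mmagic.apis import MMagicInferencer

# Assumed workaround: cap the number of frames processed per forward pass
# so the recurrent propagation does not hold the whole video on the GPU.
# max_seq_len=10 is an illustrative guess, not a tuned value.
editor = MMagicInferencer('basicvsr', extra_parameters=dict(max_seq_len=10))
results = editor.infer(video=video, result_out_dir=result_out_dir)

Smaller chunks should lower peak memory at the cost of breaking temporal propagation at chunk boundaries, which may slightly reduce restoration quality.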