First, @HuguesTHOMAS, thanks a lot for your work, it is truly amazing.
I am trying to train a model with a bunch of datasets, but I am running into a problem with RAM limits in the preprocessing steps.
I have 128 GB of RAM, and I am getting an OOM error during the subsampling stage, when the .ply and .pkl files for all clouds are created and loaded.
Have you run into a similar problem? I've checked almost all related issues, both in this repo and in KPConv and KPConv-Pytorch, and I cannot find a solution.
Do you recommend changing some parameters? Lazy loading? (I understand that this alone doesn't work, because everything would still need to be loaded before training starts.) Disk-backed shared memory (see the sketch below)? Or maybe a more efficient way of storing the values in memory?
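To clarify what I mean by disk-backed memory, here is a minimal sketch of the idea using numpy's memmap, so that only the slices actually indexed get paged into RAM. The file layout and function names here are just placeholders I made up for illustration, not the repo's actual preprocessing format:

```python
import numpy as np

def save_cloud_memmap(points, path):
    """Write an (N, 3) float32 cloud to disk as a raw memmap file."""
    mm = np.memmap(path, dtype=np.float32, mode='w+', shape=points.shape)
    mm[:] = points
    mm.flush()  # make sure the data actually hits the disk

def open_cloud_memmap(path, num_points):
    """Map the file back lazily: pages are read only when sliced."""
    return np.memmap(path, dtype=np.float32, mode='r',
                     shape=(num_points, 3))

# Usage: only the indexed points are copied into RAM, not the whole cloud.
# cloud = open_cloud_memmap('cloud_00.dat', num_points=1_000_000)
# batch = np.asarray(cloud[sample_indices])
```

Would something along these lines be compatible with how the subsampled clouds are used during training, or does the pipeline really need all clouds resident in memory at once?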
Thanks a lot.