Fix: stabilize test after distributed training completion #375
Description
Related Issues
Fixes #316 #374
Summary of Changes
This PR fixes two problems in distributed training when run_test=True (a minimal sketch of the resulting test phase is shown after this list):

Synchronization before testing
- Call torch.distributed.barrier() when args.distributed is enabled, right after training and before testing, so every process waits until checkpoint_best_total.pth has been written before moving on to testing.

Correct checkpoint loading in DDP
- Change model.load_state_dict(best_state_dict) to model_without_ddp.load_state_dict(best_state_dict), so the best checkpoint is loaded into the unwrapped model rather than the DDP wrapper.
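For reference, a minimal sketch of what the fixed test phase could look like, assuming a DETR-style train.py. The helper name test_after_training, the checkpoint key 'model', and the evaluate() call are illustrative assumptions; only the barrier before testing, waiting on checkpoint_best_total.pth, and loading via model_without_ddp come from this PR.

```python
import os

import torch
import torch.distributed as dist


def test_after_training(args, model, model_without_ddp, criterion, data_loader_test, device, evaluate):
    """Run the test phase once distributed training has finished.

    `evaluate` is the project's existing evaluation function (assumed here).
    """
    if args.distributed:
        # Make every rank wait until rank 0 has finished writing
        # checkpoint_best_total.pth before any rank starts testing.
        dist.barrier()

    checkpoint = torch.load(
        os.path.join(args.output_dir, 'checkpoint_best_total.pth'),
        map_location='cpu',
    )
    best_state_dict = checkpoint['model']  # assumed checkpoint layout

    # Load into the unwrapped module: under DDP the wrapper prefixes every
    # parameter name with "module.", so model.load_state_dict() would not match.
    model_without_ddp.load_state_dict(best_state_dict)

    return evaluate(model, criterion, data_loader_test, device)
```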
Type of change
Please delete options that are not relevant.
How has this change been tested? Please provide a test case or example of how you tested the change.
train.py:
python -m torch.distributed.launch --nproc_per_node=4 --use_env train.py