@@ -154,13 +154,13 @@ python enjoy.py --algo algo_name --env env_id -f logs/ --exp-id 1 --load-last-ch

Upload model to hub (same syntax as for `enjoy.py`):
```
-python -m utils.push_to_hub --algo ppo --env CartPole-v1 -f logs/ -orga sb3 -m "Initial commit"
+python -m rl_zoo.push_to_hub --algo ppo --env CartPole-v1 -f logs/ -orga sb3 -m "Initial commit"
```
You can choose a custom `repo-name` (default: `{algo}-{env_id}`) by passing a `--repo-name` argument.

Download model from hub:
```
-python -m utils.load_from_hub --algo ppo --env CartPole-v1 -f logs/ -orga sb3
+python -m rl_zoo.load_from_hub --algo ppo --env CartPole-v1 -f logs/ -orga sb3
```

## Hyperparameter yaml syntax
@@ -255,7 +255,7 @@ for multiple, specify a list:

```yaml
env_wrapper:
-    - utils.wrappers.DoneOnSuccessWrapper:
+    - rl_zoo.wrappers.DoneOnSuccessWrapper:
        reward_offset: 1.0
    - sb3_contrib.common.wrappers.TimeFeatureWrapper
```
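For context, the behaviour that such a wrapper configures can be sketched in plain Python. This is an illustrative sketch, not the zoo's actual `DoneOnSuccessWrapper`: it assumes a gym-style `step()` returning `(obs, reward, done, info)` and an `is_success` info key, and the attribute-delegation helper is a simplification.

```python
# Illustrative sketch only -- not the zoo's actual DoneOnSuccessWrapper.
# Assumes a gym-style env whose step() returns (obs, reward, done, info)
# and that reports goal completion via info["is_success"].

class DoneOnSuccessWrapper:
    """End the episode early on success and add a reward bonus."""

    def __init__(self, env, reward_offset=1.0):
        self.env = env
        self.reward_offset = reward_offset

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if info.get("is_success", False):
            done = True
            reward += self.reward_offset
        return obs, reward, done, info

    def __getattr__(self, name):
        # Delegate everything else (reset, render, ...) to the wrapped env.
        return getattr(self.env, name)
```

With `reward_offset: 1.0` as in the YAML above, a successful step gets its reward increased by 1.0 and the episode terminates immediately.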
@@ -279,7 +279,7 @@ Following the same syntax as env wrappers, you can also add custom callbacks to

```yaml
callback:
-  - utils.callbacks.ParallelTrainCallback:
+  - rl_zoo.callbacks.ParallelTrainCallback:
      gradient_steps: 256
```

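The idea behind parallel training can be sketched with the standard library alone: run gradient updates on a background thread while the main loop keeps interacting with the environment. This is only the underlying pattern under that assumption, not the zoo's actual `ParallelTrainCallback`; the function and parameter names here are hypothetical.

```python
# Illustrative sketch of the parallel-training idea: run gradient updates
# on a background thread while rollout collection continues.
# NOT the zoo's ParallelTrainCallback -- just the underlying pattern.
import threading

def train_in_parallel(collect_step, do_gradient_step, gradient_steps=256):
    """Run `gradient_steps` update steps concurrently with one rollout step."""
    worker = threading.Thread(
        target=lambda: [do_gradient_step() for _ in range(gradient_steps)]
    )
    worker.start()
    result = collect_step()   # environment interaction continues meanwhile
    worker.join()             # wait for the updates before the next iteration
    return result
```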
@@ -306,19 +306,19 @@ Note: if you want to pass a string, you need to escape it like that: `my_string:
Record 1000 steps with the latest saved model:

```
-python -m utils.record_video --algo ppo --env BipedalWalkerHardcore-v3 -n 1000
+python -m rl_zoo.record_video --algo ppo --env BipedalWalkerHardcore-v3 -n 1000
```

Use the best saved model instead:

```
-python -m utils.record_video --algo ppo --env BipedalWalkerHardcore-v3 -n 1000 --load-best
+python -m rl_zoo.record_video --algo ppo --env BipedalWalkerHardcore-v3 -n 1000 --load-best
```

Record a video of a checkpoint saved during training (here the checkpoint name is `rl_model_10000_steps.zip`):

```
-python -m utils.record_video --algo ppo --env BipedalWalkerHardcore-v3 -n 1000 --load-checkpoint 10000
+python -m rl_zoo.record_video --algo ppo --env BipedalWalkerHardcore-v3 -n 1000 --load-checkpoint 10000
```

## Record a Video of a Training Experiment
@@ -328,18 +328,18 @@ Apart from recording videos of specific saved models, it is also possible to rec
Record 1000 steps for each checkpoint, latest and best saved models:

```
-python -m utils.record_training --algo ppo --env CartPole-v1 -n 1000 -f logs --deterministic
+python -m rl_zoo.record_training --algo ppo --env CartPole-v1 -n 1000 -f logs --deterministic
```

The previous command will create an `mp4` file. To convert this file to `gif` format as well:

```
-python -m utils.record_training --algo ppo --env CartPole-v1 -n 1000 -f logs --deterministic --gif
+python -m rl_zoo.record_training --algo ppo --env CartPole-v1 -n 1000 -f logs --deterministic --gif
```

## Current Collection: 195+ Trained Agents!

-Final performance of the trained agents can be found in [`benchmark.md`](./benchmark.md). To compute them, simply run `python -m utils.benchmark`.
+Final performance of the trained agents can be found in [`benchmark.md`](./benchmark.md). To compute them, simply run `python -m rl_zoo.benchmark`.

List and videos of trained agents can be found on our Huggingface page: https://huggingface.co/sb3
