This is the code repository of MURAL, published in RTCSA 2025.
- Ubuntu Linux
- Docker with NVIDIA container runtime support
- NVIDIA GPU or iGPU (tested on Jetson Xavier, Jetson Orin, and RTX 3050)
- nuScenes dataset, which can be downloaded from here.
- Pre-trained model checkpoints, which can be downloaded from here.
- Training was done using a separate fork of the OpenPCDet repository (named AL-Train), available here. Dedicated training instructions are planned for a later release; in the meantime, the instructions in the OpenPCDet repository can be followed using the config files provided in the AL-Train repository. We recommend using GPU(s) with at least 16 GB of total memory; in our case, we used a single RTX 4090.
```
git clone https://github.com/CSL-KU/MURAL.git
cd MURAL/docker
```

Build the Docker image with the appropriate CUDA architecture for your GPU:

```
docker buildx build . --build-arg CUDA_ARCH="8.6" -f Dockerfile.x86 -t kucsl/mural:x86_nv23.10
```

Note: The example above uses CUDA_ARCH="8.6", assuming an RTX 3050. You can find your GPU's CUDA architecture number at: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
For NVIDIA Jetson systems, build with:

```
docker buildx build . --build-arg CUDA_ARCH="8.7" -t kucsl/mural:jetson-orin
```

Define the following environment variables:
NUSCENES_PATH: Path to your nuScenes dataset. The hierarchy of the dataset folder should be as follows:

```
nuscenes/
└── v1.0-trainval/
    ├── samples/
    ├── sweeps/
    ├── maps/
    └── v1.0-trainval/
```
MODELS_PATH: Path to the downloaded model checkpoint files.
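For example, assuming the dataset and checkpoints live under `/data` (hypothetical paths — substitute your own), the variables can be set and the dataset layout sanity-checked like this:

```shell
# Hypothetical paths -- replace with the actual locations on your machine.
export NUSCENES_PATH=/data/nuscenes
export MODELS_PATH=/data/mural_models

# Quick sanity check: the expected nuScenes subdirectories should exist.
for d in samples sweeps maps v1.0-trainval; do
  if [ -d "$NUSCENES_PATH/v1.0-trainval/$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d"
  fi
done
```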
Once they are defined, run the following command for x86 systems:
```
docker run --gpus all --net host -it --ipc=host --privileged \
    --cap-add=ALL --ulimit rtprio=99 --tmpfs /tmpfs \
    -v $NUSCENES_PATH:/root/nuscenes \
    -v $MODELS_PATH:/root/MURAL/models \
    --name mural kucsl/mural:x86_nv23.10
```

For NVIDIA Jetson systems, run the following instead:
```
docker run --runtime nvidia --net host -it --privileged --cap-add=ALL \
    --ulimit rtprio=99 --tmpfs /tmpfs \
    -v $NUSCENES_PATH:/root/nuscenes \
    -v $MODELS_PATH:/root/MURAL/models \
    -v /var/lib/nvpmodel/status:/var/lib/nvpmodel/status \
    --name mural kucsl/mural:jetson-orin
```

Once inside the container (this happens automatically due to the -it flag), run:
```
cd ~/MURAL/tools
. initialize.sh
```

Before running experiments, execute the benchmarking (calibration) procedure to build TensorRT engines and collect timing data:
```
. do_calib.sh
```

To run the experiments for PillarNet and PointPillars (CenterPoint version):
```
. do_run_tests.sh
```

This script evaluates all methods presented in the paper (baselines and MURAL) across a range of deadlines. If the script fails to complete some tests, simply re-run it to finish them. Once completed, it also plots all the results and saves them in the folders named exp_plots_Pillarnet and exp_plots_PointpillarsCP.
You can modify the deadline ranges by editing the do_run_tests.sh script. Look for the following command:

```
./run_tests.sh methods BEGIN STEP END
```

where:

- BEGIN: starting deadline value (in seconds)
- STEP: increment step (in seconds)
- END: ending deadline value (in seconds)
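As a sketch of what a sweep covers, the illustrative values BEGIN=0.100, STEP=0.050, END=0.300 (hypothetical, not taken from the script) expand to the following list of deadlines:

```shell
# Enumerate the deadlines a "BEGIN STEP END" sweep covers (illustrative values).
BEGIN=0.100; STEP=0.050; END=0.300
awk -v b="$BEGIN" -v s="$STEP" -v e="$END" \
  'BEGIN { for (d = b; d <= e + 1e-9; d += s) printf "%.3f\n", d }'
# Prints 0.100, 0.150, 0.200, 0.250, 0.300 (one per line)
```

The small epsilon in the loop bound guards against floating-point rounding excluding the END value itself.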
The deadline ranges already in the do_run_tests.sh script were chosen for an RTX 3050 (with power usage limited to 30 W).
```
@INPROCEEDINGS{mural2025,
  author={Soyyigit, Ahmet and Yao, Shuochao and Yun, Heechul},
  booktitle={IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA)},
  title={MURAL: A Multi-Resolution Anytime Framework for LiDAR Object Detection Deep Neural Networks},
  address={Singapore},
  year={2026}
}
```

For questions and support, please open an issue in this repository.