NeuroShield is an end-to-end DNN compilation toolchain that applies dedicated obfuscation primitives to compiled DNN binaries. NeuroShield is built upon Apache TVM.
This code base is tested on Ubuntu 22.04 with LLVM 14 and Python 3.9. Install the dependencies:
$ sudo apt install llvm-14 llvm-14-dev
$ cd <path2thisrepo>
$ pip install -U setuptools # we recommend using a virtualenv
$ pip install -r requirements.txt
$ pip install numpy==1.19.5 # ignore the error

init:
$ cd <path2thisrepo>
$ git submodule update --init

apply patch:
$ cd <path2thisrepo>/tvm
$ git submodule update --init
$ git apply ../dnnobf.patch

change the settings in <path2thisrepo>/tvm/cmake/config.cmake before building:
set(USE_MICRO ON)
set(USE_LLVM /usr/lib/llvm-14/bin/llvm-config)
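As a quick sanity check (a sketch, not part of the toolchain), you can extract the USE_LLVM value that cmake will read; here it is parsed from an inline sample rather than the real config.cmake:

```shell
# Sketch only: parse USE_LLVM out of a config.cmake-style snippet.
# Point the sed command at <path2thisrepo>/tvm/cmake/config.cmake in practice.
cfg='set(USE_MICRO ON)
set(USE_LLVM /usr/lib/llvm-14/bin/llvm-config)'
llvm_cfg=$(printf '%s\n' "$cfg" | sed -n 's/^set(USE_LLVM \(.*\))$/\1/p')
echo "$llvm_cfg"   # /usr/lib/llvm-14/bin/llvm-config
```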
build tvm:
$ cd <path2thisrepo>/tvm
$ mkdir build && cd build
$ cp ../cmake/config.cmake .
$ cmake .. -G Ninja
$ ninja

build tvm python:
$ export TVM_HOME=<path2thisrepo>/tvm
$ export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
$ cd <path2thisrepo>/tvm/python
$ python setup.py install

build llvm helper pass:
$ cd <path2thisrepo>/bingen/llvm_pass
$ mkdir build && cd build
$ cmake ..
$ make

generate model ONNX file (e.g., mnist):
$ cd <path2thisrepo>/models
$ python generate_onnx.py mnist

profile model neuron ranges:
$ cd <path2thisrepo>/profile
$ python neuron_profile.py mnist train

Note: profiling all six models takes about 3 days on an Intel Xeon Gold 6258R machine. This step can be skipped; in that case NeuroShield will fall back to a sufficiently large default neuron range.
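For intuition only (a toy sketch, not the real profiler): a neuron range here is just the minimum and maximum activation observed across profiling inputs, computed below with awk over a few made-up sample values:

```shell
# Toy illustration: a neuron "range" as the min/max over observed activations.
# The sample values are invented; the real profiler runs over the training set.
acts='-0.3
1.7
0.2
2.5'
range=$(printf '%s\n' "$acts" | awk 'NR==1{lo=hi=$1} {if($1<lo)lo=$1; if($1>hi)hi=$1} END{printf "%.1f %.1f", lo, hi}')
echo "$range"   # -0.3 2.5
```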
modify <path2thisrepo>/bingen/Makefile settings:
model_name := mnist # or resnet, mobilenet, fasttext, esm, albert
dnnobf_flex_fuse := 1 # set to 0 to disable flex fuse (ff)
dnnobf_fakeop_insert := 1 # set to 0 to disable fake-op insertion (fi)
dnnobf_compute_reorder := 1 # set to 0 to disable compute reordering (cr)
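The three flag values also name the build_output subdirectory used in the run step; a minimal sketch of that mapping (the ff{..}_fi{..}_cr{..} scheme is taken from the run path in this README):

```shell
# Sketch: the three obfuscation flags select the output directory,
# named ff{flex_fuse}_fi{fakeop_insert}_cr{compute_reorder}.
dnnobf_flex_fuse=1
dnnobf_fakeop_insert=0
dnnobf_compute_reorder=1
outdir="ff${dnnobf_flex_fuse}_fi${dnnobf_fakeop_insert}_cr${dnnobf_compute_reorder}"
echo "$outdir"   # ff1_fi0_cr1
```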
generate DNN binaries:
$ cd <path2thisrepo>/bingen
$ make

run generated binaries (e.g., mnist):
$ cd <path2thisrepo>/bingen/build_output/ff{dnnobf_flex_fuse}_fi{dnnobf_fakeop_insert}_cr{dnnobf_compute_reorder}
$ ./dnn_bin mnist_input.bin
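To compare obfuscation configurations, one could sweep every flag combination; the hypothetical sketch below only prints the expected output directory for each of the eight combinations (it does not invoke make):

```shell
# Hypothetical sweep: enumerate every ff/fi/cr combination and print the
# build_output subdirectory each build would land in.
dirs=$(for ff in 0 1; do for fi in 0 1; do for cr in 0 1; do
  echo "ff${ff}_fi${fi}_cr${cr}"
done; done; done)
printf '%s\n' "$dirs"
```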