@@ -31,7 +31,7 @@ To build the TensorRT-OSS components, you will first need the following software
 **System Packages**
 * [CUDA](https://developer.nvidia.com/cuda-toolkit)
   * Recommended versions:
-  * cuda-12.0.1 + cuDNN-8.8
+  * cuda-12.2.0 + cuDNN-8.8
   * cuda-11.8.0 + cuDNN-8.8
 * [GNU make](https://ftp.gnu.org/gnu/make/) >= v4.1
 * [cmake](https://github.com/Kitware/CMake/releases) >= v3.13
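Before building, it can help to confirm that the installed toolchain meets the version floors listed above; a minimal sanity check, assuming `nvcc`, `make`, and `cmake` are already on `PATH`:

```bash
# Report installed versions for comparison against the requirements above:
# CUDA 12.2.0 or 11.8.0 (recommended), GNU make >= 4.1, cmake >= 3.13.
nvcc --version
make --version
cmake --version
```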
@@ -99,9 +99,9 @@ For Linux platforms, we recommend that you generate a docker container for build
 1. #### Generate the TensorRT-OSS build container.
    The TensorRT-OSS build container can be generated using the supplied Dockerfiles and build scripts. The build containers are configured for building TensorRT OSS out-of-the-box.

-   **Example: Ubuntu 20.04 on x86-64 with cuda-12.0 (default)**
+   **Example: Ubuntu 20.04 on x86-64 with cuda-12.1 (default)**
    ```bash
-   ./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.0
+   ./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.1
    ```
    **Example: CentOS/RedHat 7 on x86-64 with cuda-11.8**
    ```bash
@@ -119,7 +119,7 @@ For Linux platforms, we recommend that you generate a docker container for build
 2. #### Launch the TensorRT-OSS build container.
    **Example: Ubuntu 20.04 build container**
    ```bash
-   ./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.0 --gpus all
+   ./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.1 --gpus all
    ```
    > NOTE:
    <br> 1. Use the `--tag` corresponding to the build container generated in Step 1.
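On multi-GPU hosts it should also be possible to expose only a subset of devices to the container. This is a hedged sketch, assuming `launch.sh` forwards the `--gpus` value to `docker run` unchanged (only `--gpus all` is confirmed above):

```bash
# Launch the build container with only the first GPU visible.
# The quoting matches docker run's --gpus '"device=0"' syntax; whether
# launch.sh passes the value through verbatim is an assumption.
./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.1 --gpus '"device=0"'
```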
@@ -130,7 +130,7 @@ For Linux platforms, we recommend that you generate a docker container for build
 ## Building TensorRT-OSS
 * Generate Makefiles and build.

-  **Example: Linux (x86-64) build with default cuda-12.0**
+  **Example: Linux (x86-64) build with default cuda-12.1**
   ```bash
   cd $TRT_OSSPATH
   mkdir -p build && cd build
@@ -146,7 +146,7 @@ For Linux platforms, we recommend that you generate a docker container for build
   export PATH="/opt/rh/devtoolset-8/root/bin:${PATH}"
   ```

-  **Example: Linux (aarch64) build with default cuda-12.0**
+  **Example: Linux (aarch64) build with default cuda-12.1**
   ```bash
   cd $TRT_OSSPATH
   mkdir -p build && cd build
@@ -174,7 +174,7 @@ For Linux platforms, we recommend that you generate a docker container for build
 > NOTE: The latest JetPack SDK v5.1 only supports TensorRT 8.5.2.

 > NOTE:
-<br> 1. The default CUDA version used by CMake is 12.0.1. To override this, for example to 11.8, append `-DCUDA_VERSION=11.8` to the cmake command.
+<br> 1. The default CUDA version used by CMake is 11.4.1. To override this, for example to 11.8, append `-DCUDA_VERSION=11.8` to the cmake command.
 <br> 2. If samples fail to link on CentOS7, create this symbolic link: `ln -s $TRT_OUT_DIR/libnvinfer_plugin.so $TRT_OUT_DIR/libnvinfer_plugin.so.8`
 * Required CMake build arguments are:
   - `TRT_LIB_DIR`: Path to the TensorRT installation directory containing libraries.
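Combining the required arguments with the version override from note 1 above, a configure-and-build step could look like the following hedged sketch; `$TRT_LIBPATH` is a hypothetical placeholder for the TensorRT library location, and treating `TRT_OUT_DIR` as a CMake argument is an assumption drawn from note 2:

```bash
# Hypothetical invocation: point CMake at the TensorRT libraries, direct
# build artifacts to ./out, and override the default CUDA version per the
# note above.
cd $TRT_OSSPATH/build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=$(pwd)/out -DCUDA_VERSION=11.8
make -j$(nproc)
```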