Using Docker to generate the TensorRT build container

Jetson Nano 4GB Developer Kit

Environment

JetPack 4.5.1 / L4T 32.5.1
TensorRT Version: 7.1.3.0
CUDA Version: 10.2.89
cuDNN Version: 8.0.0.180
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9

I’m preparing the Jetson for deepstream5.1/sources/objectDetector_Yolo.

I followed the instructions to download the YOLO config and weights files:
$ ./prebuild.sh

Next, at GitHub - NVIDIA/TensorRT at v7.0.0,

I am executing the following step:

step

and I get the following error.

What do I do?

Hi,

Do you use the yolo-tlt sample (for a TLT model)?
If not, you don’t need to compile the OSS plugin.

Thanks.

Yes, I plan to use the yolo-tlt sample. Could you please tell me how to fix this issue?

Hi @rajkumarsaswamy

This error says that /home/rajkumar/docker/ubuntu.Dockerfile was not found.

If you want to run that command, you should add your ubuntu.Dockerfile into /home/rajkumar/docker/
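Put another way, the relative path docker/ubuntu.Dockerfile only resolves when docker build is run from the directory that contains the docker/ subfolder (normally the TensorRT OSS repository root). A minimal sketch, assuming a hypothetical clone location of ~/TensorRT and an illustrative image tag:

```shell
# Illustrative paths and tag — adjust to where you actually cloned TensorRT OSS.
cd ~/TensorRT                    # hypothetical TensorRT OSS checkout
ls docker/ubuntu.Dockerfile      # verify the file the build command expects exists
docker build -f docker/ubuntu.Dockerfile -t tensorrt-build .
```

Running the same docker build from a directory without a docker/ subfolder reproduces the "not found" error.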

Thanks

Hi @rajkumarsaswamy

Are you running this container on a Jetson Nano?

ubuntu.Dockerfile bases its container on

FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04

The nvidia/cuda image is built for x86. For Jetson (aarch64), you should base your container on nvcr.io/nvidia/l4t-base:r32.5.0 instead. l4t-base already includes CUDA, cuDNN, and TensorRT when the container is run with --runtime nvidia. If you need CUDA while building the container, set Docker's default-runtime to nvidia and reboot.
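As a concrete sketch of that setting (this is the approach documented for the NVIDIA container runtime; it assumes a stock JetPack install where nvidia-container-runtime is already present), /etc/docker/daemon.json should name nvidia as the default runtime:

```shell
# Write /etc/docker/daemon.json with nvidia as the default runtime.
# Assumption: nvidia-container-runtime is installed (it ships with JetPack).
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker   # or reboot
```

With this in place, docker build also goes through the nvidia runtime, so the CUDA libraries are visible at image build time.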