Description
Hello. I have been redirected from the DeepStream forum to this one because I have a problem generating an engine for the ocdnet.onnx file provided by NVIDIA. Here is my post. In summary, when I generate a TensorRT engine from the ONNX file, the process is "Killed" at the same step every time.
I have explained all the details in the other post. Is there any hope of generating the engine on a Jetson Orin NX? Thank you in advance for your help.
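If it helps narrow this down, this is how I plan to confirm whether the kernel's OOM killer is what terminates trtexec (I am assuming that is what "Killed" means here; the exact dmesg message varies by kernel version):
dmesg | grep -iE "out of memory|killed process"   # look for an OOM-kill record mentioning trtexec
free -h   # RAM and swap available on the Orin NX while the build runs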
Environment
TensorRT Version: 8.6.2.3-1+cuda12.2
GPU Type: iGPU (Jetson Orin NX 16GB)
Nvidia Driver Version:
CUDA Version: 12.2.140
CUDNN Version: 8.9.4.25
Operating System + Version: Ubuntu 22.04, L4T 36.3.0, JetPack 6.0
Python Version (if applicable): 3.10.12
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): DeepStream 7.0 Triton multiarch container provided by NVIDIA on NGC (nvcr.io/nvidia/deepstream:7.0-triton-multiarch)
Relevant Files
onnxsimlog.log (2.6 MB)
workspace8192log.log (3.3 MB)
log.log (3.3 MB)
Steps To Reproduce
- Start the DeepStream 7.0 Triton multiarch container on a Jetson Orin NX 16GB (JetPack 6.0, L4T 36.3):
docker run -it -d --gpus all --privileged --name deepstream-vit --net=host --runtime nvidia -v /phoenix/applications/ocdr/:/~/volume/ nvcr.io/nvidia/deepstream:7.0-triton-multiarch
- Install libopencv-dev:
apt update && apt install -y libopencv-dev
- Change into the mounted volume:
cd /~/volume/
- Download the model:
wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/ocdnet/deployable_v2.3/files?redirect=true&path=ocdnet_fan_tiny_2x_icdar_pruned.onnx' -O ocdnet.onnx
- Generate the engine for the model:
/usr/src/tensorrt/bin/trtexec --onnx=ocdnet.onnx --minShapes=input:1x3x736x1280 --optShapes=input:1x3x736x1280 --maxShapes=input:4x3x736x1280 --fp16 --saveEngine=ocdnet.fp16.engine
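As additional context, this is the kind of variant I intend to try next to reduce the builder's peak memory use. It is only a sketch: as I understand it, trtexec 8.6 supports --memPoolSize and --builderOptimizationLevel, and the 4096 MiB workspace cap and optimization level 2 are example values I picked, not something taken from NVIDIA's documentation:
# Sketch: cap the tactic workspace and lower the builder optimization effort to reduce peak memory during the engine build (values are assumptions)
/usr/src/tensorrt/bin/trtexec --onnx=ocdnet.onnx --minShapes=input:1x3x736x1280 --optShapes=input:1x3x736x1280 --maxShapes=input:4x3x736x1280 --fp16 --memPoolSize=workspace:4096 --builderOptimizationLevel=2 --saveEngine=ocdnet.fp16.engine
Any guidance on whether this is a sensible direction for staying within the 16 GB of the Orin NX would be appreciated.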