Unable to set up nvOCDR on Jetson Orin NX

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.2
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Question

Hello,
I am trying to run this sample OCR application on a Jetson Orin NX and deploy it via Triton. I first tried the tao-5.2 branch of the repository, which includes the ViT models, but ran into an issue when running

bash build_engine.sh 736 1280 4 0

after creating the Triton server. Please find the error below:

Cuda failure: CUDA driver version is insufficient for CUDA runtime version
build_engine.sh: line 17:   128 Aborted                 
(core dumped) /usr/src/tensorrt/bin/trtexec --device=${DEVICE} --onnx=/opt/nvocdr/onnx_model/ocdnet_vit.onnx --minShapes=input:1x3x${OCD_H}x${OCD_W} --optShapes=input:${OCD_MAX_BS}x3x${OCD_H}x${OCD_W} --maxShapes=input:${OCD_MAX_BS}x3x${OCD_H}x${OCD_W} --fp16 --saveEngine=/opt/nvocdr/engines/ocdnet_vit.fp16.engine

I also tried running it without the ViT models (since the ViT models need trtexec 8.6+) by following the README on the tao-5.0 branch; however, the Docker build for the Triton server failed with the following error:


134.5 cp: cannot create regular file '/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.5.3': No such file or directory
------
Triton_Server.Dockerfile:20
--------------------
  19 |     # step3: install deformable-conv trt plugin
  20 | >>> RUN wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.0/local_repos/nv-tensorrt-local-repo-ubuntu2004-8.6.0-cuda-11.8_1.0-1_amd64.deb && \
  21 | >>>     dpkg-deb -xv nv-tensorrt-local-repo-ubuntu2004-8.6.0-cuda-11.8_1.0-1_amd64.deb debs && \
  22 | >>>     cd debs/var/nv-tensorrt-local-repo-ubuntu2004-8.6.0-cuda-11.8 && \
  23 | >>>     dpkg-deb  -xv libnvinfer-plugin8_8.6.0.12-1+cuda11.8_amd64.deb deb_file && \
  24 | >>>     cp deb_file/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.6.0  /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.5.3
  25 |         
--------------------
ERROR: failed to solve: process "/bin/sh -c wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.0/local_repos/nv-tensorrt-local-repo-ubuntu2004-8.6.0-cuda-11.8_1.0-1_amd64.deb &&     dpkg-deb -xv nv-tensorrt-local-repo-ubuntu2004-8.6.0-cuda-11.8_1.0-1_amd64.deb debs &&     cd debs/var/nv-tensorrt-local-repo-ubuntu2004-8.6.0-cuda-11.8 &&     dpkg-deb  -xv libnvinfer-plugin8_8.6.0.12-1+cuda11.8_amd64.deb deb_file &&     cp deb_file/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.6.0  /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.5.3" did not complete successfully: exit code: 1

Kindly advise on the right steps to set up these models on the Triton server. I am currently using PaddleOCR on the Jetson (without GPU) and would like to try the nvOCDR models for faster inference.

Please check whether all library versions meet the requirements in this table. If they do, could you run that trtexec command again to get more logs?
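One way to carry out the version check above is to compare each installed version against its requirement. A minimal sketch, assuming plain dotted numeric versions; the installed value is hard-coded here for illustration (on the board you would read it from the package list, e.g. dpkg -l | grep nvinfer):

```shell
# Minimal version-comparison sketch (assumption: dotted numeric versions,
# as reported for the TensorRT/CUDA packages).
# version_ge A B -> success when A >= B
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

REQUIRED="8.6"     # TensorRT needed for the ViT models
INSTALLED="8.5.2"  # hard-coded example; JetPack 5.1.2 ships TensorRT 8.5.2

if version_ge "$INSTALLED" "$REQUIRED"; then
    echo "TensorRT $INSTALLED meets the $REQUIRED requirement"
else
    echo "TensorRT $INSTALLED does not meet the $REQUIRED requirement"
fi
```

With the values above this reports that 8.5.2 does not meet the 8.6 requirement, which matches the ViT constraint mentioned later in the thread.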

Please check whether libnvinfer_plugin.so.8.6.0 exists. If it does, does copying it to the other path succeed?
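The existence check above can be scripted. A hedged sketch that looks in both common multiarch lib directories (the directory names are assumptions — a Jetson itself is aarch64, so only the aarch64 path exists on the device; adjust to match your container):

```shell
# Sketch: report whether a given shared library is present in the usual
# multiarch lib directories (directory list is an assumption; adjust as needed).
find_lib() {
    for dir in /usr/lib/x86_64-linux-gnu /usr/lib/aarch64-linux-gnu; do
        if [ -e "$dir/$1" ]; then
            echo "found $dir/$1"
            return
        fi
    done
    echo "missing $1"
}

find_lib "libnvinfer_plugin.so.8.6.0"
```

If this prints "missing", the cp step in the Dockerfile cannot succeed regardless of the destination path.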

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!
As the doc shows, there is a tip: “TensorRT 8.5 or above (To use ViT-based model, TensorRT 8.6 or above is required.)”. Please install JetPack 6.0 for the ViT-based models.
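To confirm which JetPack line a board is actually on, the L4T major release recorded in /etc/nv_tegra_release can be mapped to it. A small sketch, assuming the usual correspondence (L4T r36 → JetPack 6.x, r35 → JetPack 5.x, r32 → JetPack 4.x):

```shell
# Sketch: map an L4T major release number to its JetPack line.
# (Assumed mapping: r36 -> JetPack 6.x, r35 -> JetPack 5.x, r32 -> JetPack 4.x.)
jetpack_line() {
    case "$1" in
        36) echo "JetPack 6.x" ;;
        35) echo "JetPack 5.x" ;;
        32) echo "JetPack 4.x" ;;
        *)  echo "unknown L4T release: $1" ;;
    esac
}

# On a Jetson the release number can be read from the first line of
# /etc/nv_tegra_release, e.g. "# R35 (release), REVISION: 4.1, ...":
# jetpack_line "$(sed -n 's/^# R\([0-9]*\).*/\1/p' /etc/nv_tegra_release)"
jetpack_line 36
```

JetPack 5.1.2 (L4T r35.4.1) ships TensorRT 8.5.2, which is why the ViT engines cannot be built there.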

Hi, please leave this topic open for a few more days. I will get back to it soon.

OK, thanks for the update!