Deepstream 6.3-triton-multiarch in Docker cannot use Tracker

NVIDIA Jetson Orin NX
Ubuntu 20.04

DeepStreamSDK 6.3.0
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.5
cuDNN Version: 8.6
libNVWarp360 Version: 2.0.1d3
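
For reference, I believe this version information is what the following command prints inside the container:

deepstream-app --version-all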

I executed this inside the Docker container deepstream:6.3-triton-multiarch:
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
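
For context, the [tracker] group in this sample config is what points deepstream-app at the low-level tracker library that fails to load below; roughly (the ll-config-file value is from memory and may differ in the shipped file):

[tracker]
enable=1
# low-level tracker library the app tries to dlopen (same path as in the error below)
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so
# the sample uses an NvDCF tracker config; exact filename may differ
ll-config-file=config_tracker_NvDCF_perf.yml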

And I got:
root@ffbccffd8848:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
0:00:03.672682568 1159 0xaaab072a7f20 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 6]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1

0:00:03.871422407 1159 0xaaab072a7f20 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 6]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:03.906121084 1159 0xaaab072a7f20 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_2> [UID 6]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carmake.txt sucessfully
0:00:06.273672213 1159 0xaaab072a7f20 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 5]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarColor/resnet18.
root@ffbccffd8848:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app# vim source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
root@ffbccffd8848:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
0:00:03.732610100 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 6]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1

0:00:03.929626984 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 6]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:03.963898655 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_2> [UID 6]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carmake.txt sucessfully
0:00:06.314255694 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 5]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 12x1x1

0:00:06.520056400 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 5]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:06.528239611 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
0:00:08.789068229 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1

0:00:08.995406057 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/…/…/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:09.001758010 1188 0xaaaaed9d1120 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Failed to open low-level lib at /opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so
dlopen error: /opt/nvidia/cupva-2.3/lib/aarch64-linux-gnu/libcupva_host.so.2.3: undefined symbol: PvaProgramSetDMADescriptorsV2
gstnvtracker: Failed to initilaize low level lib.
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=2
nvstreammux: Successfully handled EOS for source_id=3
App run failed
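
For what it's worth, the missing symbol can be checked directly inside the container, without running deepstream-app, with something like this (assuming binutils is available; library paths are taken from the error above):

# see which CUPVA library the tracker library pulls in (may be a transitive dependency)
ldd /opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so | grep -i cupva
# check whether the mounted host library actually exports the symbol the tracker needs
nm -D /opt/nvidia/cupva-2.3/lib/aarch64-linux-gnu/libcupva_host.so.2.3 | grep PvaProgramSetDMADescriptors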

I don't know where I went wrong. This is a brand-new Docker image I just pulled; did you verify this config before publishing it?

Please help me resolve this issue. I look forward to your reply.

Thanks for your report. I will check and get back to you.

I can't reproduce the failure with JetPack 5.1.2 and the Docker image deepstream:6.3-triton-multiarch. Have you changed anything yourself?

It's pretty strange. I just pulled the image nvcr.io/nvidia/deepstream:6.3-triton-multiarch and tried again, and I got exactly the same error message: dlopen error: /opt/nvidia/cupva-2.3/lib/aarch64-linux-gnu/libcupva_host.so.2.3: undefined symbol: PvaProgramSetDMADescriptorsV2.

I haven't changed anything in the container. Here is my command:

docker run --runtime nvidia -it -p 122:22 --privileged \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -v /home/amov/Desktop/workspace/Deepstream-hongyan-app:/opt/nvidia/deepstream/deepstream/workspace/Deepstream-hongyan-app \
  -e DISPLAY=:0 \
  -w /opt/nvidia/deepstream/deepstream \
  nvcr.io/nvidia/deepstream:6.3-triton-multiarch
When I entered the container, I set CUDA_VER to 11.4, went into /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app, and ran make right away.
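
Concretely, the commands were roughly:

export CUDA_VER=11.4
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
make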

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Can you share your JetPack version? Can you try with a clean JetPack 5.1.2?
Here is the guide on how to run the DeepStream docker container on Jetson: DeepStream-l4t | NVIDIA NGC
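
For example, the JetPack / L4T release on the host can be checked with something like:

cat /etc/nv_tegra_release
# or, if the apt metadata is installed:
dpkg -l | grep nvidia-l4t-core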

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.