Problems with deepstream docker on Jetson nano

I am trying to run deepstream-app with one of the available sample config files on a Jetson Nano inside the deepstream_sdk_v4.0.0.2_jetson container, but I get the following error:

deepstream-app: error while loading shared libraries: libnvinfer.so.6: cannot open shared object file: No such file or directory

I'm not sure what is wrong here. I assumed this container would support deepstream-app out of the box.

https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t/tags

Can you find libnvinfer.so.6 under /usr/lib/aarch64-linux-gnu/ inside the container?
Also, I suggest you run directly on Jetson devices: after you install JetPack, you can run the samples directly.
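A quick way to check this from inside the running container (a sketch; the path is the standard JetPack library location, adjust if yours differs):

```shell
# Inside the container: list any TensorRT runtime libraries that were
# made available from the host. If nothing is listed, the host's
# JetPack libraries were not mounted into the container.
ls /usr/lib/aarch64-linux-gnu/libnvinfer.so* 2>/dev/null \
    || echo "libnvinfer not found in the container"
```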

No, it's not there. I need to run it using the container provided by NGC.

This is how I am starting the docker image:

sudo docker run -it --net=host -v /tmp/.X11-unix:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-base

Please let me know, as I couldn't find enough documentation on how to run deepstream-app from the container.

Sorry, I fail to understand how to run these containers without any documentation. If someone could refer me to a single page where the steps are provided, that would be great. What's the point of publishing these containers if there are no instructions on how to use them, and using them gives errors?

I fixed CUDA and everything, and I can run DeepStream outside the container, but running deepstream-app from the provided DeepStream container still gives the same error.

Finally, I managed to run deepstream-app using:

nvidia-docker run  \
        --rm \
        -it \
        -e "DISPLAY" \
        --net=host \
        --device=/dev/nvhost-ctrl \
        --device=/dev/nvhost-ctrl-gpu \
        --device=/dev/nvhost-prof-gpu \
        --device=/dev/nvmap \
        --device=/dev/nvhost-gpu \
        --device=/dev/nvhost-as-gpu \
        --device=/dev/nvhost-vic \
        nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples

My issue is that I cannot run any sample on my Jetson Nano:

deepstream-app -c deepstream_sdk_v4.0.2_jetson/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

gives:
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Failed to open low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
dlopen error: libcudart.so.10.2: cannot open shared object file: No such file or directory
gstnvtracker: Failed to initilaize low level lib.
** ERROR: main:651: Failed to set pipeline to PAUSED

I can find libcudart.so.10.0 in /usr/local/cuda/lib64/.

Any pointers?
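One way to narrow this down (a diagnostic sketch, run inside the container; the tracker library path is the one from the error message above) is to compare which cudart versions the dynamic linker can find against the version the tracker library actually requests:

```shell
# Which libcudart versions does the dynamic linker know about?
ldconfig -p 2>/dev/null | grep libcudart \
    || echo "no libcudart in linker cache"

# Which cudart does the tracker library request at load time?
ldd /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so 2>/dev/null \
    | grep cudart || echo "libnvds_mot_klt.so not present here"
```

If the first command only shows 10.0 while the second asks for 10.2, the container's libraries and the host's mounted CUDA install are out of sync.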

My advice would be: until the approach to how nvidia-docker works on Tegra changes, don't use it. Docker was designed at least in part to deal with dependency issues, but the way nvidia-docker is implemented on Tegra, many files are bind-mounted from the host to save image space.

One major drawback, out of several, is that the host and image need to be kept in sync or else libraries will fail to be found. It was a neat experiment, but ultimately some neat experiments end up having too many problems. I have been told this approach will change in a couple of releases.
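For reference, on the Jetson host the files that get bind-mounted into L4T containers are listed in csv manifests. The path below is the usual nvidia-container-runtime location on JetPack; treat it as an assumption and adjust to your install:

```shell
# On the Jetson host: inspect which host libraries the runtime will
# mount into containers. If host and image versions drift apart, the
# mounted libraries no longer match what the container expects.
grep -h 'libnvinfer' \
    /etc/nvidia-container-runtime/host-files-for-container.d/*.csv \
    2>/dev/null || echo "no csv manifests found (not a Jetson host?)"
```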

If you want, you can test on x86 nvidia-docker. DeepStream runs great on that. Then when the Tegra side of things is fixed, you can make the necessary changes, which shouldn't be many.


Thanks for your advice, I really appreciate it. Yes, I have been using x86 nvidia-docker and it is really smooth. Since I wanted to deploy remotely on a Jetson Nano, I thought Docker might be a good idea to start with.

Thanks for sharing.

@amycao It's not a solution to the problem, but a problem in itself.

Which JetPack version did you use? From your command, it seems you are using DS 4.0.2, right?
deepstream-app -c deepstream_sdk_v4.0.2_jetson/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
I am not sure why deepstream-app is linked against the cudart 10.2 library; does your customization code need a CUDA 10.2 environment? We have released JetPack 4.4 and DS 5.0, which incorporate CUDA 10.2; you can give that a try.

Also, note that I can run it successfully using the same docker image as you:
nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples

root@56d6a7aa9dca:~/deepstream_sdk_v4.0.2_jetson# vim samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
root@56d6a7aa9dca:~/deepstream_sdk_v4.0.2_jetson# deepstream-app -c samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
Creating LL OSD context new
0:00:03.523211487 18 0x7f24002390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:useEngineFile(): Failed to read from model engine file
0:00:03.523472874 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:01:20.972485512 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_jetson/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_int8.engine
0:01:21.085870683 18 0x7f24002390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:useEngineFile(): Failed to read from model engine file
0:01:21.085995520 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:02:29.133585719 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_jetson/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine
0:02:29.208832671 18 0x7f24002390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:useEngineFile(): Failed to read from model engine file
0:02:29.208942211 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:03:34.139067710 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_jetson/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:03:34.331772096 18 0x7f24002390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:03:34.331914406 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:04:07.446219268 18 0x7f24002390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_jetson/samples/models/Primary_Detector/resnet10.caffemodel_b4_int8.engine

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg)
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
** INFO: <bus_callback:189>: Pipeline ready

Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Creating LL OSD context new
** INFO: <bus_callback:175>: Pipeline running

KLT Tracker Init
KLT Tracker Init
KLT Tracker Init
KLT Tracker Init
**PERF: 32.68 (32.68) 32.68 (32.68) 32.68 (32.68) 32.68 (32.68)
**PERF: 29.96 (31.23) 29.96 (31.23) 29.96 (31.23) 29.96 (31.23)

Yes, I was not using the relevant DeepStream image: I thought I was on JetPack 3, but I was actually using JetPack 4.