How to run the deepstream-app command from another folder?

• Hardware Platform (Jetson / GPU)
NVIDIA Jetson Nano (Developer Kit Version)
• DeepStream Version
5.0.0
• JetPack Version (valid for Jetson only)
4.4 [L4T 32.4.3]
• TensorRT Version
7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)
I don’t know how to find the GPU driver version on the Jetson Nano.
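
(For reference: on Jetson the GPU driver is bundled with L4T rather than installed as a separate package, so one way to check it, assuming a standard JetPack install, is to read the L4T release file:)

head -n 1 /etc/nv_tegra_release
# prints something like: # R32 (release), REVISION: 4.3, ...
dpkg -l | grep nvidia-l4t-core   # installed L4T core package version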

When I run deepstream-app -c test.txt in /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/, it runs OK.
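
That is, the working invocation is:

cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/
deepstream-app -c test.txt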

test.txt:

# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=1
width=640
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://127.0.0.1:8554/test
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://127.0.0.1:8554/test
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt

[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

But if I run it from another folder, for example /home/user, the following errors occur.

Error: Could not parse model engine file path
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed

Using winsys: x11 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:00.246000790 19195     0x38d32200 WARN                 nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Configuration file parsing failed
0:00:00.246075115 19195     0x38d32200 WARN                 nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Config file path: /home/user/config_infer_primary_nano.txt
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Configuration file parsing failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/user/config_infer_primary_nano.txt
App run failed
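
The failing run was essentially (a reconstruction; it assumes test.txt was also copied to /home/user, which matches the config path in the log above):

cd /home/user
deepstream-app -c test.txt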

I tried copying config_infer_primary_nano.txt to /home/user and modifying the config to use the DeepStream install path:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/labels.txt
batch-size=8
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
force-implicit-batch-dim=1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
#cluster-mode=1
#scaling-filter=0
#scaling-compute-hw=0

#Use these config params for group rectangles clustering mode
[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
eps=0.2
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

But it still does not work. How can I solve this problem?

I’ve installed DeepStream 5.0 GA and DeepStream 5.0 Python as described by @Fiona.Chen in this post.

Please use an absolute path instead of a relative path for the config file; then you can run it anywhere. This can easily be seen from your error log. Does “/home/user/config_infer_primary_nano.txt” exist on your device?

0:00:00.246075115 19195 0x38d32200 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Config file path: /home/user/config_infer_primary_nano.txt

An example: deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt
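
That is, with an absolute path to the app config you can start it from any directory, for example:

cd /home/user
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt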

Yes, I copied config_infer_primary_nano.txt from /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app to /home/user, but I still get the same error.

I also used absolute paths in the config file (which is at /home/user), e.g. model-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel, but the same error still occurs.

I also find that if I use absolute paths instead of relative paths in the original config file (which is at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app), the error occurs too, but using relative paths is OK.

This problem occurs on my Jetson AGX Xavier platform too.

So I think there might be an issue.

Scenario 1:

  • Use relative paths in the original config file (which is at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app)

  • Place test.txt at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/

  • Run deepstream-app -c test.txt at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/
    And I got the following log, and it ran OK:

    Using winsys: x11
    gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
    gstnvtracker: Optional NvMOT_RemoveStreams not implemented
    gstnvtracker: Batch processing is OFF
    gstnvtracker: Past frame output is OFF
    ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine open error
    0:00:05.415460394 7477 0x15ab400 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine failed
    0:00:05.415618693 7477 0x15ab400 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine failed, try rebuild
    0:00:05.415654959 7477 0x15ab400 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
    INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
    INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
    0:00:38.790237051 7477 0x15ab400 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b2_gpu0_fp16.engine successfully
    INFO: [Implicit Engine Info]: layers num: 3
    0 INPUT kFLOAT input_1 3x272x480
    1 OUTPUT kFLOAT conv2d_bbox 16x17x30
    2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x17x30

    0:00:38.918091103 7477 0x15ab400 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_nano.txt sucessfully

============================================================================

Scenario 2:

  • Use absolute paths in the original config file (which is at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app)

  • Place test.txt at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/

  • Run deepstream-app -c test.txt at /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/
    And I got the following ERROR:

    Error: Could not parse model engine file path
    Failed to parse group property
    ** ERROR: <gst_nvinfer_parse_config_file:1242>: failed

    Using winsys: x11
    gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
    gstnvtracker: Optional NvMOT_RemoveStreams not implemented
    gstnvtracker: Batch processing is OFF
    gstnvtracker: Past frame output is OFF
    0:00:00.789128569 9829 0x9a41200 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Configuration file parsing failed
    0:00:00.789264664 9829 0x9a41200 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_nano.txt
    ** ERROR: <main:655>: Failed to set pipeline to PAUSED
    Quitting
    ERROR from primary_gie: Configuration file parsing failed
    Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
    Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_nano.txt
    App run failed

Have you read the “config_infer_primary_nano.txt” file? It uses relative paths, so you need to modify them if you want to move it to another folder. I don’t know what you have put in your “test.txt”; please check the content yourself.
Please read every config file before you use it.
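
(Note: judging from the Scenario 1 log above, a relative path inside a config file is resolved against that config file's own directory, not against the shell's working directory: the ../../ prefix was expanded against /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/. A quick way to see what such a path expands to:)

realpath -m /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector_Nano/resnet10.caffemodel
# -m also resolves paths that do not exist yet
# -> /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel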

I read it, and I changed all relative paths to absolute ones. Or is it mandatory to use relative paths?

The original path-related contents in config_infer_primary_nano.txt are:

model-file=../../models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=../../models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine
labelfile-path=../../models/Primary_Detector_Nano/labels.txt

And config_infer_primary_nano.txt is from /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app, so I think I can change them as follows:

model-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/labels.txt

Is there anything wrong? Or did I make some mistake with these paths?

After further experimenting, I find that I only need to comment out model-engine-file in the config file; then I can use absolute paths and run it from anywhere, since DeepStream will generate the engine file normally.
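
For example, the path lines in the copied /home/user/config_infer_primary_nano.txt then look like this (a sketch based on the paths above):

model-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.prototxt
# commented out so DeepStream rebuilds and serializes the engine on first run:
#model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/labels.txt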

Is the engine file generated in the folder after you run successfully for the first time?

Yes, it generates the engine file at /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano
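
A quick check (the path is the one from the log above):

ls -l /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector_Nano/*.engine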

The configuration is OK.