How to run deepstream-app without building the engine?

Setup:

• Jetson Orin Nano
• DeepStream 6.4
• JetPack 6.0
• TensorRT 8.6

On my Windows machine, I trained a custom YOLOv8 detection model and exported it to TensorRT engine format. I copied the engine to my Jetson device and tried to run a sample deepstream-app with a single video source. What I did was copy source2_1080p_dec_infer-resnet_demux_int8.txt and modify it as follows:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///home/souf/Desktop/custom/vid.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0

## (0): memtype_device - Memory type Device
## (1): memtype_pinned - Memory type Host Pinned
## (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://…/…/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0

## (0): memtype_device - Memory type Device
## (1): memtype_pinned - Memory type Host Pinned
## (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
#source0 output as filesink
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10

#set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=out_source0.mp4
source-id=0

[sink1]
#source1 output as filesink
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10

#set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=out_source1.mp4
source-id=1

#[sink0]
#enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
#type=4
#1=h264 2=h265
#codec=1
#encoder type 0=Hardware 1=Software
#enc-type=0
#sync=1
#bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10

#set profile only for hw encoder, sw encoder selects profile based on sw-preset
#profile=0

#set below properties in case of RTSPStreaming
#rtsp-port=8554
#udp-port=5400
#source-id=0

#[sink1]
#enable=0
#Type - 1=FakeSink 2=EglSink 3=File
#type=2
#sync=1
#source-id=1
#gpu-id=0
#nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=33000

##Set muxer output width and height
width=1920
height=1080
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=/home/user/Desktop/custom/best.engine
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=/home/user/Desktop/custom/config_infer_primary.txt

[tests]
file-loop=0

Then I copied and modified config_infer_primary.txt:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
model-engine-file=best.engine
labelfile-path=labels.txt
batch-size=30
process-mode=1
model-color-format=0

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
force-implicit-batch-dim=1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so

## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0
infer-dims=3;544;960

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.2

## Per class configurations
[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

#[class-attrs-2]
#pre-cluster-threshold=0.1
#eps=0.6
#dbscan-min-score=0.95

#[class-attrs-3]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

Then I ran the command:

/opt/nvidia/deepstream/deepstream/bin/deepstream-app -c source2_1080p_dec_infer-resnet_demux_int8.txt

The following errors were thrown:

** INFO: <create_encode_file_bin:366>: Could not create HW encoder. Falling back to SW encoder
ERROR: [TRT]: 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
ERROR: Deserialize engine failed from file: /home/user/Desktop/custom/best.engine
0:00:06.908494174 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/home/user/Desktop/custom/best.engine failed
0:00:07.294681612 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/home/user/Desktop/custom/best.engine failed, try rebuild
0:00:07.294752842 6082 0xaaaad6852600 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:12.948179049 6082 0xaaaad6852600 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:13.346400395 6082 0xaaaad6852600 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:00:13.346468809 6082 0xaaaad6852600 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:13.346540006 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:13.350417191 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/user/Desktop/custom/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/user/Desktop/custom/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

Why is deepstream-app trying to build the engine file when I have already produced it? Is it because the original config used TLT-encoded models and I modified it incorrectly?

Any help is appreciated.
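A TensorRT engine is tied to the GPU, the platform, and the exact TensorRT version it was serialized with, so an engine built on a Windows x86 machine cannot be deserialized on a Jetson; that is what the magicTag assertion above is saying. Once deserialization fails, nvinfer falls back to building an engine from the model files named in the config (onnx-file, uff-file, or a TLT model), and since none of those is set here, it reports "no model file matched". One way to confirm this on the Jetson is to try deserializing the engine directly with trtexec (a sketch; the engine path is taken from the log above):

# If this fails with the same magicTag error, the engine was serialized
# by an incompatible TensorRT build and must be rebuilt on the Jetson.
/usr/src/tensorrt/bin/trtexec --loadEngine=/home/user/Desktop/custom/best.engine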

What are the parameters of the engine file you exported? These parameters must be consistent with the configuration file, otherwise DeepStream cannot load the model.

Thank you for your reply @junshengy.

I updated batch-size to the default value of 16, since I did not specify one when training the model. As for network-mode, I was not sure what the default value is, so I tried all three possible values (0, 1, 2), but they all gave the same error.

Is your original model ONNX?

Exporting the engine has nothing to do with the parameters used for training; they should be the parameters of onnx_tensorrt (if you use that package). If you use trtexec, you need to specify them as command-line parameters.
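That is, precision and shapes are chosen when the engine is built, not inherited from training. A minimal on-device build might look like this (a sketch, assuming the exported ONNX is named best.onnx; --fp16 is optional):

# Build the engine on the Jetson itself so it matches the local
# TensorRT version and GPU; FP16 avoids the INT8 calibration question.
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best.engine --fp16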

My original model was a .pt file. What I tried now was exporting it to ONNX using the yolo export command:

yolo export model=best.pt int8=True format=onnx

Then, on my Jetson, I ran trtexec to convert it to an engine:

/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=custom.engine --int8

For some reason the "--batch=" argument is not recognized, so I just left it at the default value.
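For what it's worth, --batch only ever applied to implicit-batch (Caffe/UFF) networks; an ONNX model is parsed as an explicit-batch network, so the batch size comes from the ONNX itself, or from shape flags if the export used a dynamic batch dimension. A sketch, assuming a dynamic export (dynamic=True in yolo export) and the input tensor name images that the log below reports:

# Only meaningful if the ONNX batch dimension is dynamic; with a static
# export these flags are unnecessary.
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=custom.engine \
    --minShapes=images:1x3x640x640 --optShapes=images:1x3x640x640 \
    --maxShapes=images:16x3x640x640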

I then set batch-size=1 and network-mode=1 in config_infer_primary.txt, and also set batch-size=1 in the source2… .txt file. Here is the new output:

** INFO: <create_encode_file_bin:366>: Could not create HW encoder. Falling back to SW encoder
0:00:06.548453934  6043 0xaaaacbe94410 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/user/Desktop/custom/custom.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images          3x640x640       
1   OUTPUT kFLOAT output0         5x8400          

WARNING: Backend context bufferIdx(0) request dims:1x3x544x960 is out of range, [min: 1x3x640x640, max: 1x3x640x640]
0:00:06.956800358  6043 0xaaaacbe94410 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2048> [UID = 1]: backend can not support dims:3x544x960
0:00:06.956845832  6043 0xaaaacbe94410 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/home/user/Desktop/custom/custom.engine failed to match config params, trying rebuild
0:00:06.975986385  6043 0xaaaacbe94410 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:13.132221388  6043 0xaaaacbe94410 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:13.562520155  6043 0xaaaacbe94410 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:00:13.562595198  6043 0xaaaacbe94410 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:13.562672033  6043 0xaaaacbe94410 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:13.567042210  6043 0xaaaacbe94410 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/user/Desktop/custom/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/user/Desktop/custom/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
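Note what changed in this run: the Jetson-built custom.engine now deserializes successfully, but nvinfer rejects it because the config asks for infer-dims=3;544;960 while the engine only supports 1x3x640x640, so it falls into the same rebuild path with no model file to rebuild from. A sketch of the [property] values that would match the engine reported in the log (the uff-* keys, output-blob-names and force-implicit-batch-dim lines are leftovers from the ResNet sample and should go; a YOLOv8 head additionally needs a custom bbox parser via parse-bbox-func-name/custom-lib-path, which this thread does not cover):

# Match the engine's reported bindings: input "images" 3x640x640,
# output "output0" 5x8400, batch size 1.
model-engine-file=/home/user/Desktop/custom/custom.engine
batch-size=1
network-mode=1
infer-dims=3;640;640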

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

I think you can't use INT8 due to the missing calibration file, so there is likely a problem with the converted model. Regarding INT8, you can refer to this document.
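For reference, trtexec's --int8 flag alone does not calibrate anything: without a calibration cache (or Q/DQ nodes already in the ONNX), trtexec assigns placeholder dynamic ranges and the resulting engine's accuracy is unusable. A sketch of an INT8 build with a cache (calib.cache is hypothetical here; producing it requires running a calibrator over representative data first):

# --calib points trtexec at an existing INT8 calibration cache;
# without it, --int8 builds with dummy scales.
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=custom.engine \
    --int8 --calib=calib.cache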
