Setup:
• Jetson Orin Nano
• DeepStream 6.4
• JetPack 6.0
• TensorRT 8.6
On my Windows machine, I trained a custom YOLOv8 detection model and exported it to the TensorRT engine format (the export command I used is shown right after the first config below). I copied the model to my Jetson device and tried to run a sample deepstream-app with a single video source. What I did was copy source2_1080p_dec_infer-resnet_demux_int8.txt and modify it as follows:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///home/souf/Desktop/custom/vid.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0
[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
#source0 output as filesink
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
#set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=out_source0.mp4
source-id=0
[sink1]
#source1 output as filesink
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
#set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=out_source1.mp4
source-id=1
#[sink0]
#enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
#type=4
#1=h264 2=h265
#codec=1
#encoder type 0=Hardware 1=Software
#enc-type=0
#sync=1
#bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
#set profile only for hw encoder, sw encoder selects profile based on sw-preset
#profile=0
#set below properties in case of RTSPStreaming
#rtsp-port=8554
#udp-port=5400
#source-id=0
#[sink1]
#enable=0
#Type - 1=FakeSink 2=EglSink 3=File
#type=2
#sync=1
#source-id=1
#gpu-id=0
#nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=33000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=/home/user/Desktop/custom/best.engine
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=/home/user/Desktop/custom/config_infer_primary.txt
[tests]
file-loop=0
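For reference, this is roughly how I produced best.engine on the Windows machine (Ultralytics YOLOv8 CLI; exact arguments recalled from memory, so treat this as approximate):

# Run on the Windows machine; the engine gets built against the Windows GPU
# and whatever TensorRT version is installed there
yolo export model=best.pt format=engine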
Then I copied and modified config_infer_primary.txt as well:
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
model-engine-file=best.engine
labelfile-path=labels.txt
batch-size=30
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
force-implicit-batch-dim=1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0
infer-dims=3;544;960
#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3
#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.2
## Per class configurations
[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4
#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5
#[class-attrs-2]
#pre-cluster-threshold=0.1
#eps=0.6
#dbscan-min-score=0.95
#[class-attrs-3]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5
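In case it is relevant, labels.txt contains just my single class name, one label per line (placeholder name shown here, not the real one):

my_custom_class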
Then I ran the command:
/opt/nvidia/deepstream/deepstream/bin/deepstream-app -c source2_1080p_dec_infer-resnet_demux_int8.txt
The following errors were thrown:
** INFO: <create_encode_file_bin:366>: Could not create HW encoder. Falling back to SW encoder
ERROR: [TRT]: 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
ERROR: Deserialize engine failed from file: /home/user/Desktop/custom/best.engine
0:00:06.908494174 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/home/user/Desktop/custom/best.engine failed
0:00:07.294681612 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/home/user/Desktop/custom/best.engine failed, try rebuild
0:00:07.294752842 6082 0xaaaad6852600 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:12.948179049 6082 0xaaaad6852600 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:13.346400395 6082 0xaaaad6852600 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:00:13.346468809 6082 0xaaaad6852600 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:13.346540006 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:13.350417191 6082 0xaaaad6852600 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/user/Desktop/custom/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/user/Desktop/custom/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Why is deepstream-app trying to rebuild the engine file when I already produced it? Is it because the original sample config was written for TLT-encoded models and I modified it incorrectly?
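Or is the correct approach to export only to ONNX on Windows and then build the engine directly on the Jetson with the TensorRT 8.6 that ships with JetPack 6.0? Something along these lines (the trtexec path is assumed from the standard JetPack install):

# On Windows: export to ONNX instead of a serialized engine
yolo export model=best.pt format=onnx
# On the Jetson: build the engine locally against TensorRT 8.6
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best.engine --fp16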
Any help is appreciated.