Using a custom model in DeepStream

Hi everyone, can anyone help me with using a custom-trained model, converted to ONNX format, in DeepStream? Is it possible to take a trained YOLO model, convert it to ONNX, and then use it in deepstream_python_apps? Please help me with this.

We have a C/C++ YOLO DeepStream sample: yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream (github.com)

And the DeepStream Python bindings are in NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com)

You can refer to the samples.

How can I add an ONNX-format model to deepstream_python_apps? Could anyone please elaborate on the process? It would help me a lot.

You can deploy an ONNX model with gst-nvinfer (Gst-nvinfer — DeepStream documentation 6.4 documentation). And you can use pyds to construct a DeepStream pipeline with gst-nvinfer, as in deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com).
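
As a minimal sketch (the file names, paths, and class count below are placeholders, not values from this thread), a gst-nvinfer configuration that deploys an ONNX model directly could look like this; when onnx-file is set, gst-nvinfer builds a TensorRT engine from it on first run and caches it at the model-engine-file path:

```
[property]
gpu-id=0
# Scale 0-255 pixel input to 0-1 (1/255); adjust to your model's preprocessing
net-scale-factor=0.00392156862745098
# ONNX model that the TensorRT engine is built from
onnx-file=model.onnx
# Where the generated engine is cached after the first build
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
```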

You can try with deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com) first.

Please refer to Python Sample Apps and Bindings Source Details — DeepStream documentation 6.4 documentation to get started with the DeepStream Python APIs and samples.

OK, thanks for your quick response. I'm encountering another problem, with jetson-inference. I'm using SSD-MobilenetV2 on a Jetson Orin Nano 8GB with JetPack 6.0, but while running, I get the following error:
```
[cuda] cudaEventElapsedTime(&cuda_time, mEventsGPU[evt], mEventsGPU[evt+1])
[cuda] device not ready (error 600) (hex 0x258)
[cuda] /home/trinity/jetson-inference/build/aarch64/include/jetson-inference/tensorNet.h:769
```


Additionally, the model isn't detecting any objects. I'm getting the following output:
```
Image info:
Detected 0 objects in image <cudaImage object>
   -- ptr:      0x208ebd000
   -- size:     1836000
   -- width:    1020
   -- height:   600
   -- channels: 3
   -- format:   rgb8
   -- mapped:   true
   -- freeOnDelete: false
   -- timestamp:    36.055622

Box, labels, confidence:
Detected 0 objects in image
```

For jetson-inference related issues, please raise a topic in Latest Jetson & Embedded Systems/Jetson Orin Nano topics - NVIDIA Developer Forums.

OK, thanks for your help. If I get an error, I will get back to you.

I am getting an error.

I'm running it on JetPack 6 (L4T 36.3.0) on a Jetson Orin Nano with 8 GB of RAM.

```
trinity@trinity-desktop:/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test3$ python3 deepstream_test_3.py -i 'rtsp://admin:admin123@192.168.1.28:554/cam/realmonitor?channel=1&subtype=0'
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
ERROR: [TRT]: 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
ERROR: Deserialize engine failed from file: /home/trinity/AIML/yolov8m.engine

WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine opened error
0:02:18.583993485 149047 0xaaab117c6400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2136> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1             3x544x960
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2   OUTPUT kFLOAT output_cov/Sigmoid  4x34x60

0:02:19.009611188 149047 0xaaab117c6400 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseLabelsFile() <nvdsinfer_context_impl.cpp:548> [UID = 1]: Could not open labels file:/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/labels.txt
ERROR: parse label file:/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/labels.txt failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: init post processing resource failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:02:19.024250096 149047 0xaaab117c6400 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:02:19.024305618 149047 0xaaab117c6400 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

**PERF: {'stream0': 0.0}

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(912): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
```

It was a YOLOv8 model converted into an engine file.
I have changed the model-engine-file path and the label file path; the rest is kept as is. Below is the file for reference:

```
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
tlt-model-key=tlt_encode
tlt-encoded-model=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
#model-engine-file=/home/trinity/AIML
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#labelfile-path=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2
infer-dims=3;544;960

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
```

For YOLOv8 models, you should follow DeepStream-Yolo.

You didn't set the YOLO model in the configuration file.

Please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums for how to configure the gst-nvinfer with your model.

In our experience, you need to customize the YOLOv8 postprocessing function yourself.
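
As a rough sketch of what that looks like in the nvinfer config (the function name and paths below are illustrative placeholders; the function name must match whatever your custom library actually exports), the TrafficCamNet-specific entries are replaced with the YOLO model and its parser:

```
[property]
# YOLO model exported to ONNX (placeholder path)
onnx-file=yolov8m.onnx
labelfile-path=/home/trinity/AIML/labels.txt
batch-size=1
network-mode=2
num-detected-classes=80
cluster-mode=2
# Custom bounding-box parser for the YOLO output head (names illustrative)
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```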

Where do I have to define this postprocessing function?

I am also getting this error:

```
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine opened error
0:02:24.873462813 5698 0xaaab30604270 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2136> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1             3x544x960
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2   OUTPUT kFLOAT output_cov/Sigmoid  4x34x60

0:02:25.305612279 5698 0xaaab30604270 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseLabelsFile() <nvdsinfer_context_impl.cpp:548> [UID = 1]: Could not open labels file:/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/labels.txt
ERROR: parse label file:/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test1/labels.txt failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: init post processing resource failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:02:25.317461577 5698 0xaaab30604270 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:02:25.317516075 5698 0xaaab30604270 WARN nvinfer gstnvinfer.cpp:912:gst_nvinfer_start: error: Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

**PERF: {'stream0': 0.0}

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(912): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
```

There are already samples for postprocessing customization in yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub. The "NvDsInferParseCustomYoloV7" function in yolo_deepstream/deepstream_yolo/nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp at main · NVIDIA-AI-IOT/yolo_deepstream (github.com) is customized for YOLOv7 model postprocessing. You can customize your own postprocessing in the same way.
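
For orientation, every custom parser follows the prototype declared in nvdsinfer_custom_impl.h. A minimal skeleton (the function name is illustrative, and the YOLOv8-specific decoding is deliberately left as a placeholder) looks roughly like this; it is compiled into the .so that custom-lib-path points to:

```cpp
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Illustrative skeleton only: the real YOLOv8 decoding (output tensor
 * layout, confidence filtering, box conversion) must be written for the
 * exact model you exported. */
extern "C" bool NvDsInferParseCustomYoloV8(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    /* Walk the output layer(s), decode each candidate detection, and append
     * an NvDsInferParseObjectInfo (left, top, width, height, classId,
     * detectionConfidence) to objectList for every box that passes the
     * per-class threshold in detectionParams. */
    // ... model-specific decoding goes here ...
    return true;
}

/* Compile-time check that the function matches the prototype
 * gst-nvinfer expects for parse-bbox-func-name. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);
```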

Why did you use the deepstream-test1 label file with your new model?

I did not use the deepstream-test1 label file; I was just confused and put the wrong path. Currently I have set a new path for the label file, and I am now getting the errors and warnings shown below.

```
ERROR: [TRT]: 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
ERROR: Deserialize engine failed from file: /home/trinity/AIML/yolov8m.engine
0:00:06.896368925 6474 0xaaab4c2c1ab0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 1]: deserialize engine from file :/home/trinity/AIML/yolov8m.engine failed
0:00:07.295591530 6474 0xaaab4c2c1ab0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 1]: deserialize backend context from engine from file :/home/trinity/AIML/yolov8m.engine failed, try rebuild
0:00:07.301760284 6474 0xaaab4c2c1ab0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files

{'input': ['rtsp://admin:admin123@192.168.1.28:554/cam/realmonitor?channel=1&subtype=0'], 'configfile': None, 'pgie': None, 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
```

Below is my dstest3_pgie_config.txt:
```
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
tlt-model-key=tlt_encode
tlt-encoded-model=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
model-engine-file=/home/trinity/AIML/yolov8m.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
labelfile-path=/home/trinity/AIML/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=30
process-mode=1
model-color-format=0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2
infer-dims=3;544;960

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
```

It is running now, but I am not getting the correct output as expected.

Where and how did you get the /home/trinity/AIML/yolov8m.engine file?

I got this file by exporting a pre-trained YOLOv8 model using the yolo CLI commands.
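
For reference, a typical Ultralytics export invocation looks like the one below (the model file name is an assumption). Note that a serialized TensorRT engine is only valid for the exact TensorRT version and GPU it was built with, which is why exporting to ONNX and letting gst-nvinfer build the engine on the Jetson itself (via onnx-file) is the usual way to avoid the magicTag deserialization error shown earlier:

```
yolo export model=yolov8m.pt format=onnx
```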

I got this when I cloned the [NVIDIA-AI-IOT yolo_deepstream] repo from the link mentioned above. Now I want to build the .so file, i.e. custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so, so I changed into the nvdsinfer_custom_impl_Yolo directory, ran make, and got the error shown below. Can you help me solve it?

```
trinity@trinity-desktop:~/yolo_deepstream/deepstream_yolo/nvdsinfer_custom_impl_Yolo$ sudo make
[sudo] password for trinity:
g++ -c -o nvdsparsebbox_Yolo.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes/ -I/usr/local/cuda/include nvdsparsebbox_Yolo.cpp
nvdsparsebbox_Yolo.cpp:32:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
   32 | #include "nvdsinfer_custom_impl.h"
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:47: nvdsparsebbox_Yolo.o] Error 1
```

Any update regarding this error?

Please refer to yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub
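
One thing worth verifying (a diagnostic suggestion, not a confirmed fix from this thread): the compile line passes -I/opt/nvidia/deepstream/deepstream/sources/includes/, so check that the header actually exists at that path on your installation, for example:

```
ls /opt/nvidia/deepstream/deepstream/sources/includes/nvdsinfer_custom_impl.h
```

If only a versioned directory such as /opt/nvidia/deepstream/deepstream-7.0 exists (i.e. the deepstream symlink is missing), the include path in the Makefile would need to point at the versioned directory instead.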