Please provide complete information as applicable to your setup.
Running in a container: nvcr.io/nvidia/deepstream-l4t:6.1.1-samples
• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 6.1.1 (from the 6.1.1-samples container)
• JetPack Version (valid for Jetson only) 5.0.2 (part of the DeepStream container)
• TensorRT Version 8.4.1-1+cuda11.4 (part of the DeepStream container)
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.) Take the .etlt file generated from tao export and try to run it with deepstream-test5-app; a sketch of the steps follows this list.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
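
A minimal sketch of the reproduction, assuming the stock deepstream-test5 sources inside the container; the app config is the stock one with its [primary-gie] config-file pointed at my inference config shown further below:

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5
CUDA_VER=11.4 make
./deepstream-test5-app -c configs/test5_config_file_src_infer.txt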
I have trained a custom model using TAO with the yolov4_tiny network. When I export the model with tao export and try to use the resulting .etlt file in my DeepStream application, I get the following output.
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
WARN !! Hardware mode deprecated. Prefer GPU mode instead
(deepstream-test5-app:1): GLib-CRITICAL **: 18:26:58.264: g_strrstr: assertion 'haystack != NULL' failed
nvds_msgapi_connect : connect success
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:06.074411373 1 0xaaab2463aa00 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/app/resources/custom_models/../custom_models/yolov4_cspdarknet_tiny_epoch_200.etlt_b4_gpu0_fp32.engine failed
0:00:06.143553366 1 0xaaab2463aa00 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/app/resources/custom_models/../custom_models/yolov4_cspdarknet_tiny_epoch_200.etlt_b4_gpu0_fp32.engine failed, try rebuild
0:00:06.144035480 1 0xaaab2463aa00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
parseModel: Failed to parse ONNX model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:09.108453273 1 0xaaab2463aa00 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
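
For reference, the export step was roughly the following (a sketch of the standard tao yolo_v4_tiny export invocation; the model path, spec file, and key are placeholders for my actual values):

tao yolo_v4_tiny export -m /workspace/yolov4_cspdarknet_tiny_epoch_200.tlt \
                        -k <model-key> \
                        -e /workspace/specs/yolo_v4_tiny_train.txt \
                        -o /workspace/yolov4_cspdarknet_tiny_epoch_200.etlt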
Here is my inference config file:
# Copyright (c) 2018 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# enable-dbscan(Default=false), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
workspace-size=2500
gie-unique-id=1
gpu-id=0
net-scale-factor=1
offsets=103.939;116.779;123.68
#infer-dims=3;544;960
#infer-dims=3;384;384
infer-dims=3;384;1248
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
#0=RGB, 1=BGR, 2=GRAY
model-color-format=1
model-engine-file=../custom_models/yolov4_cspdarknet_tiny_epoch_200.etlt_b4_gpu0_fp32.engine
tlt-encoded-model=../custom_models/yolov4_cspdarknet_tiny_epoch_200.etlt
tlt-model-key=<model-key>
num-detected-classes=1
labelfile-path=../custom_models/labels.txt
uff-input-order=0
uff-input-blob-name=Input
## 0=Detector
network-type=0
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
enable-dla=1
use-dla-core=0
batch-size=4
[class-attrs-all]
pre-cluster-threshold=0.90
nms-iou-threshold=0.20
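
In case it is relevant, the [primary-gie] group of my test5 app config references the inference config above roughly like this (a sketch; the config file name is a placeholder for my actual file):

[primary-gie]
enable=1
gpu-id=0
batch-size=4
gie-unique-id=1
config-file=../custom_models/config_infer_primary_yolov4_tiny.txt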