Model not working after upgrading from TensorRT 8.6 to TensorRT 10.3

Description

After upgrading from TensorRT 8.6 to TensorRT 10.3, the SGIE person-attribute model stops recognizing any attributes: all confidence scores drop below 0.2, while the same pipeline produced scores above 0.55 before the upgrade.

Environment

TensorRT Version: 10.3.0
GPU Type: Jetson Orin AGX
Nvidia Driver Version:
CUDA Version: 12.6
CUDNN Version: 9.3
Operating System + Version: Ubuntu 22.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Baremetal

Hi,
I was running a person-detection YOLOv8 model as the PGIE and a person-attribute model as the SGIE on DeepStream 7.0, JetPack 6.0, and TensorRT 8.6. I have now upgraded to the configuration mentioned above and rebuilt the engine files for both models using DeepStream 7.1. The YOLOv8 model works properly, but the person-attribute model is no longer able to recognize any attribute.
To debug, I printed the confidence values in the custom parser and found that they are all below 0.2, whereas earlier with DeepStream 7.0 I was getting values above 0.55.

Relevant Files

Model files are available at

solider_model.onnx_b1_gpu0_fp32.engine - TensorRT 10.3 and DeepStream 7.1; the engine was generated automatically by deepstream-app (inference with this engine gives very low confidence scores)

sim_solider.onnx_b1_gpu0_fp32.engine - TensorRT 8.6 and DeepStream 7.0; the engine was generated automatically by deepstream-app (this engine works properly)

The shared library and the YOLO model are also in this folder.

Steps To Reproduce

deepstream-app -c main_config.txt

Below is the configuration I am using.

solider_config.txt

[property]
gpu-id=0
onnx-file=solider_model.onnx
model-engine-file=solider_model.onnx_b1_gpu0_fp32.engine
batch-size=1
net-scale-factor=0.00392156862745098
offsets=123.675;116.28;103.53
model-color-format=0
network-mode=0
process-mode=2
network-type=1
labelfile-path=labels.txt
interval=0
infer-dims=3;256;192
maintain-aspect-ratio=1
symmetric-padding=0
custom-lib-path=parser/libnvdsinfer_custom_impl.so
parse-classifier-func-name=NvDsInferParseSolider
classifier-threshold=0.55

yolo_config.txt

[property]
gpu-id=0
net-scale-factor=0.0039215686274509
model-engine-file=yolov8s.onnx_b1_gpu0_fp32.engine
#onnx-file=yolov8n.onnx
batch-size=1
network-mode=0
process-mode=1
num-detected-classes=80
interval=0
#gie-unique-id=2
symmetric-padding=1
cluster-mode=2
parse-bbox-func-name=NvDsInferParseCustomYoloV8
custom-lib-path=parser/libnvdsinfer_custom_impl.so
infer-dims=3;640;640
maintain-aspect-ratio=1
#output-tensor-meta=1

[class-attrs-all]
pre-cluster-threshold=0.75
#topk=20
nms-iou-threshold=0.25

main_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///any-mp4-with-person.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
type=1
container=1
sync=0
codec=1
enc-type=0
#source-id=1
gpu-id=0
nvbuf-memory-type=0


[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
#sync=0
#iframeinterval=10
#bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=clip2_db.mp4
source-id=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=10
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
display-bbox=1
display-text=1

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=1
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached


[primary-gie]
enable=1
gpu-id=0
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=0;0;1;1
bbox-border-color1=0;0;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;0;1;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
config-file=yolo_config.txt

[secondary-gie1]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=5
input-tensor-meta=0
operate-on-gie-id=1
nvbuf-memory-type=0
config-file=solider_config.txt

[tracker]
enable=1
# For the NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=640
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_IOU.yml
#ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvSORT.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_DeepSORT.yml
#gpu-id=0
display-tracking-id=1

[tests]
file-loop=0

I am looking forward to a reply on this. Has anyone started looking at this ticket?

This model has two output layers, and their order changed with TensorRT 10.3, so my parser code was reading the wrong layer. After fetching the data from the correct output layer, the issue was resolved.

Pleased you were able to resolve this - thanks for letting us know! Have a great day.
