How to get the correct output label for my inference from a TensorFlow model loaded in a Triton server in DeepStream 5.0

Hi, I’m working with DeepStream 5.0 and the Triton Inference Server. I’ve loaded a TensorFlow model for license plate text recognition. The problem I’m facing is that after I load the model with the proper tensorflow_graphdef config in Triton Server, the result I get when I run deepstream-app -c config.txt is wrong: it should label the six characters on the plate, but instead I get just one letter in the output label. The model I’m using has a -1x24x94x3 input and a 1x6 output. I can show you the graph if needed. These are my configs for Triton and DeepStream:
config.pbtxt
name: "lpr_model"
platform: "tensorflow_graphdef"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 24, 94, 3 ]
  }
]
output [
  {
    name: "d_predictions"
    data_type: TYPE_INT32
    dims: [ 6 ]
    label_filename: "lpr_labels.txt"
  }
]

source1_primary_classifier config, modified for my TensorFlow model with RTSP:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=kitti-trtis

[tiled-display]
enable=1
rows=1
columns=1
#width=1280
#height=720
width=376
height=96
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://…/…/streams/lpr_complete_test.mp4
#uri=file://…/…/streams/classification_test_video.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
#set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
##Set muxer output width and height
#width=1280
#height=720
width=376
height=96
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

#config-file property is mandatory for any gie section.
#Other properties are optional and if set will override the properties set in
#the infer config file.
[primary-gie]
enable=1
#(0): nvinfer; (1): nvinferserver
plugin-type=1
#infer-raw-output-dir=trtis-output
batch-size=1
interval=0
gie-unique-id=1
#config-file=config_infer_primary_classifier_densenet_onnx.txt
#config-file=config_infer_primary_classifier_inception_graphdef_postprocessInDS.txt
#config-file=config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt
config-file=config_infer_primary_classifier_lpr_model_postprocessInTrtis.txt
#config-file=config_infer_primary_classifier_lpr_saved_model.txt

[tests]
file-loop=1

Config of the primary GIE:

infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "input"
    } ]
    outputs: [
      { name: "d_predictions" }
    ]
    trt_is {
      model_name: "lpr_model"
      version: -1
      model_repo {
        root: "…/…/trtis_model_repo"
        strict_model_config: true
        tf_gpu_memory_fraction: 0.35
        tf_disable_soft_placement: 0
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 0.0078125
      channel_offsets: [128, 128, 128]
    }
  }

  postprocess {
    trtis_classification {
      topk: 1
      threshold: 0.1
    }
  }
}

input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
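
For reference, I believe the normalize block above computes out = scale_factor * (pixel - channel_offset), so with scale_factor 0.0078125 (= 1/128) and offsets of 128 the network input lands roughly in [-1, 1]. A minimal sketch of that arithmetic (illustrative only, not DeepStream code):

#include <cstdio>

// Illustrative only: how I understand the "normalize" settings above map one RGB pixel.
int main() {
    const float scale_factor = 0.0078125f;               // 1/128, from the config
    const float channel_offsets[3] = {128.f, 128.f, 128.f};

    unsigned char pixel[3] = {0, 128, 255};               // example RGB values
    for (int c = 0; c < 3; ++c) {
        float normalized = scale_factor * (pixel[c] - channel_offsets[c]);
        std::printf("channel %d: %u -> %.4f\n", c, pixel[c], normalized);
    }
    return 0;                                              // prints -1.0000, 0.0000, 0.9922
}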

Any suggestions, please? Thank you.

Please provide complete information as applicable to your setup.

• Hardware Platform: Tesla T4 on Ubuntu Server 18.04
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 450.51.06

Hi @tubarao0705,
Please provide your setup info as in the other ticket.

Does "?" in ?x24x94x3 mean a variable dimension? If it does, try changing "dims: [ 24, 94, 3 ]" to "dims: [ 24, 94, 3, -1 ]".

Refer to: Documentation - Latest Release :: NVIDIA Deep Learning Triton Inference Server Documentation

Yes, the ? just indicates where the batch size goes. Because I set the max_batch_size property in the config file, I only put [24, 94, 3]; according to the docs the full shape becomes [-1, 24, 94, 3], which matches my expected input, I think. Where I have the real problem is in the output that is displayed for the inference. I suspect gst-nvinferserver is applying a softmax/top-k classification, so it only shows me one letter of the plate. I want to clarify that my output is a [1, 6] tensor where each element is the index of a class, not a probability of being something.
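
To illustrate: the raw d_predictions tensor only needs its six indices looked up in lpr_labels.txt and concatenated, roughly like this (a sketch; the label values and indices here are made up):

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Hypothetical stand-in for lpr_labels.txt, one class label per line.
    std::vector<std::string> labels = {
        "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C"};

    // Example raw d_predictions [1, 6]: one class index per plate character.
    int32_t d_predictions[6] = {10, 11, 12, 0, 1, 2};

    std::string plate;
    for (int i = 0; i < 6; ++i)
        plate += labels[d_predictions[i]];    // plain index lookup, no softmax/top-k

    std::cout << plate << std::endl;          // "ABC012"
    return 0;
}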

Some proposals:

1. Try

trtis_classification {
  topk: 6
  threshold: 0.1
}

2. Look into the parser code: /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer_customparser

3. Rewrite your own parser code (a sketch follows below):

classification {
  threshold: 0
  custom_parse_classifier_func: "RewriteCustomFunc"
}
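
For the custom parser, here is a minimal sketch of such a function for your [1, 6] INT32 d_predictions output. It assumes the NvDsInferClassiferParseCustomFunc prototype from nvdsinfer_custom_impl.h (the same prototype the sample parser uses); the label table here is hypothetical, fill it from your lpr_labels.txt:

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

#include "nvdsinfer_custom_impl.h"

// Sketch: parse a [1, 6] INT32 output where each element is a class index
// (one plate character), not a probability.
// Hypothetical label table; in practice load it from lpr_labels.txt.
static const std::vector<std::string> kLprLabels = {
    "0", "1", "2", "3", "4", "5", "6", "7", "8", "9",
    "A", "B", "C", "D", "E", "F", "G", "H", "I", "J"};

extern "C" bool RewriteCustomFunc(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    float classifierThreshold,
    std::vector<NvDsInferAttribute> &attrList,
    std::string &descString)
{
    (void)networkInfo;
    (void)classifierThreshold;   // not needed: the output is indices, not scores

    if (outputLayersInfo.empty() || !outputLayersInfo[0].buffer)
        return false;

    // d_predictions: six class indices, one per plate character.
    const int32_t *preds =
        static_cast<const int32_t *>(outputLayersInfo[0].buffer);

    std::string plate;
    for (int i = 0; i < 6; ++i) {
        unsigned int cls = static_cast<unsigned int>(preds[i]);
        if (cls < kLprLabels.size())
            plate += kLprLabels[cls];
    }

    // Report the whole plate string as a single attribute label.
    NvDsInferAttribute attr;
    attr.attributeIndex = 0;
    attr.attributeValue = 0;
    attr.attributeConfidence = 1.0f;
    attr.attributeLabel = strdup(plate.c_str());   // allocated copy, as in the sample parser
    attrList.push_back(attr);

    descString = plate;
    return true;
}

// Optional compile-time check of the prototype:
// CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(RewriteCustomFunc);

Build it into a shared library, point custom_lib { path: ... } in infer_config at the resulting .so, and reference the function name in custom_parse_classifier_func as above.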

Many thanks, the custom parser saved my day.