Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.2 (docker image)
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
Hello, I have a pipeline as follows:
4 class detector → tracker → vehicle color classifier → vehicle make classifier
The problem lies with the “vehicle make classifier”.
The dstest2_sgie2_config.txt (from deepstream-test2) works fine and gives the expected results, but when I use the config written below, the engine doesn’t run any inference and no errors are shown either.
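Since nothing is printed, one way I’ve been trying to confirm whether the SGIE engine is even invoked is to raise the GStreamer log level for the nvinfer element before launching the app (a debugging sketch; the app name and config path are placeholders for my own setup, and I’m assuming the plugin’s debug category is named nvinfer):

```shell
# Raise GStreamer log verbosity: level 3 globally, level 5 (DEBUG) for nvinfer,
# and redirect the log to a file so the console stays readable.
export GST_DEBUG=3,nvinfer:5
export GST_DEBUG_FILE=/tmp/nvinfer.log
# ./deepstream-test2-app dstest2_config.txt   # run the actual app here
echo "$GST_DEBUG"
```

With this, the log should show whether the engine for gie-unique-id=3 is built and whether frames ever reach it.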
The objective of this exercise is to understand how models from NGC can be integrated into my DS pipeline.
sgie2_config_vehiclemake.txt (this is the config that is causing the problem):
[property]
gpu-id=0
net-scale-factor=1
offsets=103.939;116.779;123.68
tlt-model-key=tlt_encode
tlt-encoded-model=/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/resnet18_vehiclemakenet_pruned.etlt
labelfile-path=/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/labels.txt
int8-calib-file=/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/vehiclemakenet_int8.txt
uff-input-order=1
infer-dims=3;224;224;
uff-input-blob-name=input_1
batch-size=1
network-mode=0
network-type=1
num-detected-classes=20
model-color-format=1
process-mode=2
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
output-blob-names=predictions/Softmax