While converting the ONNX model to a TensorRT engine, I use the following configuration:

[property]
gpu-id=0
net-scale-factor=0.01735207357279195
offsets=123.675;116.28;103.53
model-color-format=0
# Set the model file path
onnx-file=/root/data/REID_V1.0/REID_DS7_MALAD/REID/MODELS/EMBEDDING/ImageClassifier.onnx
# Set the serialized engine file path
model-engine-file=/root/data/REID_V1.0/REID_DS7_MALAD/REID/MODELS/EMBEDDING/ImageClassifier.onnx_b1_gpu0_fp32.engine
input-dims=3;256;128;0
input-object-min-width=30
input-object-min-height=50
batch-size=8
process-mode=2
output-blob-names=modelOutput
operate-on-gie-id=1
gie-unique-id=2
classifier-threshold=0
output-tensor-meta=1
maintain-aspect-ratio=0
# Extract embeddings for the "people" class only
operate-on-class-ids=0
network-type=100

The engine was generated successfully, but during inference it does not produce embeddings: when I print them, the result is None. The ONNX model used is ImageClassifier.onnx. Please advise on a solution.
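For reference, since `network-type=100` with `output-tensor-meta=1` means the embedding should only be available as raw tensor meta in a buffer probe (not as classifier meta), this is roughly the ctypes/NumPy step I use to turn a layer's raw float buffer into a vector. The helper name and the dummy buffer below are mine for illustration; in the real probe the address would come from `pyds.get_ptr(layer.buffer)` on an `NvDsInferLayerInfo`:

```python
import ctypes
import numpy as np

def layer_buffer_to_embedding(buffer_addr, num_elems):
    """Interpret a raw float32 buffer address (e.g. an NvDsInferLayerInfo
    buffer obtained via pyds.get_ptr) as a 1-D embedding vector."""
    float_ptr = ctypes.cast(buffer_addr, ctypes.POINTER(ctypes.c_float))
    # Copy so the array stays valid after the GStreamer buffer is unmapped.
    return np.ctypeslib.as_array(float_ptr, shape=(num_elems,)).copy()

# Demo with a dummy buffer standing in for the real layer output:
dummy = (ctypes.c_float * 4)(0.1, 0.2, 0.3, 0.4)
emb = layer_buffer_to_embedding(ctypes.addressof(dummy), 4)
print(emb.shape)  # (4,)
```

In the actual pad probe I iterate `obj_meta.obj_user_meta_list`, keep entries whose `base_meta.meta_type` is `NVDSINFER_TENSOR_OUTPUT_META`, and cast them with `pyds.NvDsInferTensorMeta.cast` before applying this conversion.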