• How to reproduce the issue?
I have a modified vehiclemakenet model that classifies 35 car brands. After training it with TAO classification_tf1, I exported the model:
# Generate .onnx file using tao container
!tao model classification_tf1 export \
-m $USER_EXPERIMENT_DIR/retrain_pruned/weights/resnet_010.hdf5 \
-o $USER_EXPERIMENT_DIR/export/final_model \
-e $SPECS_DIR/retrain_car_make.cfg \
--classmap_json $USER_EXPERIMENT_DIR/retrain_pruned/classmap.json \
--gen_ds_config
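For reference, one way to confirm which output nodes actually ended up in the exported ONNX is polygraphy (this is a sketch only: it assumes polygraphy is installed, e.g. via pip install polygraphy, and that the exported file ends up as final_model.onnx):
# Sketch: print the ONNX model's inputs and outputs to verify the output node name
polygraphy inspect model $USER_EXPERIMENT_DIR/export/final_model.onnx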
I use this ONNX file to run inference in DeepStream, but I get no output from the network. (If I use the pretrained model from the catalog it works, but I need more output classes.) When DeepStream generates the engine, I get the following output:
Hi @josemiad
After training, may I know whether tao model classification_tf1 inference xxx runs inference as expected? You can run inference against your training images or test images.
After tao model classification_tf1 export, you will get the ONNX file. To narrow down the issue, please run tao deploy classification_tf1 gen_trt_engine xxx and then tao deploy classification_tf1 inference to check whether the results are as expected.
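For example (a sketch only: the flag names below are assumptions based on the usual tao deploy conventions, so please verify them with tao deploy classification_tf1 gen_trt_engine --help and tao deploy classification_tf1 inference --help):
# Build a TensorRT engine from the exported ONNX (flags assumed, check --help)
tao deploy classification_tf1 gen_trt_engine \
-m $USER_EXPERIMENT_DIR/export/final_model.onnx \
-e $SPECS_DIR/retrain_car_make.cfg \
-r $USER_EXPERIMENT_DIR/export \
--engine_file $USER_EXPERIMENT_DIR/export/final_model.engine
# Run inference with that engine and compare against the .hdf5 results
tao deploy classification_tf1 inference \
-m $USER_EXPERIMENT_DIR/export/final_model.engine \
-e $SPECS_DIR/retrain_car_make.cfg \
-r $USER_EXPERIMENT_DIR/inference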
I think the error is here:
When I export the model, it prints Using output nodes: ['predictions/Softmax'], but when I generate the TensorRT engine I get Output 'predictions' with shape (-1, 35).
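One way to check which output name the built engine actually kept is trtexec, which ships with TensorRT; loading an engine with --verbose prints its input and output bindings:
# Sketch: assumes trtexec is on PATH and the engine was saved as final_model.engine
trtexec --loadEngine=final_model.engine --verbose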
Sorry, fp32, yes; it worked well on my PC. I took this ONNX to DeepStream and changed the labels txt because TAO exports it in the wrong format. Now my network runs in DeepStream and produces output labels, but the results are wrong. I do not know whether this is because DeepStream is reading the predictions layer instead of the predictions/Softmax output.
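For anyone hitting the same thing, the two settings worth double-checking in the nvinfer config are the output blob name and the labels format. A sketch of the relevant [property] keys (these are standard nvinfer properties; the paths and threshold are placeholders):
[property]
onnx-file=final_model.onnx
labelfile-path=labels.txt
# 1 = classifier
network-type=1
# must match the output name the built engine actually reports,
# e.g. predictions/Softmax or predictions
output-blob-names=predictions/Softmax
classifier-threshold=0.2
Note that for classifiers DeepStream expects all labels on a single line, separated by semicolons, which may be the "wrong format" issue with the exported labels file mentioned above.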