Classification TF2 - Question about exporting TRT engine

Please provide the following information when requesting support.

• Hardware (RTX)
• Network Type (Classification)
• TLT Version (5.5.0)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I’m having some difficulty exporting a TF2 classifier (used as an SGIE) to my DeepStream application. With previous versions the process gave me no problems: I exported an .etlt and used the TAO converter to convert it to a .trt engine.

With the TAO Image Classification (TF2) notebook, everything in the .ipynb is clear until step 10.
Note: I’m training the model with QAT.

When exporting the QAT model with:

Convert QAT model to TensorRT engine

!mkdir -p $LOCAL_EXPERIMENT_DIR/export_qat
!sed -i "s|EXPORTDIR|$USER_EXPERIMENT_DIR/export_qat|g" $LOCAL_SPECS_DIR/spec_retrain_qat.yaml
!tao model classification_tf2 export -e $SPECS_DIR/spec_retrain_qat.yaml

It outputs only an efficientnet-b0.qat.onnx file.

If I want to convert the trained model for a Jetson (DeepStream app), do I use the ONNX model as input for the TAO converter?
The TAO converter documentation only describes how to use an .etlt file as input.
Or do I need to use trtexec?
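
For reference, if trtexec is the right path, I’d expect something like this on the target Jetson (just my guess from the TensorRT docs; the engine file name is arbitrary):

# Build a TensorRT engine directly from the exported QAT ONNX on the target device.
# --int8 lets TensorRT use the Q/DQ (quantize/dequantize) nodes baked in by QAT.
trtexec --onnx=efficientnet-b0.qat.onnx \
        --int8 \
        --saveEngine=efficientnet-b0.qat.engine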

I’m kinda lost. Thanks in advance for the help!

In the latest DeepStream, you can set the ONNX file directly in the nvinfer config file:
onnx-file=xxx.onnx
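
For example, a minimal SGIE classifier config might look like this (a sketch only; the file names, GIE IDs, and threshold below are assumptions you need to adapt to your pipeline):

[property]
onnx-file=efficientnet-b0.qat.onnx
# DeepStream builds the engine from the ONNX on first run and caches it under this name
model-engine-file=efficientnet-b0.qat.onnx_b1_gpu0_int8.engine
labelfile-path=labels.txt
batch-size=1
network-mode=1        # 0=FP32, 1=INT8, 2=FP16; INT8 uses the Q/DQ nodes from QAT
network-type=1        # 1 = classifier
process-mode=2        # 2 = secondary GIE (operates on detected objects)
gie-unique-id=2
operate-on-gie-id=1
classifier-threshold=0.5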

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
