Suggestion for TAO Toolkit config setup error

export:
  checkpoint: 'EVALMODEL'
  onnx_file: 'EXPORTDIR/efficientnet-b0.onnx'
  data_type: 'fp32'

In the "Creating an Experiment Spec File - Specification File for Classification" section of the documentation, after setting data_type: 'fp32' as shown above and running tao model classification_tf2 export -e specs_path.yaml, an error like the capture below occurred. It looks like the data_type parameter is not supported, so the documentation may need to be corrected.

So, how should calibration (int8, fp16, fp32) be set when converting to ONNX?

Moving to TAO forum.

Thanks for the finding. Yes, it is an additional parameter.

There is no need to set data_type when converting to ONNX. It only needs to be set when generating the TensorRT engine.
See tao_tensorflow2_backend/nvidia_tao_tf2/cv/classification/config/default_config.py at main · NVIDIA/tao_tensorflow2_backend · GitHub.
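For reference, a minimal gen_trt_engine spec might look like the sketch below. The exact field names (trt_engine, tensorrt.data_type, max_workspace_size) are assumptions based on the linked default_config.py and may differ between TAO versions, so please confirm them against that file.

gen_trt_engine:
  onnx_file: 'EXPORTDIR/efficientnet-b0.onnx'
  trt_engine: 'EXPORTDIR/efficientnet-b0.engine'   # hypothetical output path
  tensorrt:
    data_type: 'fp16'           # precision is chosen here, not at export time
    max_workspace_size: 4       # workspace size; value/units per the default config (assumption)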

Okay, so tao deploy generates the engine file. But is that usable with DeepStream?

I know that DeepStream's deploy options are ONNX or .etlt.

So my understanding is that the engine file cannot be used there.

In the DeepStream config file there is also a parameter that selects fp32, fp16, or int8. That parameter is likewise related to TensorRT engine generation.
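For example, in a Gst-nvinfer config the precision is selected with network-mode, and DeepStream builds the TensorRT engine from the ONNX file on first run (a sketch; the file names below are placeholders):

[property]
onnx-file=efficientnet-b0.onnx
model-engine-file=efficientnet-b0.onnx_b1_gpu0_fp16.engine   # written/reused by DeepStream
network-mode=2              # 0=FP32, 1=INT8, 2=FP16
int8-calib-file=calib.bin   # only required when network-mode=1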

According to the explanation here, it appears that the engine file is built from the ONNX file.

However, the documentation reads as if int8, fp16, and fp32 are chosen when the ONNX file is created, yet it seems they cannot be set when creating an ONNX file with tao model classification_tf2 export.

It does not need to be set during export. Please run export without any int8/fp16/fp32 setting.
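For example, using the spec from the top of this thread with data_type removed:

export:
  checkpoint: 'EVALMODEL'
  onnx_file: 'EXPORTDIR/efficientnet-b0.onnx'

tao model classification_tf2 export -e specs_path.yaml

Precision is then chosen later, either when generating the TensorRT engine with tao deploy, or via network-mode in the DeepStream config.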

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
