Inference using ONNX and TAO?

Hello,

Can the TAO containers run inference using the ONNX model exported from the HDF5 model, instead of using the HDF5 model itself?

For instance, using:

!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                                   -e $SPECS_DIR/specs.txt \
                                   -m $USER_EXPERIMENT_DIR/model.onnx

instead of:

!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                                   -e $SPECS_DIR/specs.txt \
                                   -m $USER_EXPERIMENT_DIR/model.hdf5

Or can they only run inference using the HDF5 model and the TensorRT engine?

Thanks

Hi @nasserha ,
Yes, it can only run inference with the .hdf5 model or the TensorRT engine; the exported ONNX model cannot be used directly by the inference task.
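
For reference, the exported ONNX model is normally consumed by first converting it to a TensorRT engine and then running inference with that engine. Below is a rough sketch of that path using TAO Deploy; the gen_trt_engine task and the flags shown are assumptions on my part and should be verified against the TAO Deploy documentation (or the --help output) for your TAO version:

# Assumed TAO Deploy workflow -- verify task and flag names for your TAO version
!tao deploy faster_rcnn gen_trt_engine -e $SPECS_DIR/specs.txt \
                                       -m $USER_EXPERIMENT_DIR/model.onnx \
                                       --engine_file $USER_EXPERIMENT_DIR/model.engine

# Run inference with the generated TensorRT engine
!tao deploy faster_rcnn inference -e $SPECS_DIR/specs.txt \
                                  -m $USER_EXPERIMENT_DIR/model.engine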

Thanks @Morganh
