Using pth files with DeepStream

Hello.

Can I use pth file models with DeepStream SDK? Does it need to be an engine file?

You can convert it to an ONNX model file and use that directly; refer to the `onnx-file` option in Gst-nvinfer — DeepStream 6.4 documentation (nvidia.com).
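For illustration, a minimal sketch of the relevant part of a Gst-nvinfer configuration file; the file names and property values here are placeholder assumptions, not from the thread:

```
[property]
# Point nvinfer at the exported ONNX model (placeholder path):
onnx-file=model.onnx
# Optional: where TensorRT caches the engine it builds from the ONNX file
# (placeholder name; DeepStream generates one if it does not exist):
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

On first run, DeepStream builds a TensorRT engine from the ONNX file and caches it, so subsequent runs start faster.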

Are pth files not supported?

Correct, .pth files are not supported directly. You can refer to the Gst-nvinfer source code diagram in the FAQ to check the inputs accepted by the TensorRT model builder.

Is there a way to use a TensorRT engine converted with torch2trt in DeepStream?

You can use PyTorch to export the .pt model to an ONNX model instead. You can refer to export_simple_model_to_onnx_tutorial.