Can DeepStream run inference directly with an ONNX model?

I just want to clarify whether DeepStream supports running inference directly with a model in ONNX format (without converting it to a TensorRT engine first). As far as I can see, this is not possible with the nvinfer backend; is it only supported by nvinferserver?

ONNX models are supported with nvinfer; refer to Gst-nvinfer — DeepStream 5.1 Release documentation. nvinfer builds a TensorRT engine from the ONNX file on the first run and caches it, so you do not need to convert the model yourself.
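As a minimal sketch, pointing the nvinfer config file at an ONNX model looks roughly like this (the file names here are placeholders, not from the original thread):

```ini
[property]
gpu-id=0
# Path to the ONNX model; nvinfer converts it to a TensorRT engine on first run
onnx-file=model.onnx
# Optional: cached engine file; if absent, nvinfer regenerates it from the ONNX model
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

The first pipeline launch takes longer while the engine is built; subsequent runs load the cached engine file directly.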
