I just want to clarify whether DeepStream supports inferencing directly with a model in ONNX format (without converting it to a TensorRT engine). As far as I can see, we cannot do this with the nvinfer backend; is it only supported by nvinferserver?
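For reference, this is the kind of setup I mean (a minimal sketch; the model names and paths are placeholders, and this reflects my understanding rather than confirmed behavior):

```
# Gst-nvinfer route: the config file accepts an onnx-file key, but as I
# understand it nvinfer still builds a TensorRT engine from the ONNX model
# at startup, so this is not "direct" ONNX inferencing.
[property]
onnx-file=model.onnx                               # placeholder path
model-engine-file=model.onnx_b1_gpu0_fp16.engine   # engine nvinfer generates/loads
network-mode=2                                     # 0=FP32, 1=INT8, 2=FP16

# Gst-nvinferserver route: Triton's model repository config.pbtxt can select
# the ONNX Runtime backend, which runs the .onnx file directly, e.g.:
#   name: "my_onnx_model"       # placeholder model name
#   backend: "onnxruntime"
#   max_batch_size: 1
```

So my question is whether the nvinferserver/Triton path above is the only way to serve the ONNX model without a TensorRT engine build.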