Can DeepStream run inference directly with an ONNX model?

I just want to clarify whether DeepStream supports running inference directly on a model in ONNX format (i.e. without first converting it to a TensorRT engine). From what I can see, this is not possible with the nvinfer backend; is nvinferserver the only plugin that supports it?
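
For reference, here is a rough sketch of what I mean (file names and values are just placeholders). With nvinfer, the config can point at an ONNX file, but my understanding is that the plugin still parses it and builds/caches a TensorRT engine at startup rather than executing the ONNX model directly:

```
# nvinfer config sketch (placeholder paths) -- the ONNX file is only
# used as input for building a TensorRT engine, not run as-is
[property]
gpu-id=0
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
```

Whereas with nvinferserver (Triton), my understanding is the model could be served by the ONNX Runtime backend without any engine conversion, along the lines of:

```
# Triton model config sketch (config.pbtxt), assuming the ONNX Runtime
# backend is available in the Triton build shipped with DeepStream
name: "my_onnx_model"
platform: "onnxruntime_onnx"
max_batch_size: 1
```

Is that understanding correct?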