Jetson-inference TensorRT ONNX model

hi,

I trained SSD-MobileNet v2 via jetson-inference, exported it as an ONNX model, and use it on a Jetson Nano. I have a question about this.

According to the GitHub description, jetson-inference uses TensorRT when running the exported ONNX model. Can I call this the TensorRT format (.trt)?

I want to compare jetson-inference’s SSD-MobileNet v2 with Darknet’s YOLOv4-tiny.
However, the YOLO model is in .trt format.
(I trained it with Darknet and converted the weights -> onnx -> trt via tensorrt_demos.)

Is it appropriate to compare the two models?

In other words, is jetson-inference’s ONNX model also run through TensorRT?

Thank you:)

Hi @3629701, yes, jetson-inference uses TensorRT to run the ONNX models. The first time it loads a new model, it optimizes it with TensorRT and saves the serialized TensorRT engine to disk (in a .engine file, which is probably the same thing you mean by a .trt file).
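
For reference, here is a minimal sketch of loading a custom SSD-MobileNet ONNX model through the Python bindings. The paths, labels file, and test image below are placeholders, and the blob names follow the pytorch-ssd re-training tutorial; adjust them to match your own export:

```python
import jetson.inference
import jetson.utils

# Placeholder paths/blob names -- substitute your exported model and labels file.
net = jetson.inference.detectNet(argv=[
        "--model=models/ssd-mobilenet.onnx",   # exported ONNX model
        "--labels=models/labels.txt",          # class labels from training
        "--input-blob=input_0",                # ONNX input tensor name
        "--output-cvg=scores",                 # confidence/coverage output
        "--output-bbox=boxes"],                # bounding-box output
    threshold=0.5)

# On the first run, TensorRT builds the engine and serializes it to a .engine
# file next to the .onnx; subsequent runs load that cached engine directly.
img = jetson.utils.loadImage("test.jpg")       # placeholder test image
detections = net.Detect(img)
print("detected {:d} objects".format(len(detections)))
```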

The pre/post-processing for detection models in jetson-inference isn’t set up to support YOLO, but if you have other code that runs your YOLO model, yes, you can compare the inferencing performance with SSD-Mobilenet. It sounds like both will have been run with TensorRT.
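
If it helps, one rough way to get a number on the SSD side is to run the same image repeatedly and read the network time that jetson-inference reports (continuing from the sketch above; the iteration count is arbitrary):

```python
# Warm up and measure -- 'net' and 'img' come from the previous snippet.
for _ in range(100):
    net.Detect(img)

print("network FPS: {:.1f}".format(net.GetNetworkFPS()))
net.PrintProfilerTimes()   # prints the TensorRT timing breakdown per stage
```

You would then compare that against whatever FPS/latency your YOLOv4-tiny pipeline reports under the same conditions (same input resolution and precision, ideally).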

