ONNX model to TensorRT conversion

Hi there! Hope you all are fine. I am using a Jetson Xavier NX Developer Kit.
I am trying to convert an ONNX model to a TensorRT engine on the Jetson, but I am not sure how to get around the following. First, this is the warning:

[TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.

The model is converted to a TRT engine, but how can I verify it?
Secondly, does the above warning have any impact on the converted TRT model?

How can I run inference with the model?
I am new to this and am having a hard time understanding the inference part of TensorRT. Note that I am short on space, with only 3 GB of memory remaining on my Jetson.

Hi,

If you convert it with trtexec, it also prints profiling results, which indicate that the model can run correctly.
The warning is harmless as long as TensorRT can still pick an algorithm to run.
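As a sketch, the conversion with a larger workspace (which addresses the warning) and a quick verification might look like the following. The paths and the workspace size are placeholders; on JetPack-era TensorRT the flag is `--workspace` in MB, while newer releases use `--memPoolSize` instead:

```shell
# Build the engine with a larger workspace so more tactics fit
# (model.onnx / model.trt are placeholder paths)
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx \
                              --saveEngine=model.trt \
                              --workspace=2048   # MB; lower this if memory is tight

# Verify: load the engine back and run timed inference passes.
# If this prints throughput/latency numbers, the engine works.
/usr/src/tensorrt/bin/trtexec --loadEngine=model.trt
```

With only ~3 GB free, keep the workspace value modest; TensorRT simply skips tactics that do not fit, which is what the warning is telling you.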

To run inference with the model, please check the sample below:
https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_PLAN_model
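For reference, a minimal Python inference sketch for a serialized engine might look like the following (TensorRT 8.x API with pycuda; `model.trt`, the binding order, and the random input are assumptions to adjust for your model, and dynamic shapes would need the shape set on the context first):

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 - initializes a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by trtexec (path is a placeholder)
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate host/device buffers for every binding (inputs and outputs)
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(shape, dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Fill the input (assumed to be binding 0) with your preprocessed data
host_bufs[0][...] = np.random.rand(*host_bufs[0].shape).astype(host_bufs[0].dtype)

# Copy input to the GPU, execute, copy outputs back
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])

print("output shape:", host_bufs[1].shape)
```

This allocates buffers once and reuses them across calls, which matters on a memory-constrained device like yours.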

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.