Running YOLOv4 INT8 on Jetson Xavier NX: Compatibility of a TensorRT Engine Generated on an x86 Platform

• Hardware (Xavier/Nano)
• Network Type (Yolo_v4)

Hello,
I would like to run the INT8 version of YOLOv4 on a Jetson Xavier NX.

The documentation [TAO Deploy Installation - NVIDIA Docs] states: "Due to memory issues, you should first run the gen_trt_engine subtask on the x86 platform to generate the engine; you can then use the generated engine to run inference or evaluation on the Jetson platform and with the target dataset."
However, my understanding is that the conversion from ONNX to TensorRT should be done on the platform where inference is actually performed.
Can a TensorRT engine generated on an x86 platform be used directly on the Jetson platform?
Thank you.

Please follow the steps in GitHub - NVIDIA/tao_deploy: Package for deploying deep learning models from TAO Toolkit.
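As a rough sketch, an INT8 gen_trt_engine invocation typically looks like the command below. The paths are placeholders and the exact flag set differs between TAO versions, so treat this as an assumption and confirm the options against the tao_deploy README for your release:

```
# Hypothetical paths; adjust to your workspace layout.
# The experiment spec and calibration inputs come from your TAO training setup.
yolo_v4 gen_trt_engine -m /workspace/yolov4.onnx \
                       -e /workspace/experiment_spec.txt \
                       --data_type int8 \
                       --cal_image_dir /workspace/calibration_images \
                       --cal_cache_file /workspace/cal.bin \
                       --engine_file /workspace/yolov4_int8.engine
```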

You can also use trtexec to generate the TensorRT engine; see Optimizing and Profiling with TensorRT - NVIDIA Docs.
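Note that a TensorRT engine is tied to the GPU architecture and TensorRT version it was built with, so for the trtexec route you would run the build on the Xavier NX itself. A minimal sketch, assuming a yolov4.onnx model and an existing INT8 calibration cache cal.bin (both file names are placeholders):

```
# Build an INT8 engine on the target device.
# --calib points to a previously generated INT8 calibration cache.
trtexec --onnx=yolov4.onnx \
        --int8 \
        --calib=cal.bin \
        --saveEngine=yolov4_int8.engine
```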

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.