• Hardware (Xavier/Nano)
• Network Type (Yolo_v4)
Hello,
I would like to run the INT8 version of YOLOv4 on a Jetson Xavier NX.
This documentation [TAO Deploy Installation - NVIDIA Docs] states: "Due to memory issues, you should first run the gen_trt_engine subtask on the x86 platform to generate the engine; you can then use the generated engine to run inference or evaluation on the Jetson platform and with the target dataset."
However, my understanding is that TensorRT engines are not portable across hardware: the ONNX-to-TensorRT conversion should be performed on the same platform (same GPU architecture and TensorRT version) where inference will actually run.
Can the TensorRT engine generated on an x86 platform be directly used on the Jetson platform?
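For reference, my fallback plan is to build the engine directly on the Jetson with `trtexec`, roughly as below. This is only a sketch: the model and calibration-cache paths are placeholders, and I am assuming an INT8 calibration cache exported from the TAO side is available.

```shell
# Build an INT8 TensorRT engine on the Jetson itself, so the engine
# matches the target GPU architecture and TensorRT version.
# model.onnx and calib.cache are placeholder paths (assumptions).
trtexec \
  --onnx=model.onnx \
  --int8 \
  --calib=calib.cache \
  --saveEngine=yolov4_int8.engine
```

Would this be the recommended approach instead of copying an engine built on x86?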
Thank you.