How to convert ONNX to a TensorRT engine on Jetson Orin

My YOLOv4-tiny network's ONNX file, which I trained on my x86 RTX 2070 machine, runs inference correctly with tao infer.

Now I want to convert it to a TensorRT engine on my Jetson Orin. I am on the latest JetPack with DeepStream 6.3, CUDA 11.4, and cuDNN 8.6.0.

Should I use tao-converter, trtexec, or DeepStream? Do I need TensorRT OSS? Can I run trtexec in a container, and would the resulting engine still be Jetson Orin specific? If I use a container, does any CUDA container include trtexec, or do I need a TAO or TensorRT container? Which method is the latest and best?
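On the container question: a TensorRT engine is tied to the GPU and the TensorRT version it is built with, so building inside a container that runs on the Orin still produces an Orin-specific engine. Plain CUDA images typically do not include TensorRT, so you would want an image that bundles it. A minimal sketch using the L4T TensorRT image from NGC (the image tag, trtexec path, and file name here are assumptions; verify them against the NGC catalog for your JetPack release):

# Image tag and trtexec path are assumptions; check NGC for your JetPack.
$ docker run --rm -it --runtime nvidia \
      -v $(pwd):/models \
      nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime \
      /usr/src/tensorrt/bin/trtexec --onnx=/models/yolov4_tiny.onnx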

Hi,

Could you check if the model can work with trtexec?

$ /usr/src/tensorrt/bin/trtexec --onnx=[file]
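If the parse and timing run succeed, a minimal sketch of building and saving an FP16 engine with the same tool (file names are placeholders):

$ /usr/src/tensorrt/bin/trtexec --onnx=yolov4_tiny.onnx \
      --saveEngine=yolov4_tiny.engine \
      --fp16

The saved engine can then be referenced from the DeepStream nvinfer config via the model-engine-file property.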

Thanks.
