Pre-trained model conversion to TensorRT format from .etlt using tlt-converter on Orin NX

Hi

I want to convert this model (yolov4_tiny_usa_deployable.etlt) to TensorRT format to run on an Orin NX.

From what I understand, one way to do this is to use tlt-converter. However, my Orin NX runs JetPack (5.1.1-b56) with CUDA (11.4.19-1) and TensorRT (8.5.2.2-1+cuda11.4), and tlt-converter doesn’t support this version (as seen here) on Orin NX.

Can someone suggest how to convert the pre-trained .etlt YOLOv4 model to TensorRT format on the Orin NX?

Thanks!

You can refer to TAO Converter | NVIDIA NGC and download the v4.0.0_trt8.5.2.2_aarch64 version for your case.

wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/team/tao/tao-converter/v4.0.0_trt8.5.2.2_aarch64/files?redirect=true&path=tao-converter' -O tao-converter
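After downloading, a typical conversion looks like the sketch below. Note that the encryption key (`-k`), input dimensions (`-d`), and output node name (`-o`) shown here are assumptions, not values confirmed by this thread; check the yolov4_tiny_usa model card on NGC for the exact parameters.

```shell
# Make the downloaded tao-converter binary executable.
chmod +x tao-converter

# Sketch of a typical conversion. The key (-k), input dims (-d), and
# output node (-o) below are assumptions -- verify them against the
# yolov4_tiny_usa model card on NGC before running.
./tao-converter \
  -k nvidia_tlt \
  -d 3,480,640 \
  -o BatchedNMS \
  -t fp16 \
  -e yolov4_tiny_usa.engine \
  yolov4_tiny_usa_deployable.etlt
```

The resulting `.engine` file is built for the TensorRT version on the device, so it must be generated on the Orin NX itself rather than copied from another machine.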

You can also decode the .etlt model to a .onnx file (refer to Fpenet retraining output file onnx but deepstream is using tlt - #12 by Morganh) and then use trtexec to generate the TensorRT engine. Refer to TRTEXEC with YOLO_v4_tiny - NVIDIA Docs.
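Once the model has been decoded to ONNX, the engine build itself is a single trtexec call; trtexec ships with TensorRT on JetPack. The file names below are placeholders for illustration.

```shell
# Build a TensorRT engine from the decoded ONNX file.
# File names are placeholders; adjust to your actual paths.
/usr/src/tensorrt/bin/trtexec \
  --onnx=yolov4_tiny_usa.onnx \
  --saveEngine=yolov4_tiny_usa.engine \
  --fp16
```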
