Error while converting model using TAO

Can you try the official demo etlt model file?

wget -O

./tao-converter -k nvidia_tlt -d 3,544,960 -p image_input,1x3x544x960,1x3x544x960,1x3x544x960 -o BatchedNMS -e /export/trt.fp16.engine -t fp16 -i nchw -m 8 yolov4_resnet18.etlt
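For reference, the same invocation can be assembled with a small dry-run helper so the arguments can be inspected before running on the Jetson. This is only a sketch; the flags and paths are exactly those from the command above, and the flag meanings should be verified against your tao-converter's own `-h` output.

```shell
# -k  encryption key used when the .etlt was exported
# -d  input dimensions (C,H,W)
# -p  optimization profile: <tensor_name>,<min>,<opt>,<max> shapes
# -o  output node name
# -e  path for the generated TensorRT engine
# -t  precision, -i input layout, -m max batch size
convert_cmd() {
  local model="$1"
  printf '%s ' ./tao-converter \
    -k nvidia_tlt \
    -d 3,544,960 \
    -p image_input,1x3x544x960,1x3x544x960,1x3x544x960 \
    -o BatchedNMS \
    -e /export/trt.fp16.engine \
    -t fp16 -i nchw -m 8 "$model"
  echo
}

# Print the full command line for inspection:
convert_cmd yolov4_resnet18.etlt
```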

I tried the official demo etlt model file and still get the following error.

But I successfully converted the SSD model file under the same conditions and deployed it to DeepStream.

Please modify
-p image_input,
to
-p Input,

It doesn't work.

Can you run
$ md5sum yolov4_resnet18.etlt


Can you double-check?
On my side, the generation is successful on NX.
$ ./tao-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,1x3x544x960 -o BatchedNMS -e /export/trt.fp16.engine -t fp16 -i nchw -m 8 yolov4_resnet18.etlt

Yesterday, another forum user also ran it successfully with this yolo_v4_resnet18.etlt.
See Error in Yolov4 engine conversion, - #41 by Morganh

Also, what is your Jetpack version?
$ apt-cache show nvidia-jetpack
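If it helps, the version can be pulled out of that output with a small filter. This is a sketch assuming the usual Debian package-stanza format (`Version: ...` on its own line); the exact version string on your board may differ.

```shell
# Extract the Version field from an apt-cache package stanza on stdin.
parse_version() {
  awk -F': ' '/^Version:/ {print $2; exit}'
}

# Usage on a Jetson:
#   apt-cache show nvidia-jetpack | parse_version
```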

And did you download the correct tao-converter?

My Jetpack version is 4.4.

I successfully converted the SSD etlt model file, so the tao-converter should be correct.

Can you try
$ ./tao-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,2x3x544x960,4x3x544x960 -o BatchedNMS -e trt.fp16.engine -t fp16 -i nchw -m 8 yolov4_resnet18.etlt
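Note that `-p` packs the input tensor name together with the min/opt/max optimization-profile shapes, which is why this command uses three different batch sizes (1, 2, 4). A tiny helper (a sketch, not part of tao-converter) makes that structure explicit:

```shell
# Build a tao-converter -p argument: <tensor_name>,<min>,<opt>,<max>
profile_arg() {
  local name="$1" min="$2" opt="$3" max="$4"
  echo "${name},${min},${opt},${max}"
}

# Reproduces the -p value from the command above:
profile_arg Input 1x3x544x960 2x3x544x960 4x3x544x960
```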

Same error again.

I will prepare a new Jetpack to try.

It should not be related to the Jetpack version.
On my NX, it is also Jetpack 4.4, and it works well against this yolo_v4 etlt model.

Can you show
$ ll /usr/lib/aarch64-linux-gnu/*

Can you run
$ md5sum tao-converter


It is different. Mine is
$ md5sum tao-converter
aa79fad3ba09edb086bbbcfc2934c49b tao-converter
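Comparing hashes by eye is error-prone, so a small helper can do the comparison; this is a sketch, and the known-good hash you pass in should be the one from the post above (or whatever the official download page lists for your platform).

```shell
# check_md5 FILE EXPECTED_HASH — succeed iff FILE's md5sum matches.
check_md5() {
  local actual
  actual=$(md5sum "$1" | awk '{print $1}')
  [ "$actual" = "$2" ]
}

# Usage:
#   check_md5 tao-converter aa79fad3ba09edb086bbbcfc2934c49b && echo OK
```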

Can you try
$ wget
then unzip it and run chmod +x on the binary.

Ohhhh, it works! Thank you very much.


Refer to TensorRT — TAO Toolkit 3.0 documentation
