tlt-converter on Jetson Nano is stuck

Hi,
I'm trying to use TLT 3.0.
My aim is to train on my own dataset using the YOLOv4 notebook.
To test it first, I followed the YOLOv4 notebook provided by NVIDIA on the KITTI dataset, which recognizes pedestrians, cars, and bicycles. On my machine, all steps completed successfully. I then tried to export my model on Jetson (Xavier NX and Nano). I installed the corresponding tlt-converter on each device according to the JetPack version I used. On the Xavier NX I obtained the .trt file (after 15-20 minutes). Unfortunately, on the Jetson Nano (4 GB) the tlt-converter starts, but even after a full day I still do not get my .trt file.

Is there another way to convert my .etlt file somewhere other than the Nano, while still using the resulting engine on the Nano?

Sincerely,

Is there any log?

Here are the logs when I run the tlt-converter command.
Thanks.

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
(the warning above is repeated 20 more times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace: 
[INFO] builtin_op_importers.cpp:3676: Successfully created plugin: BatchedNMSDynamic_TRT
[INFO] Detected input dimensions from the model: (-1, 3, 384, 1248)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 384, 1248) for input: Input
[INFO] Using optimization profile opt shape: (8, 3, 384, 1248) for input: Input
[INFO] Using optimization profile max shape: (16, 3, 384, 1248) for input: Input
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.



Can you share the command?

On the Jetson Xavier NX it works, but not on my Jetson Nano.

./tlt-converter my/path/for_tlt_converter/cuda10.2_trt7.1_jp4.4/yolov4_resnet18_epoch_080.etlt -k $KEY -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 -e /my/path/cuda10.2_trt7.1_jp4.4/trt.engine.trt -t fp16

Please try
./tlt-converter my/path/for_tlt_converter/cuda10.2_trt7.1_jp4.4/yolov4_resnet18_epoch_080.etlt -k $KEY -p Input,1x3x384x1248,1x3x384x1248,1x3x384x1248 -e /my/path/cuda10.2_trt7.1_jp4.4/trt.engine.trt -t fp16 -m 1 -w 100000000

Thanks, it works now! Just for my information: can you explain what the issue was?

See ./tlt-converter -h for more info.
Try increasing the workspace size (-w) and decreasing the maximum batch size (-m), and set lower optimization profile shapes (-p). Your original command asked TensorRT to build optimization profiles up to batch 16, and the Nano's 4 GB of memory (shared between CPU and GPU) is not enough for that, so the engine build effectively stalls. Restricting the profiles to batch 1 keeps the build within the Nano's memory budget.
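For reference, here is an annotated version of the working command above. The paths, the $KEY variable, and the engine file name are placeholders to substitute with your own; the flag descriptions follow the tlt-converter help output, so double-check them against ./tlt-converter -h for your release:

```shell
# Sketch of a Nano-friendly tlt-converter invocation (placeholder paths/key).
# -k  : the encryption key used when the .etlt model was exported
# -p  : optimization profile for the dynamic input "Input" as
#       min,opt,max shapes -- all pinned to batch 1 to fit the Nano's 4 GB
# -e  : path where the serialized TensorRT engine will be written
# -t  : build the engine in fp16 precision
# -m  : maximum batch size the engine will support (1 on the Nano)
# -w  : TensorRT builder workspace size in bytes (~100 MB here)
./tlt-converter yolov4_resnet18_epoch_080.etlt \
  -k "$KEY" \
  -p Input,1x3x384x1248,1x3x384x1248,1x3x384x1248 \
  -e trt.engine.fp16 \
  -t fp16 \
  -m 1 \
  -w 100000000
```

Note that a TensorRT engine is tied to the GPU architecture and TensorRT version it was built on, which is why the conversion has to run on the Nano itself rather than being copied over from the Xavier NX.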

Thanks a lot

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.