Failed to convert .etlt file to a TensorRT engine file on a Jetson AGX Xavier machine

• Hardware Platform: Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version: 4.4 [L4T 32.4.3]
• TensorRT Version:
• NVIDIA GPU Driver Version: R32 (release), REVISION: 4.3, GCID: 21589087, BOARD: t186ref, EABI: aarch64
• Issue: Failed to convert the .etlt file to a TensorRT engine file for the respective model
• How to reproduce the issue? I downloaded the model file as mentioned on the NVIDIA TLT page

In the above link, refer to the section "Download and prepare the models":
mkdir -p /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
#create an empty label file
echo > labels_us.txt

Then I downloaded the tlt-converter tool from this link:
After extracting it, I ran the command as:

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -d 3,368,640 -e /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR/lpr_us_onnx_b16.engine

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[INFO] Detected input dimensions from the model: (-1, 3, 48, 96)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 48, 96) for input: image_input
[INFO] Using optimization profile opt shape: (4, 3, 48, 96) for input: image_input
[INFO] Using optimization profile max shape: (16, 3, 48, 96) for input: image_input
[WARNING] DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
[INFO] --------------- Layers running on DLA:
[INFO] --------------- Layers running on GPU:
[INFO] (Unnamed Layer* 53) [Constant] + (Unnamed Layer* 54) [Shuffle], (Unnamed Layer* 55) [Reduce], tf_op_layer_Sum/Sum_reduce_min, conv1 + re_lu_clip, re_lu/Relu:0_pooling, res2a_branch2a + re_lu_1_clip, res2a_branch2b, res2a_branch1 + tf_op_layer_add/add + re_lu_2_clip, res2b_branch2a + re_lu_3_clip, res2b_branch2b + tf_op_layer_add_1/add_1 + re_lu_4_clip, res3a_branch2a + re_lu_5_clip, res3a_branch2b, res3a_branch1 + tf_op_layer_add_2/add_2 + re_lu_6_clip, res3b_branch2a + re_lu_7_clip, res3b_branch2b + tf_op_layer_add_3/add_3 + re_lu_8_clip, res4a_branch2a + re_lu_9_clip, res4a_branch2b, res4a_branch1 + tf_op_layer_add_4/add_4 + re_lu_10_clip, res4b_branch2a + re_lu_11_clip, res4b_branch2b + tf_op_layer_add_5/add_5 + re_lu_12_clip, res5a_branch2a + re_lu_13_clip, res5a_branch2b, res5a_branch1 + tf_op_layer_add_6/add_6 + re_lu_14_clip, res5b_branch2a + re_lu_15_clip, res5b_branch2b, {(Unnamed Layer* 51) [Constant],(Unnamed Layer* 52) [Constant],(Unnamed Layer* 70) [Constant],tf_op_layer_add_7/add_7,re_lu_16_clip,Transpose2 + flatten_feature + Transpose,(Unnamed Layer* 62) [Constant] + (Unnamed Layer* 63) [Shuffle],lstm,(Unnamed Layer* 66) [Constant] + (Unnamed Layer* 67) [Shuffle],lstm_1,(Unnamed Layer* 71) [TripLimit],(Unnamed Layer* 72) [Iterator],lstm_2,(Unnamed Layer* 80) [Recurrence],(Unnamed Layer* 78) [Shuffle],(Unnamed Layer* 82) [Matrix Multiply],(Unnamed Layer* 81) [Matrix Multiply],(Unnamed Layer* 83) [ElementWise],(Unnamed Layer* 84) [ElementWise],(Unnamed Layer* 85) [Slice],(Unnamed Layer* 88) [Slice],(Unnamed Layer* 91) [Slice],(Unnamed Layer* 97) [Slice],(Unnamed Layer* 87) [Activation],(Unnamed Layer* 90) [Activation],(Unnamed Layer* 93) [Activation],(Unnamed Layer* 99) [Activation],(Unnamed Layer* 94) [ElementWise],(Unnamed Layer* 95) [ElementWise],(Unnamed Layer* 96) [ElementWise],(Unnamed Layer* 100) [Activation],(Unnamed Layer* 101) [ElementWise],(Unnamed Layer* 105) [LoopOutput]}, Squeeze + Transpose1, td_dense_reshape_0 + (Unnamed Layer* 120) [Shuffle], dense + (Unnamed Layer* 126) [Constant] + (Unnamed Layer* 127) [Shuffle] + unsqueeze_node_after_(Unnamed Layer* 126) [Constant] + (Unnamed Layer* 127) [Shuffle] + Add, copied_squeeze_after_Add, softmax, (Unnamed Layer* 139) [Shuffle], Max_reduce_min, ArgMax_argmax, (Unnamed Layer* 145) [Shuffle],
[INFO] Detected 1 inputs and 2 output network tensors.

After that, when I go to the respective folder, I see that the engine file has not been created.
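As a quick check (a minimal sketch, assuming the output path from the `-e` argument of the tlt-converter command above), you can verify whether the engine file was actually written:

```shell
# check_engine: report whether a serialized engine file exists at the given path.
check_engine() {
    if [ -f "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# Path taken from the -e argument of the tlt-converter command above.
check_engine /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR/lpr_us_onnx_b16.engine
```

If this prints "missing", the converter finished its log output without being able to serialize the engine to disk.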

LPR App git link:

Can anyone please help fix this issue?


Do you have write permission for /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR/?

Usually, folders under /opt/ require root authority.
Could you try executing the command with sudo, or writing the file to ${HOME} instead?
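To confirm this before re-running the converter, here is a minimal sketch (assuming the default DeepStream install path from the post) that checks whether the current user can write to the target directory:

```shell
# check_writable: report whether the current user can write to a directory.
check_writable() {
    if [ -w "$1" ]; then
        echo "writable"
    else
        echo "not writable"
    fi
}

# Target directory from the tlt-converter command; if this prints
# "not writable", re-run the converter with sudo or point -e at ${HOME}.
check_writable /opt/nvidia/deepstream/deepstream-5.0/samples/models/LP/LPR
```

Note that tlt-converter itself does not warn when it cannot write the output file, which is why the log above ends normally even though no engine appears on disk.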



Yes, that was the reason the engine file was not created. It works now.
Thank you!