TLT Converter

Hi,
I am working on a License Plate Recognition model on DeepStream. I downloaded the .etlt file as well as tlt-converter. When I try to execute ./tlt-converter, it shows warnings and I am unable to create a .engine file with it.

vamsisiddharthasiddhu2041@linux:/opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR$ ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[INFO] Detected input dimensions from the model: (-1, 3, 48, 96)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 48, 96) for input: image_input
[INFO] Using optimization profile opt shape: (4, 3, 48, 96) for input: image_input
[INFO] Using optimization profile max shape: (16, 3, 48, 96) for input: image_input
[INFO] Detected 1 inputs and 2 output network tensors.
vamsisiddharthasiddhu2041@linux:/opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR$
I am enclosing a screenshot of the error. Please help me sort it out.
Thanks…

I don’t see any error in the attached log.
Please run it with higher verbosity and post the full log.
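
Capturing the complete output to a file also makes it easier to share here. A minimal sketch, assuming a bash shell; the convert.log filename is just illustrative:

$ # Save stdout and stderr to a file while still printing them
$ ./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine 2>&1 | tee convert.log
$ # Print the converter's own exit status (PIPESTATUS is bash-specific)
$ echo "tlt-converter exit status: ${PIPESTATUS[0]}"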

Sir,
After executing the tlt-converter command, it should produce a .engine file as output, right? But when I execute that command, I cannot find any output file.

Please check if you have write access to this path. Try to chmod it.
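
A quick way to verify, assuming a standard shell; the .write_test filename is just illustrative:

$ # Show ownership and permissions of the target directory
$ ls -ld /opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR
$ # Write test: succeeds only if the current user can create files here
$ touch /opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR/.write_test && echo writable || echo "not writable"
$ # If not writable, add write permission (use chown instead if the directory is owned by root)
$ sudo chmod -R u+w /opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR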

Yeah, I tried that as well; I still cannot get a .engine file after executing tlt-converter.

Can you run the command below and post the output?
$ ll -sh /opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR
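
Note that ll is typically a shell alias for ls -l; if it is not defined on your system, the equivalent is:

$ ls -lsh /opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR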