tlt-converter fails on Jetson Nano with CUDA Error in loadKernel: 702

Description

tlt-converter failed to convert the GazeNet model (https://ngc.nvidia.com/catalog/models/nvidia:tlt_gazenet) on a Jetson Nano. The run emits the warning below and then aborts with a CUDA launch-timeout error (full log under Steps To Reproduce):

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

Environment

TensorRT Version: 7.1.3
GPU Type: 128-core NVIDIA Maxwell
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: JetPack 4.5.1
Python Version (if applicable): NA
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal JetPack 4.5.1

Relevant Files

The model (.etlt) and the tlt-converter download links are included in the commands under Steps To Reproduce below.

Steps To Reproduce

The board is running at runlevel 3 (console mode); I also tried adding Option "Interactive" "off" to xorg.conf to relax the X watchdog, but I always get the same errors.
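For reference, the watchdog option mentioned above is typically placed in the Device section of /etc/X11/xorg.conf. The Identifier and Driver values below are the usual Jetson defaults and may differ on your image; treat this as a sketch of the change that was tried, not a verified fix:

```
Section "Device"
    # "Tegra0" / "nvidia" are the usual Jetson defaults; check your existing file
    Identifier "Tegra0"
    Driver     "nvidia"
    # Disable the X server's interactivity watchdog so long-running GPU
    # kernels (e.g. TensorRT engine builds) are not killed mid-launch
    Option     "Interactive" "off"
EndSection
```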

Download the tlt-converter build for JetPack 4.5 (CUDA 10.2 / TensorRT 7.1):

wget https://developer.nvidia.com/cuda102-trt71-jp45

Download the GazeNet deployable model:

wget -O ~/tlt_gazenet.etlt https://api.ngc.nvidia.com/v2/models/nvidia/tlt_gazenet/versions/deployable_v1.0/files/model.etlt

Convert to an FP16 engine with dynamic-shape optimization profiles:

tlt-converter -k nvidia_tlt -t fp16 -p input_left_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_right_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_face_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_facegrid:0,1x1x625x1,1x1x625x1,2x1x625x1 -e ~/gazenet.plan ~/tlt_gazenet.etlt

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 625, 1)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile min shape: (1, 1, 625, 1) for input: input_facegrid:0
[INFO] Using optimization profile opt shape: (1, 1, 625, 1) for input: input_facegrid:0
[INFO] Using optimization profile max shape: (2, 1, 625, 1) for input: input_facegrid:0
[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
[ERROR] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)

Hi,
This looks like a Jetson platform issue. We recommend raising it on the respective platform's forum via the link below.

Thanks!