tlt-converter failed on Jetson Nano with Cuda Error in loadKernel: 702

Description

tlt-converter failed to convert the GazeNet model (https://ngc.nvidia.com/catalog/models/nvidia:tlt_gazenet) on a Jetson Nano with the following error:

[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)

Environment

TensorRT Version: 7.1.3
GPU Type: 128-core NVIDIA Maxwell
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: JetPack 4.5.1
Python Version (if applicable): NA
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal JetPack 4.5.1

Relevant Files


Steps To Reproduce

I am running at runlevel 3 (console mode), and I also tried adding Option "Interactive" "off" to xorg.conf to minimize the watchdog timeout, but I always get the same errors.

wget https://developer.nvidia.com/cuda102-trt71-jp45

wget -O ~/tlt_gazenet.etlt https://api.ngc.nvidia.com/v2/models/nvidia/tlt_gazenet/versions/deployable_v1.0/files/model.etlt

tlt-converter -k nvidia_tlt -t fp16 -p input_left_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_right_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_face_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_facegrid:0,1x1x625x1,1x1x625x1,2x1x625x1 -e ~/gazenet.plan ~/tlt_gazenet.etlt
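For reference, each -p argument above follows the pattern name,min_shape,opt_shape,max_shape, where each shape is given as NxCxHxW dimensions for the dynamic-shape optimization profile. A small illustrative parser (not part of tlt-converter, just a sketch of what each field encodes):

```python
# Illustrative only: decode the "-p name,min,opt,max" shape specs passed
# to tlt-converter, to show what each comma-separated field means.

def parse_profile(spec: str):
    """Split one input spec into (name, min_shape, opt_shape, max_shape)."""
    name, min_s, opt_s, max_s = spec.split(",")
    to_dims = lambda s: tuple(int(d) for d in s.split("x"))
    return name, to_dims(min_s), to_dims(opt_s), to_dims(max_s)

spec = "input_left_images:0,1x1x224x224,1x1x224x224,2x1x224x224"
name, min_shape, opt_shape, max_shape = parse_profile(spec)
print(name)       # input_left_images:0
print(min_shape)  # (1, 1, 224, 224)
print(max_shape)  # (2, 1, 224, 224)
```

These min/opt/max triples are what show up in the "[INFO] Using optimization profile ..." log lines below.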

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 625, 1)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile min shape: (1, 1, 625, 1) for input: input_facegrid:0
[INFO] Using optimization profile opt shape: (1, 1, 625, 1) for input: input_facegrid:0
[INFO] Using optimization profile max shape: (2, 1, 625, 1) for input: input_facegrid:0
[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
[ERROR] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)

Moving this topic from the Nano forum into the TLT forum.

Which GPU card did you use?

As I mentioned, I am running all the commands on a Jetson Nano 4GB, trying to convert the GazeNet model from .etlt to a TensorRT engine.

Maybe CUDA is running out of memory.
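A quick back-of-envelope check (assuming FP16, i.e. 2 bytes per element) shows the max-profile input bindings themselves are tiny, so any memory pressure on a 4GB Nano would come from the builder workspace and CUDA context during engine build, not from these input shapes:

```python
# Rough size of each input binding at the max optimization-profile shape,
# assuming FP16 (2 bytes per element). Illustrative estimate only.

shapes = {
    "input_left_images:0":  (2, 1, 224, 224),
    "input_right_images:0": (2, 1, 224, 224),
    "input_face_images:0":  (2, 1, 224, 224),
    "input_facegrid:0":     (2, 1, 625, 1),
}

bytes_per_elem = 2  # FP16
total = 0
for name, shape in shapes.items():
    n = 1
    for d in shape:
        n *= d
    size = n * bytes_per_elem
    total += size
    print(f"{name}: {size / 1024:.1f} KiB")

print(f"total: {total / 1024:.1f} KiB")  # well under 1 MiB
```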

Can you reboot and retry?

BTW, I cannot reproduce on my Nano.

nvidia@nvidia:~/morganh/cuda10.2_trt7.1_jp4.4$ ./tlt-converter -k nvidia_tlt -t fp16 -p input_left_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_right_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_face_images:0,1x1x224x224,1x1x224x224,2x1x224x224 -p input_facegrid:0,1x1x625x1,1x1x625x1,2x1x625x1 -e gazenet.plan model.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 224, 224)
[INFO] Detected input dimensions from the model: (-1, 1, 625, 1)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_left_images:0
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_right_images:0
[INFO] Using optimization profile min shape: (1, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile opt shape: (1, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile max shape: (2, 1, 224, 224) for input: input_face_images:0
[INFO] Using optimization profile min shape: (1, 1, 625, 1) for input: input_facegrid:0
[INFO] Using optimization profile opt shape: (1, 1, 625, 1) for input: input_facegrid:0
[INFO] Using optimization profile max shape: (2, 1, 625, 1) for input: input_facegrid:0
[INFO] Detected 4 inputs and 3 output network tensors.

I tried one more time and this time it worked. I just noticed that you used JetPack 4.4 while I am using JetPack 4.5.1. Now I was able to generate the file.

Thanks