I am facing a very strange error and I am not able to determine where it originates.
The system:
- Device: Nvidia Jetson Nano 4GB
- Image: JetPack 4.5 with JetBot 0.4.3
- Programming language: Python
- Python Libraries (installed globally):
torch @ file:///home/jetbot/torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
torch2trt==0.2.0
torchvision==0.9.1
tensorboard==2.5.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow-estimator==2.4.0
tensorrt==7.1.3.0
I am new to working with CUDA and am trying to understand how to build a self-driving AI. I started off with this project: https://github.com/gsurma/jetson. Running that example gives no errors and works fine. I then started my own version of it, so I created a new project, copied the files over, and began modifying them. In my new project directory I get the error
[TensorRT] ERROR: ../rtExt/cuda/cudaTiledPoolingRunner.cpp (117) - Cuda Error in execute: 719 (unspecified launch failure)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
.....
RuntimeError: CUDA error: unspecified launch failure
Sometimes I also get
[TensorRT] ERROR: ../rtSafe/runnerUtils.cpp (442) - Cudnn Error in safeCudnnAddTensor: 8 (CUDNN_STATUS_EXECUTION_FAILED)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
which is caused by the line
output = self.model_trt(preprocessed_frame).detach().clamp(-1.0, 1.0).cpu().numpy().flatten()
in the file autopilot_testing.py.
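For context, that call sits in code roughly like the following. This is only a simplified sketch: the model file name, the preprocess helper, and the normalization constants here are illustrative (following the usual JetBot/torch2trt examples) rather than my exact code.

import numpy as np
import torch
import torchvision.transforms.functional as TF
from torch2trt import TRTModule

device = torch.device('cuda')

# Load the optimized TensorRT engine (file name is illustrative)
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('model_trt.pth'))

# ImageNet normalization constants, as used in the JetBot examples
mean = torch.Tensor([0.485, 0.456, 0.406]).to(device)[:, None, None]
std = torch.Tensor([0.229, 0.224, 0.225]).to(device)[:, None, None]

def preprocess(frame):
    # frame: HxWx3 uint8 image from the camera
    tensor = TF.to_tensor(frame).to(device)   # -> 3xHxW float in [0, 1]
    tensor = (tensor - mean) / std            # channel-wise normalization
    return tensor[None, ...]                  # add batch dimension

frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder for a camera frame
preprocessed_frame = preprocess(frame)

# The failing line from autopilot_testing.py
output = model_trt(preprocessed_frame).detach().clamp(-1.0, 1.0).cpu().numpy().flatten()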
I also wanted to create a nice project structure with sub-directories, which looks like this:
myproject
– programs
Inside “myproject” there are some Python scripts which load programs placed in the “programs” folder.
If I create this structure in the project https://github.com/gsurma/jetson, the same error occurs.
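For reference, the scripts load things from the sub-directory roughly like this (again a simplified sketch; the load_program helper and the path handling here are illustrative, not my exact code):

import importlib.util
from pathlib import Path

# Resolve paths relative to this script so the current working directory does not matter
PROJECT_ROOT = Path(__file__).resolve().parent
PROGRAMS_DIR = PROJECT_ROOT / 'programs'

def load_program(name):
    # Load a Python module from the programs/ sub-directory by module name
    path = PROGRAMS_DIR / (name + '.py')
    spec = importlib.util.spec_from_file_location(name, str(path))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Example: load programs/autopilot_testing.py
autopilot_testing = load_program('autopilot_testing')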
I have read many forum posts and GitHub issues, but I have not been able to solve the problem.
Can anyone help me or point me in the right direction?
Thanks in advance!