I’m trying to run my custom ResNet-based model with jetson-inference. The model was trained in PyTorch 1.7.0 and exported to ONNX with opset version 11. I can benchmark it with trtexec, but when I run ./segnet-console (or ./segnet-console.py) with the appropriate arguments for model, input_blob, output_blob, labels, and colors, I get the following error:
[TRT] binding to input 0 image.1  binding index: 0
[TRT] binding to input 0 image.1  dims (b=1 c=3 h=1024 w=2048) size=25165824
[TRT] binding to output 0 391  binding index: 8
[TRT] binding to output 0 391  dims (b=1 c=12 h=1024 w=2048) size=100663296
[TRT]
[TRT] device GPU, /home/user/models/file_opset11_2048x1024.onnx initialized.
[TRT] segNet outputs -- s_w 2048  s_h 1024  s_c 12
[image] loaded 'images/warehouse.jpg'  (2048x1024, 3 channels)
[TRT] ../rtSafe/cuda/cudaConvolutionRunner.cpp (457) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)
[TRT] FAILED_EXECUTION: std::exception
[TRT] failed to execute TensorRT context on device GPU
segnet: failed to process segmentation
[image] imageLoader -- End of Stream (EOS) has been reached, stream has been closed
segnet: shutting down...
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] /home/user/dev/jetson-inference/utils/image/imageLoader.cpp:105
[TRT] ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of 'nvinfer1::CudaError'
  what():  std::exception
18973 abort (core dumped)  ./segnet-console --model=/home/user/models/file_opset11_2048x1024.onnx
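For reference, this is roughly the full command I’m running. The blob names image.1 and 391 are taken from the binding log above; the labels, colors, and output paths are placeholders, not my exact files:

```shell
./segnet-console \
    --model=/home/user/models/file_opset11_2048x1024.onnx \
    --input_blob=image.1 \
    --output_blob=391 \
    --labels=labels.txt \
    --colors=colors.txt \
    images/warehouse.jpg output.jpg
```

The same error occurs with the Python variant, ./segnet-console.py, using the same arguments.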
Could you please advise me on how to resolve this issue? Thanks!