Engine.create_execution_context() results in a segmentation fault

Hi all, I tried running inference with a TensorRT engine, which resulted in a segmentation fault.
I followed the sample code from the link pasted in the attached file
link_file.txt (116 Bytes)

to convert an ONNX model to the TensorRT format, and then tried running inference with it.
Please refer to cells 7 and 8 under the section header
4. What TensorRT path am I using to convert my model?
The contents of cell 8 are:
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

# Deserialize the serialized engine from disk
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
with open("resnet_engine_pytorch.trt", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

I copied the same code into the file trt_infer.py and executed it with the command below:

python3.6 trt_infer.py

But it gave the following error:

[TensorRT] ERROR: 1: [hardwareContext.cpp::terminateCommonContext::141] Error Code 1: Cuda Runtime (context is destroyed)
[TensorRT] INTERNAL ERROR: [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (invalid argument)
Segmentation fault (core dumped)

Let me know what could be the issue and how to fix it.

Thanks and Regards

Nagaraj Trivedi

Hi,

Please check if you can infer the model with trtexec first.

$ /usr/src/tensorrt/bin/trtexec --loadEngine=resnet_engine_pytorch.trt

If yes, please revise your implementation.
Below is an example for your reference:

https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_PLAN_model

Thanks.

Hi, it worked with
/usr/src/tensorrt/bin/trtexec --loadEngine=resnet_engine_pytorch.trt

But I want to know where it took the test data from to perform inference.
If the intention behind running this command is just to load the engine, then please let me know how to feed new test data (an image) to it.

Thanks and Regards

Nagaraj Trivedi

Hi,

Could you try running inference on the model with the sample shared above?
The example reads images with OpenCV and feeds them into TensorRT, which lets you run inference on your own test data.
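
To answer the earlier question: when no input file is given, trtexec feeds the engine randomly generated input, so the command above only verified that the engine deserializes and runs. Below is a minimal sketch of the OpenCV-to-TensorRT flow, assuming a single input and a single output binding and the TensorRT 8.0 binding API; the engine file name, the image path test.jpg, and the 224x224 preprocessing are assumptions that must be adapted to your model:

import cv2
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit

# Deserialize the engine (file name is an assumption)
logger = trt.Logger(trt.Logger.WARNING)
with open("resnet_engine_pytorch.trt", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Read and preprocess one image with OpenCV; the 224x224 size and the
# simple 1/255 scaling are assumptions -- match your training pipeline
img = cv2.imread("test.jpg")
img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
img = np.ascontiguousarray(img.transpose(2, 0, 1)[None])  # HWC -> NCHW

# Allocate a device buffer for every binding (TensorRT 8.0 binding API)
bindings = []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    dev_mem = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
    bindings.append(int(dev_mem))
    if engine.binding_is_input(i):
        cuda.memcpy_htod(dev_mem, img.astype(dtype))  # upload the image
    else:
        host_out = np.empty(size, dtype=dtype)        # host output buffer
        dev_out = dev_mem

# Run inference and copy the result back to the host
context.execute_v2(bindings)
cuda.memcpy_dtoh(host_out, dev_out)
print("Top-1 class index:", int(host_out.argmax()))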

Thanks.

Hi, thank you for your suggestion. I tried it but got this error:
AttributeError: 'tensorrt.tensorrt.ICudaEngine' object has no attribute 'get_tensor_shape'

Please let me know the alternative to get_tensor_shape(). The TensorRT version used is 8.0.1.6.

Below are the logs of the execution:

python3.6 infer.py
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +346, GPU +0, now: CPU 500, GPU 4049 (MiB)
[TensorRT] INFO: Loaded engine size: 121 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 500 MiB, GPU 4049 MiB
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +225, now: CPU 727, GPU 4274 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +307, GPU +313, now: CPU 1034, GPU 4587 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1034, GPU 4587 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine end: CPU 1034 MiB, GPU 4587 MiB
Traceback (most recent call last):
File "infer.py", line 65, in <module>
engine = PrepareEngine()
File "infer.py", line 49, in PrepareEngine
size = trt.volume(engine.get_tensor_shape(binding)) * batch
AttributeError: 'tensorrt.tensorrt.ICudaEngine' object has no attribute 'get_tensor_shape'
[TensorRT] INTERNAL ERROR: [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (invalid argument)

Thanks and Regards

Nagaraj Trivedi

It also gave one more error: object has no attribute 'get_tensor_mode'.

Please suggest alternatives to both get_tensor_mode() and get_tensor_shape(). The TensorRT version used is 8.0.1.6.

File "infer.py", line 55, in PrepareEngine
if engine.get_tensor_mode(binding)==trt.TensorIOMode.INPUT:
AttributeError: 'tensorrt.tensorrt.ICudaEngine' object has no attribute 'get_tensor_mode'

Hi, I have fixed these two issues on my own and got it working.
Instead of get_tensor_shape() I used get_binding_shape(),
and instead of get_tensor_mode() I used binding_is_input(), which returns True if the binding is an input and False otherwise.
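
For anyone hitting the same errors on TensorRT 8.0.x, the binding loop in the sample can be rewritten roughly as follows; this is only a sketch following the structure of the sample's PrepareEngine() loop, where batch and the buffer allocation come from the sample:

for binding in engine:  # iterating the engine yields the binding names
    # TensorRT 8.0.x: use the binding-based calls instead of the tensor-based ones
    size = trt.volume(engine.get_binding_shape(binding)) * batch  # was: engine.get_tensor_shape(binding)
    if engine.binding_is_input(binding):  # was: engine.get_tensor_mode(binding) == trt.TensorIOMode.INPUT
        ...  # allocate and fill the input buffers as in the sample
    else:
        ...  # allocate the output buffers as in the sample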

The sample you have shared and asked me to try is working. Thank you for your timely help.

Thanks and Regards

Nagaraj Trivedi

Hi,

Good to know it works now.

The example is verified with TensorRT 8.5.
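For context, the get_tensor_shape()/get_tensor_mode() calls were introduced with the tensor-based API in TensorRT 8.5, which is why they are missing in 8.0.1.6; on older releases the binding-based calls you switched to are the matching equivalents.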

Thanks.
