nvinfer1::IExecutionContext cannot destroy

Description

Calling destroy() on nvinfer1::IExecutionContext causes a segmentation fault.

Environment

TensorRT Version: 8.0
GPU Type: NVIDIA Xavier NX
Nvidia Driver Version: JetPack 4.6
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

I call LoadModel first and then call UnloadModel immediately afterwards. The program reports a segmentation fault inside execution_context->destroy(). If I move execution_context->destroy() into the LoadModel function instead, it works normally. There is no operation between LoadModel and UnloadModel, so why is this happening?
bool nvidia_normal::UnloadModel() {
    execution_context->destroy();
    qDebug() << "22";
    checkRuntime(cudaStreamDestroy(stream));
    stream = nullptr;  // was `stream==nullptr;`, a no-op comparison
    checkRuntime(cudaFreeHost(input_data_host));
    input_data_host = nullptr;
    checkRuntime(cudaFreeHost(output_data_host));
    output_data_host = nullptr;
    checkRuntime(cudaFree(input_data_device));
    input_data_device = nullptr;
    checkRuntime(cudaFree(output_data_device));
    output_data_device = nullptr;
    engine->destroy();
    runtime->destroy();
    return true;
}
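A segfault in destroy() usually means the pointer is dangling, uninitialized, or destroyed twice. A defensive pattern is to null-check before destroying and null the pointer afterwards, so a repeated UnloadModel() becomes a safe no-op. The sketch below illustrates the pattern with a stub type (FakeContext is hypothetical, not a TensorRT class; with TensorRT >= 8.0, plain delete replaces the deprecated destroy()):

#include <iostream>

// Stub standing in for a TensorRT object such as IExecutionContext
// (hypothetical; the real interface comes from NvInfer.h).
struct FakeContext {
};

// Guarded cleanup: only destroy a live object, then null the pointer
// so a second call cannot touch freed memory.
bool unload(FakeContext*& ctx) {
    if (ctx != nullptr) {
        delete ctx;      // with TensorRT >= 8.0, delete replaces destroy()
        ctx = nullptr;
    }
    return true;
}

int main() {
    FakeContext* ctx = new FakeContext();
    unload(ctx);   // first call frees the object
    unload(ctx);   // second call is now a safe no-op
    std::cout << (ctx == nullptr ? "ctx is null" : "ctx dangling") << "\n";
    return 0;
}

It is also worth checking that LoadModel cannot return early and leave execution_context uninitialized, since the constructor above does not zero-initialize the raw pointer members.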

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

class nvidia_normal : public AlgModelInterface
{
public:
    nvidia_normal();
    virtual bool LoadModel(mode_param param);
    virtual bool Detect(cv::Mat image, QList &list);
    virtual bool UnloadModel();
    bool classification{false};

private:
    int input_batch;
    int input_channel;
    int input_height;
    int input_width;
    int input_numel;
    float* input_data_host;
    float* input_data_device;
    int output_numbox;
    int output_numprob;
    int num_classes;
    int output_numel;
    float* output_data_host;
    float* output_data_device;
    nvinfer1::IRuntime* runtime;
    nvinfer1::ICudaEngine* engine;
    nvinfer1::IExecutionContext* execution_context;
    cudaStream_t stream;
};
This is the definition of the class

This is the calling code. Nothing happens in between the two calls, yet destroy() runs fine inside LoadModel, but when I call UnloadModel an error is reported:
nvidia_normal *model =new nvidia_normal();
model->LoadModel(modelparam);
model->UnloadModel();
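One way to avoid manual destroy() ordering entirely is to hold the TensorRT objects in std::unique_ptr with a custom deleter, so cleanup runs exactly once when the owner goes out of scope. This is a sketch of the pattern, not the poster's code; FakeEngine and TRTDestroyer are illustrative names, with a stub in place of the real nvinfer1 interfaces:

#include <iostream>
#include <memory>

// Stub standing in for a TensorRT interface that exposes destroy()
// (illustrative only; real types live in the nvinfer1 namespace).
struct FakeEngine {
    void destroy() {
        std::cout << "engine destroyed\n";
        delete this;
    }
};

// Generic deleter so the smart pointer drives destroy() exactly once.
struct TRTDestroyer {
    template <typename T>
    void operator()(T* p) const {
        if (p) p->destroy();
    }
};

template <typename T>
using TRTUnique = std::unique_ptr<T, TRTDestroyer>;

int main() {
    TRTUnique<FakeEngine> engine(new FakeEngine());
    // No manual UnloadModel() needed: destroy() fires once at scope exit,
    // and a double-destroy is impossible by construction.
    return 0;
}

Declaring runtime, engine, and execution_context this way (in reverse dependency order, so the context is destroyed before the engine and runtime) removes the class of bug reported above.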

Hi,

This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.

Thanks!