In my project, two networks need to run inference in the same pipeline, so I parse the two models into separate engines, contexts, and runtimes; all of the related classes have two instances. But when I run it, it crashes while parsing the original model: the first model builds OK, and the second fails and crashes. The same thing happens if I reverse the order of the models. Why does this happen? Is there some class that may be a singleton, so I cannot create it twice, or some state that needs to be cleared?
Hello, apologies for the delay. Can you provide details on the platform you are using?
Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version
Our engineers recommend trying TensorRT 6.0. We believe this was fixed by a change that tracks inputs to make sure they are only accessed by their own networks.
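In the meantime, a pattern that usually avoids this class of crash is to give each model a completely independent builder/network/parser, created and destroyed before the next model is touched, so no parser or network state is shared between the two. Below is a minimal sketch using the TensorRT 6/7 Python API with the ONNX parser; the model file names are placeholders, and you would substitute your own parser (UFF, Caffe) if you are not using ONNX:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # Each call creates its OWN builder, network, and parser, so the two
    # models never share any parsing state. The `with` blocks destroy
    # them before the next model is parsed.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                # Surface parser errors instead of crashing later.
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        builder.max_workspace_size = 1 << 28
        return builder.build_cuda_engine(network)

# Two fully independent engines and execution contexts
# ("model_a.onnx" / "model_b.onnx" are placeholder paths):
engine_a = build_engine("model_a.onnx")
engine_b = build_engine("model_b.onnx")
context_a = engine_a.create_execution_context()
context_b = engine_b.create_execution_context()
```

The key point is that only the built engines and their execution contexts live at the same time; the builders, networks, and parsers each have a lifetime confined to their own model.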