Creating more than one TensorRT engine causes a crash

In my project, two networks need to run inference in the same pipeline, so I parse the two models into separate engines, contexts, and runtimes; every related class has two instances. But when I run it, the program crashes while parsing the second model: the first engine builds fine, and building the second one crashes. The same thing happens if I reverse the order of the models. Why does this happen? Is there some class that is a singleton and cannot be created twice, or some state that needs to be cleared?
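The pattern looks roughly like this (a simplified sketch, not my actual code: `gLogger`, `buildEngine`, and the model file names are placeholders, and I am assuming the Caffe parser path of the TensorRT 4 C++ API):

```cpp
#include <cstdio>
#include "NvInfer.h"
#include "NvCaffeParser.h"

// Minimal logger; TensorRT requires an ILogger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) printf("[TRT] %s\n", msg);
    }
} gLogger;

// Build one engine from a Caffe model; every object is a fresh instance,
// nothing is shared between calls.
nvinfer1::ICudaEngine* buildEngine(const char* deploy, const char* model) {
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
    parser->parse(deploy, model, *network, nvinfer1::DataType::kFLOAT);
    // (output tensors would be marked here)
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    parser->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}

int main() {
    // The first build succeeds; the second crashes during parsing.
    nvinfer1::ICudaEngine* engineA = buildEngine("netA.prototxt", "netA.caffemodel");
    nvinfer1::ICudaEngine* engineB = buildEngine("netB.prototxt", "netB.caffemodel");
    // ... create execution contexts from each engine and run inference ...
    return 0;
}
```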

Thanks.

Hello,

To help us debug, can you please share a small repro that demonstrates the symptoms you are seeing? AFAIK, TRT is multi-thread friendly.

regards,
NVES

Hi NVES,

Thanks for your reply. I have uploaded the project to GitHub: [url]https://github.com/lewes6369/tensorRTWrapper[/url]. Please take a look at the sample named “runTwoNets”.

PS: If I export the engine to a file and then load it from that file, I can create more than one engine; otherwise it crashes.
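For reference, the serialize-then-deserialize workaround mentioned above looks roughly like this under the TRT 4 C++ API (a sketch: `gLogger`, the helper names, and the file path are illustrative, not from the project):

```cpp
#include <cstdio>
#include <fstream>
#include <vector>
#include "NvInfer.h"

// Minimal logger; TensorRT requires an ILogger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) printf("[TRT] %s\n", msg);
    }
} gLogger;

// Serialize a built engine to disk so the builder-side objects
// can be destroyed before the next network is handled.
void saveEngine(nvinfer1::ICudaEngine* engine, const char* path) {
    nvinfer1::IHostMemory* blob = engine->serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
    blob->destroy();
}

// Each network then gets its own runtime and deserialized engine;
// loading two engines this way in one process works.
nvinfer1::ICudaEngine* loadEngine(const char* path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::vector<char> data(in.tellg());
    in.seekg(0);
    in.read(data.data(), data.size());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(data.data(), data.size(), nullptr);
}
```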

Thanks.

Hello,

Apologies for the delay. Can you provide details on the platform you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Hi NVES,

The platform info is as below:

Ubuntu 16.04
TensorRT 4.0.1.6
CUDA 9.2
GPU: NVIDIA GTX 1060

Using C++; no TensorFlow.

Hello,

Our engineers recommend trying TensorRT 6.0. We believe this was fixed by a change that tracks inputs to make sure they are only accessed by their own networks.

regards,
NVIDIA Enterprise Support

Is there a version of TensorRT 6.0 for the TX2? Perhaps included in some Jetpack version?