Running two TensorRT models [CUDA Error 400: Invalid Resource Handle]

Hi,

I have an object detection pipeline that uses two tiny-YOLOv4 models. The first model takes a 608×608 input, and the second takes the bounding boxes from the first model's output and predicts the final output. I can run the pipeline using the ONNX models, and I can also run each model separately after optimizing it with TensorRT. Now I want to run the two optimized models one after another.

When I used the same pipeline that worked for the ONNX models, I got the following error:
Cuda Error:400 Invalid Resource Handle
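
Roughly, this is how I chain the two engines (a simplified sketch; TRT_model is my own wrapper class, the engine file names are illustrative, and detect() is assumed to return integer pixel boxes):

```python
import cv2

# Each wrapper deserializes its engine and allocates its own device
# buffers and CUDA stream internally (TRT_model is my own class,
# not part of the TensorRT API).
vehicle_model = TRT_model("vehicle_tiny_yolov4.engine")      # 608x608 input
character_model = TRT_model("character_tiny_yolov4.engine")

image = cv2.imread("frame.jpg")

# Either model works fine on its own; running them back to back in the
# same process raises "Cuda Error:400 Invalid Resource Handle".
plates = vehicle_model.detect(image)               # license-plate boxes
for (x1, y1, x2, y2) in plates:
    crop = image[y1:y2, x1:x2]                     # extract the plate region
    characters = character_model.detect(crop)      # two-class character model
```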

How should I approach this problem?
Thank you

Hi @sgn.kayastha,
Can you please share the code where you are building the pipeline? This error generally means a resource handle (such as a CUDA stream, event, or device buffer) passed to a CUDA call is not valid in the current context, so you are most likely accessing something that was created under a different context.
Looking at the code might help us understand this better.

Thanks!

Hi @AakankshaS,

The pipeline is basically a custom license plate recognition system:

  1. Read the image.
  2. Load model 1 (vehicle model, which detects license plates along with vehicles; tiny-YOLO converted using trtexec).
  3. Load model 2 (character model, which detects two classes; tiny-YOLO converted using trtexec).
  4. Infer with model 1 (detects the license plate and extracts it to send to model 2).
  5. Infer with model 2 (detects the two classes in the license plate image).

We have been using the ONNX models in this pipeline and they work fine. Since we are targeting a Jetson TX2 board for deployment, we wanted to switch to TRT engines. I cannot share the whole pipeline code, but I have attached two files: the first contains the TRT_model class (adapted from a reference class), which reads the engine file and has a function that runs detection and returns the output; the second has a snippet of the pipeline code.
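
For reference, the wrapper in the first file looks roughly like this (a heavily simplified sketch of the attached TRT_model class; the buffer handling follows the standard TensorRT Python sample pattern, binding 0 is assumed to be the input, and all YOLO pre-/post-processing is omitted):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates and activates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

class TRT_model:
    def __init__(self, engine_path):
        # Deserialize the engine that trtexec wrote to disk.
        with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            self.engine = runtime.deserialize_cuda_engine(f.read())
        self.context = self.engine.create_execution_context()
        self.stream = cuda.Stream()
        # One pinned host buffer and one device buffer per binding
        # (TensorRT 7/8-style bindings API).
        self.host_bufs, self.dev_bufs, self.bindings = [], [], []
        for binding in self.engine:  # iterates over binding names
            size = trt.volume(self.engine.get_binding_shape(binding))
            dtype = trt.nptype(self.engine.get_binding_dtype(binding))
            host = cuda.pagelocked_empty(size, dtype)
            dev = cuda.mem_alloc(host.nbytes)
            self.host_bufs.append(host)
            self.dev_bufs.append(dev)
            self.bindings.append(int(dev))

    def detect(self, input_array):
        # Copy the preprocessed input in, run the engine, copy outputs back.
        np.copyto(self.host_bufs[0], input_array.ravel())
        cuda.memcpy_htod_async(self.dev_bufs[0], self.host_bufs[0], self.stream)
        self.context.execute_async_v2(self.bindings, self.stream.handle)
        for host, dev in zip(self.host_bufs[1:], self.dev_bufs[1:]):
            cuda.memcpy_dtoh_async(host, dev, self.stream)
        self.stream.synchronize()
        return self.host_bufs[1:]  # raw network outputs; YOLO decoding omitted
```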

I can run the two models separately, but can’t run them one after another.

If these files are not enough, I can send some extra files with more details. Thank you for your help.

trt_error.zip (35.0 KB)

Hi @sgn.kayastha,
Please refer to the thread-safety section of the TensorRT Best Practices guide:
https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#thread-safety
We would also recommend trying DeepStream for this kind of multi-model pipeline.
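
A common cause of this error with two engines in one process is that one engine's stream and buffers end up created under a different CUDA context than the one that is current at inference time. Each engine should get its own IExecutionContext, and all CUDA work for both engines should happen under the same CUDA context. A minimal sketch with pycuda, managing the context explicitly instead of relying on pycuda.autoinit (TRT_model stands for your own wrapper):

```python
import pycuda.driver as cuda

cuda.init()
cuda_ctx = cuda.Device(0).make_context()  # one shared CUDA context, now current

# Create both engines while this context is current, so their execution
# contexts, streams, and device buffers all belong to it. The wrapper
# must NOT also import pycuda.autoinit, which would create a second,
# competing context.
vehicle_model = TRT_model("vehicle.engine")
character_model = TRT_model("character.engine")

cuda_ctx.pop()  # release the context from this thread between uses

def run_pipeline(image):
    cuda_ctx.push()  # make the SAME context current before any CUDA call
    try:
        plates = vehicle_model.detect(image)   # boxes, as in your snippet
        return [character_model.detect(image[y1:y2, x1:x2])
                for (x1, y1, x2, y2) in plates]
    finally:
        cuda_ctx.pop()

# Call cuda_ctx.detach() once at shutdown to destroy the context.
```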

Thanks!