Warning from TensorRT model with batch size setting > 1 for streammux and engine

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version 6.0
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 495.44
• Issue Type (questions, bugs)

Hello,

I have an ONNX model exported with dynamic batch size. It converts to a TensorRT engine without problems when built directly by the nvinfer plugin.
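For reference, a dynamic-batch engine can also be built ahead of time with `trtexec`. This is only a sketch under assumptions: the input tensor name `input` and the shape `3x608x608` are placeholders, adjust them to your model.

```shell
# Hypothetical sketch: build a TensorRT engine with a dynamic batch dimension.
# Assumptions: input tensor is named "input" and has shape -1x3x608x608.
trtexec --onnx=model.onnx \
        --minShapes=input:1x3x608x608 \
        --optShapes=input:4x3x608x608 \
        --maxShapes=input:8x3x608x608 \
        --saveEngine=model_b8.engine
```

The engine built this way can then be referenced from the nvinfer config via `model-engine-file` instead of letting nvinfer build it on first run.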

I get the log warning `Cuda failure: status=1 in cuResData at line 348` from the application when the engine batch size is > 1 and the streammux batch size is also > 1. The program runs OK, but the warning is logged repeatedly and GPU resource usage is much higher.

The warning does not appear when running the program with:

  • streammux batch size = 1 and engine batch size > 1
  • streammux batch size = 1 and engine batch size = 1
  • streammux batch size > 1 and engine batch size = 1
    but performance is lower than with batch size > 1 for both streammux and the engine.
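For context, these are the two batch-size settings involved. A minimal sketch of the relevant config entries, assuming the standard deepstream-app config layout (file names and the value 4 are placeholders):

```
# deepstream-app config (e.g. deepstream_app_config.txt)
[streammux]
batch-size=4        # frames batched per buffer pushed downstream

# nvinfer config (e.g. config_infer_primary.txt)
[property]
batch-size=4        # should match the engine's optimal/max batch dimension
```

The warning in question appears only when both values are > 1 at the same time.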

Please help! Thank you.

Sorry for the late response, is this still an issue to support?

Thanks


Hi @trild-vietnam ,
I think “Cuda failure: status=1 in cuResData at line 348” is from your code, right? If it is, it indicates there is a bug in your code.

the logging and large resource be used.

why “large resource”?


Hello @mchi. No, that log is not from my code. I suspect it comes from streammux or nvinfer, but since the error does not crash the program, I could not trace where it originates.

Normally the model only uses 30–40% GPU utilization, but running it with this setup it hits 100% utilization.

Got it, yes, it’s from our code for video frame transformation.

What’s the GPU?

Can you reproduce it with the YoloV3 sample under the DeepStream package - /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo?
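For anyone wanting to try the same reproduction, a rough sketch of running that sample follows. Assumptions: the `CUDA_VER` value depends on your install (11.4 is a guess for DeepStream 6.0), and the config file name may differ slightly in your package version.

```shell
# Hypothetical sketch: build and run the objectDetector_Yolo sample.
cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo
./prebuild.sh                        # downloads YOLO cfg/weights files
export CUDA_VER=11.4                 # assumption: match your CUDA version
make -C nvdsinfer_custom_impl_Yolo   # build the custom parser library
deepstream-app -c deepstream_app_config_yoloV3.txt
```

To reproduce the reported issue, the streammux and nvinfer batch-size values in the sample configs would both need to be raised above 1.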

I tested on 1060, 1080, A5000, and T4.

Let me update later.


Any update?

Hi @kayccc .

I could not reproduce the error with the Yolov3 sample.