Issue with TAO Toolkit Detection on T4 in NVIDIA DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description.)

After detecting objects with YOLOv8 in DeepStream, I need to classify each detection using a model trained with classification_tf1 from the TAO Toolkit.

I created Docker containers from the same Docker image on a T4 and a V100, and used the same test video, the same TAO Toolkit model as the SGIE, and the same YOLOv8 model as the PGIE.

However, while it worked fine on the V100, it didn’t detect properly on the T4. Why might this be happening, and how can I resolve it?

T4
NVIDIA Driver Version: 470.82.01
CUDA: 11.4
DeepStream: 6.0

V100
NVIDIA Driver Version: 535.86.05
CUDA: 12.2
DeepStream: 6.0
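For reference, a minimal nvinfer SGIE classifier configuration might look like the sketch below. All file names, keys, and thresholds are placeholders, not values taken from this thread. One point worth double-checking in this setup: a serialized TensorRT engine file is GPU-specific, so an engine built on the V100 cannot be reused on the T4 — each container must generate (or be pointed at) an engine built on its own GPU.

```ini
# Hypothetical SGIE config sketch for Gst-nvinfer; paths and keys are placeholders
[property]
gpu-id=0
# TAO classification models are exported as encrypted .etlt files
tlt-encoded-model=classifier.etlt
tlt-model-key=nvidia_tlt
network-type=1          # 1 = classifier
process-mode=2          # 2 = secondary mode, operate on detected objects
operate-on-gie-id=1     # classify detections produced by the PGIE
network-mode=2          # 2 = FP16 precision
classifier-threshold=0.5
```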

Do you mean there is no bbox output from YOLOv8 on the T4?

Which Docker image are you working with?

Have you tried the TensorRT pipeline first?
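One way to follow that suggestion is to sanity-check the classifier engine outside DeepStream with trtexec, which ships with TensorRT. The sketch below assumes an ONNX export named classifier.onnx (classification_tf1 normally exports an .etlt, which would first need conversion with tao-converter), so treat the file names as placeholders:

```shell
# Hypothetical standalone check of the classifier model with trtexec.
# File names are placeholders for whatever your export produced.
if command -v trtexec >/dev/null 2>&1; then
  # Build the engine on the T4 itself: TensorRT engines are GPU-specific,
  # so an engine serialized on the V100 will not work on a T4.
  trtexec --onnx=classifier.onnx --saveEngine=classifier_t4.engine --fp16
  # Run inference with the freshly built engine to confirm it executes.
  trtexec --loadEngine=classifier_t4.engine
else
  echo "trtexec not found; run this inside the TensorRT/DeepStream container"
fi
```

If the engine builds and runs cleanly on the T4 but the DeepStream pipeline still misdetects, the problem is more likely in the pipeline configuration than in the model itself.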

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.