DeepStream application freezes

My DeepStream application has the following pipeline: a character detector, followed by a character tracker, followed by a character classifier. The whole pipeline works fine until I add the following line to the classifier config file:

output-tensor-meta=1
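
For context, this key goes in the [property] group of the classifier's nvinfer config file. A minimal sketch of that group, with the other keys and values hypothetical for this setup:

    [property]
    gie-unique-id=3          # hypothetical ID of this secondary classifier
    process-mode=2           # secondary mode: infer on objects from the detector/tracker
    network-type=1           # classifier
    output-tensor-meta=1     # attach raw output tensors as NvDsInferTensorMeta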

If the frame contains a limited number of characters (below 8, for example), the application works fine. But when the number of characters increases, the DeepStream application freezes on that frame and becomes unresponsive.

How can I debug my application?

This is my configuration:

  • NVIDIA Jetson Xavier NX (Developer Kit Version)
    • Jetpack UNKNOWN [L4T 32.4.4]
    • NV Power Mode: MODE_15W_6CORE - Type: 2
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: 0.4.4
    • Vulkan: 1.2.70

You can refer to Raw tensor output - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

As a workaround, you may need to modify the nvinfer plugin code to enlarge the “outputBufferPoolSize” value set in gst_nvinfer_start() in /opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp, and then rebuild the plugin. The value should be chosen based on the maximum number of objects actually expected in a frame. This does not fix the underlying problem; it is only a workaround. We are investigating a proper solution internally.
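
A minimal sketch of that edit, assuming the DeepStream 5.0 sources (the exact code around this point varies between releases, and the value 32 is only an example sized for roughly 32 objects per frame):

    /* Inside gst_nvinfer_start() in gstnvinfer.cpp, the plugin fills an
     * NvDsInferContextInitParams structure (referred to as init_params below)
     * before creating the inference context. With output-tensor-meta=1 the
     * output buffers from this pool are held while the attached tensor meta
     * travels downstream, so the pool must be large enough to cover every
     * object in the busiest expected frame. */
    init_params->outputBufferPoolSize = 32;  /* example: up to ~32 objects per frame */

After changing the value, rebuild the plugin from /opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-nvinfer/ and reinstall the resulting libnvdsgst_infer.so as described in that directory's README.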
