Raw tensor output

Hi,
I have finally found a solution for this strange problem. I’ll try to explain briefly why the pipeline freezes and what my solution is. As I mentioned above, when I set “output-tensor-meta=1", the pipeline freezes after a couple of detections. The reason is simple but hard to find in gst-nvinfer plugin code.

When I enable exporting of tensor metadata, the NvDsInferTensorMeta keeps a reference to the tensor_out_object (GstNvInferTensorOutputObject). As a result, all resources allocated for a batch are released only once the whole batch has been processed. However, the plugin also uses nvinfer->pool, which has a limited size, and each object in the batch takes one resource from the pool. When there are more objects in the batch than the pool size, a deadlock occurs: the next object in the batch waits for a resource from the pool, but the pool is not refilled until the whole batch is processed. In other words, the code in gst_nvinfer_process_objects should cap the number of objects in a batch at the pool size. I only found this problem because my cluster settings weren't perfect and the primary inference sent too many objects to the secondary inference, but it should have worked anyway.

I have fixed the code myself, but I hope this short description helps you improve the gst-nvinfer plugin in one of the next releases.

Regards,
Daniel