Syncpts and threads leak using gstreamer plugin nvv4l2decoder

We use gstreamer to decode video, and each time we recreate the pipeline in our program, some NVIDIA-related threads are recreated without the old ones being destroyed. Furthermore, we think that one of those threads may be holding a syncpt and never freeing it, which causes our program to fail when the system runs out of syncpts.

It’s possible to reproduce it by running gst-launch-1.0 for the encoder pipeline and gstd for the decoder pipeline:

gst-launch-1.0 videotestsrc is-live=true pattern=18 do-timestamp=TRUE ! capsfilter caps="video/x-raw,format=(string)UYVY,width=(int)1280,height=(int)720,framerate=(fraction)50/1" ! nvvidconv ! capsfilter caps="video/x-raw(memory:NVMM),format=(string)I420" ! nvv4l2h264enc ! queue ! rtph264pay ! udpsink host=127.0.0.1 port=5000 sync=FALSE async=FALSE &
gstd &
gstd-client pipeline_create dec udpsrc port=5000 ! application/x-rtp,encoding-name=H264,media=video,clock-rate=90000 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! fakesink

And then playing and stopping the decoder pipeline several times:

gstd-client pipeline_play dec
sleep 3
gstd-client pipeline_stop dec
sleep 1
gstd-client pipeline_play dec
sleep 3
gstd-client pipeline_stop dec
sleep 1
gstd-client pipeline_play dec
sleep 3
gstd-client pipeline_stop dec
sleep 1
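For convenience, the play/stop sequence above can be wrapped in a small loop (a sketch; it assumes gstd is already running and the dec pipeline has been created as shown earlier):

```shell
#!/bin/sh
# Repeat the play/stop cycle CYCLES times (default 3, as in the steps above).
CYCLES=${CYCLES:-3}
i=1
while [ "$i" -le "$CYCLES" ]; do
    gstd-client pipeline_play dec || true   # || true: keep looping even if gstd is absent
    sleep 3
    gstd-client pipeline_stop dec || true
    sleep 1
    i=$((i + 1))
done
```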

Reading /sys/kernel/debug/tegra_host/status_all, we don’t see anything wrong until the third run. From that point on, each run (play+stop of the decoding pipeline) leaves a syncpt blocked (not freed).

And if we run 'cat /proc/$(pidof gstd)/task/*/comm', we see that each run has left 4 threads with the names:

NVMDecBufProcT
NVMDecDisplayT
NVMDecFrmStatsT
NVMDecVPRFlrSzT

These threads are never destroyed until gstd is killed.
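To track the leak over time, the thread names above can be counted automatically (a hypothetical helper, reading the same per-task comm files as the cat command above):

```shell
#!/bin/sh
# Hypothetical helper: count NVMDec* threads owned by a process.
count_nvmdec_threads() {
    pid=$1
    n=0
    for comm in /proc/"$pid"/task/*/comm; do
        case "$(cat "$comm" 2>/dev/null)" in
            NVMDec*) n=$((n + 1)) ;;
        esac
    done
    echo "$n"
}
```

If the issue reproduces, running it against gstd (count_nvmdec_threads "$(pidof gstd)") should report 4 more threads after each play/stop cycle.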

Can you please help us to find a solution?

Best regards, Dani.

Hi,
It looks like you are switching the pipeline between the STOP and PLAYING states. For better stability, we would suggest unreffing the pipeline and re-initializing a new one. Please check if you can run this case.

Deleting and re-creating the decoding pipeline with gstd-client pipeline_delete and gstd-client pipeline_create doesn’t resolve the issue: each run still blocks a syncpt and creates 4 new threads that are never destroyed.
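For reference, this is the delete/re-create sequence we tried (a sketch; it assumes gstd is running, with the same pipeline description as in the reproduction steps above):

```shell
#!/bin/sh
# Full delete/re-create variant of the cycle suggested above.
PIPELINE='udpsrc port=5000 ! application/x-rtp,encoding-name=H264,media=video,clock-rate=90000 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! fakesink'
i=1
while [ "$i" -le 3 ]; do
    gstd-client pipeline_create dec $PIPELINE || true  # || true: tolerate a missing gstd
    gstd-client pipeline_play dec || true
    sleep 3
    gstd-client pipeline_stop dec || true
    gstd-client pipeline_delete dec || true
    i=$((i + 1))
done
```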

Hi,
Do you run the latest JP4.5 or a previous release?

Yes, we use JP4.5 on a Jetson Nano production module (eMMC instead of SD card) with a custom carrier board.

Hi,
What error is hit when running the use case? It would be great if you could share the error log for reference.

When the decoding pipeline is started/stopped enough times, we run out of syncpts and see these errors:

[ 3451.357795] host1x 50000000.host1x: nvhost_get_syncpt: failed to find free syncpt
[ 3451.357810] falcon 54340000.vic: nvhost_get_syncpt_host_managed: failed to get syncpt
[ 3451.358263] host1x 50000000.host1x: nvhost_syncpt_wait_timeout: invalid syncpoint id 0
[ 3452.377881] host1x 50000000.host1x: nvhost_get_syncpt: failed to find free syncpt
[ 3452.377895] falcon 54340000.vic: nvhost_get_syncpt_host_managed: failed to get syncpt
[ 3452.378335] host1x 50000000.host1x: nvhost_syncpt_wait_timeout: invalid syncpoint id 0
[ 3453.382186] host1x 50000000.host1x: nvhost_get_syncpt: failed to find free syncpt
[ 3453.382199] falcon 54340000.vic: nvhost_get_syncpt_host_managed: failed to get syncpt
[ 3453.382629] host1x 50000000.host1x: nvhost_syncpt_wait_timeout: invalid syncpoint id 0
[ 3454.386351] host1x 50000000.host1x: nvhost_get_syncpt: failed to find free syncpt
[ 3454.386363] falcon 54340000.vic: nvhost_get_syncpt_host_managed: failed to get syncpt
[ 3454.386516] host1x 50000000.host1x: nvhost_syncpt_wait_timeout: invalid syncpoint id 0
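To quantify how often this happens, the exhaustion messages can be counted in a saved copy of the kernel log (a hypothetical helper; the log path in the usage note is only an example):

```shell
#!/bin/sh
# Hypothetical helper: count syncpt-exhaustion errors in a saved kernel log.
count_syncpt_errors() {
    grep -c 'failed to find free syncpt' "$1"
}
```

Usage: dmesg > /tmp/dmesg.txt && count_syncpt_errors /tmp/dmesg.txt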

We think these messages are only the symptom; the underlying cause is that a syncpt is not being freed correctly somewhere, which may or may not be related to these messages.