Deadlock when setting `output-tensor-meta` to TRUE

I am trying to run FaceNet + FPENet on Orin NX with DeepStream 6.4.

I want to take the output from FPENet and draw facial landmarks with NVDSOSD, so I am setting the `output-tensor-meta` option of the second nvinfer to TRUE.
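
For context, a minimal sketch of what that toggle looks like when the second nvinfer is built in code rather than with gst-launch (the element name is illustrative; the property is equivalent to setting output-tensor-meta in the nvinfer config file):

```c
#include <gst/gst.h>

/* Sketch: second-stage nvinfer (FPENet) with raw tensor output enabled.
 * The element name and the choice to set the flag here instead of in
 * fpenet.yml are illustrative. */
static GstElement *
make_fpenet_sgie (void)
{
  GstElement *sgie = gst_element_factory_make ("nvinfer", "fpenet-sgie");
  g_object_set (G_OBJECT (sgie),
      "config-file-path", "fpenet.yml",
      "unique-id", 2,
      "output-tensor-meta", TRUE,
      NULL);
  return sgie;
}
```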

Everything seems to work fine until I add an H.264 encoder, something like this:

gst-launch-1.0 \
    v4l2src ! \
    nvvideoconvert ! \
    video/x-raw\(memory:NVMM\),format=NV12,width=640,height=480 ! nvstreammux.sink_0 \
    nvstreammux name=nvstreammux live-source=true width=640 height=480 batch-size=1 ! \
    nvinfer config-file-path=facenet.yml unique-id=1 ! \
    nvinfer config-file-path=fpenet.yml unique-id=2 ! \
    tee name=tee ! \
    queue ! nvegltransform ! \
    nveglglessink \
    tee. ! queue ! nvv4l2h264enc ! fakesink
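
For context, here is a minimal sketch of the kind of buffer probe that reads those tensors and turns them into nvdsosd circles, assuming output layer 0 carries 80 (x, y) keypoints in FPENet's 80x80 input space (check tmeta->output_layers_info for the actual layer names and dimensions of your model):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

/* Probe on the src pad of the second nvinfer (FPENet). Reads the raw
 * tensor meta attached because of output-tensor-meta=TRUE and adds
 * one circle per keypoint for nvdsosd to draw. */
static GstPadProbeReturn
fpenet_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* In secondary mode nvinfer attaches NvDsInferTensorMeta to the
       * user meta of each object it ran inference on. */
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user; l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        NvDsInferTensorMeta *tmeta = (NvDsInferTensorMeta *) user_meta->user_meta_data;

        /* Assumption: output layer 0 holds 80 (x, y) keypoints in the
         * 80x80 network input space. Verify against
         * tmeta->output_layers_info[] for your FPENet version. */
        float *kpts = (float *) tmeta->out_buf_ptrs_host[0];
        const guint num_points = 80;

        NvDsDisplayMeta *dmeta = NULL;
        for (guint i = 0; i < num_points; i++) {
          /* Each display meta holds at most MAX_ELEMENTS_IN_DISPLAY_META
           * circles, so acquire a new one whenever the current is full. */
          if (!dmeta || dmeta->num_circles == MAX_ELEMENTS_IN_DISPLAY_META) {
            dmeta = nvds_acquire_display_meta_from_pool (batch_meta);
            nvds_add_display_meta_to_frame (frame_meta, dmeta);
          }
          NvOSD_CircleParams *c = &dmeta->circle_params[dmeta->num_circles++];
          c->xc = obj_meta->rect_params.left +
                  kpts[2 * i] / 80.0f * obj_meta->rect_params.width;
          c->yc = obj_meta->rect_params.top +
                  kpts[2 * i + 1] / 80.0f * obj_meta->rect_params.height;
          c->radius = 2;
          c->circle_color = (NvOSD_ColorParams) { 0.0, 1.0, 0.0, 1.0 };
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

A probe like this would be attached with gst_pad_add_probe() (GST_PAD_PROBE_TYPE_BUFFER) on the src pad of the FPENet nvinfer.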

facenet+fpenet.zip (7.8 MB)

Once the encoder is added, the pipeline freezes as soon as a second face appears in the field of view. If I remove the encoder or set `output-tensor-meta` to FALSE, the pipeline keeps working with multiple faces detected.

What is the FPENet model used for? Is it a classification model? Will the application run well with only the “nvv4l2h264enc ! fakesink” branch (without the nveglglessink branch)?

It freezes even without the nveglglessink branch (when only the encoder is present) once a second face appears in the field of view. Regarding the version, I believe I took deployable_v3.0 from Facial Landmarks Estimation | NVIDIA NGC.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
The deepstream_faciallandmark_app code already supports encoding. Could you compare your gst-launch pipeline with that code to narrow down the issue?

