dGPU version
Deepstream 5.1
NVIDIA Quadro RTX 6000
CUDA 11.3
Driver version 465.19.01
TensorRT 8.0
Hello,
I am trying to build a pipeline based on the examples given in the SDK.
Currently, I am able to run inference on a v4l2 stream and export the results as JPEG frames + JSON (multifilesink / nvmsgbroker) while displaying the output on my screen (using nvosd).
I would now like to replace the nvosd branch with a UDP stream.
The main issue is that I cannot manage to convert the NVIDIA buffers (NVMM, batched) into standard system-memory ones, so I cannot encode them to H.264 (using x264enc) before feeding them into a udpsink.
You will find the GST debug pipeline graph attached to this post; I removed the frame and JSON export from it to focus only on the x264enc → udpsink issue.
You will also find the full GST logs, captured with GST_DEBUG=5.
gst_log.txt (7.9 MB)
My hunch is that the demuxer is supposed to output a non-batched stream, but the graph and logs say otherwise, so I do not understand where the issue is…
videoencoder gstvideoencoder.c:678:gst_video_encoder_setcaps:<x264enc> rejected caps video/x-raw, framerate=(fraction)5/1, width=(int)1280, height=(int)720, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)NV12
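For reference, here is roughly the encoding branch I am trying after the demuxer, reduced to a gst-launch sketch (element names are from DeepStream 5.1; the source/inference part is elided and the host/port values are placeholders, not my real ones):

```shell
# Sketch of the intended branch: take one un-batched pad from nvstreamdemux,
# use nvvideoconvert to copy the buffer from NVMM (device) to system memory
# in a format x264enc accepts, then encode, packetize as RTP, and send over UDP.
gst-launch-1.0 ... ! nvstreamdemux name=demux \
  demux.src_0 ! nvvideoconvert \
  ! 'video/x-raw,format=I420' \
  ! x264enc tune=zerolatency \
  ! h264parse \
  ! rtph264pay \
  ! udpsink host=127.0.0.1 port=5000
```

The caps filter after nvvideoconvert is there to force a plain `video/x-raw` (system-memory) negotiation rather than `video/x-raw(memory:NVMM)`; my understanding is that x264enc is rejecting the caps above because the batched NVMM caps (batch-size / num-surfaces-per-frame fields) are leaking through to it.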
Thanks for your help!