Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) dGPU: Tesla T4
• DeepStream Version 5.0.2
• JetPack Version (valid for Jetson only) N/A
• TensorRT Version 7.1.3.4
• NVIDIA GPU Driver Version (valid for GPU only) 450.51.06
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) N/A
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) N/A
In our application we feed YUV NV12 buffers via appsrc into a pipeline containing nvinfer and other necessary elements. Each buffer is loaded into a GstBuffer created by gst_buffer_new_and_alloc, and is therefore expected to live in system memory. It is then converted by nvvideoconvert to caps specified as video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=1/0 before being pushed to nvstreammux.
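For reference, the caps strings involved look roughly like the following sketch. One idea we are considering (unverified assumption) is that GStreamer's colorimetry caps field (e.g. bt601 / bt709) could be set on the appsrc caps to declare the input color space explicitly; whether nvvideoconvert honors it on system-memory input is part of the question. The make_caps helper below is just ours, for illustration:

```python
# Sketch of the caps strings in play. Assumption (not verified):
# the GStreamer "colorimetry" field is the right way to declare
# the input color space to nvvideoconvert.

def make_caps(fmt="NV12", width=1920, height=1080,
              framerate="1/0", colorimetry=None, nvmm=False):
    """Build a raw-video caps string, optionally with NVMM memory
    and an explicit colorimetry field."""
    media = "video/x-raw(memory:NVMM)" if nvmm else "video/x-raw"
    fields = [media,
              f"format={fmt}",
              f"width={width}",
              f"height={height}",
              f"framerate={framerate}"]
    if colorimetry is not None:
        fields.append(f"colorimetry={colorimetry}")
    return ",".join(fields)

# Caps we currently request on nvvideoconvert's output:
current = make_caps(nvmm=True)

# Candidate appsrc caps pinning the input as BT.601 vs BT.709:
appsrc_601 = make_caps(colorimetry="bt601")
appsrc_709 = make_caps(colorimetry="bt709")

print(current)
print(appsrc_709)
```

The bt601 / bt709 shorthands are accepted by GStreamer's colorimetry parsing, but we have not confirmed how they interact with the NVBUF_COLOR_FORMAT_* selection inside nvvideoconvert.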
Currently the YUV buffer is a direct dump of the output of nvv4l2decoder, which marks colorFormat as NVBUF_COLOR_FORMAT_NV12_709 or NVBUF_COLOR_FORMAT_NV12 depending on the input RTSP stream. However, the nvvideoconvert mentioned above always produces an NVBUF_COLOR_FORMAT_NV12_709 surface.
Is there any way to specify the input color format somewhere, or is the conversion's output color format guaranteed to be consistent and reliable behavior?