Incorrect conversion from GRAY8 to RGBA in nvvideoconvert

• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): question

I have two pipelines:

PNG -> RGB -> GRAY8 -> RGBA NVMM:

multifilesrc location=<some_path>/frame_%05d.png caps=image/png !
pngdec ! video/x-raw,format=RGB !
videoconvert ! video/x-raw,format=GRAY8 !
nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,colorimetry=bt709 !
nvinfer .....

and PNG -> RGB -> GRAY8 -> RGBA -> RGBA NVMM:

multifilesrc location=<some_path>/frame_%05d.png caps=image/png !
pngdec ! video/x-raw,format=RGB !
videoconvert ! video/x-raw,format=GRAY8 !
videoconvert ! video/x-raw,format=RGBA !
nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,colorimetry=bt709 !
nvinfer .....

These pipelines are used as preprocessing for a detection model.
Using code from this topic, I dumped the preprocessed tensors from nvinfer: the first pipeline produces green-tinted tensors, while the second produces normal ones.

I tried different colorimetry settings in both pipelines and nothing changed.
How can I fix this and get correct tensors from nvvideoconvert without the extra videoconvert?
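
For reference, the green output can also be inspected without nvinfer by converting back to system memory and re-encoding to PNG. This is only a debugging sketch; the second nvvideoconvert, pngenc and multifilesink stages are my additions, and the NVMM caps need quoting when run through gst-launch-1.0:

multifilesrc location=<some_path>/frame_%05d.png caps=image/png !
pngdec ! video/x-raw,format=RGB !
videoconvert ! video/x-raw,format=GRAY8 !
nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,colorimetry=bt709 !
nvvideoconvert ! video/x-raw,format=RGBA !
pngenc ! multifilesink location=check_%05d.png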

DeepStream 5.1 is too old a version. Can you reproduce this issue on later versions? Thanks!

No, I can’t. We use this version in production and on our Jetsons.

It seems to be related to nvvideoconvert. Since the issue can’t be reproduced on later versions, I suggest using the workaround (adding the extra videoconvert), because nvvideoconvert is not open source. Or may I know your company name? Maybe you can contact sales or after-sales support.

The extra videoconvert greatly slows down the pipeline. Are there any other solutions? Maybe there are parameters other than colorimetry that can fix the problem?

After decoding with pngdec, is the data RGB or GRAY8? If it is RGB, you don’t need to convert to GRAY8 first.

What should I do if I need a grayscale image for my model?

nvinfer supports converting to GRAY8; you can set model-color-format=2 in the nvinfer configuration file.
If the source is a PNG with GRAY8 data, the pipeline is:

multifilesrc location=<some_path>/frame_%05d.png caps=image/png !
pngdec ! video/x-raw,format=GRAY8 !
nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,colorimetry=bt709 !
nvstreammux ! nvinfer (model-color-format=2) .....

If the source is a PNG with RGB data, the pipeline is:

multifilesrc location=<some_path>/frame_%05d.png caps=image/png !
pngdec ! video/x-raw,format=RGB !
nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,colorimetry=bt709 !
nvstreammux ! nvinfer (model-color-format=2) .....
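
For reference, a minimal sketch of the relevant part of the nvinfer configuration file; the other keys in the [property] group depend on your model and are omitted here:

[property]
# model-color-format: 0=RGB, 1=BGR, 2=GRAY
model-color-format=2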

Our model works with grayscale images with 3 channels.

My question is only about image conversion. Is it possible to get correct image output from nvvideoconvert in this pipeline without the additional videoconvert?

By the way, the output tensor from nvvideoconvert has only the green channel populated; the other channels are zero.

As a workaround, since nvstreammux and nvinfer support the NV12 format, you can try another solution: GRAY8 -> NV12 instead of GRAY8 -> RGBA.

multifilesrc location=<some_path>/frame_%05d.png caps=image/png !
pngdec ! video/x-raw,format=RGB !
videoconvert ! video/x-raw,format=GRAY8 !
nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12,colorimetry=bt709 !
nvstreammux ! nvinfer .....

According to this topic, NV12 causes a critical loss of accuracy, so we want to use RGBA.

As far as I can see, the green RGBA tensor from nvvideoconvert has only the green channel populated, while the others are zero.

Sorry for the late reply! nvvideoconvert is not open source. If later versions work, you can contact sales or after-sales support for a fix. There are no other new solutions.