Failed color conversion.

I’ve been thinking for some time about how to ask this, since I haven’t been able to create a standalone piece of code that reproduces the error. Having failed at that, I’m going to post the errors I’m getting here in the hope that someone can help. The nvvidconv element in my pipeline is printing these strange errors:

0:00:05.393382265 17194 0x715c00  ERROR  nvvidconv gstnvvconv.c:1690:gst_nvvconv_do_transform: gst_nvvconv_do_transform: Failed to configure NvDdkVic Source Surface
0:00:05.393457337 17194 0x715c00  ERROR  nvvidconv gstnvvconv.c:3650:gst_nvvconv_transform: gst_nvvconv_transform: Rmsurface colorspace conversion failed

This is the relevant piece of code:

void SomeCamera::build_sampled_appsink(GstElement *bin,
                                        GstElement *tee,
                                        const char *input_color_space)
{
  GstElement *queue = gst_element_factory_make("queue", "sampled-queue0");
  GstElement *converter = gst_element_factory_make("nvvidconv", "sampled-conv0");
  this->sampled_appsink = gst_element_factory_make("appsink", "sampled-appsink0");

  gst_bin_add_many(GST_BIN(bin), queue, converter, this->sampled_appsink, NULL);

  if (!gst_element_link(queue, converter)) {
    FinalLog("Could not link queue and converter.");
  }

  GstCaps *caps = gst_caps_new_simple("video/x-raw",
                                      "format", G_TYPE_STRING, "RGBA",
                                      NULL);
  if (!caps) {
    FinalLog("Could not create caps.");
  }

  if (!gst_element_link_filtered(converter, this->sampled_appsink, caps)) {
    FinalLog("Could not link converter and appsink.");
  }
  gst_caps_unref(caps);  /* link_filtered does not take ownership of the caps */

  g_object_set(this->sampled_appsink, "emit-signals", TRUE, NULL);

  GstPad *tee_pad = gst_element_get_request_pad(tee, "src_%u");
  GstPad *pad = gst_element_get_static_pad(queue, "sink");
  if (gst_pad_link(tee_pad, pad) != GST_PAD_LINK_OK) {
    FinalLog("Could not link pads.");
  }

  gst_object_unref(tee_pad);
  gst_object_unref(pad);
}
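Since emit-signals is enabled on the appsink but no handler appears in the snippet, here is a minimal sketch of a "new-sample" callback for completeness. The function name and the connect call are illustrative assumptions, not part of the original code:

```c
/* Sketch of a "new-sample" handler for the appsink above.
 * Handler name and the g_signal_connect call are assumptions. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample(appsink);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buffer = gst_sample_get_buffer(sample);
  GstMapInfo map;
  if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
    /* ... process map.data / map.size here ... */
    gst_buffer_unmap(buffer, &map);
  }

  gst_sample_unref(sample);
  return GST_FLOW_OK;
}

/* Connected after creating the appsink, e.g.:
 * g_signal_connect(this->sampled_appsink, "new-sample",
 *                  G_CALLBACK(on_new_sample), this);
 */
```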

The data received from the tee pad has the following caps: video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)25/1

Hi,
One suspicion is that your source is not delivering valid ‘video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)25/1’.

Try to simulate the case in a single pipeline; it works fine here:

$ gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM)' ! queue ! tee ! nvvidconv ! 'video/x-raw,format=RGBA' ! appsink

I forgot to mention that without the branch the above function creates, the rest of the pipeline works fine, so the video data is valid. I have also confirmed the caps by dumping a dot file and looking at the graph.

As to the pipeline you suggested I try, it seems to work fine:

$ gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM)' ! queue ! tee ! nvvidconv ! 'video/x-raw,format=RGBA' ! appsink

Setting pipeline to PAUSED ...

Available Sensor modes :
3864 x 2174 FR=60.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1932 x 1094 FR=120.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1288 x 734 FR=120.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1288 x 546 FR=240.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL ...

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 0 WxH = 3864x2174 FrameRate = 60.000000 ...

Setting pipeline to PLAYING ...
New clock: GstSystemClock

Can you tell me what the error messages I mentioned (about NvDdkVic and Rmsurface) actually mean? I googled them but found no relevant results.

Hi,
The error shows your input is not valid video/x-raw(memory:NVMM). The input source should be a camera (nvcamerasrc) or a decoder (omxh264dec, omxh265dec, …). Are you running any other sources?

The video does indeed come from a camera, nveglstreamsrc to be precise, and as I said, without that branch on the tee it actually works, feeding the data to omxh265enc, so it can’t be invalid, can it?

Here’s a diagram of the pipeline. It has been simplified a bit (hence the element that isn’t connected to anything, and some tees with only one output). The tee in question is the one in the input bin.

Hi, nveglstreamsrc is an EGLStream consumer. Is your producer in RGBA format or YUV420 (I420 or NV12)?

The producer is an Argus camera, which produces I420.

Could it be that using tee is the issue here and I need to use nvtee? I tried replacing tee with nvtee in my code, but then gst_element_get_request_pad returns NULL.

I just took a look at the gst-inspect-1.0 output for nvtee, and it looks like nvtee does not have any request pads, so maybe this is not the right use case for it.
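For reference, this can be checked from the pad templates that gst-inspect-1.0 prints; request pads are listed with “Availability: On request” (for tee, that is the src_%u template):

```shell
# Compare the pad templates of tee and nvtee; tee's src_%u template
# shows "Availability: On request", which is what get_request_pad needs.
gst-inspect-1.0 tee
gst-inspect-1.0 nvtee
```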

Hi,
Please set NV12 for Argus source.

A relevant post:
[url]https://devtalk.nvidia.com/default/topic/1043857/jetson-tx2/problem-with-modifying-gstreamer-pipeline-in-gstvideoencode-sample-code-in-tegra-multimedia-api-/post/5295907/#5295907[/url]

Okay, so this occasionally works. It’s all very weird. When it doesn’t work (which is most of the time) I get error messages like these:

0:00:07.125020725 22022  0x526590 ERROR  nveglstream gstnveglstreamsrc.c:563:gst_nvconsumer_buffer_pool_release_buffer:<source> Failed to release EGLStream Frame.
0:00:07.125080661 22022  0x526590 ERROR  nveglstream gstnveglstreamsrc.c:563:gst_nvconsumer_buffer_pool_release_buffer:<source> Failed to release EGLStream Frame.
0:00:07.125111124 22022  0x526590 ERROR  nveglstream gstnveglstreamsrc.c:563:gst_nvconsumer_buffer_pool_release_buffer:<source> Failed to release EGLStream Frame.
0:00:07.125141492 22022  0x526590 ERROR  nveglstream gstnveglstreamsrc.c:563:gst_nvconsumer_buffer_pool_release_buffer:<source> Failed to release EGLStream Frame.

Any ideas what that means and why it happens (seemingly) non-deterministically?

I just realized that if I disable gstreamer logs (except for ERROR logs) I get these errors much less frequently, but they still happen every few runs.

Hi,
Please check if the settings below help:
1. Execute jetson_clocks.sh to run the GPU at max clocks.
2. Configure ‘nvvidconv output-buffers=10’.

Setting output-buffers makes no discernible difference, but running jetson_clocks.sh seems to fix it, or at least the error hasn’t shown up since. This makes me a bit uncomfortable, since I really don’t think the clock speed should affect the behavior of my program. May I ask what you think is happening here?

Hi,
For hooking Argus and gstreamer, we suggest you refer to:
CLOSED. Gst encoding pipeline with frame processing using CUDA and libargus - Jetson TX1 - NVIDIA Developer Forums

nveglstreamsrc is sensitive to timing. If downstream elements do not return buffers back in time, it probably hits the issue you are facing, as well as the inaccurate timestamps mentioned at
[url]https://devtalk.nvidia.com/default/topic/1042097/jetson-tx2/creating-a-working-nveglstreamsrc-gt-omxh264enc-gstreamer-pipeline/post/5287243/#5287243[/url]

If you must use nveglstreamsrc, we suggest you run the GPU at max frequency.

Thanks for the explanation. I’ve marked your original post about using NV12 as the answer, as that seems to be the main issue. Just one more thing I would like to know: what is the output-buffers property on nvvidconv? Does it indicate some sort of buffering, perhaps?

Hi,
The output-buffers property configures the number of buffers on the source pad of nvvidconv.
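In a launch line this is set directly on the element, as in the earlier suggestion in this thread (pipeline shape taken from the one posted above):

```shell
# The output-buffers=10 setting suggested earlier, applied to the
# test pipeline from this thread.
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM)' ! queue ! tee ! \
  nvvidconv output-buffers=10 ! 'video/x-raw,format=RGBA' ! appsink
```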