Currently we only support RGBA color Format error

Hi,
I’m working with python sample: deepstream-test3 using my own model and:
Jetson Xavier NX
DeepStream 5.0.1

I trained my own model using TLT and configured it for the test3 sample. It worked fine.
Now I need to add the ability to save the labeled frames to local disk. For that I'm using part of the deepstream_imagedata-multistream.py sample.
However, I keep getting the following error:
n_frame=pyds.get_nvds_buf_surface(hash(gst_buffer),frame_meta.batch_id)
RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format
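
The frame-saving part I borrowed is essentially the probe pattern from deepstream_imagedata-multistream.py. This is only a rough sketch, not my exact code; the function name and output path are made up for illustration:

    import numpy as np
    import cv2
    import pyds
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    def save_frame_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            # Maps the frame into a NumPy array; this only works when the
            # buffer already holds RGBA data, otherwise it raises the
            # RuntimeError quoted above.
            n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            frame_copy = np.array(n_frame, copy=True, order='C')
            frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
            cv2.imwrite("frames/frame_%d.jpg" % frame_meta.frame_num, frame_copy)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK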

Just for the record, the deepstream_imagedata-multistream.py sample works fine on my system. The only difference I saw between the two scripts (apart from the model used) is this line:

    sink.set_property("sync", 0)

which was set to 'src' in the test3 sample. If I change it to 'sync', along with the rest of the sink-related lines, I don't get the error, but the frames are not saved either.
Also, the test3.py sample builds the pipeline with queues between each step, whereas multistream.py doesn't.

Can you help me with this? What does this behavior mean? Could that line be the cause, or might it be something to do with my custom model?

Thanks in advance.

Can you upload your code?

Sure!
By the way, other differences between vanilla multistream.py and my code:

  • My sample has only one source, so saved_count is not a dictionary but an integer
  • My custom model has only two classes: con_tapabocas, sin_tapabocas. This model works fine with test3.py right up until I add the image-saving lines from multistream.py

deepstream_test3_re_clean.py (16.8 KB)

Have you noticed that in deepstream_imagedata-multistream.py, tiler_sink_pad_buffer_probe is attached to the sink pad of nvmultistreamtiler? At that point (after nvvideoconvert, which converts the video to RGBA), the input video format is RGBA.

But in deepstream_test_3.py, tiler_src_pad_buffer_probe is attached to the pgie (nvinfer) src pad, where the video format is still NV12. That is why pyds.get_nvds_buf_surface does not work; the error message tells you the reason.
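
One way to mirror the imagedata sample in your test3 pipeline is to insert an nvvideoconvert plus an RGBA capsfilter before the tiler and attach the probe to the tiler's sink pad. This is only a sketch following what deepstream_imagedata-multistream.py does; element and variable names are illustrative, and the queues from test3 are omitted:

    # Convert to RGBA before the tiler, as deepstream_imagedata-multistream.py does
    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
    filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
    filter1.set_property("caps",
        Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
    pipeline.add(nvvidconv1)
    pipeline.add(filter1)

    # Link order (test3 queues omitted): ... -> pgie -> nvvidconv1 -> filter1 -> tiler -> ...
    pgie.link(nvvidconv1)
    nvvidconv1.link(filter1)
    filter1.link(tiler)

    # Attach the image-saving probe where the buffer is already RGBA
    # (save_frame_probe stands for your image-saving probe callback)
    tiler_sink_pad = tiler.get_static_pad("sink")
    if tiler_sink_pad:
        tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, save_frame_probe, 0)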


You’re absolutely right!
Images are now being saved without any problems!

However, now that the confidence value is being written, I see it contains negative values. This only happens with my custom model; with the original model in multistream.py the values are fine: [0, 1].

I just updated to DeepStream 5.0.1, where I read this issue was resolved, but my model was trained before that. Should I retrain it, or what else could be the problem?

Do you mean the confidence value is negative?

Yes, obj_meta.confidence is negative in my model…

DeepStream itself will not change the confidence value. You need to check your model itself.

Sorry for taking some time to answer; I had this project on hold for a while.

To review what I was doing, I changed my code back to the original resnet10 .prototxt model that detects 4 classes, and modified it to save the image.
Whenever I print obj_meta.confidence, the result is always the same: -0.10 for both people and cars.
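
For context, this is roughly how I read the per-object metadata inside the probe (a sketch assuming frame_meta is already in hand, as in the samples):

    # Inside the frame loop of the probe, walk the detected objects:
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        # With the sample config's default clustering this prints -0.10;
        # with DBSCAN/NMS (see the reply below) it is a score in [0, 1].
        print(obj_meta.class_id, obj_meta.confidence)
        try:
            l_obj = l_obj.next
        except StopIteration:
            break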
Attached one of the saved images for reference.

Is it the test3 sample? Have you modified any code? If yes, please upload your code.

It's actually simpler: test1 modified to save the image.
This is the code:
deepstream_test_1_save.py (12.2 KB)

Please try the DBSCAN or NMS clustering method (set in dstest1_pgie_config.txt). See the example here: deepstream_python_apps/dstest_imagedata_config.txt at 46bfd6b89c618f4c41d01d2d3e8b1d8d9628ef0b · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Confidence values are only valid when using those clustering methods.
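
For example, a minimal sketch of the relevant part of dstest1_pgie_config.txt (the threshold values are illustrative; tune them for your model):

    [property]
    # 0 = groupRectangles (default), 1 = DBSCAN, 2 = NMS
    cluster-mode=2

    [class-attrs-all]
    pre-cluster-threshold=0.2
    nms-iou-threshold=0.5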

cluster-mode=2
Worked for me!
Although NMS and DBSCAN seem a bit more computationally expensive (my FPS dropped quite a lot), I can now see confidence values in the expected [0, 1] range.