Nvarguscamerasrc Buffer Metadata is missing

I understand all that actually. But thanks.

I am given a reference in Python to a C pointer to a struct. I am looking into ctypes to see if I can perhaps dereference it directly using its id(), but so far no luck.

There has GOT to be a better way than this, though, right? Otherwise NVIDIA is just not implementing this correctly (I have no idea why these aren't GstMetas, which seem like a natural fit to me).
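
For the record, what I was attempting looks roughly like this (a sketch; the struct layout is guessed, and buf is the Python-side buffer reference):

import ctypes

class AuxData(ctypes.Structure):
    # guessed layout, matching the struct definition later in the thread
    _fields_ = [("frame_num", ctypes.c_int64), ("timestamp", ctypes.c_int64)]

# CPython detail: id(buf) is the address of the Python *wrapper* object,
# not of the wrapped GstBuffer, so this cast lands on the PyObject header
# rather than the C struct -- hence the "no luck"
meta = ctypes.cast(id(buf), ctypes.POINTER(AuxData)).contents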

Yes, I agree! GstMetas would be much easier to handle. But this is how they implemented it; maybe NVIDIA could change the approach in future releases.

This takes me back to the original questions for NVIDIA folks:

  1. Do you know why the enable-meta property is not part of nvarguscamerasrc?

  2. Is there a way to activate the metadata in nvarguscamerasrc (not using silent=false, which is not usable from a software perspective)?

Thanks,

Alright, I found it but now I see frame drops:

from ctypes import Structure, POINTER, c_int, c_int64, cast

# mirrors the metadata struct the source attaches as buffer qdata
class GSTBUFMETADATA(Structure):
    _fields_ = [("frame_num", c_int64),
                ("timestamp", c_int64),
                ("sensor_data", POINTER(c_int))]

That defines the ctypes struct; then, for each buffer:

mobject = buf.mini_object  # buf is the GstBuffer from the callback
quark = GLib.quark_from_static_string("GstBufferMetaData")
gpointer = mobject.get_qdata(quark)  # raw address of the attached struct
meta = cast(gpointer, POINTER(GSTBUFMETADATA))
print(meta.contents.frame_num, meta.contents.timestamp)


29 537157864934000
31 537157898253000
33 537157931604000
35 537157964929000
37 537157998249000
39 537158031575000
41 537158064917000
43 537158098250000
45 537158131579000
47 537158164902000
49 537158198245000
51 537158231582000
53 537158264901000
55 537158298244000
57 537158331582000
59 537158364891000
62 537158398244000
63 537158431571000
65 537158464889000
67 537158498225000
69 537158531544000

Do you see something similar? Why am I dropping frames?
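
For context, here is roughly how the whole thing is wired up on my side (a sketch; the pipeline string is a placeholder for my real one, and it assumes the metadata-enabled source discussed in this thread):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib
from ctypes import Structure, POINTER, c_int, c_int64, cast

class GSTBUFMETADATA(Structure):
    _fields_ = [("frame_num", c_int64),
                ("timestamp", c_int64),
                ("sensor_data", POINTER(c_int))]

def on_buffer(pad, info, user_data):
    buf = info.get_buffer()
    quark = GLib.quark_from_static_string("GstBufferMetaData")
    gpointer = buf.mini_object.get_qdata(quark)
    if gpointer:  # qdata is absent if the source does not attach metadata
        meta = cast(gpointer, POINTER(GSTBUFMETADATA)).contents
        print(meta.frame_num, meta.timestamp)
    return Gst.PadProbeReturn.OK

Gst.init(None)
# placeholder pipeline; substitute the real caps/encoder/sink
pipeline = Gst.parse_launch("nvarguscamerasrc name=src ! fakesink")
src_pad = pipeline.get_by_name("src").get_static_pad("src")
src_pad.add_probe(Gst.PadProbeType.BUFFER, on_buffer, None)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()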

Timestamp deltas look excellent now!!! But I am missing about 20 frames from my 10-second clip. There seems to be some frame drop, and I am not sure why.

Does “identity” drop frames if the callback is too slow? That seems unlikely to me (I am literally just storing a few values and returning; see the sketch below).

Also, where did you get that structure definition above with timestamp and frame_num?
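
For reference, the identity hookup in question looks roughly like this (a sketch; the element name "ident" is a placeholder):

records = []

def on_handoff(element, buf):
    # literally just store a few values and return; with default properties
    # (drop-probability=0, sync=false) identity forwards every buffer, so a
    # slow handler back-pressures upstream rather than dropping here
    records.append((buf.pts, buf.get_size()))

ident = pipeline.get_by_name("ident")
ident.set_property("signal-handoffs", True)
ident.connect("handoff", on_handoff)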

Well, I take it back:

259
300
302
299
299
300
298

Other than the initial file, it looks like I see about 300 frames every 10 seconds at 30 fps, which is right. But why do the frame numbers skip? (Why aren't they just 0, 1, 2, 3, 4, etc.?)
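
A quick sanity check on the skips, run over the collected frame_num values (sketch):

def count_skipped(frame_nums):
    # number of sensor frame numbers missing between consecutive buffers;
    # 0 for a perfect 0, 1, 2, 3, ... sequence, one per pair for 29, 31, 33, ...
    return sum(cur - prev - 1 for prev, cur in zip(frame_nums, frame_nums[1:]))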

Hi CarlosR92,
We will look into reviving the enable-meta property in nvarguscamerasrc. Thanks for reporting it.

For a quick solution on r32.1, you may check
https://devtalk.nvidia.com/default/topic/1028387/jetson-tx1/closed-gst-encoding-pipeline-with-frame-processing-using-cuda-and-libargus/post/5256753/#5256753

Can you tell me whether the reported frame numbers are continuous (1, 2, 3, 4, 5) or whether there can be gaps between them?

Also, what is stored in sensor_data above?

@alex.sack, yes, frame numbers are continuous for me! Not sure why they are not for you.

@DaneLLL, yes, in LibArgus it is quite straightforward to query the metadata information, thanks!

Do you have an expected release date for a new version of nvarguscamerasrc including the enable-meta property?

@CarlosR92: So, do you know what is stored in sensor_data?

Hi CarlosR92,

We just filed the request. It may take time for review/evaluation. We will update once it is scheduled. Thanks.

@DaneLLL, thanks for submitting the request! It is a valuable feature, and not having it in JP 4.2 is a blocker for us. Looking forward to the fix.

@alex.sack, no, I don't know what is stored in sensor_data; I haven't used it before. It might be related to the sensor configuration.

@DaneLLL, Do you know what is stored in sensor_data in the metadata returned by nvcamerasrc (in Jetpack 3.3)?

Hi,
sensor_data is clarified to be private in
https://devtalk.nvidia.com/default/topic/1056912/jetson-tx2/gst-plugin-question-do-timestamp-true-enable-meta-true-/post/5359462/#5359462

Hi DaneLLL,

Is there any update on the status of enabling metadata in nvarguscamerasrc or an expected release date?

Hi,
We should have it ready in the next release, r32.3.

Hi @DaneLLL, do you have an ETA for the Jetpack 4.3 release that includes this feature?

Also, is there any chance you can share a binary of nvarguscamerasrc including this feature? It would be really helpful, because it is becoming a showstopper for us.

Thanks for your help.

Same here; I'd like to know whether this is going to be added, as well as any other fixes.

Hi,
Please try the attachment on r32.2.1. It is verified by adding a probe callback to the sample:

#include <stdio.h>
#include <gst/gst.h>

static GQuark gst_buffer_metadata_quark = 0;

typedef struct AuxBufferData {
  gint64 frame_num;
  gint64 timestamp;
} AuxData;

static GstPadProbeReturn
nvargus_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  AuxData *meta = NULL;
  GstBuffer *buf = (GstBuffer *) info->data;

  gst_buffer_metadata_quark = g_quark_from_static_string ("GstBufferMetaData");
  meta = (AuxData *) gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (buf),
      gst_buffer_metadata_quark);

  /* qdata is absent unless the metadata-enabled plugin is used */
  if (meta != NULL)
    printf (">>> Gstreamer: Frame #%" G_GINT64_FORMAT " : Timestamp: %"
        G_GINT64_FORMAT "\n", meta->frame_num, meta->timestamp);

  return GST_PAD_PROBE_OK;
}

/* in pipeline setup, after the pipeline is built: */
GstElement *src = gst_bin_get_by_name (GST_BIN (gst_pipeline), "mysource");
GstPad *src_pad = gst_element_get_static_pad (src, "src");
gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
    nvargus_src_pad_buffer_probe, NULL, NULL);

r32_21_TEST_libgstnvarguscamerasrc.zip (28.7 KB)

Hi @DaneLLL, thank you so much for the binary and the example code. It worked great, and we will integrate it into our application!

Just as a note: consider exposing this information as a GstMeta instead of a quark, since that is more standard; DeepStream already makes heavy use of GstMetas, so it would be nice to have a single way to handle metadata across the pipeline when using NVIDIA's whole GStreamer ecosystem.
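
(For contrast with the qdata/ctypes dance above, this is roughly what consuming a standard meta looks like from Python; GstReferenceTimestampMeta is only an example of an existing meta type, not something nvarguscamerasrc attaches today, and the reference caps string is hypothetical:)

ref_caps = Gst.Caps.from_string("timestamp/x-sensor")  # hypothetical reference
meta = buf.get_reference_timestamp_meta(ref_caps)  # no ctypes needed
if meta:
    print(meta.timestamp, meta.duration)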

Thanks again for your help!
