Facing Issues with the NVIDIA.DSANALYTICS meta

Please provide complete information as applicable to your setup.

• Hardware Platform (T4 GPU)
• DeepStream Version 7.0
• TensorRT Version 8.6.1
• NVIDIA GPU Driver Version 535

I’ve built a DeepStream pipeline with the Python bindings, consisting of a PGIE, 3 SGIEs, nvdsanalytics, and an RTSP sink.
This is what the pipeline looks like:

    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    pipeline.add(sgie2)
    pipeline.add(sgie3)
    pipeline.add(nvanalytics)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv_postosd)
    pipeline.add(caps)
    pipeline.add(encoder)
    pipeline.add(rtppay)
    pipeline.add(sink)


And this is how they are linked:

    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(queue2)
    queue2.link(tracker)
    tracker.link(queue3)
    queue3.link(sgie1)
    sgie1.link(queue4)
    queue4.link(sgie2)
    sgie2.link(queue5)
    queue5.link(sgie3)
    sgie3.link(queue6)
    queue6.link(nvanalytics)
    nvanalytics.link(queue7)
    queue7.link(tiler)
    tiler.link(queue8)
    queue8.link(nvvidconv)
    nvvidconv.link(queue9)
    queue9.link(nvosd)
    nvosd.link(queue10)
    queue10.link(nvvidconv_postosd)
    nvvidconv_postosd.link(queue11)
    queue11.link(caps)
    caps.link(queue12)
    queue12.link(encoder)
    encoder.link(queue13)
    queue13.link(rtppay)
    rtppay.link(queue14)
    queue14.link(sink)

In this pipeline, I am trying to extract object-level line-crossing metadata, for which I am using the following code:

            while l_user_meta:
                try:
                    user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
                    if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type("NVIDIA.DSANALYTICSOBJ.USER_META"):
                        user_meta_data = pyds.NvDsAnalyticsObjInfo.cast(user_meta.user_meta_data)
                        l_class = obj_meta.classifier_meta_list
                        if user_meta_data.lcStatus:
                            # lcStatus lists the line-crossing labels this object crossed in the current frame
                            print("Object crossed:", user_meta_data.lcStatus)
                except StopIteration:
                    break
                try:
                    l_user_meta = l_user_meta.next
                except StopIteration:
                    break
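
For context, this loop sits inside the usual batch → frame → object iteration. Below is a minimal sketch of that surrounding probe; the probe and variable names are assumptions, not the exact code from my pipeline:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    import pyds

    def nvanalytics_src_pad_buffer_probe(pad, info, u_data):
        # Hypothetical probe skeleton: walk batch -> frames -> objects, then the
        # per-object user-meta list exactly as in the snippet above.
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
        l_frame = batch_meta.frame_meta_list
        while l_frame:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            l_obj = frame_meta.obj_meta_list
            while l_obj:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                l_user_meta = obj_meta.obj_user_meta_list  # walked by the loop above
                # ... NvDsAnalyticsObjInfo loop from the snippet goes here ...
                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK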

But the number of times the loop enters the user_meta_data.lcStatus branch does not match the objLCCumCnt value that I get from this code:

        l_user = frame_meta.frame_user_meta_list
        while l_user:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type("NVIDIA.DSANALYTICSFRAME.USER_META"):
                    user_meta_data = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
                    # cumulative crossing count for the line-crossing label 'entry'
                    entry_count = user_meta_data.objLCCumCnt['entry']
            except StopIteration:
                break
            try:
                l_user = l_user.next
            except StopIteration:
                break
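
For completeness, the cumulative map can also be walked in full rather than indexing a single key. A minimal sketch, assuming 'entry' is one of the line-crossing labels defined in the nvdsanalytics config file:

    # objLCCumCnt is a dict keyed by the line-crossing labels from the
    # nvdsanalytics config, so all lines can be reported at once.
    for lc_label, cum_count in user_meta_data.objLCCumCnt.items():
        print(f"Line '{lc_label}': cumulative crossings = {cum_count}")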

What could be the reason for that? While running the pipeline, I sometimes see the warnings below in the logs. Could they be the cause of this issue?

0:01:21.402813104     1 0x7d0bc0004b30 WARN               rtspmedia rtsp-media.c:4935:gst_rtsp_media_set_state: media 0x7d0bc0008e00 was not prepared
0:01:21.521380748     1 0x7d0bc000b300 WARN              rtspstream rtsp-stream.c:4442:gst_rtsp_stream_get_rtpinfo: Could not get payloader stats
0:01:21.560479975     1 0x7d0bc000b300 WARN               rtspmedia rtsp-media.c:4623:gst_rtsp_media_suspend: media 0x7d0bc8004960 was not prepared
0:01:21.601600416     1 0x7d0bc0013cb0 WARN                basesink gstbasesink.c:1249:gst_base_sink_query_latency:<appsink0> warning: Pipeline construction is invalid, please add queues.
0:01:21.601623017     1 0x7d0bc0013cb0 WARN                basesink gstbasesink.c:1249:gst_base_sink_query_latency:<appsink0> warning: Not enough buffering available for  the processing deadline of 0:00:00.020000000, add enough queues to buffer  0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.
0:01:21.601606830     1 0x7d0bc0004b30 WARN                basesink gstbasesink.c:1249:gst_base_sink_query_latency:<appsink0> warning: Pipeline construction is invalid, please add queues.
0:01:21.601696481     1 0x7d0bc0004b30 WARN                basesink gstbasesink.c:1249:gst_base_sink_query_latency:<appsink0> warning: Not enough buffering available for  the processing deadline of 0:00:00.020000000, add enough queues to buffer  0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.
0:01:21.601884542     1 0x7d0bc0004b30 WARN               rtspmedia rtsp-media.c:3281:default_handle_message: 0x7d0bc8004960: got warning Pipeline construction is invalid, please add queues. (../libs/gst/base/gstbasesink.c(1249): gst_base_sink_query_latency (): /GstPipeline:media-pipeline/GstAppSink:appsink0:
Not enough buffering available for  the processing deadline of 0:00:00.020000000, add enough queues to buffer  0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.)
0:01:21.601940280     1 0x7d0bc0004b30 WARN               rtspmedia rtsp-media.c:3281:default_handle_message: 0x7d0bc8004960: got warning Pipeline construction is invalid, please add queues. (../libs/gst/base/gstbasesink.c(1249): gst_base_sink_query_latency (): /GstPipeline:media-pipeline/GstAppSink:appsink0:
Not enough buffering available for  the processing deadline of 0:00:00.020000000, add enough queues to buffer  0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.)

0:01:22.770302654     1 0x7d0bc0004b30 WARN                basesink gstbasesink.c:1249:gst_base_sink_query_latency:<appsink0> warning: Pipeline construction is invalid, please add queues.
0:01:22.770330330     1 0x7d0bc0004b30 WARN                basesink gstbasesink.c:1249:gst_base_sink_query_latency:<appsink0> warning: Not enough buffering available for  the processing deadline of 0:00:00.020000000, add enough queues to buffer  0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.
0:01:22.770489622     1 0x7d0bc0004b30 WARN               rtspmedia rtsp-media.c:3281:default_handle_message: 0x7d0bc8004960: got warning Pipeline construction is invalid, please add queues. (../libs/gst/base/gstbasesink.c(1249): gst_base_sink_query_latency (): /GstPipeline:media-pipeline/GstAppSink:appsink0:
Not enough buffering available for  the processing deadline of 0:00:00.020000000, add enough queues to buffer  0:00:00.020000000 additional data. Shortening processing latency to 0:00:00.000000000.)

How many sources do you have? Why do you think these two values should be the same? lcStatus is in the object user meta, while objLCCumCnt is in the frame user meta. Please ignore those GStreamer warning logs.
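
In other words, the two pieces of meta hang off different lists; a minimal sketch of the two attach points (variable names assumed):

    # Object-level analytics meta: attached to each detected object
    l_user_meta = obj_meta.obj_user_meta_list    # -> NvDsAnalyticsObjInfo (lcStatus, ...)
    # Frame-level analytics meta: attached to each frame
    l_user = frame_meta.frame_user_meta_list     # -> NvDsAnalyticsFrameMeta (objLCCumCnt, ...)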

I face this issue even with a single source.
I don’t expect the two values to be identical, but shouldn’t objLCCumCnt increase every time lcStatus is set for some object?

Can using confluent_kafka to produce messages to a Kafka cluster, instead of nvmsgbroker, cause performance overhead or data loss?
Or does making synchronous HTTP API calls in the pipeline cause performance overhead or data loss?

Sorry for the late reply. objLCCumCnt is a map: the key is the line-crossing label and the value is the total cumulative count of crossings for that line. The cumulative count increases when objects cross the line. lcStatus is a vector indicating which line(s) the object crossed.
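
To make the relationship concrete, here is a minimal per-frame bookkeeping sketch (all names are assumptions, not from this thread) that tallies lcStatus hits across objects and prints them next to the change in objLCCumCnt, which can help show where the two numbers diverge:

    prev_cum = {}   # objLCCumCnt values seen in the previous frame, per label (single-source assumption)

    def report_line_crossings(frame_lc_hits, obj_lc_cum_cnt):
        """frame_lc_hits: dict of label -> number of objects whose lcStatus
        contained that label in this frame; obj_lc_cum_cnt: the objLCCumCnt
        dict taken from NvDsAnalyticsFrameMeta of the same frame."""
        global prev_cum
        for label, cum in obj_lc_cum_cnt.items():
            delta = cum - prev_cum.get(label, 0)
            print(f"{label}: lcStatus hits this frame = {frame_lc_hits.get(label, 0)}, "
                  f"cumulative count increased by {delta} (total {cum})")
        prev_cum = dict(obj_lc_cum_cnt)
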
Could you open a new topic for the nvmsgbroker question, since it is not related to the current topic? Thanks!

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
