DeepStream 6.1 freezes at a specific position

I have a simple DeepStream pipeline consisting of a face detector and an alignment model. I based my code on the deepstream_test_3.py example in order to support multiple sources. The problem is the following:

I can process one, two, and even three videos without any problem. But with 4 videos everything works fine until a certain number of frames is reached, then the pipeline freezes; it doesn’t report an error, it just freezes. With even more videos (5, 6, or 7) the same thing happens, only earlier (after fewer frames). It is as if there were a limit on the total number of inferences.

If I just use the detector, then it works fine regardless of the number of source videos.

I am using the DeepStream 6.1-devel container with an NVIDIA GeForce GTX 1650 GPU.

I attach the pgie config files for the detector and the classifier:

pgie_det.txt

[property]
gpu-id=0
net-scale-factor=0.0078125
model-file=models/detector.caffemodel
proto-file=models/detector.prototxt
labelfile-path=models/det_labels.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
output-blob-names=boxes;scores
parse-bbox-func-name=NvDsInferParseULFD
custom-lib-path=nvdsinfer_custom_parsers/libnvdsinfer_ulfd_impl.so
offsets=127.5;127.5;127.5
model-color-format=0
classifier-threshold=0.5
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
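The clustering configured above (pre-cluster-threshold plus NMS; the log further down shows NMS with topK = 20 and threshold 0.5) happens inside libnvdsinfer_ulfd_impl.so, which is not shown in the thread. As a rough pure-Python sketch of that score-filter-plus-NMS step (not the actual library code; thresholds are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, score_thresh=0.2, iou_thresh=0.5):
    """Keep indices of the highest-scoring boxes, dropping overlaps above iou_thresh."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```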

pgie_align.txt

[property]
gpu-id=0
net-scale-factor=0.0078125
onnx-file=models/onet.onnx
force-implicit-batch-dim=0
batch-size=1
network-mode=0
process-mode=2
input-object-min-width=0
input-object-min-height=0
model-color-format=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1
output-blob-names=boxes;landmarks
offsets=127.5;127.5;127.5
classifier-async-mode=0
output-tensor-meta=1
is-classifier=1
secondary-reinfer-interval=0
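The `landmarks` output blob is a flat 10-float tensor (the engine info in the log confirms the shape). Downstream code (FACE_UTILS, not shown) presumably reshapes it into five (x, y) points; a minimal sketch, assuming an [x1..x5, y1..y5] layout (ONet variants differ, so treat the layout as an assumption):

```python
def landmarks_to_points(flat, width, height):
    """Turn a flat 10-float landmark tensor into five pixel-space (x, y) pairs.

    Assumes normalized coordinates laid out as [x1..x5, y1..y5]; other ONet
    exports interleave them as [x1, y1, ..., x5, y5] instead.
    """
    assert len(flat) == 10
    xs, ys = flat[:5], flat[5:]
    return [(x * width, y * height) for x, y in zip(xs, ys)]
```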

Can you provide the pipeline in detail?

Sure, here it is:

#!/usr/bin/env python3

import sys
import math
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.bus_call import bus_call
from common.FPS import GETFPS
import common.utils as UTILS
import face_recognition_utils as FACE_UTILS
import pyds

fps_streams={}

PGIE_ALIGN_ID = 2
PGIE_RECOG_ID = 3
GOOD_FOR_RECOG_ID=2
DEBUG_CODE=True

MUXER_OUTPUT_WIDTH=1920
MUXER_OUTPUT_HEIGHT=1080
MUXER_BATCH_TIMEOUT_USEC=4000000
TILED_OUTPUT_WIDTH=1280
TILED_OUTPUT_HEIGHT=720
GST_CAPS_FEATURES_NVMM="memory:NVMM"
OSD_PROCESS_MODE=0
OSD_DISPLAY_TEXT=1

def pgie_align_src_pad_buffer_probe(pad,info,u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        camara_id =  frame_meta.source_id        
        l_obj=frame_meta.obj_meta_list
        # detected faces 
        while l_obj is not None:
            try: 
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            tracker_id = obj_meta.object_id 
            # accessing the object's user meta list
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                try:
                    user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                except StopIteration:
                    break
                
                if user_meta.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    try:
                        l_user = l_user.next  # must advance before skipping, or this loop never terminates
                    except StopIteration:
                        break
                    continue

                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)               

                # process the align part 
                if tensor_meta.unique_id == PGIE_ALIGN_ID:                                   
                    almost_frontal, (roll, pitch, yaw) = FACE_UTILS.get_face_orientation(tensor_meta=tensor_meta)
                    print("CameraId", camara_id, "FaceID:", tracker_id, "Angles:", roll, pitch, yaw)

                try:
                    l_user = l_user.next
                except StopIteration:
                    break

            try: 
                l_obj=l_obj.next
            except StopIteration:
                break
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

def run(stream_sources):

    number_sources = len(stream_sources)
    for i in range(number_sources): fps_streams["stream{0}".format(i)]=GETFPS(i)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create GStreamer elements
    # Create the Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
        return 

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = UTILS.create_gst_element("nvstreammux", "Stream-muxer")
    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ",i," \n ")
        uri_name= stream_sources[i] 
        if uri_name.find("rtsp://") == 0 :
            is_live = True
        source_bin=UTILS.create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname="sink_%u" %i
        sinkpad= streammux.get_request_pad(padname) 
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad=source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)

    if is_live:
        print("At least one of the sources is live")
        streammux.set_property('live-source', 1)

    queue1 = UTILS.create_gst_element("queue", "queue1")
    pgie_det     =  UTILS.create_gst_element("nvinfer", "pgie_det")    
    queue2 = UTILS.create_gst_element("queue", "queue2")
    pgie_tracker =  UTILS.create_gst_element("nvtracker", "pgie_tracker")
    queue3 = UTILS.create_gst_element("queue", "queue3")
    pgie_align   =  UTILS.create_gst_element("nvinfer", "pgie_align")
    queue4 = UTILS.create_gst_element("queue", "queue4")
    
    # config mux
    streammux.set_property('width', MUXER_OUTPUT_WIDTH)
    streammux.set_property('height', MUXER_OUTPUT_HEIGHT)
    streammux.set_property('batch-size', number_sources)
    streammux.set_property('batched-push-timeout', MUXER_BATCH_TIMEOUT_USEC)

    # config pgies  
    pgie_det.set_property('config-file-path', "face_det_ULFD.txt")
    pgie_batch_size=pgie_det.get_property("batch-size")

    pgie_align.set_property('config-file-path', "onet_align.txt")
    UTILS.configure_tracker(tracker=pgie_tracker,yml_file='tracker_config.txt')

    # TODO(rbt): avoid changing the batch size with ONNX models
    if(pgie_batch_size != number_sources):
        print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
        pgie_det.set_property("batch-size",number_sources)

    # Adding elements to Pipeline 
    pipeline.add(queue1)
    pipeline.add(pgie_det)
    pipeline.add(queue2)
    pipeline.add(pgie_tracker)
    pipeline.add(queue3)
    pipeline.add(pgie_align)
    pipeline.add(queue4)

    # link elements
    streammux.link(queue1); queue1.link(pgie_det)
    pgie_det.link(queue2); queue2.link(pgie_tracker)   
    pgie_tracker.link(queue3); queue3.link(pgie_align)   
    pgie_align.link(queue4); 

    if DEBUG_CODE == False:
        fakesink = UTILS.create_gst_element("fakesink", "fakesink")
        pipeline.add(fakesink)
        queue4.link(fakesink)
    else:
        tiler=UTILS.create_gst_element("nvmultistreamtiler", "nvtiler")
        nvvidconv = UTILS.create_gst_element("nvvideoconvert", "convertor")
        nvosd = UTILS.create_gst_element("nvdsosd", "onscreendisplay")
        sink = UTILS.create_gst_element("nveglglessink", "nvvideo-renderer")
        queue6 = UTILS.create_gst_element("queue", "queue6")
        queue7 = UTILS.create_gst_element("queue", "queue7")
        queue8 = UTILS.create_gst_element("queue", "queue8")

        nvosd.set_property('process-mode',OSD_PROCESS_MODE)
        nvosd.set_property('display-text',OSD_DISPLAY_TEXT)
        tiler_rows=int(math.sqrt(number_sources))
        tiler_columns=int(math.ceil((1.0*number_sources)/tiler_rows))
        tiler.set_property("rows",tiler_rows)
        tiler.set_property("columns",tiler_columns)
        tiler.set_property("width", TILED_OUTPUT_WIDTH)
        tiler.set_property("height", TILED_OUTPUT_HEIGHT)
        sink.set_property("qos",0)

        pipeline.add(tiler)
        pipeline.add(nvvidconv)
        pipeline.add(nvosd)
        pipeline.add(sink)
        pipeline.add(queue6)
        pipeline.add(queue7)
        pipeline.add(queue8)

        queue4.link(tiler); tiler.link(queue6)
        queue6.link(nvvidconv); nvvidconv.link(queue7)
        queue7.link(nvosd); nvosd.link(queue8); 
        queue8.link(sink)

    # create an event loop and feed gstreamer bus mesages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # add buffer probes
    UTILS.add_probe_callback(element=pgie_align, pad_name="src", funct=pgie_align_src_pad_buffer_probe)    

    # List the sources
    print("Now playing...")
    for i, source in enumerate(stream_sources):
        print(i, ": ", source)

    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    run(stream_sources=[
        "file:///app/samplevideos/face-demographics-walking-and-pause.mp4",
        "file:///app/samplevideos/head-pose-face-detection-female-and-male.mp4",
        "file:///app/samplevideos/head-pose-face-detection-male.mp4", 
        "file:///app/samplevideos/face-demographics-walking.mp4"
    ])    
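FACE_UTILS.get_face_orientation is not shown in the thread. For the roll angle at least, an estimate can be derived from the two eye landmarks; a hedged sketch (the actual helper may work differently, and pitch/yaw would need more landmarks or a 3D face model):

```python
import math

def estimate_roll(left_eye, right_eye):
    """In-plane rotation (roll) in degrees from two eye landmarks.

    A sketch only: the thread does not show the FACE_UTILS internals, so this
    illustrates one common approach rather than the code actually used.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```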

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

I am working with the nvcr.io/nvidia/deepstream:6.1-devel Docker image on a Kubuntu 20.04 host. TensorRT is 8.2.5, DeepStream is 6.1, the GPU driver is 515.43.04, and the device is a GeForce GTX 1650.

Hi @roberto.cruz.rdg, could you run deepstream_test_3 with your videos (4, 5, or 6 file sources) in your environment and see whether it freezes?
Also, since I can see a DEBUG_CODE mode in your code: does it freeze when DEBUG_CODE is enabled?

No, with “deepstream_test_3.py” it doesn’t freeze. But it also doesn’t freeze with my code if I just use the detector!

On the other hand, something curious happens. I added the DEBUG_CODE global variable because I thought the problem was related to rendering on screen; however, with DEBUG_CODE=False (no rendering) the problem gets worse (it freezes sooner!). I’m new to DeepStream, so maybe I’m doing something wrong.

In fact, I don’t need or want to render anything on screen, so I’m especially concerned about that part.

OK, so let’s debug the DEBUG_CODE=False case.
From the code, the pipeline is:
srcbin → streammux → queue → pgie_det → queue → pgie_tracker → queue → pgie_align → queue → fakesink
Could you attach a log from the moment it freezes (set the environment first with “export GST_DEBUG=3”)? Thanks
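For reference, the same debug level can be set from inside the script before Gst.init() is called, equivalent to the export above (assuming nothing else configures GST_DEBUG):

```python
import os

# Equivalent to `export GST_DEBUG=3` in the shell; level 3 prints
# WARN-and-above messages from every GStreamer element.
# This must run before Gst.init() for GStreamer to pick it up.
os.environ["GST_DEBUG"] = "3"
```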

nvidia_forum.py:97: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: https://wiki.gnome.org/PyGObject/Threading
  GObject.threads_init()
Creating Pipeline 
 
Creating nvstreammux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating source_bin  1  
 
Creating source bin
source-bin-01
Creating source_bin  2  
 
Creating source bin
source-bin-02
Creating source_bin  3  
 
Creating source bin
source-bin-03
Creating queue 
 
Creating nvinfer 
 
Creating queue 
 
Creating nvtracker 
 
Creating queue 
 
Creating nvinfer 
 
Creating queue 
 
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
WARNING: Overriding infer-config batch-size 1  with number of sources  4  

Creating fakesink 
 
nvidia_forum.py:213: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
  loop = GObject.MainLoop()
Now playing...
0 :  file:///app/samplevideos/face-demographics-walking-and-pause.mp4
1 :  file:///app/samplevideos/head-pose-face-detection-female-and-male.mp4
2 :  file:///app/samplevideos/head-pose-face-detection-male.mp4
3 :  file:///app/samplevideos/face-demographics-walking.mp4
Starting pipeline 

0:00:00.134267685   987      0x27c7930 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pgie_align> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
0:00:16.691871197   987      0x27c7930 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pgie_align> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 2]: serialize cuda engine to file: /host/REPOS/Video Object Recognition System/modules/FaceRecognitionModule/models/face_recognition/Onet.onnx_b1_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input.1         3x48x48         
1   OUTPUT kFLOAT scores          2               
2   OUTPUT kFLOAT boxes           4               
3   OUTPUT kFLOAT landmarks       10              

0:00:16.706294396   987      0x27c7930 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<pgie_align> [UID 2]: Load new model:onet_align.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:16.733459517   987      0x27c7930 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:17.089277471   987      0x27c7930 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/host/REPOS/Video Object Recognition System/modules/FaceRecognitionModule/models/face_detectors/RFB-320.caffemodel_b3_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x240x320       
1   OUTPUT kFLOAT boxes           4420x4          
2   OUTPUT kFLOAT scores          4420x2          

0:00:17.100557505   987      0x27c7930 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 3 whereas 4 has been requested
0:00:17.100578082   987      0x27c7930 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/host/REPOS/Video Object Recognition System/modules/FaceRecognitionModule/models/face_detectors/RFB-320.caffemodel_b3_gpu0_fp32.engine failed to match config params, trying rebuild
0:00:17.101090882   987      0x27c7930 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
0:00:23.744442860   987      0x27c7930 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /host/REPOS/Video Object Recognition System/modules/FaceRecognitionModule/models/face_detectors/RFB-320.caffemodel_b4_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x240x320       
1   OUTPUT kFLOAT boxes           4420x4          
2   OUTPUT kFLOAT scores          4420x2          

0:00:23.759405813   987      0x27c7930 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<pgie_det> [UID 1]: Load new model:face_det_ULFD.txt sucessfully
0:00:23.759784569   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source 

Decodebin child added: decodebin0 

0:00:23.760289747   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
0:00:23.760485702   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source 

Decodebin child added: decodebin1 

0:00:23.760643535   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
0:00:23.760883815   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source 

Decodebin child added: decodebin2 

0:00:23.761091557   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
0:00:23.761249592   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source 

Decodebin child added: decodebin3 

0:00:23.761480521   987      0x27c7930 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: qtdemux0 

Decodebin child added: qtdemux1 

Decodebin child added: qtdemux2 

Decodebin child added: qtdemux3 

0:00:23.764902759   987 0x7fc164037f60 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TIM
0:00:23.764928209   987 0x7fc164037f60 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSC
0:00:23.764934069   987 0x7fc164037f60 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSZ
0:00:23.764985302   987 0x7fc164037f60 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux3> failed to find fragment defaults for stream 1
0:00:23.764946991   987 0x7fc15c056cc0 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TIM
0:00:23.765005955   987 0x7fc15c056cc0 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSC
0:00:23.765036050   987 0x7fc15c056cc0 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSZ
0:00:23.765048744   987     0x19b55f60 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TIM
0:00:23.765059276   987 0x7fc164037f60 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux3> failed to find fragment defaults for stream 2
0:00:23.765075679   987 0x7fc150014a40 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TIM
0:00:23.765083013   987     0x19b55f60 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSC
0:00:23.765103269   987     0x19b55f60 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSZ
0:00:23.765090475   987 0x7fc15c056cc0 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux1> failed to find fragment defaults for stream 1
0:00:23.765129244   987     0x19b55f60 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
0:00:23.765144498   987 0x7fc15c056cc0 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux1> failed to find fragment defaults for stream 2
0:00:23.765093397   987 0x7fc150014a40 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSC
0:00:23.765185919   987 0x7fc150014a40 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type .TSZ
0:00:23.765183815   987     0x19b55f60 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 2
0:00:23.765299964   987 0x7fc150014a40 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux2> failed to find fragment defaults for stream 1
0:00:23.765353488   987 0x7fc150014a40 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux2> failed to find fragment defaults for stream 2
Decodebin child added: multiqueue1 

Decodebin child added: multiqueue0 

Decodebin child added: multiqueue3 

Decodebin child added: multiqueue2 

Decodebin child added: h264parse0 

Decodebin child added: h264parse1 

Decodebin child added: h264parse2 

Decodebin child added: capsfilter0 

Decodebin child added: h264parse3 

Decodebin child added: capsfilter1 

Decodebin child added: capsfilter2 

Decodebin child added: capsfilter3 

Decodebin child added: aacparse0 

Decodebin child added: aacparse1 

Decodebin child added: aacparse2 

Decodebin child added: aacparse3 

Decodebin child added: avdec_aac0 

Decodebin child added: avdec_aac1 

Decodebin child added: avdec_aac3 

Decodebin child added: avdec_aac2 

Decodebin child added: nvv4l2decoder0 

Decodebin child added: nvv4l2decoder1 

Decodebin child added: nvv4l2decoder2 

Decodebin child added: nvv4l2decoder3 

0:00:23.801508913   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801523732   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:23.801529734   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801535116   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:23.801547337   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801547340   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.801552591   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:23.801574555   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801575230   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.801580685   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:23.801590036   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:23.801586867   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.801599223   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801598020   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.801608436   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:23.801609808   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:23.801563407   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:23.801611047   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:23.801619757   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.801619611   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.801618924   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.801640363   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801636761   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:23.801668157   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.801675539   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:23.801629752   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:23.801650528   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:23.801679933   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.801942615   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.801949322   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:23.801952359   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:23.801958672   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.801964364   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:23.801968598   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.801644036   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:23.801952846   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801983172   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.801984689   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:23.801988239   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:23.801990298   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.801996324   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:23.801997342   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.802009744   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:23.802023331   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:23.801970722   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.802010659   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.802052389   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.802049370   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:23.802026001   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802081532   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.802091403   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:23.802104962   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.802110950   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:23.802096263   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:23.802122260   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.802151812   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:23.802155452   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.802140427   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:23.802165635   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:23.802189254   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.802196336   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:23.802169351   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:23.802219617   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802227407   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:23.802241899   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802247409   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H265
0:00:23.802251876   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802256503   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H265
0:00:23.802265826   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802271748   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP90
0:00:23.802276338   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802281586   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP90
0:00:23.802289224   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802294862   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:23.802299834   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802305313   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:23.802313898   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802319591   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:23.802324795   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:23.802329599   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:23.802592684   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:23.802603684   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:23.802609690   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:23.802615225   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12
0:00:23.802622411   987 0x7fc14802c800 WARN                    v4l2 gstv4l2object.c:2394:gst_v4l2_object_add_interlace_mode:0x7fc134018b10 Failed to determine interlace mode
0:00:23.802195501   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.805307568   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:23.802181993   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.802211136   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805358275   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.805355759   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat H265
0:00:23.805372391   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:23.805372195   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805390147   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat H265
0:00:23.805380342   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:23.805400810   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805406541   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat VP90
0:00:23.805411664   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805416776   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat VP90
0:00:23.805425452   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805430992   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat VP80
0:00:23.805436111   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805430767   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.805466827   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat VP80
0:00:23.805474706   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:23.805505153   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.805488485   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805530164   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe minimum capture size for pixelformat H264
0:00:23.805536314   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:sink> Unable to try format: Unknown error -1
0:00:23.805541451   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:sink> Could not probe maximum capture size for pixelformat H264
0:00:23.805565903   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:23.805569375   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:src> Unable to try format: Unknown error -1
0:00:23.805600226   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:src> Could not probe minimum capture size for pixelformat NM12
0:00:23.805606406   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder2:src> Unable to try format: Unknown error -1
0:00:23.805626449   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder2:src> Could not probe maximum capture size for pixelformat NM12
0:00:23.805632126   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.805658414   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:23.805645115   987 0x7fc144006d80 WARN                    v4l2 gstv4l2object.c:2394:gst_v4l2_object_add_interlace_mode:0x7fc12c0148a0 Failed to determine interlace mode
0:00:23.805650162   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.805664725   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809040415   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:23.809044779   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:23.809061786   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809069205   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat H265
0:00:23.809074303   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809081037   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat H265
0:00:23.809089364   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809098946   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat VP90
0:00:23.809107981   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809113316   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat VP90
0:00:23.809061622   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809120935   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809129598   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat VP80
0:00:23.809137470   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809142339   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat VP80
0:00:23.809142847   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat H265
0:00:23.809150826   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809170463   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809175934   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat H264
0:00:23.809177014   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat H265
0:00:23.809183335   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:23.809193959   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat H264
0:00:23.809194482   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809206335   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat VP90
0:00:23.809211778   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809217748   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat VP90
0:00:23.809225492   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809232542   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat VP80
0:00:23.809225116   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:src> Unable to try format: Unknown error -1
0:00:23.809237583   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809246497   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:src> Could not probe minimum capture size for pixelformat NM12
0:00:23.809253120   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:src> Unable to try format: Unknown error -1
0:00:23.809252919   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat VP80
0:00:23.809261083   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:src> Could not probe maximum capture size for pixelformat NM12
0:00:23.809271124   987 0x7fc14802d2a0 WARN                    v4l2 gstv4l2object.c:2394:gst_v4l2_object_add_interlace_mode:0x7fc1380154a0 Failed to determine interlace mode
0:00:23.809276223   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809285078   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe minimum capture size for pixelformat H264
0:00:23.809291020   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:sink> Unable to try format: Unknown error -1
0:00:23.809296576   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:sink> Could not probe maximum capture size for pixelformat H264
0:00:23.809322043   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:src> Unable to try format: Unknown error -1
0:00:23.809328577   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2941:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:src> Could not probe minimum capture size for pixelformat NM12
0:00:23.809333721   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:3056:gst_v4l2_object_get_nearest_size:<nvv4l2decoder3:src> Unable to try format: Unknown error -1
0:00:23.809339069   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2947:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder3:src> Could not probe maximum capture size for pixelformat NM12
0:00:23.809346187   987 0x7fc15400dcc0 WARN                    v4l2 gstv4l2object.c:2394:gst_v4l2_object_add_interlace_mode:0x7fc140012720 Failed to determine interlace mode
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fc275d4dc40 (GstCapsFeatures at 0x7fc164035540)>
In cb_newpad

gstname= audio/x-raw
0:00:23.907228840   987 0x7fc14802c800 WARN            v4l2videodec gstv4l2videodec.c:1779:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:23.907258567   987 0x7fc14802c800 WARN          v4l2bufferpool gstv4l2bufferpool.c:1049:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:23.908451463   987 0x7fc13c00af00 WARN          v4l2bufferpool gstv4l2bufferpool.c:1499:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fc275d4dc40 (GstCapsFeatures at 0x7fc164033e80)>
In cb_newpad

gstname= audio/x-raw
0:00:23.911688121   987 0x7fc144006d80 WARN            v4l2videodec gstv4l2videodec.c:1779:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder2> Duration invalid, not setting latency
0:00:23.911860310   987 0x7fc144006d80 WARN          v4l2bufferpool gstv4l2bufferpool.c:1049:gst_v4l2_buffer_pool_start:<nvv4l2decoder2:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:23.913086391   987 0x7fc134023760 WARN          v4l2bufferpool gstv4l2bufferpool.c:1499:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder2:pool:src> Driver should never set v4l2_buffer.field to ANY
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fc275d4dc40 (GstCapsFeatures at 0x7fc138011200)>
In cb_newpad

In cb_newpad

gstname= audio/x-raw
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fc275d4dd00 (GstCapsFeatures at 0x7fc14000f260)>
In cb_newpad

gstname= audio/x-raw
0:00:23.917430682   987 0x7fc15400dcc0 WARN            v4l2videodec gstv4l2videodec.c:1779:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder3> Duration invalid, not setting latency
0:00:23.917453632   987 0x7fc15400dcc0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1049:gst_v4l2_buffer_pool_start:<nvv4l2decoder3:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:23.918466721   987 0x7fc13801ed20 WARN          v4l2bufferpool gstv4l2bufferpool.c:1499:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder3:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:23.919152679   987 0x7fc14802d2a0 WARN            v4l2videodec gstv4l2videodec.c:1779:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder1> Duration invalid, not setting latency
0:00:23.919175732   987 0x7fc14802d2a0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1049:gst_v4l2_buffer_pool_start:<nvv4l2decoder1:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:23.923324625   987 0x7fc12c01b240 WARN          v4l2bufferpool gstv4l2bufferpool.c:1499:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder1:pool:src> Driver should never set v4l2_buffer.field to ANY

After Ctrl+C I got this:

[NvMultiObjectTracker] De-initialized
0:07:13.163486245   987 0x7fc13801ed20 WARN            videodecoder gstvideodecoder.c:2759:gst_video_decoder_prepare_finish_frame:<nvv4l2decoder3> decreasing timestamp (0:00:08.250000000 < 0:00:09.250000000)
0:07:13.164500247   987 0x7fc13c00af00 WARN            videodecoder gstvideodecoder.c:2759:gst_video_decoder_prepare_finish_frame:<nvv4l2decoder0> decreasing timestamp (0:00:08.250000000 < 0:00:09.250000000)
0:07:13.165060602   987 0x7fc134023760 WARN            videodecoder gstvideodecoder.c:2759:gst_video_decoder_prepare_finish_frame:<nvv4l2decoder2> decreasing timestamp (0:00:08.250000000 < 0:00:09.166666666)
0:07:13.165599461   987 0x7fc12c01b240 WARN            videodecoder gstvideodecoder.c:2759:gst_video_decoder_prepare_finish_frame:<nvv4l2decoder1> decreasing timestamp (0:00:08.250000000 < 0:00:09.250000000)

There seems to be no error message in the log.
Since the pipeline runs well with only the detector model, the problem is probably in the alignment model part. There is no limit on the total number of inferences, but you do need to configure the correct parameters.
So could you explain your model in detail, especially the alignment model (e.g., purpose, layers, input, output, pre- and post-processing)?
Also, if you want to use the face_detector result as the input of the alignment model, you should use an SGIE, not a PGIE, in your code. You can refer to deepstream_test_2.py.
If you want to customize your own model, please set the right parameters; refer to https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

About the align model:
Format: ONNX
Input: 1x3x48x48
ScaleFactor: 1/128
Mean: 127.5,127.5,127.5
RGB: True
Output: 1x10 (landmarks), 1x4 (boxes), 1x2 (scores)

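Combining the config and the model description above, nvinfer's preprocessing reduces to `y = net-scale-factor * (x - offset)`. A quick sanity check in plain Python (values taken from the config files in this thread; this is an illustration, not DeepStream API):

```python
# Sanity check of the nvinfer preprocessing formula:
#   y = net-scale-factor * (x - offset)
# with net-scale-factor = 0.0078125 (i.e. 1/128) and offsets = 127.5,
# as set in pgie_align.txt above.

NET_SCALE_FACTOR = 0.0078125  # 1/128
OFFSET = 127.5

def preprocess_pixel(x: float) -> float:
    """Map an 8-bit pixel value to the network input range."""
    return NET_SCALE_FACTOR * (x - OFFSET)

# 0..255 maps to roughly [-1, 1):
print(preprocess_pixel(0))    # -0.99609375
print(preprocess_pixel(255))  #  0.99609375
```

So the align model expects inputs normalized to approximately [-1, 1], which matches the stated ScaleFactor of 1/128 and mean of 127.5.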
I can show the code that converts the model's output metadata to numpy arrays, but in fact the problem occurs even if I don't register any probe functions. Regarding pgie vs. sgie: I do use the align model as an SGIE; I just used a wrong (confusing) variable name in the code, as you can see in the config file content:

[property]
gpu-id=0
net-scale-factor=0.0078125
onnx-file=models/face_recognition/Onet.onnx
#model-engine-file=models/face_recognition/Onet.onnx_b1_gpu0_fp32.engine
force-implicit-batch-dim=0
batch-size=1
network-mode=0
process-mode=2
input-object-min-width=128
input-object-min-height=128
model-color-format=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1
output-blob-names=boxes;landmarks
offsets=127.5;127.5;127.5
classifier-async-mode=0
output-tensor-meta=1
is-classifier=1
secondary-reinfer-interval=0

If you want, I can prepare everything necessary to reproduce the problem and send it to you.

Hi, @roberto.cruz.rdg.
Judging from your align model and config, the batch-size may be set incorrectly. You can try changing it to the number of video sources in your config file.
Your code only overrides the batch-size of the pgie_det inference element, but does not change it on the pgie_align plugin:

    if(pgie_batch_size != number_sources):
        print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
        pgie_det.set_property("batch-size",number_sources)
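A minimal sketch of extending that override to both inference elements. The helper below is hypothetical (the element names `pgie_det`/`pgie_align` are the ones used in this thread, and the Gst calls are shown as comments so the selection logic itself stays library-free); the SGIE default of 16 follows the deepstream_test_2.py sample configs:

```python
# Hypothetical helper: pick batch sizes for both inference elements.
# The PGIE batches full frames, so it should match the number of sources;
# the SGIE batches detected objects, so it is usually set larger
# (deepstream_test_2.py ships SGIE configs with batch-size=16).

def choose_batch_sizes(number_sources: int, sgie_batch: int = 16):
    """Return (pgie_batch_size, sgie_batch_size)."""
    return number_sources, max(sgie_batch, number_sources)

pgie_bs, sgie_bs = choose_batch_sizes(number_sources=4)
# In the pipeline code this would then be applied as:
#   pgie_det.set_property("batch-size", pgie_bs)
#   pgie_align.set_property("batch-size", sgie_bs)
print(pgie_bs, sgie_bs)  # 4 16
```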

I will try that, but I have some questions:

1-> Dynamic batch size is not supported yet in ONNX models, so I guess I need to create many Onet.onnx versions (OnetB1.onnx, OnetB2.onnx, OnetB3.onnx, …) with different batch sizes that match my number of sources, right?
2-> Isn't batch size = 1 supposed to work, even though it's not ideal?

I will try making Onetb4.onnx (an Onet version with batch size 4) and I will let you know.

Update: I tried that solution and it didn't work. I got:
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch

That makes sense, since the align model works on detected bboxes, and the number of bboxes is variable and, in general, different from the number of sources.

1. We suggest that your model support dynamic batch size, so that the TensorRT engine can configure the batch size dynamically.

2. We suggest that the batch size in the pgie equal the number of video sources, and that the sgie batch size be larger (e.g., in the config files from deepstream_test_2.py, the sgie batch size is 16):
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test2/dstest2_sgie1_config.txt

3. If you set the align model's inference batch size to 1 but your detection model has detected many faces, inference will be very slow. That may be the reason for the freeze.
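To see why a small SGIE batch hurts: with batch-size = 1, every detected object becomes its own inference call, so the number of calls per muxed frame batch grows with the total face count. A back-of-the-envelope sketch (plain Python, not DeepStream API):

```python
import math

def sgie_calls_per_batch(faces_per_source, sgie_batch_size):
    """Number of SGIE inference calls needed for one muxed frame batch."""
    total_objects = sum(faces_per_source)
    return math.ceil(total_objects / sgie_batch_size)

# 4 sources with 3 detected faces each:
print(sgie_calls_per_batch([3, 3, 3, 3], 1))   # 12 calls per batch
print(sgie_calls_per_batch([3, 3, 3, 3], 16))  # 1 call per batch
```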

You can try the following methods to debug it:

1. You can use the following method to test the performance of your environment and find the best batch size:
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps#measure-the-inference-perf

2. You can test with videos that contain a small number of people, or even only one person.


I understand that it is best to use dynamic batch sizing, but it is not compatible with my ONNX models. I also have some Caffe versions of the Onet model, but I got an error with the PReLU layer and could not solve it despite many attempts. On the other hand, ONet is a very fast model, even on a modest CPU. I will try your suggestion and let you know.

Any plan to support dynamic batch sizing with ONNX models?

@roberto.cruz.rdg
===>Any plan to support dynamic batch sizing with ONNX models?

DeepStream supports setting the batch-size via the config file or in code.
But when you use your own ONNX models, you should look into “ONNX dynamic batch” and make your model support dynamic batch sizing; I'm sorry, we cannot do that for you.
If your model cannot support dynamic batch sizing, we suggest you find the best batch-size for your model and set it on the nvinfer plugin. Thanks

Sorry for the delay:

I prepared an Onet model with batch_size=4 and it doesn't work. I tried it with one video containing only one face and one containing two faces; it crashes in both cases. Do you know of any example where an ONNX model is used as an SGIE? All the cases I have found that use ONNX models use them as a PGIE, not an SGIE.

The “spgie_onet” config file:

[property]
gpu-id=0
net-scale-factor=0.0078125
onnx-file=models/OnetB4.onnx
#model-engine-file=models/Onet.onnx_b1_gpu0_fp32.engine
force-implicit-batch-dim=0
batch-size=4
network-mode=0
process-mode=2
input-object-min-width=128
input-object-min-height=128
model-color-format=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1
output-blob-names=boxes;landmarks
offsets=127.5;127.5;127.5
classifier-async-mode=0
output-tensor-meta=1
is-classifier=1
secondary-reinfer-interval=0

and the output

0:00:00.143094098   537      0x18c6520 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<pgie_align> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
WARNING: [TRT]: Min value of this profile is not valid
0:00:10.086465172   537      0x18c6520 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<pgie_align> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 2]: serialize cuda engine to file: /REPOS/BACK/models/OnetB4.onnx_b4_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input.1         3x48x48         min: 1x3x48x48       opt: 4x3x48x48       Max: 4x3x48x48       
1   OUTPUT kFLOAT scores          2               min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT boxes           4               min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT landmarks       10              min: 0               opt: 0               Max: 0               

0:00:10.089209518   537      0x18c6520 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<pgie_align> [UID 2]: Load new model:onet_align.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:10.096105917   537      0x18c6520 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:10.096208689   537      0x18c6520 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:16.353423703   537      0x18c6520 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<pgie_det> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /REPOS/BACK/models/RFB-320.caffemodel_b1_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x240x320       
1   OUTPUT kFLOAT boxes           4420x4          
2   OUTPUT kFLOAT scores          4420x2          

0:00:16.399712108   537      0x18c6520 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<pgie_det> [UID 1]: Load new model:face_det.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: qtdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: aacparse0 

Decodebin child added: avdec_aac0 

Decodebin child added: nvv4l2decoder0 

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fa94401f1c8 (GstCapsFeatures at 0x7fa854003240)>
In cb_newpad

gstname= audio/x-raw
ERROR: [TRT]: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer (Unnamed Layer* 21) [Shuffle]: reshaping failed for tensor: onnx::Gemm_43
reshape would change volume
Instruction: RESHAPE{4 288} {4 1152 1 1}
)
ERROR: [TRT]: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1643 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:23.002208812   537      0x18641e0 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<pgie_align> error: Failed to queue input batch for inferencing
Error: gst-stream-error-quark: Failed to queue input batch for inferencing (1): gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:pgie_align
Exiting app

ERROR: [TRT]: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer (Unnamed Layer* 21) [Shuffle]: reshaping failed for tensor: onnx::Gemm_43
reshape would change volume
Instruction: RESHAPE{4 288} {4 1152 1 1}
)
ERROR: [TRT]: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1643 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:23.010472529   537      0x18641e0 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<pgie_align> error: Failed to queue input batch for inferencing
ERROR: [TRT]: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer (Unnamed Layer* 21) [Shuffle]: reshaping failed for tensor: onnx::Gemm_43
reshape would change volume
Instruction: RESHAPE{4 288} {4 1152 1 1}
)
ERROR: [TRT]: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1643 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:23.012214272   537      0x18641e0 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<pgie_align> error: Failed to queue input batch for inferencing
[NvMultiObjectTracker] De-initialized

From the log, your model may not work well in DeepStream. You can test it with trtexec; you can learn how to use it from the README.
Also, if you want to get the landmarks or do something else with the facial features, you can refer to the following link: deepstream-faciallandmark-app
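For reference, the "reshape would change volume" error in the log above can be checked by hand: TensorRT is asked to reshape a (4, 288) tensor into (4, 1152, 1, 1), and the element counts don't match because the flatten/Gemm reshape was hard-coded into the ONNX graph at export time and no longer agrees with the runtime batch. A quick stdlib check, with the shapes copied from the log:

```python
from math import prod

src = (4, 288)         # tensor actually arriving at the Shuffle layer
dst = (4, 1152, 1, 1)  # reshape target hard-coded into the exported graph

# A reshape is only legal when the element counts match; here they
# don't, which is exactly what TensorRT reports as
# "reshape would change volume".
print(prod(src), prod(dst))    # 1152 4608
print(prod(src) == prod(dst))  # False
```

Re-exporting with a symbolic batch dimension (as done later in this thread) lets the reshape follow the runtime batch instead.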


I tested the model with trtexec and didn't get any error … I exported the model again, this time using a dynamic batch:

import torch

x = torch.randn(1, 3, 48, 48)
y = onet(x)  # sanity-check the forward pass before exporting
torch.onnx.export(model=onet, args=x, f="Onet.onnx",
                  input_names=["input"],
                  output_names=["boxes", "landmarks", "scores"],
                  opset_version=12,
                  dynamic_axes={"input": {0: "batch_size"}})

With that change I was able to run the pipeline without any error. I was also able to include more videos: the pipeline still freezes with more than 6 videos, but I can use up to 6 without problems. Thank you

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.