DeepStream 6.4 on Ubuntu 24.04: pipeline not running

Host: Ubuntu 24.04 LTS
DeepStream container: Ubuntu 22.04 LTS
NVIDIA-SMI 535.183.06
Driver Version: 535.183.06
CUDA Version: 12.2

Previously, when I ran this container on an Ubuntu 22.04 machine, everything worked fine. This time I had to run the container on Ubuntu 24.04, and it ran into an issue.

When I set export GST_DEBUG=3, no errors were displayed.

The main function is:

    def run(self):
        # Standard GStreamer initialization
        Gst.init(None)

        # Create gstreamer elements */
        # Create Pipeline element that will form a connection of other elements
        print("Creating Pipeline \n ")
        self.pipeline = Gst.Pipeline()
        is_live = False
        if not self.pipeline:
            self.logger.error(" Unable to create Pipeline \n")
            return

        print("Creating streammux \n ")
        # Create nvstreammux instance to form batches from one or more sources.
        self.streammux = self._create_nvstreammux()
        if not self.streammux:
            self.logger.error(" Unable to create NvStreamMux \n")
            return
        if self.video_url.find("rtsp://") == 0:
            is_live = True
            print("Atleast one of the sources is live")
            self.streammux.set_property('live-source', 1)

        # Create first source bin and add to pipeline
        source_bin = self._create_uridecode_bin(0, self.video_url)
        if not source_bin:
            print("Failed to create source bin. Exiting. \n")
            self.logger.error("Failed to create source bin. Exiting. \n")
            return
        self.g_source_bins[0] = source_bin
        self.pipeline.add(source_bin)

        print("Creating Pgie \n ")
        # pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
        pgie = self._create_element("nvinfer", "primary-inference", "primary-inference")
        if not pgie:
            print("Failed to create pgie bin. \n")

            self.logger.error(" Unable to create pgie \n")
        # Set the pgie configuration file path
        pgie.set_property('config-file-path', self.inferConfigFile)
        # Set necessary properties of the nvinfer element, the necessary ones are:
        pgie.set_property("batch-size", self.MAX_SOURCE_COUNT)
        # Set gpu IDs of the inference engines
        pgie.set_property("gpu_id", 0)

        print("Creating nvvidconv1 \n ")
        # nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
        nvvidconv1 = self._create_element("nvvideoconvert", "convertor1", "nvvideoconvert")
        if not nvvidconv1:
            print("Failed to create nvvidconv1 bin. \n")

            self.logger.error(" Unable to create nvvidconv1 \n")
        print("Creating filter1 \n ")
        caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
        # filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
        filter1 = self._create_element("capsfilter", "filter1", "capsfilter")
        if not filter1:
            print("Failed to create filter1 bin. \n")

            self.logger.error(" Unable to get the caps filter1 \n")
        filter1.set_property("caps", caps1)

        print("Creating nvosd \n ")
        #nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
        nvosd = self._create_element("nvdsosd", "onscreendisplay", "nvdsosd")
        if not nvosd:
            print("Failed to create nvdsosd bin. \n")

            self.logger.error(" Unable to create nvosd \n")
        # Set gpu ID of nvosd
        nvosd.set_property("gpu_id", 0)

        # sink = Gst.ElementFactory.make("fakesink", "fake-sink")
        sink = self._create_element("fakesink", "fake-sink", "fake-sink")
        if not sink:
            print("Failed to create fakesink bin. \n")

            self.logger.error(" Unable to create egl sink \n")
        sink.set_property("sync", 0)
        sink.set_property("qos", 0)

        # We link elements in the following order:
        # source-bin -> streammux -> nvinfer -> nvvideoconvert ->
        # capsfilter -> nvdsosd -> fakesink
        print("Linking elements in the Pipeline \n")
        self.streammux.link(pgie)
        pgie.link(nvvidconv1)
        nvvidconv1.link(filter1)
        filter1.link(nvosd)
        nvosd.link(sink)

        # create an event loop and feed GStreamer bus messages to it
        loop = GLib.MainLoop()
        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self._bus_call, loop)
        print("----------pipeline set_state(Gst.State.PAUSED) ----------")
        self.pipeline.set_state(Gst.State.PAUSED)

        # Add probes on the OSD sink pad; by this point the buffer has all the metadata.
        osdsinkpad = nvosd.get_static_pad("sink")
        if not osdsinkpad:
            sys.stderr.write(" Unable to get sink pad of nvosd \n")
        osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self._save_frame, 0)
        osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self._tiler_sink_pad_buffer_probe, 0)

        print("Starting pipeline \n")
        # start playback and listen to events
        self.pipeline.set_state(Gst.State.PLAYING)

        try:
            loop.run()
        except:
            pass
        # cleanup
        print("Exiting app\n")
        self.pipeline.set_state(Gst.State.NULL)




When this function executes, the printed output looks normal.

I think the problem is here:
    def _create_uridecode_bin(self, source_id, url):
        def decodebin_child_added(child_proxy, Object, name, user_data):
            print("Decodebin child added:", name, "\n")
            if (name.find("decodebin") != -1):
                Object.connect("child-added", decodebin_child_added, user_data)
            if (name.find("nvv4l2decoder") != -1):
                Object.set_property("gpu_id", 0)
                Object.set_property("drop-frame-interval", self.drop_frame_interval)

        def cb_newpad(decodebin, pad, data):
            print("In cb_newpad\n")
            caps = pad.get_current_caps()
            gststruct = caps.get_structure(0)
            gstname = gststruct.get_name()

            # Need to check if the pad created by the decodebin is for video and not
            # audio.
            print("gstname=", gstname)
            if (gstname.find("video") != -1):
                source_id = data
                pad_name = "sink_%u" % source_id
                print(pad_name)
                # Get a sink pad from the streammux, link to decodebin
                sinkpad = self.streammux.request_pad_simple(pad_name)
                if not sinkpad:
                    print("Decodebin link to pipeline error 1", sinkpad)
                    self.logger.error("Unable to create sink pad bin \n")
                if pad.link(sinkpad) == Gst.PadLinkReturn.OK:
                    print("Decodebin linked to pipeline")
                else:
                    print("Decodebin link to pipeline error 2", sinkpad)

                    self.logger.error("Failed to link decodebin to pipeline\n")

        print("Creating uridecodebin for [%s]" % url)

        # Create a source GstBin to abstract this bin's content from the rest of the
        # pipeline
        bin_name = "source-bin-%02d" % source_id
        print(bin_name)

        # Source element for reading from the uri.
        # We will use decodebin and let it figure out the container format of the
        # stream and the codec and plug the appropriate demux and decode plugins.
        bin = Gst.ElementFactory.make("uridecodebin", bin_name)
        print('----------------bin is {}'.format(bin))
        if not bin:
            print(' Unable to create uri decode bin ')
            self.logger.error(" Unable to create uri decode bin \n")
        # We set the input uri to the source element
        bin.set_property("uri", url)
        # Connect to the "pad-added" signal of the decodebin which generates a
        # callback once a new pad for raw data has been created by the decodebin
        bin.connect("pad-added", cb_newpad, source_id)
        bin.connect("child-added", decodebin_child_added, source_id)

        return bin

The cb_newpad callback is never triggered here: there is no printing, but there are also no errors. Under normal circumstances it should print something like "In cb_newpad" and the requested pad name.

Can you give me some ideas for troubleshooting this? (See the debugging sketch after the log below.)

The output looks like this:

Creating Pipeline 
 
Creating streammux 
 
Creating uridecodebin for [rtmp://127.0.0.1:10936/test/1]
source-bin-00
----------------bin is <__gi__.GstURIDecodeBin object at 0x74059a801640 (GstURIDecodeBin at 0x6524e624e080)> 0 rtmp://127.0.0.1:10936/test/1
Creating Pgie 
 
Creating nvvidconv1 
 
Creating filter1 
 
Creating nvosd 
 
Linking elements in the Pipeline 

----------pipeline set_state(Gst.State.PAUSED) ----------
process_param: {'info': {'video': [{'id': 2, 'task_id': 2, 'stream': '', 'content': '/home/runone/program/folder/video_records/foshan.mp4', 'nvr_record_url': None, 'streamFlag': 0}]}, 'status': 1, 'stop': 0, 'task_id': 2}
add info: {'id': 2, 'task_id': 2, 'stream': '', 'content': '/home/runone/program/folder/video_records/foshan.mp4', 'nvr_record_url': None, 'streamFlag': 0}
process_info /home/runone/program/folder/video_records/foshan.mp4 {'id': 2, 'task_id': 2, 'stream': '', 'content': '/home/runone/program/folder/video_records/foshan.mp4', 'nvr_record_url': None, 'streamFlag': 0}
0:00:00.335387633 18778 0x6524e68b1530 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1243> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
socket thread start 0.0.0.0 8848
0.0.0.0:8848 is unused
!!!!analysis init sucess vith vision:3.1.10
WARNING: [TRT]: TensorRT was linked against cuDNN 8.9.0 but loaded cuDNN 8.7.0
WARNING: [TRT]: TensorRT was linked against cuDNN 8.9.0 but loaded cuDNN 8.7.0
0:00:07.318628548 18778 0x6524e68b1530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/runone/program/folder/model/yolov8s_exp85_736_11.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 5
0   INPUT  kFLOAT images          3x736x736       min: 1x3x736x736     opt: 8x3x736x736     Max: 16x3x736x736    
1   OUTPUT kINT32 num_dets        1               min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT bboxes          100x4           min: 0               opt: 0               Max: 0               
3   OUTPUT kFLOAT scores          100             min: 0               opt: 0               Max: 0               
4   OUTPUT kINT32 labels          100             min: 0               opt: 0               Max: 0               

0:00:07.461398333 18778 0x6524e68b1530 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/runone/program/folder/model/yolov8s_exp85_736_11.engine
0:00:07.470258694 18778 0x6524e68b1530 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/runone/deepstream-implatform/deepstream-common/config_infer_primary_yoloV8.txt sucessfully
Decodebin child added: source 

Decodebin child added: typefindelement0 

Starting pipeline 

Decodebin child added: decodebin0 

Decodebin child added: queue2-0 

Decodebin child added: flvdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 
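
A minimal debugging sketch (the helper names dump_pipeline_graph and verbose_bus_call are hypothetical, not from the original code): dump the pipeline graph once it should be PLAYING and log bus warnings and state changes explicitly. The resulting .dot file shows whether uridecodebin ever exposed a video src pad and whether it was linked to the streammux sink_0 pad.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def dump_pipeline_graph(pipeline, name="pipeline-snapshot"):
    # Requires GST_DEBUG_DUMP_DOT_DIR to be exported before the app starts
    # (GStreamer reads it during initialization). Writes <name>.dot there,
    # which can be rendered with: dot -Tpng <name>.dot -o snapshot.png
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, name)


def verbose_bus_call(bus, message, loop):
    # Print errors, warnings and element state changes; a pipeline that
    # stalls without an error usually still emits STATE_CHANGED / WARNING
    # messages that hint at which element never reached PLAYING.
    t = message.type
    if t == Gst.MessageType.ERROR:
        err, dbg = message.parse_error()
        print("ERROR:", err, dbg)
        loop.quit()
    elif t == Gst.MessageType.WARNING:
        err, dbg = message.parse_warning()
        print("WARNING:", err, dbg)
    elif t == Gst.MessageType.STATE_CHANGED:
        old, new, _pending = message.parse_state_changed()
        print("state:", message.src.get_name(), old.value_nick, "->", new.value_nick)
    return True

Calling dump_pipeline_graph(self.pipeline) a few seconds after set_state(PLAYING), or whenever cb_newpad fails to arrive, makes it easy to see where the graph stops being linked.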

I found the same situation when I ran the official demo directly:

/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test1# python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264

The output looks like:


Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py:220: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline 

WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine open error
0:00:07.509411368   104 0x5f2126b51330 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine failed
0:00:07.623769306   104 0x5f2126b51330 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:07.623796638   104 0x5f2126b51330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: [TRT]: Missing scale and zero-point for tensor output_bbox/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor conv1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor conv1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor

=====Repeated warnings=======

WARNING: [TRT]: Missing scale and zero-point for tensor output_bbox/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor output_cov/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor output_cov/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
0:02:10.918268601   104 0x5f2126b51330 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:02:11.173705263   104 0x5f2126b51330 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully

1. Which docker image are you using? Is it nvcr.io/nvidia/deepstream:6.4-triton-multiarch?
2. Does the native sample test3 (in the /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3 directory) work normally?
The sample test1 doesn't use uridecodebin, so try sample test3.

3. If the native sample test3 works properly, I think it's the Python bindings that are causing the problem.
Try

./user_deepstream_python_apps_install.sh --build-bindings -r v1.1.10

1. Yes.
2. I tried running this:

root@runone-2288H-V6:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app rtmp://127.0.0.1:10936/test/1 --no-display
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: rtmp://127.0.0.1:10936/test/1, --no-display,
0:00:00.118726646   237 0x58567ee2a720 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:160:gst_egl_adaptation_init_display:<nvvideo-renderer> Could not init EGL display connection
0:00:00.118762323   237 0x58567ee2a720 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:183:gst_egl_adaptation_init_display:<nvvideo-renderer> EGL call returned error 3000
0:00:00.118769733   237 0x58567ee2a720 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:185:gst_egl_adaptation_init_display:<nvvideo-renderer> Couldn't setup window/surface from handle
0:00:00.118776141   237 0x58567ee2a720 ERROR          nveglglessink ext/eglgles/gsteglglessink.c:536:egl_init:<nvvideo-renderer> Couldn't init EGL display
0:00:00.118782434   237 0x58567ee2a720 ERROR          nveglglessink ext/eglgles/gsteglglessink.c:562:egl_init:<nvvideo-renderer> Failed to perform EGL init
Running...

Even though I executed export DISPLAY=:0, the result is still the same.
I am not very familiar with C.
Then I tried running the Python version of the demo again:

root@runone-2288H-V6:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3# python3 deepstream_test_3.py -i rtmp://127.0.0.1:10936/test/1 --no-display
{'input': ['rtmp://127.0.0.1:10936/test/1'], 'configfile': None, 'pgie': None, 'no_display': True, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test3/deepstream_test_3.py:237: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad= streammux.get_request_pad(padname)
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating Fakesink 

WARNING: Overriding infer-config batch-size 30  with number of sources  1  

Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
0 :  rtmp://127.0.0.1:10936/test/1
Starting pipeline 

0:00:07.088730056   276 0x56b8fa0daca0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:07.201040772   276 0x56b8fa0daca0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
0:00:07.210317101   276 0x56b8fa0daca0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: typefindelement0 


**PERF:  {'stream0': 0.0} 

Decodebin child added: decodebin0 

Decodebin child added: queue2-0 

Decodebin child added: flvdemux0 

0:00:07.303261762   276 0x56b8fa9cfde0 FIXME         rtmpconnection rtmpconnection.c:869:gst_rtmp_connection_handle_protocol_control:<GstRtmpConnection@0x56b91bdf06c0> set peer bandwidth: 2500000, 2
0:00:07.303483581   276 0x56b8fa9cfde0 WARN                 rtmpamf amf.c:802:parse_ecma_array: Expected array with 1 elements, but read 9
Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

0:00:07.352428197   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352447068   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:07.352454648   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352461364   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:07.352478771   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352487357   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat AV10
0:00:07.352494875   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352503537   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat AV10
0:00:07.352515347   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352522419   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:07.352526919   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352533206   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:07.352545766   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352551943   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:07.352556218   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352562344   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:07.352572889   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352579399   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:07.352584181   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352592262   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:07.352603671   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352610360   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:07.352616139   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352624216   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:07.352636315   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352643202   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H265
0:00:07.352649045   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352653961   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H265
0:00:07.352663455   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352669356   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP90
0:00:07.352673899   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352679771   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP90
0:00:07.352689637   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352696246   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:07.352700493   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352706577   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:07.352716618   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352722803   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:07.352727241   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.352734688   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:07.353179477   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353192370   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat Y444
0:00:07.353201547   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353210619   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat Y444
0:00:07.353232074   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353242846   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat P410
0:00:07.353252191   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353261479   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat P410
0:00:07.353278800   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353288659   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat PM10
0:00:07.353297483   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353304034   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat PM10
0:00:07.353316027   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353322950   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:07.353327314   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.353333758   276 0x7438d4016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12

**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 

So do I still need to execute that command?

Sorry, I couldn't find where this file is: user_deepstream_python_apps_install.sh

1. The native sample test3 does not support this parameter.

2. This stream seems to have some problems. Try a local file first: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

It still gives the same error:

root@runone-2288H-V6:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Now playing: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264,
0:00:00.119027207   106 0x5a394c7c7b80 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:160:gst_egl_adaptation_init_display:<nvvideo-renderer> Could not init EGL display connection
0:00:00.119064181   106 0x5a394c7c7b80 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:183:gst_egl_adaptation_init_display:<nvvideo-renderer> EGL call returned error 3000
0:00:00.119075212   106 0x5a394c7c7b80 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:185:gst_egl_adaptation_init_display:<nvvideo-renderer> Couldn't setup window/surface from handle
0:00:00.119082250   106 0x5a394c7c7b80 ERROR          nveglglessink ext/eglgles/gsteglglessink.c:536:egl_init:<nvvideo-renderer> Couldn't init EGL display
0:00:00.119094374   106 0x5a394c7c7b80 ERROR          nveglglessink ext/eglgles/gsteglglessink.c:562:egl_init:<nvvideo-renderer> Failed to perform EGL init
Running...

Run xhost + before starting docker.

This issue also occurs in test3.py. I think your rtmp://xx stream is not accessible.

# export DISPLAY=:0
# xhost +
xhost:  unable to open display ":0"

I am not connected to a monitor.
Do I need to adjust the code, for example by replacing the sink with fakesink?
I have tested this stream with ffmpeg and it plays.


The fps display is always 0, so I think there is no data coming in.

Try unset DISPLAY. If that doesn't work, try changing nveglglessink to fakesink, then rebuild test3.

I have modified the .c file:

  if (PERF_MODE) {
    sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");
  } else {
    /* Finally render the osd output */
    if(prop.integrated) {
      sink = gst_element_factory_make ("nv3dsink", "nv3d-sink");
    } else {
      sink = gst_element_factory_make ("nveglglessink", "fakesink");
    }
  }

then:

# make clean 
rm -rf deepstream_test3_app.o deepstream-test3-app
# make
cc -c -o deepstream_test3_app.o -I../../../includes -I /usr/local/cuda-12.2/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/x86_64-linux-gnu -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include deepstream_test3_app.c
cc -o deepstream-test3-app deepstream_test3_app.o -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 -L/usr/local/cuda-12.2/lib64/ -lcudart -lnvdsgst_helper -lm -L/opt/nvidia/deepstream/deepstream-6.4/lib/ -lnvdsgst_meta -lnvds_meta -lnvds_yml_parser -lcuda -Wl,-rpath,/opt/nvidia/deepstream/deepstream-6.4/lib/


Outside the container, I executed:

#unset DISPLAY
#echo $DISPLAY

# docker-compose restart
Restarting deepstream_nvidia ... done

then:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
0:00:00.004084044    48 0x55d7eb053430 WARN               vadisplay gstvadisplay.c:287:_va_warning:<vadisplaydrm0> VA error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
0:00:00.004118644    48 0x55d7eb053430 WARN               vadisplay gstvadisplay.c:347:gst_va_display_initialize:<vadisplaydrm0> vaInitialize: unknown libva error
0:00:00.004195865    48 0x55d7eb053430 WARN               vadisplay gstvadisplay.c:287:_va_warning:<vadisplaydrm1> VA error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
0:00:00.004205039    48 0x55d7eb053430 WARN               vadisplay gstvadisplay.c:347:gst_va_display_initialize:<vadisplaydrm1> vaInitialize: unknown libva error
Now playing: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264,
0:00:00.126198358    46 0x629a69fc2d90 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:160:gst_egl_adaptation_init_display:<fakesink> Could not init EGL display connection
0:00:00.126222497    46 0x629a69fc2d90 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:183:gst_egl_adaptation_init_display:<fakesink> EGL call returned error 3000
0:00:00.126229749    46 0x629a69fc2d90 ERROR            egladaption ext/eglgles/gstegladaptation_egl.c:185:gst_egl_adaptation_init_display:<fakesink> Couldn't setup window/surface from handle
0:00:00.126234961    46 0x629a69fc2d90 ERROR          nveglglessink ext/eglgles/gsteglglessink.c:536:egl_init:<fakesink> Couldn't init EGL display
0:00:00.126239111    46 0x629a69fc2d90 ERROR          nveglglessink ext/eglgles/gsteglglessink.c:562:egl_init:<fakesink> Failed to perform EGL init
Running...

It should be gst_element_factory_make ("fakesink", "fakesink");

cat /proc/$(pidof "gnome-shell")/environ | tr '\0' '\n' | grep ^DISPLAY=
# cat /proc/$(pidof "gnome-terminal-server")/environ | tr '\0' '\n' | grep ^DISPLAY=

Try the above commands to get DISPLAY; if that doesn't work, try the following one.

Try starting docker with the following command

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix  -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-triton-multiarch
/home/runone/docker-compose# cat /proc/$(pidof "gnome-shell")/environ | tr '\0' '\n' | grep ^DISPLAY=
DISPLAY=:1

Also, the image (deepstream6.4_base:v1.0) is just the official image with some Python libraries installed on top.

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix  -w /opt/nvidia/deepstream/deepstream-6.4 deepstream6.4_base:v1.0 /bin/bash

After modifying the .c file:

root@runone-2288H-V6:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Now playing: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264,
0:00:07.012229820   160 0x5c97065c4030 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:07.144258397   160 0x5c97065c4030 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
0:00:07.153727398   160 0x5c97065c4030 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
0:00:07.156393852   160 0x5c97065c4030 WARN                 basesrc gstbasesrc.c:3688:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source

(deepstream-test3-app:160): GLib-GObject-WARNING **: 17:17:19.552: g_object_set_is_valid_property: object class 'GstFileSrc' has no property named 'drop-on-latency'
Decodebin child added: decodebin0
0:00:07.156761260   160 0x5c97065c4030 WARN                 basesrc gstbasesrc.c:3688:gst_base_src_start_complete:<source> pad not activated yet
Running...

Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
0:00:07.200441540   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200465261   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:07.200474545   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200482980   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:07.200504213   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200513355   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat AV10
0:00:07.200520422   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200529672   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat AV10
0:00:07.200545197   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200553865   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:07.200563835   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200573011   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:07.200586338   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200594957   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:07.200602864   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200613153   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:07.200623499   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200631863   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:07.200638608   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200645318   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:07.200658984   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200667200   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:07.200677186   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200685019   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:07.200698245   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200706765   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H265
0:00:07.200715139   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200722899   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H265
0:00:07.200737633   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200745058   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP90
0:00:07.200751429   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200762290   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP90
0:00:07.200772519   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200781850   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:07.200789983   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200799523   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:07.200810384   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200818627   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:07.200826240   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:07.200835840   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:07.201266731   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201279857   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat Y444
0:00:07.201286125   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201296612   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat Y444
0:00:07.201311968   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201320857   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat P410
0:00:07.201329894   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201338258   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat P410
0:00:07.201352619   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201361254   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat PM10
0:00:07.201368681   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201377010   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat PM10
0:00:07.201390143   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201400313   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:07.201406402   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:07.201413608   160 0x5c9705f7c300 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12

After that it just keeps printing empty lines.

The decoder does not work; most likely the GPU driver is not installed correctly.

If that doesn't fix it, nvidia-docker-toolkit also needs to be reinstalled:

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
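
One way to isolate the decoder inside the container (a hedged sketch; the sample clip path and the 10-second timeout are assumptions) is to run a decode-only pipeline and check whether it reaches EOS. If this fails or stalls while nvidia-smi looks fine, the problem is usually the container runtime / driver integration rather than the DeepStream application code.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Decode-only pipeline: no nvinfer, no OSD. If nvv4l2decoder is healthy, this
# should decode the bundled sample clip and reach EOS within a few seconds.
pipeline = Gst.parse_launch(
    "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 "
    "! h264parse ! nvv4l2decoder ! fakesink sync=false"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(10 * Gst.SECOND,
                             Gst.MessageType.ERROR | Gst.MessageType.EOS)
if msg is None:
    print("No EOS within 10 s -- the decoder is likely stalled")
elif msg.type == Gst.MessageType.ERROR:
    err, dbg = msg.parse_error()
    print("Decoder pipeline failed:", err, dbg)
else:
    print("Sample clip decoded successfully -- nvv4l2decoder works")
pipeline.set_state(Gst.State.NULL)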

Is there any way to verify whether there is a problem with the NVIDIA driver? I usually just use nvidia-smi.
I have tried two different ways to install the NVIDIA driver:

$ ubuntu-drivers devices
$ apt install nvidia-driver-535

And via a .deb file:

nvidia-driver-local-repo-ubuntu2404-550.90.07_1.0-1_amd64.deb

In both cases nvidia-smi works normally, but DeepStream does not run properly.

Reboot after reinstalling.

Generally speaking, once the driver is installed correctly, CUDA and DeepStream should work properly.
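
For a check that goes a bit beyond nvidia-smi, a small ctypes probe of the CUDA driver API can be run both on the host and inside the container (a hedged sketch; error handling is minimal). A failure inside the container while the host succeeds usually points at the nvidia-container-toolkit setup rather than the driver itself.

import ctypes

# libcuda.so.1 is the driver-side CUDA library that TensorRT/DeepStream use.
cuda = ctypes.CDLL("libcuda.so.1")

print("cuInit:", cuda.cuInit(0))  # 0 means CUDA_SUCCESS

version = ctypes.c_int(0)
cuda.cuDriverGetVersion(ctypes.byref(version))
print("driver CUDA version:", version.value)  # e.g. 12020 for CUDA 12.2

count = ctypes.c_int(0)
cuda.cuDeviceGetCount(ctypes.byref(count))
print("visible CUDA devices:", count.value)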

Thank you.

I installed the driver specified on the website:

nvidia-driver-local-repo-ubuntu2404-550.90.07_1.0-1_amd64.deb

Then:

apt install nvidia-driver-550
apt-get install cuda-toolkit-12-5
 dpkg -l |grep cuda
ii  cuda-cccl-12-5                                12.5.39-1                                amd64        CUDA CCCL
ii  cuda-command-line-tools-12-5                  12.5.1-1                                 amd64        CUDA command-line tools
ii  cuda-compiler-12-5                            12.5.1-1                                 amd64        CUDA compiler
ii  cuda-crt-12-5                                 12.5.82-1                                amd64        CUDA crt
ii  cuda-cudart-12-5                              12.5.82-1                                amd64        CUDA Runtime native Libraries
ii  cuda-cudart-dev-12-5                          12.5.82-1                                amd64        CUDA Runtime native dev links, headers
ii  cuda-cuobjdump-12-5                           12.5.39-1                                amd64        CUDA cuobjdump
ii  cuda-cupti-12-5                               12.5.82-1                                amd64        CUDA profiling tools runtime libs.
ii  cuda-cupti-dev-12-5                           12.5.82-1                                amd64        CUDA profiling tools interface.
ii  cuda-cuxxfilt-12-5                            12.5.82-1                                amd64        CUDA cuxxfilt
ii  cuda-documentation-12-5                       12.5.82-1                                amd64        CUDA documentation
ii  cuda-driver-dev-12-5                          12.5.82-1                                amd64        CUDA Driver native dev stub library
ii  cuda-gdb-12-5                                 12.5.82-1                                amd64        CUDA-GDB
ii  cuda-libraries-12-5                           12.5.1-1                                 amd64        CUDA Libraries 12.5 meta-package
ii  cuda-libraries-dev-12-5                       12.5.1-1                                 amd64        CUDA Libraries 12.5 development meta-package
ii  cuda-nsight-12-5                              12.5.82-1                                amd64        CUDA nsight
ii  cuda-nsight-compute-12-5                      12.5.1-1                                 amd64        NVIDIA Nsight Compute
ii  cuda-nsight-systems-12-5                      12.5.1-1                                 amd64        NVIDIA Nsight Systems
ii  cuda-nvcc-12-5                                12.5.82-1                                amd64        CUDA nvcc
ii  cuda-nvdisasm-12-5                            12.5.39-1                                amd64        CUDA disassembler
ii  cuda-nvml-dev-12-5                            12.5.82-1                                amd64        NVML native dev links, headers
ii  cuda-nvprof-12-5                              12.5.82-1                                amd64        CUDA Profiler tools
ii  cuda-nvprune-12-5                             12.5.82-1                                amd64        CUDA nvprune
ii  cuda-nvrtc-12-5                               12.5.82-1                                amd64        NVRTC native runtime libraries
ii  cuda-nvrtc-dev-12-5                           12.5.82-1                                amd64        NVRTC native dev links, headers
ii  cuda-nvtx-12-5                                12.5.82-1                                amd64        NVIDIA Tools Extension
ii  cuda-nvvm-12-5                                12.5.82-1                                amd64        CUDA nvvm
ii  cuda-nvvp-12-5                                12.5.82-1                                amd64        CUDA Profiler tools
ii  cuda-opencl-12-5                              12.5.39-1                                amd64        CUDA OpenCL native Libraries
ii  cuda-opencl-dev-12-5                          12.5.39-1                                amd64        CUDA OpenCL native dev links, headers
ii  cuda-profiler-api-12-5                        12.5.39-1                                amd64        CUDA Profiler API
ii  cuda-sanitizer-12-5                           12.5.81-1                                amd64        CUDA Sanitizer
ii  cuda-toolkit-12-5                             12.5.1-1                                 amd64        CUDA Toolkit 12.5 meta-package
ii  cuda-toolkit-12-5-config-common               12.5.82-1                                all          Common config package for CUDA Toolkit 12.5.
ii  cuda-toolkit-12-config-common                 12.5.82-1                                all          Common config package for CUDA Toolkit 12.
ii  cuda-toolkit-config-common                    12.5.82-1                                all          Common config package for CUDA Toolkit.
ii  cuda-tools-12-5                               12.5.1-1                                 amd64        CUDA Tools meta-package
ii  cuda-visual-tools-12-5                        12.5.1-1                                 amd64        CUDA visual tools
ii  libcudart12:amd64                             12.0.146~12.0.1-4build4                  amd64        NVIDIA CUDA Runtime Library
ii  nvidia-cuda-dev:amd64                         12.0.146~12.0.1-4build4                  amd64        NVIDIA CUDA development files
ii  nvidia-cuda-gdb                               12.0.140~12.0.1-4build4                  amd64        NVIDIA CUDA Debugger (GDB)
ii  nvidia-cuda-toolkit                           12.0.140~12.0.1-4build4                  amd64        NVIDIA CUDA development toolkit
ii  nvidia-cuda-toolkit-doc                       12.0.1-4build4                           all          NVIDIA CUDA and OpenCL documentation

It seems that the CUDA toolkit has been installed more than once, and nvidia-smi shows CUDA Version: 12.4.
In the Docker container, in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3, if I set the Makefile to

CUDA_VER?=12.4 or CUDA_VER?=12.5

the following error occurs:

deepstream_test3_app.c:30:10: fatal error: cuda_runtime_api.h: No such file or directory
   30 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:60: deepstream_test3_app.o] Error 1

It only compiles successfully with CUDA_VER?=12.2, and then I still hit the decoder error we discussed before.
Is this because my CUDA installation is a mess?

I chose another installation method, as follows:

chmod 755 NVIDIA-Linux-x86_64-535.183.06.run
./NVIDIA-Linux-x86_64-535.183.06.run --no-cc-version-check
apt-get install cuda-toolkit-12-5
 1386  curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg   && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list |     sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' |     sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
 1387  sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list
 1388  apt-get update 
 1389  apt-get install -y nvidia-container-toolkit
 1390  nvidia-ctk runtime configure --runtime=docker
 1391  systemctl restart docker
 1392  export DISPLAY=:0
 1393  apt install x11-xserver-utils
 1394  xhost +

I powered the machine off and on many times along the way, but the result is the same as last time…

It's not clear what you have installed on your system. I tried a 3060 Ti on Ubuntu 24.04; after installing the GPU driver and nvidia-docker-toolkit, both the DS-6.4 and DS-7.0 docker images worked fine.

Reinstalling the system may be the fastest way to solve the problem.

Are there any specific driver or CUDA requirements?
I will try reinstalling the machine and installing the GPU driver with:

chmod 755 NVIDIA-Linux-x86_64-535.xxx.xx.run
./NVIDIA-Linux-x86_64-535.xxx.xx.run --no-cc-version-check

install nvidia-docker-toolkit by:

apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/3bf863cc.pub
add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/ /"
apt-get update
apt-get install cuda-toolkit-12-5

Next, install the other Docker dependencies according to the documentation:
Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.16.0 documentation
Is there anything else special I need to do?

I don't think so; I did the same.