DeepStream example doesn't work!

Hi

I want to run the DeepStream sample app. I entered the commands below.

cd /opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app
deepstream-app -c source1_usb_dec_infer_resnet_int8.txt

But I get the error below.

tes@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app$ deepstream-app -c source1_usb_dec_infer_resnet_int8.txt
Could not open DRM failed 
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
ERROR from sink_sub_bin_sink1: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Debug info: gstbasesink.c(5367): gst_base_sink_change_state (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstNvDrmVideoSink:sink_sub_bin_sink1:
Failed to start
App run failed

I found the same issue reported on this forum, but there was no solution.

I have a USB camera connected to Orin.
My Orin environment is below.

Please help me.

Moving this topic to the DeepStream SDK forum for visibility.

I appreciate any help!

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, which plugin or sample application it applies to, and the function description.)

Have you tested your USB camera with the v4l2 tool? Can you show us the result of the “v4l2-ctl --list-formats-ext” command?
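For reference, a minimal capture test, assuming the camera enumerates as /dev/video0, you are on the local desktop, and the camera reports a 1280x720 MJPEG mode (adjust the width/height to a mode your camera actually lists), might look like:

$ v4l2-ctl --device=/dev/video0 --list-formats-ext
$ gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=1280,height=720 ! jpegdec ! videoconvert ! autovideosink

The first command lists the formats the camera advertises; the second previews the camera with a plain GStreamer pipeline, independent of DeepStream.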

@Fiona.Chen

Thank you for your help.

  • Hardware Platform (Jetson / GPU)
    JETSON-AGX-ORIN-DEV-KIT
  • DeepStream Version
    6.1.1
  • JetPack Version (valid for Jetson only)
    5.0.2 (L4T 35.1.0)
  • TensorRT Version
    8.4.1.5
  • NVIDIA GPU Driver Version (valid for GPU only)
    I don’t know how to check this (see the note after this list).
  • Issue Type (questions, new requirements, bugs)
    questions
  • How to reproduce the issue?
    I just want to run the example below.
cd /opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app
deepstream-app -c source1_usb_dec_infer_resnet_int8.txt
  • Requirement details
    I want the above sample to work.
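(Note on the driver version above: on Jetson the GPU driver is bundled with L4T/JetPack rather than installed as a separate driver package, so, assuming a standard JetPack install, it can be read with the commands below.)

$ head -n 1 /etc/nv_tegra_release
$ dpkg-query --show nvidia-l4t-core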

The result of v4l2-ctl --list-formats-ext is below.

ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'MJPG' (Motion-JPEG, compressed)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1024x768
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.008s (120.101 fps)
                Size: Discrete 800x600
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1280x1024
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 320x240
                        Interval: Discrete 0.008s (120.101 fps)
        [1]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.167s (6.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.111s (9.000 fps)
                Size: Discrete 1024x768
                        Interval: Discrete 0.167s (6.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                Size: Discrete 800x600
                        Interval: Discrete 0.050s (20.000 fps)
                Size: Discrete 1280x1024
                        Interval: Discrete 0.167s (6.000 fps)
                Size: Discrete 320x240
                        Interval: Discrete 0.033s (30.000 fps)

I had the same error.
I changed the config file “source1_usb_dec_infer_resnet_int8.txt” as follows.

[sink0]
:
type=2 # default 5
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvdrmvideosink
:
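For context, a minimal sketch of what the edited [sink0] group could look like; type is the only key I changed, the other key names follow the standard deepstream-app sink group, and the values here are illustrative, so check them against your own file:

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0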


@muller

Thank you!
As you advised, when I changed the type from 5 (nvdrmvideosink) to 2 (EglSink), the image from the USB camera was displayed on the screen!

I checked based on your advice and found the following page.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#sink-group

Setting type=2 seems to be deprecated.
I would like to know the reason why type=5 doesn’t work.

I am deeply grateful to @muller.

I suddenly thought: does the image format from the USB camera need to be H.264?
The cameras I use support MJPEG or YUY2 only.

No.
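The sample captures from the camera with v4l2src (you can see src_elem/GstV4l2Src in your log), so an H.264 camera is not required. As a rough sketch, the relevant [source0] group in a deepstream-app config looks like the following; the values here are illustrative assumptions, so match them to a mode your camera actually reports:

[source0]
enable=1
# type 1 = CameraV4L2 (USB camera)
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0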

For nvdrmvideosink, you need to disable gdm and enable the drm driver first. See Accelerated GStreamer — Jetson Linux Developer Guide documentation (nvidia.com)

@Fiona.Chen

Thank you!

I entered the following commands in order.
However, after the “sudo systemctl stop gdm” command, the display connected to the JETSON-AGX-ORIN-DEV-KIT went blank.
Do I need to SSH in from another computer and enter the remaining commands?

$ sudo systemctl stop gdm
$ sudo loginctl terminate-seat seat0

For Jetson Orin, use:
$ sudo modprobe nvidia-drm modeset=1

Yes

@Fiona.Chen

I will try!
…
I tried it, but I ran into a new problem.

$ sudo systemctl stop gdm
$ sudo loginctl terminate-seat seat0
$ sudo modprobe nvidia-drm modeset=1

What is the reason for this?
I don’t want to give up until this works.

echo@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app$ deepstream-app -c source1_usb_dec_infer_resnet_int8.txt
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:07.401085367  4391 0xffff2c0022c0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:07.614119415  4391 0xffff2c0022c0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
0:00:07.660844974  4391 0xffff2c0022c0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

** INFO: <bus_callback:180>: Pipeline running

Failed to set plane 
Failed to display frame buffer
0:00:08.202352890  4391 0xaaaac167b5e0 WARN                 nvinfer gstnvinfer.cpp:2300:gst_nvinfer_output_loop:<primary_gie> error: Internal data stream error.
0:00:08.202430303  4391 0xaaaac167b5e0 WARN                 nvinfer gstnvinfer.cpp:2300:gst_nvinfer_output_loop:<primary_gie> error: streaming stopped, reason error (-5)
ERROR from primary_gie: Internal data stream error.
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2300): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
streaming stopped, reason error (-5)
Quitting
ERROR from sink_bin_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstQueue:sink_bin_queue:
streaming stopped, reason error (-5)
ERROR from src_elem: Internal data stream error.
Debug info: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstV4l2Src:src_elem:
streaming stopped, reason error (-5)
ERROR from primary_gie_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstQueue:primary_gie_queue:
streaming stopped, reason error (-5)
App run failed
echo@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app$ 

Hi

I was able to run inference from the USB camera using the script on the web page below.
However, the results are inferior to YOLO…
