Xavier AGX deepstream5.1 sample app : no camera available error

Hello.

I’m trying to test the DeepStream sample app with 4 cameras (e-con Systems e-CAM130A_CUXVR_QUAD) on our Xavier AGX.

I’m using JetPack 4.5.1.

I modified source1_csi_dec_infer_resnet_int8.txt of deepstream-5.1/samples/configs/deepstream-app.

The config after modification is below. I just added the extra sources and changed the numbers.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1920
height=1080

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-csi-sensor-id=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-csi-sensor-id=1

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-csi-sensor-id=2

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-csi-sensor-id=3

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

And the Xavier does recognize the cameras:

$ ls /dev/vi*
/dev/video0 /dev/video1 /dev/video2 /dev/video3

$ dmesg | grep -i sensor
[ 8.893491] SENSOR ID=0x0265
[ 9.026729] ar1335 30-0042: Detected ar1335 sensor
[ 11.275624] SENSOR ID=0x0265
[ 11.390509] ar1335 32-0042: Detected ar1335 sensor
[ 13.639013] SENSOR ID=0x0265
[ 13.745773] ar1335 34-0042: Detected ar1335 sensor
[ 15.995029] SENSOR ID=0x0265
[ 16.101364] ar1335 35-0042: Detected ar1335 sensor

But the sample app does not work, and neither does the unmodified original.

$ deepstream-app -c oml_csi_test.txt

(deepstream-app:13745): GLib-GObject-WARNING **: 14:21:33.106: g_object_set_is_valid_property: object class ‘GstNvArgusCameraSrc’ has no property named ‘maxperf’
(deepstream-app:13745): GLib-GObject-WARNING **: 14:21:33.110: g_object_set_is_valid_property: object class ‘GstNvArgusCameraSrc’ has no property named ‘maxperf’
(deepstream-app:13745): GLib-GObject-WARNING **: 14:21:33.112: g_object_set_is_valid_property: object class ‘GstNvArgusCameraSrc’ has no property named ‘maxperf’
(deepstream-app:13745): GLib-GObject-WARNING **: 14:21:33.113: g_object_set_is_valid_property: object class ‘GstNvArgusCameraSrc’ has no property named ‘maxperf’
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:01.888819345 13745 0x3d60e060 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:01.889172995 13745 0x3d60e060 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:01.889419984 13745 0x3d60e060 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox,
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:16.562224388 13745 0x3d60e060 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:16.577461607 13745 0x3d60e060 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
  To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg)
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready
** INFO: <bus_callback:167>: Pipeline running
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:645 No cameras available
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:645 No cameras available
(deepstream-app:13745): GStreamer-CRITICAL **: 14:21:49.667: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:645 No cameras available
(deepstream-app:13745): GStreamer-CRITICAL **: 14:21:49.667: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
(deepstream-app:13745): GStreamer-CRITICAL **: 14:21:49.667: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:645 No cameras available
(deepstream-app:13745): GStreamer-CRITICAL **: 14:21:49.670: gst_mini_object_set_qdata: assertion ‘object != NULL’ failed
** INFO: <bus_callback:204>: Received EOS. Exiting …
Quitting
App run successful

Please give us some advice. Thanks.

Hi,
Please confirm you can launch the cameras through nvarguscamerasrc first:

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvoverlaysink

And with sensor-id=1, 2, 3.
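A quick way to step through all four sensors in one go (a sketch of mine; num-buffers and fakesink are substitutions so each run terminates on its own instead of needing Ctrl-C):

```shell
# Try each Argus sensor index in turn; num-buffers makes each run exit by itself.
for i in 0 1 2 3; do
  echo "--- sensor-id=$i ---"
  gst-launch-1.0 nvarguscamerasrc sensor-id=$i num-buffers=60 ! fakesink
done
```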

Hello, DaneLLL.

Thank you for your quick reply.

The cameras didn’t work through nvarguscamerasrc.

Here is the result of the command.

$ gst-launch-1.0 nvarguscamerasrc sensor-id=3 ! nvoverlaysink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:645 No cameras available
Caught SIGSEGV
Got EOS from element “pipeline0”.
Execution ended after 0:00:00.077103199
Setting pipeline to PAUSED …
Setting pipeline to READY …
#0 0x0000007fa17168f0 in syscall () at /lib/aarch64-linux-gnu/libc.so.6
#1 0x0000007fa1866fa0 in () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#2 0x0000007f94009740 in ()
Spinning. Please run ‘gdb gst-launch-1.0 12423’ to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

It’s the same for the other camera sensors (0, 1, 2).

But if we run e-con Systems’ ecam_tk1_guvcview, the cameras work well, as you can see below.

$ ecam_tk1_guvcview
ecam_tk1_guvcview 1.7.2-g5d361af
file guvcview_video.mkv has extension type 1
file guvcview_image.jpg has extension type 0
no codec detected for H264
no codec detected for VP80
no codec detected for theo
no codec detected for MP3 - (lavc)
file guvcview_image.jpg has extension type 0
Video file suffix detected: 0
Image file suffix detected: 0
video device: /dev/video0
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev0
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev1
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev2
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev3
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev4
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev5
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev6
VIDIOC_QUERYCAP error: Inappropriate ioctl for device
couldn’t query device /dev/v4l-subdev7
Unable to find parent usb device.Unable to find parent usb device.Unable to find parent usb device.Unable to find parent usb device.vid:0061
pid:0000
driver:tegra-video
Init. vi-output, ar1335 30-0042 (location: platform:15c10000.vi:0)
{ pixelformat = ‘UYVY’, description = ‘UYVY 4:2:2’ }
{ discrete: width = 640, height = 480 }
Time interval between frame: 1/100,
{ discrete: width = 1280, height = 720 }
Time interval between frame: 1/72,
{ discrete: width = 1920, height = 1080 }
Time interval between frame: 1/72,
{ discrete: width = 3840, height = 2160 }
Time interval between frame: 1/30,
{ discrete: width = 4096, height = 2160 }
Time interval between frame: 1/28,
{ discrete: width = 4192, height = 3120 }
Time interval between frame: 1/19,
{ pixelformat = ‘NV16’, description = ‘Y/CbCr 4:2:2’ }
{ not supported - request format(909203022) support at http://guvcview.sourceforge.net }
checking muxed H264 format support
device doesn’t seem to support uvc H264 (0)
checking format: UYVY
fps is set to 1/100
drawing controls
control 0 [0x00980900] = 0
control 1 [0x00980901] = 4
control 2 [0x00980902] = 18
control 3 [0x0098090c] = 1
control 4 [0x00980910] = 220
control 5 [0x00980913] = 6
control 6 [0x00980914] = 0
control 7 [0x00980915] = 0
control 8 [0x0098091a] = 5000
control 9 [0x0098091b] = 16
control 0 [0x009a0901] = 0
control 1 [0x009a0902] = 333
control 2 [0x009a0908] = 0
control 3 [0x009a0909] = 0
control 4 [0x009a090d] = 100
control 5 [0x009a0924] = 8
control 6 [0x009a0926] = 32896
control 7 [0x009a0928] = 0
control 8 [0x009a092a] = 1
control 9 [0x009a0931] = 32000
control type: 0x00000006 not supported
control type: 0x00000006 not supported
fps is set to 1/100
Checking video mode 640x480@32bpp : OK
plane 0: pitch=1280

Hi,
Looks like you should launch with v4l2src. Please refer to the steps in
Jetson Nano FAQ
[Q: I have a USB camera. How can I launch it on Jetson Nano?]

And check if you can launch the cameras.

Hello, DaneLLL.

I got these errors while following the steps.

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=60/1 ! nvvidconv ! ‘video/x-raw(memory:NVMM),format=NV12’ ! nvoverlaysink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.001088600
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=100/1 ! nvvidconv ! ‘video/x-raw(memory:NVMM),format=NV12’ ! nvoverlaysink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.000562397
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

And the next command gives a syntax error on my AGX because of the ‘(’:

$ gst-launch-1.0 nvv4l2camerasrc device=/dev/video1 ! video/x-raw(memory:NVMM),format=UYVY,width=640,height=480,framerate=30/1 ! nvvidconv ! ‘video/x-raw(memory:NVMM),format=NV12’ ! nvoverlaysink
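The syntax error comes from the shell trying to interpret the unquoted parentheses in video/x-raw(memory:NVMM); quoting each caps string avoids it. A sketch using the same device and caps as the command above:

```shell
# Quote every caps string containing (memory:NVMM) so the shell
# does not parse the parentheses itself.
gst-launch-1.0 nvv4l2camerasrc device=/dev/video1 ! \
  'video/x-raw(memory:NVMM),format=UYVY,width=640,height=480,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink
```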

Hi,
It looks like the camera supports YUYV 1080p60. Please try with fakesink:

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=60/1 ! fakesink

Hi,
It also gives this error.

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=60/1 ! fakesink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.000434262
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

Hi,
Please make sure 1080p60 is listed in v4l2-ctl --list-formats-ext. Seems like it is not supported.

Hi, DaneLLL
Here is the result. I think it supports 1080p 60 fps.

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT

	Index       : 0
	Type        : Video Capture
	Pixel Format: ‘UYVY’
	Name        : UYVY 4:2:2
		Size: Discrete 640x480
			Interval: Discrete 0.010s (100.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.014s (72.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.014s (72.000 fps)
		Size: Discrete 3840x2160
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 4096x2160
			Interval: Discrete 0.036s (28.000 fps)
		Size: Discrete 4192x3120
			Interval: Discrete 0.053s (19.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: ‘NV16’
	Name        : Y/CbCr 4:2:2
		Size: Discrete 640x480
			Interval: Discrete 0.010s (100.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.014s (72.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.014s (72.000 fps)
		Size: Discrete 3840x2160
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 4096x2160
			Interval: Discrete 0.036s (28.000 fps)
		Size: Discrete 4192x3120
			Interval: Discrete 0.053s (19.000 fps)

Hi,
Please try

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=72/1 ! fakesink

I tried 72 fps in the sample app config, but the same ‘No cameras available’ error message was printed.
So now I’m flashing JetPack 4.5.1 again.

I tried the command after the flash was done.
But it doesn’t seem to do anything and stays like this for more than 2 hours.

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=72/1 ! fakesink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
^Chandling interrupt.
Interrupt: Stopping pipeline …
Execution ended after 0:14:14.789651444
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

$ gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=72/1 ! fakesink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock

Hi,
Looks like it is running. Please run this command and check if fps is printed:

$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=72/1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0

Hi, DaneLLL.
Yes, the fps was printed.

$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080,framerate=72/1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = false
Setting pipeline to PLAYING …
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)72/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)72/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)72/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)72/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)72/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)72/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = false
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 15, dropped: 0, current: 29.22, average: 29.22
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 30, dropped: 0, current: 29.44, average: 29.33
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 45, dropped: 0, current: 29.44, average: 29.37
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 60, dropped: 0, current: 29.44, average: 29.38
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 75, dropped: 0, current: 29.44, average: 29.40
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 90, dropped: 0, current: 29.44, average: 29.40
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 105, dropped: 0, current: 29.44, average: 29.41
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 120, dropped: 0, current: 29.44, average: 29.41
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 135, dropped: 0, current: 29.44, average: 29.41
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 150, dropped: 0, current: 29.44, average: 29.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 165, dropped: 0, current: 29.44, average: 29.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 180, dropped: 0, current: 29.44, average: 29.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 195, dropped: 0, current: 29.45, average: 29.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 210, dropped: 0, current: 29.44, average: 29.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 225, dropped: 0, current: 29.44, average: 29.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 240, dropped: 0, current: 29.44, average: 29.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 255, dropped: 0, current: 29.45, average: 29.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 270, dropped: 0, current: 29.45, average: 29.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 285, dropped: 0, current: 29.44, average: 29.43
^Chandling interrupt.
Interrupt: Stopping pipeline …
Execution ended after 0:00:10.392989599
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

Hi,
Please modify [source0] in source1_usb_dec_infer_resnet_int8.txt:

camera-width=1920
camera-height=1080
camera-fps-n=72
camera-fps-d=1

And run it in deepstream-app. The setting fits the source and should work.
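Put together, the [source0] section for this camera would look roughly like the sketch below. type=1 selects CameraV4L2 per the comment in the sample configs, and camera-v4l2-dev-node is the key the USB sample uses to pick the /dev/video node; verify the exact key names against your copy of source1_usb_dec_infer_resnet_int8.txt.

```ini
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=1
camera-width=1920
camera-height=1080
camera-fps-n=72
camera-fps-d=1
# index N selects /dev/videoN
camera-v4l2-dev-node=0
```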

Hi, DaneLLL

After ‘apt upgrade’, the L4T version was upgraded and the camera driver stopped working, so I flashed it again.

And without running any of the various gst-launch-1.0 commands mentioned above,
I just changed the resolution in the source1_usb~ file to 1080p as you suggested and ran it.

Then there was no camera-available error and it worked well.
However, I still get the ‘No cameras available’ error when I run the source1_csi~ file.

Can you tell me the difference between these two files?
I wonder why it doesn’t work when set up as a CSI camera, but only when set to CameraV4L2.

Additionally, if there is a way to mirror the camera output left-right in this sample app, please let me know.

Regards

Hi,
For CSI cameras the formats are generally raw formats such as RG10. These kinds of cameras are launched through nvarguscamerasrc. Your camera is a YUV sensor and it should be launched through v4l2src.
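As an illustration of the two launch paths (a sketch; the v4l2src caps are taken from this thread, while the nvarguscamerasrc caps are typical values and may differ per sensor):

```shell
# Raw Bayer CSI sensor: debayered and processed by the Argus ISP.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080' ! nvoverlaysink

# YUV sensor (this camera): captured directly over V4L2, bypassing Argus.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,format=UYVY,width=1920,height=1080,framerate=72/1 ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink
```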

Hi, DaneLLL

So the source1_csi~ sample config was written expecting a camera with a raw (RG10) sensor, but my camera is a YUV sensor, so I can’t use the CSI example. Right?

By the way, is there any way to mirror the camera output left and right in this sample app?

Thanks.