Deepstream-test5 for USB camera fails

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson)
• DeepStream Version 6.3
• JetPack Version 5.1
• Issue Type (questions)

I want to run deepstream-test5 with a USB camera. However, when the app runs I get a green tiled window and the same error repeated many times:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:549: => Surface type not supported for transformation NVBUF_MEM_SYSTEM

How could I solve it?

Below are my config file and source CSV:

enable,type,camera-width,camera-height,camera-fps-n,camera-fps-d,camera-v4l2-dev-node,gpu-id,nvbuf-memory-type
1,1,2560,1440,15,1,0,0,4

application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 5
  #gie-kitti-output-dir: streamscl

tiled-display:
  enable: 1
  rows: 2
  columns: 2
  width: 1280
  height: 720
  gpu-id: 0
  #(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
  #(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
  #(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
  #(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
  #(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
  nvbuf-memory-type: 0


source:
  csv-file-path: sources_4.csv

sink0:
  enable: 1
  #Type - 1=FakeSink 2=EglSink 3=File
  type: 2
  sync: 1
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0

sink1:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
  type: 6
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM   - Custom schema payload
  msg-conv-payload-type: 0
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  #Provide your msg-broker-conn-str here
  msg-broker-conn-str: <host>;<port>;<topic>
  topic: <topic>
  #Optional:
  #msg-broker-config: ../../deepstream-test4/cfg_kafka.txt

sink2:
  enable: 0
  type: 3
  #1=mp4 2=mkv
  container: 1
  #1=h264 2=h265 3=mpeg4
  ## only SW mpeg4 is supported right now.
  codec: 3
  sync: 1
  bitrate: 2000000
  output-file: out.mp4
  source-id: 0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv :  1
message-converter:
  enable: 0
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM   - Custom schema payload
  msg-conv-payload-type: 0
  # Name of library having custom implementation.
  #msg-conv-msg2p-lib: <val>
  # Id of component in case only selected message to parse.
  #msg-conv-comp-id: <val>

# Configure this group to enable cloud message consumer.
message-consumer0:
  enable: 0
  proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  conn-str: <host>;<port>
  config-file: <broker config file e.g. cfg_kafka.txt>
  subscribe-topic-list: <topic1>;<topic2>;<topicN>
  # Use this option if message has sensor name as id instead of index (0,1,2 etc.).
  #sensor-list-file: dstest5_msgconv_sample_config.txt

osd:
  enable: 1
  gpu-id: 0
  border-width: 1
  text-size: 15
  text-color: 1;1;1;1
  text-bg-color: 0.3;0.3;0.3;1
  font: Arial
  show-clock: 0
  clock-x-offset: 800
  clock-y-offset: 820
  clock-text-size: 12
  clock-color: 1;0;0;0
  nvbuf-memory-type: 0

streammux:
  gpu-id: 0
  ##Boolean property to inform muxer that sources are live
  live-source: 0
  batch-size: 4
  ##time out in usec, to wait after the first buffer is available
  ##to push the batch even if the complete batch is not formed
  batched-push-timeout: 40000
  ## Set muxer output width and height
  width: 1920
  height: 1080
  ##Enable to maintain aspect ratio wrt source, and allow black borders, works
  ##along with width, height properties
  enable-padding: 0
  nvbuf-memory-type: 0
  ## If set to TRUE, system timestamp will be attached as ntp timestamp
  ## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
  # attach-sys-ts-as-ntp: 1

primary-gie:
  enable: 1
  gpu-id: 0
  batch-size: 4
  ## 0=FP32, 1=INT8, 2=FP16 mode
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;1;1;1
  bbox-border-color3: 0;1;0;1
  nvbuf-memory-type: 0
  interval: 0
  gie-unique-id: 1
  model-engine-file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
  labelfile-path: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/labels.txt
  config-file: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_primary.yml
  #infer-raw-output-dir: /opt/nvidia/deepstream/deepstream-6.3/samples/primary_detector_raw_output/

tracker:
  enable: 0
  # For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
  tracker-width: 960
  tracker-height: 544
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  # ll-config-file required to set different tracker types
  # ll-config-file: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_IOU.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvSORT.yml
  ll-config-file: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
  gpu-id: 0
  display-tracking-id: 1

tests:
  file-loop: 0

The CSV file is used to input multiple URIs, such as files or RTSP streams.

If you want to use a USB camera as input, please use the following group as a reference.

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

You can also refer to this FAQ for USB cameras.

I get the same problem even with the txt config.

My config:


[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=2560
camera-height=1440
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/labels.txt
config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=/opt/nvidia/deepstream/deepstream-6.3/samples/primary_detector_raw_output/

[tracker]
enable=0
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
gpu-id=0
display-tracking-id=1

[tests]
file-loop=0

Please refer to the FAQ mentioned above.

The video frames captured by some cameras are mapped to CPU memory, while DeepStream's pipeline is designed around GPU memory. You'd better test with gst-launch first (a minimal check is sketched after this reply).

Many V4L2 camera drivers are not well adapted to the system.

You configured a 2x2 video mosaic; if you have fewer than four inputs, the remaining tiles will be filled with black.
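
A minimal capture-only check along those lines, assuming the camera is /dev/video0 (with -v, gst-launch prints the caps that actually get negotiated):

gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=60 ! fakesink

If this prints caps and exits cleanly, raw frame capture works before any NVMM conversion is involved.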

This gst-launch pipeline works well:

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw ,width=2560,height=1440,framerate=30/1,  format=NV12" ! nvvidconv ! nvegltransform !  nveglglessink

However, even after I change all the nvbuf-memory-type settings to 0 and change the tiler config, I still get a completely green screen and many errors:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:549: => Surface type not supported for transformation NVBUF_MEM_SYSTEM

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:549: => Surface type not supported for transformation NVBUF_MEM_SYSTEM

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:549: => Surface type not supported for transformation NVBUF_MEM_SYSTEM
......

Is there anything I can change?

Below is my new config:


[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=2560
camera-height=1440
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
nvbuf-memory-type=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/labels.txt
config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=/opt/nvidia/deepstream/deepstream-6.3/samples/primary_detector_raw_output/

[tracker]
enable=0
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
gpu-id=0
display-tracking-id=1

[tests]
file-loop=0

Below is my camera information:

mic-711@ubuntu:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 640x360
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
	[1]: 'MJPG' (Motion-JPEG, compressed)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 3840x2160
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 1696x7200
			Interval: Discrete 0.042s (24.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
		Size: Discrete 3840x1920
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
	[2]: 'NV12' (Y/CbCr 4:2:0)
		Size: Discrete 2560x1440
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
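
A driver-level sanity check for the NV12 mode, independent of GStreamer (standard v4l2-ctl options; /dev/video0 assumed):

v4l2-ctl -d /dev/video0 --set-fmt-video=width=2560,height=1440,pixelformat=NV12 --stream-mmap --stream-count=30

If this prints a marker per captured frame, the driver itself delivers NV12; if it stalls or errors, the problem sits below GStreamer/DeepStream.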

This error means that the memory was allocated with malloc, so the GPU cannot access it.

1. Does the command line below run normally?

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw ,width=2560,height=1440,framerate=30/1,  format=NV12" ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! nv3dsink

The inference pipeline can only run successfully if the memory is mapped to the GPU.

The command line you provided only shows that video frame capture was successful.

If it does not run, you can try the YUYV format.

2. Can you dump the pipeline to a dot file after running the following command? It will show the information of all GStreamer elements (the dumped files can be rendered as shown after this list).

export GST_DEBUG_DUMP_DOT_DIR=.

3. Since you are using a Jetson, the gpu-id and nvbuf-memory-type options should have no effect; they are only for discrete GPUs, not the SoC.
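
The dumped .dot files can be rendered with Graphviz (a separate package, not part of DeepStream) to inspect the caps negotiated by every element, for example:

dot -Tpng <dumped-pipeline>.dot -o pipeline.png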

Does that mean I cannot use the NV12 format with DeepStream for this USB camera?

No, it also generates a green screen with errors.

0.00.00.184548028-gst-launch.NULL_READY.dot (7.8 KB)
0.00.00.185822721-gst-launch.READY_PAUSED.dot (7.8 KB)
0.00.00.684613795-gst-launch.PAUSED_PLAYING.dot (6.0 KB)

Yes, I guess the memory in this format is incompatible.

Try the command line below. If it works, you can try changing the width and height in the configuration file to 1920x1080.

 gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=1920, height=1080, framerate=30/1'  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! nv3dsink

This command does not work.

However, the command below does work:

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw, width=1920, height=1080,  format=(string)YUY2, framerate=(fraction)30/1" ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nv3dsink

Even weirder, the command below with NV12 also works:

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw ,width=2560,height=1440,framerate=30/1,  format=NV12" ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nv3dsink

In DeepStream, nvvideoconvert is used on both dGPU and Jetson, while nvvidconv is only used by Jetson GStreamer apps.

1. If you replace nvvidconv with nvvideoconvert, does it still work?

If so, I think adding the following item to the source group will make it work fine (a sketch is shown after this reply):

video-format=I420
2. Otherwise, try the following command line. It changes the compute-hw mode from VIC to GPU.

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=1920, height=1080, framerate=30/1'  ! nvvideoconvert  compute-hw=1! 'video/x-raw(memory:NVMM),format=NV12' ! nv3dsink

If it is this second method that works properly, it may be necessary to modify the code.
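
For reference, a sketch of the suggested source-group change; everything except the last two lines mirrors the existing [source0] group, and video-format is the key proposed in the reply above:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=2560
camera-height=1440
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
# key suggested in the reply above
video-format=I420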

1. If you replace nvvidconv with nvvideoconvert, does it still work?

No, it does not work. Both commands below show a green screen:

gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw, width=1920, height=1080,  format=(string)YUY2, framerate=(fraction)30/1" ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nv3dsink


gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw ,width=2560,height=1440,framerate=30/1,  format=NV12" ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nv3dsink
2. Otherwise, try the following command line. It changes the compute-hw mode from VIC to GPU.

That does not work either; it gives an error:

mic-711@ubuntu:~$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=1920, height=1080, framerate=30/1'  ! nvvideoconvert  compute-hw=1! 'video/x-raw(memory:NVMM),format=NV12' ! nv3dsink

(gst-launch-1.0:14515): GStreamer-CRITICAL **: 10:36:48.365: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
WARNING: erroneous pipeline: could not set property "compute-hw" in element "nvvideoconvert0" to "1!"
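
Note that this WARNING is a gst-launch parsing problem rather than a pipeline failure: there is no space between compute-hw=1 and the following !, so GStreamer tries to set compute-hw to the string "1!". The corrected form of the suggested command would be:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=1920, height=1080, framerate=30/1' ! nvvideoconvert compute-hw=1 ! 'video/x-raw(memory:NVMM),format=NV12' ! nv3dsink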

I modified the code in /opt/nvidia/deepstream/deepstream-6.3/sources/apps/apps-common/includes/deepstream_config.h at line 59

from

```c
#define NVDS_ELEM_VIDEO_CONV "nvvideoconvert"
```

to

```c
#define NVDS_ELEM_VIDEO_CONV "nvvidconv"
```

and then deepstream-test5 works!
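
For the change to take effect, the sample app presumably has to be rebuilt from sources; a minimal sketch, assuming the default install path and JetPack 5.x's CUDA 11.4:

cd /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test5
make CUDA_VER=11.4   # sudo may be needed depending on file ownership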
Is this a bug?

You can use nvvidconv, but this element is not part of DeepStream. It's usually not a problem.

I guess the JetPack version does not match the DeepStream version.
Can you confirm the JetPack version?
Here is the compatibility table.
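
One way to confirm the versions on the device (standard commands; the nvidia-jetpack package is only present when JetPack components were installed via apt):

cat /etc/nv_tegra_release                            # L4T release, which maps to a JetPack version
apt show nvidia-jetpack 2>/dev/null | grep Version   # JetPack meta-package version, if installed
deepstream-app --version-all                         # DeepStream version and dependencies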

I installed DeepStream from NVIDIA SDK Manager without flashing the OS. It seems SDK Manager did not check my OS version and installed DeepStream 6.3 for me. Now I have switched to DeepStream 6.2 and things work.
