DeepStream: saving images on a Jetson Orin Nano in headless mode

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin Nano Developer Kit
• DeepStream Version: 6.3
• JetPack Version: 5.1.3 (L4T R35.6.0)
• TensorRT Version: 8.5.2

ISSUE: I am running a Jetson Orin Nano in headless mode and want to save images, so I compiled the sample app deepstream-transfer-learning-app with `sudo make` and `make install`. When I run the resulting deepstream-transfer-learning-app with a YOLO model, it gives this error:

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:232>: Pipeline running

JPEG parameter struct mismatch: library thinks size is 728, caller expects 720
GPUassert: driver shutting down /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/src/modules/NvMultiObjectTracker/context.cpp 196
Segmentation fault (core dumped)

The crash happens when this code initializes the object-encoder context:

obj_ctx_handle_ = nvds_obj_enc_create_context(gpu_id_);

deepstream-transfer-learning-app/image_meta_consumer.cpp
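A note on the "JPEG parameter struct mismatch" line: libjpeg aborts with exactly this message when the headers an app was compiled against belong to a different libjpeg build than the one loaded at run time, which can happen on Jetson when a stock libjpeg-turbo shadows the Tegra-accelerated copy. A quick way to investigate is to list the candidates the dynamic linker sees (the binary path in the commented line is an assumption):

```shell
# List every libjpeg the dynamic linker knows about; on a Jetson you may see
# both the Tegra-accelerated copy and a stock libjpeg-turbo side by side.
ldconfig -p | grep -i libjpeg > jpeg_libs.txt || true
cat jpeg_libs.txt
# Then check which copy the crashing app actually resolves, e.g.:
#   ldd ./deepstream-transfer-learning-app | grep -i jpeg
```

If two different libjpeg builds show up, the one resolved by the app at run time is the one that matters.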

This is my config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///home/drivelensai/DeepStream-Yolo/videos/test.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=1
sync=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
# File sink
type=4
# MP4 container
container=2
# H.264 encoding
codec=1
bitrate=2000000
# Use hardware encoder
enc-type=1
output-file=/home/drivelensai/DeepStream-Yolo/output.mkv
# Set to 1 for real-time sync, 0 for faster processing
sync=0
source-id=0

[sink2]
enable=1
sync=0
# source-id=0
# msg-conv-broker-on-demux=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=config_msgconv.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=1
#(0): Create payload using NvdsEventMsgMeta
#(1): New Api to create payload using NvDsFrameMeta
msg-conv-msg2p-new-api=1
#Frame interval at which payload is generated
msg-conv-frame-interval=1
msg-conv-msg2p-lib=sources/libs/nvmsgconv/libnvds_msgconv.so
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092
topic=orin_test
#Optional:
msg-broker-config=config_nvmsgbroker.txt
#(0) Use message adapter library api's
#(1) Use new msgbroker library api's
new-api=1

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[img-save]
enable=1
gpu-id=0
save-img-full-frame=1
save-img-cropped-obj=0
frame-to-skip-rules-path=./test_frames/test.csv
output-folder-path=./test_frames

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8.txt

[tracker]
enable=1
gpu-id=0
# For NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
# ll-config-file=config_tracker_NvDCF_perf.yml
ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_DeepSORT.yml
display-tracking-id=1

[secondary-gie]
enable=1
model-engine-file=weights/Secondary/traffic_sign_classification_fp16.engine
config-file=config_infer_secondary_yolov11.txt
gie-unique-id=2
operate-on-gie-id=1

[tests]
file-loop=0
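One aside on the [img-save] group above: the sample supports a few more keys for filtering which detections get saved. The key names below come from the deepstream-transfer-learning-app reference config; the values are only illustrative:

```
[img-save]
enable=1
gpu-id=0
output-folder-path=./test_frames
save-img-full-frame=1
save-img-cropped-obj=0
# Only keep detections within this confidence range (illustrative values)
min-confidence=0.5
max-confidence=1.0
# Ignore objects smaller than this, in pixels
min-box-width=32
min-box-height=32
# Save at most one image per interval, in seconds
second-to-skip-interval=600
```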

I found similar issues that look like mine, but I don't understand how they solved them.

What I have tried:

Could you try running the deepstream-image-meta-test sample to save an image first?

:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-image-meta-test$ deepstream-image-meta-test 0 inferserver file://…/…/…/samples/streams/sample_720p.mp4

(deepstream-image-meta-test:152234): GLib-GObject-WARNING **: 12:21:41.601: g_object_set_is_valid_property: object class 'GstNv3dSink' has no property named 'gpu-id'
JPEG parameter struct mismatch: library thinks size is 728, caller expects 720

still giving the same error

I reinstalled JetPack (5.1.5) and it still gives the same error, without installing anything new.

:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-image-meta-test$ deepstream-image-meta-test 0 inferserver file://…/…/…/…/samples/streams/sample_720p.mp4

(deepstream-image-meta-test:3913): GLib-GObject-WARNING **: 14:14:16.415: g_object_set_is_valid_property: object class 'GstNv3dSink' has no property named 'gpu-id'
JPEG parameter struct mismatch: library thinks size is 728, caller expects 720

If you are running headless, please do not use the GstNv3dSink plugin. You can try changing it to a fakesink.

Still the same issue. I edited the source of deepstream_image_meta_test.c to use GstFakeSink instead of GstNv3dSink, rebuilt it, and ran the new binary:

drivelensai@ubuntu:~/projects/detection-mobile/deepstream$ ./bin/deepstream-image-meta-test 0 ./videos/test.mp4
Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1319>: failed

(deepstream-image-meta-test:74886): GLib-GObject-WARNING **: 14:02:46.498: g_object_set_is_valid_property: object class 'GstFakeSink' has no property named 'gpu-id'
JPEG parameter struct mismatch: library thinks size is 728, caller expects 720
drivelensai@ubuntu:~/projects/detection-mobile/deepstream$

Please run it from the corresponding directory: sources/apps/sample_apps/deepstream-image-meta-test (the app loads its config file with a relative path, which explains the "Failed to load config file" error).

I switched to the Python examples; with those I can access the images. Thanks for your help!
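For anyone following the Python route: the usual pattern is a pad probe that pulls the frame out as a NumPy array with pyds and writes it to disk. The sketch below shows only the saving step; the pyds call is kept in a comment because it needs the device, and the frame shape and filename are illustrative:

```python
import numpy as np

# Inside a real pad probe you would obtain the frame (RGBA, H x W x 4) with:
#   frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
# A dummy frame stands in here so the saving step runs anywhere.
frame = np.zeros((720, 1280, 4), dtype=np.uint8)

rgb = frame[:, :, :3]  # drop the alpha channel
# Writing a binary PPM avoids needing a JPEG encoder in this sketch.
with open("frame_0.ppm", "wb") as f:
    f.write(b"P6\n1280 720\n255\n")
    f.write(rgb.tobytes())
```

On the device you would typically convert RGBA to BGR and hand it to an encoder instead of writing raw PPM.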
