Deepstream-test5 error when saving image

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 7.2.2-1+cuda11.1
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type( questions, new requirements, bugs): bugs

Using deepstream-test5, I can save images with the image-encode API borrowed from deepstream-transfer-learning. However, I only receive a black image. I checked the file size: the saved image is only 34 kB, while with deepstream-transfer-learning it is up to 134 kB. I added a save_image helper (shown below) to deepstream-test5 and call it from the

bbox_generated_probe_after_analytics probe

gboolean save_image (gchar *path, NvBufSurface *ip_surf, NvDsObjectMeta *obj_meta,
    NvDsFrameMeta *frame_meta, unsigned obj_counter) {
  NvDsObjEncUsrArgs userData = {0};
  if (strlen (path) >= sizeof (userData.fileNameImg)) {
    g_print ("Path to save image exceeds the allowed size\n");
    return FALSE;
  }
  userData.saveImg = TRUE;
  userData.attachUsrMeta = FALSE;
  g_stpcpy (userData.fileNameImg, path);
  userData.fileNameImg[strlen (path)] = '\0';
  userData.objNum = obj_counter++;
  init_image_save_library_on_first_time (g_img_meta_consumer);
  nvds_obj_enc_process (g_img_meta_consumer->obj_ctx_handle_,
      &userData, ip_surf, obj_meta, frame_meta);
  return TRUE;
}
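
For reference, this is roughly how I invoke save_image from bbox_generated_probe_after_analytics. This is only a sketch: the object loop and buffer mapping are adapted from the transfer-learning sample, the file-name format is just an illustration, and the nvds_obj_enc_finish call at the end is what flushes the JPEGs to disk.

static void bbox_generated_probe_after_analytics (AppCtx *appCtx, GstBuffer *buf,
    NvDsBatchMeta *batch_meta, guint index) {
  /* Get the NvBufSurface backing this batched buffer. */
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
    g_print ("input buffer mapinfo failed\n");
    return;
  }
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  gst_buffer_unmap (buf, &inmap);

  unsigned obj_counter = 0;
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      gchar path[1024];
      /* Illustrative file-name layout, not the sample's naming scheme. */
      g_snprintf (path, sizeof (path), "./output/src%u_frame%u_obj%u.jpg",
          frame_meta->source_id, frame_meta->frame_num, obj_counter);
      save_image (path, ip_surf, obj_meta, frame_meta, obj_counter++);
    }
  }
  /* Flush the asynchronous encoder so the files are actually written. */
  nvds_obj_enc_finish (g_img_meta_consumer->obj_ctx_handle_);
}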

Update

Update 2:

  • After creating the img_save_buf_prob probe in the create_common_elements function in deepstream-app.c, I ran both samples, deepstream-test5 and deepstream-transfer-learning, but only deepstream-transfer-learning returns an image; deepstream-test5 returns only a black image. Can you help me figure out what is different between these two samples? As far as I know, they both use deepstream-app to create the pipeline.

Hi,
For a quick run, I just referred to the image meta test sample, which saves the cropped objects to files; the transfer-learning sample has other logic. Please see the attached files, which work on my side: deepstream_test5_app_main.c (52.9 KB) Makefile (2.7 KB)

Hi amycao,
Thank you for your reply.
I have tried your file, but it is still not working on my side. I think my problem can be narrowed down with the questions below; please help me clarify them.

  • Can I ask where you call the nvds_obj_enc_create_context() API? In my code I added a new NvDsObjEncCtxHandle obj_ctx_handle; member to the _AppCtx struct and then create the context in create_pipeline like this:
  appCtx->all_bbox_generated_cb = all_bbox_generated_cb;
  appCtx->bbox_generated_post_analytics_cb = bbox_generated_post_analytics_cb;
  appCtx->overlay_graphics_cb = overlay_graphics_cb;

  //TODO
  NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context();
  if (!obj_ctx_handle)
  {
    NVGSTDS_ERR_MSG_V ("Failed to create object encode context");
    goto done;
  }
  appCtx->obj_ctx_handle = obj_ctx_handle;
  //DONE
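
For completeness, I also plan to release the context at shutdown; a minimal sketch of what I mean (placing it in destroy_pipeline is my assumption):

  /* Sketch: matching teardown for the context created above.
   * Where exactly to put this (e.g. destroy_pipeline) is an assumption. */
  if (appCtx->obj_ctx_handle) {
    nvds_obj_enc_destroy_context (appCtx->obj_ctx_handle);
    appCtx->obj_ctx_handle = NULL;
  }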

Next, I added a probe in create_processing_instance, around line 844:

  NVGSTDS_BIN_ADD_GHOST_PAD (instance_bin->bin, last_elem, "sink");
  if (config->osd_config.enable) {
    NVGSTDS_ELEM_ADD_PROBE (instance_bin->all_bbox_buffer_probe_id,
        instance_bin->osd_bin.nvosd, "sink",
        gie_processing_done_buf_prob, GST_PAD_PROBE_TYPE_BUFFER, instance_bin);
  } else {
    NVGSTDS_ELEM_ADD_PROBE (instance_bin->all_bbox_buffer_probe_id,
        instance_bin->sink_bin.bin, "sink",
        gie_processing_done_buf_prob, GST_PAD_PROBE_TYPE_BUFFER, instance_bin);
  }
  if (config->image_save_config.enable) {
    NVGSTDS_ELEM_ADD_PROBE(appCtx->pipeline.img_save_buffer_probe_id,
                           instance_bin->sink_bin.bin, "sink",
                           img_save_buf_prob, 
                           GST_PAD_PROBE_TYPE_BUFFER,appCtx);
  }
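
For context, my img_save_buf_prob is essentially a thin buffer-probe wrapper. Sketch only; the per-object saving is the same loop as in the bbox_generated_probe_after_analytics sketch above.

static GstPadProbeReturn img_save_buf_prob (GstPad *pad, GstPadProbeInfo *info,
    gpointer u_data) {
  AppCtx *appCtx = (AppCtx *) u_data;
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  /* Map the buffer to get the NvBufSurface for the batch. */
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map (buf, &inmap, GST_MAP_READ))
    return GST_PAD_PROBE_OK;
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  gst_buffer_unmap (buf, &inmap);

  /* ... iterate batch_meta frame/object lists and call save_image()
   * exactly as in the earlier probe sketch ... */

  nvds_obj_enc_finish (appCtx->obj_ctx_handle);
  return GST_PAD_PROBE_OK;
}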

Then I ran it with your file and got this image result.

  • This is my pipeline when I run your file; maybe you can check it as well.
    [I updated the pipeline graph in a comment below.]

Update

  • This is my config file; the problem may come from here too.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=4
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=4
gpu-id=1
nvbuf-memory-type=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=1
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;EventTopic
topic=EventTopic
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=0
gpu-id=1
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=1
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
config-file=test5_config_file.txt
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
#config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=0
tracker-width=480
tracker-height=272
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=1
#enable-batch-process applicable to DCF only
enable-batch-process=1

[tests]
file-loop=0

[img-save]
enable=1
output-folder-path=./output/
save-img-cropped-obj=0
save-img-full-frame=1
frame-to-skip-rules-path=capture_time_rules.csv
second-to-skip-interval=600
min-confidence=0.2
max-confidence=0.9
min-box-width=5
min-box-height=5

The camera uses the H.264 encoding method.

  • I also ran deepstream-transfer-learning, and this is the image result I got:
    13_0_93_Bicycle_165x220
    As you can see, the file names from deepstream-test5 do not contain the object name, while those from deepstream-transfer-learning do. So my guess is that in deepstream-test5 the object is not detected yet, or detection fails because the frame is not complete.
  • This is my pipeline graph when running deepstream-transfer-learning.
    [I will update the graph once I convert it to an image file.]

Hi,
Sorry, I missed a file: deepstream_app.c (50.4 KB)
Please remove your other changes and use just these two files to run. The Makefile should be fine, since you can already run the app. Please let me know if there are any issues.

Hi amycao,
Thank you very much for your reply. After running with your code, I found that the problem does not come from the code; it comes from which GPU is selected. My machine has two RTX 3080 GPUs. deepstream-test5 gives me a normal image when I use the default GPU (gpu-id=0), but when I select the second GPU, it gives me a black image again. Is there a problem with my GPU or with my config file itself? Below is my config file for deepstream-test5.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=4
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=4
gpu-id=1
nvbuf-memory-type=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=1
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;EventTopic
topic=EventTopic
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=0
gpu-id=1
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=1
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
config-file=test5_config_file.txt
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
#config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=0
tracker-width=480
tracker-height=272
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=1
#enable-batch-process applicable to DCF only
enable-batch-process=1

[tests]
file-loop=0

[img-save]
enable=1
output-folder-path=./output/
save-img-cropped-obj=0
save-img-full-frame=1
frame-to-skip-rules-path=capture_time_rules.csv
second-to-skip-interval=600
min-confidence=0.2
max-confidence=0.9
min-box-width=5
min-box-height=5
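
One thing I plan to try (just a guess on my side, not something I found in the docs): the JPEG-encode context might be bound to the default CUDA device at creation time, so selecting the pipeline's GPU before nvds_obj_enc_create_context() might matter. A sketch of that experiment:

#include <cuda_runtime_api.h>

/* Experiment sketch: make GPU 1 the current CUDA device before creating the
 * object-encode context, in case the context otherwise binds to GPU 0.
 * This is a hypothesis, not a documented requirement. */
int target_gpu = 1;   /* the GPU the rest of my pipeline runs on */
cudaError_t err = cudaSetDevice (target_gpu);
if (err != cudaSuccess) {
  NVGSTDS_ERR_MSG_V ("cudaSetDevice(%d) failed: %s", target_gpu,
      cudaGetErrorString (err));
  goto done;
}
NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context ();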

We reproduced the issue and are checking internally; we will give feedback when there is progress.

Please let me know when you have an answer. Thank you.

I tried your config and enabled the tracker, and the issue was gone. Please try on your side.

Thank you @Amycao for your reply.
The image is still not saved. I enabled the tracker as well, but it still does not work when the tracker is enabled and the gpu-id is changed from 0 to 1.

Update
This is my current config after changing it following your suggestion, but the saved image is still black. Can you point out what else you changed in the source code, and where?

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=3
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=3


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xx:xx@xxx.xxx.xxx.xxx:xxx/cam0_0
num-sources=1
gpu-id=1
nvbuf-memory-type=3

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xx:xx@xxx.xxx.xxx.xxx:xxx/cam0_0
num-sources=4
gpu-id=1
nvbuf-memory-type=3

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xx:xx@xxx.xxx.xxx.xxx:xxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=3

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xx:xx@xxx.xxx.xxx.xxx:xxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=3

[source4]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xx:xx@xxx.xxx.xxx.xxx:xxx/cam0_0
num-sources=1
gpu-id=1
nvbuf-memory-type=3

[source5]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/20200306_20h30m20s.mp4
num-sources=1
gpu-id=1
nvbuf-memory-type=3

[source6]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/20200306_20h22m06s.mp4
num-sources=1
gpu-id=1
nvbuf-memory-type=3

[source7]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/20200306_19h20m25s.mp4
num-sources=1
gpu-id=1
nvbuf-memory-type=3


[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=1
nvbuf-memory-type=3

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=192.168.100.2;9092;EventTopic
#msg-broker-conn-str=localhost;9092;EventTopic
topic=EventTopic
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=1
type=4
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1
sync=0
bitrate=4000000
source-id=0
rtsp-port=8554
udp-port=5400

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=1
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=3

[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=3
interval=0
gie-unique-id=1
config-file=test5_config_file.txt
#labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
#config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
tracker-width=480
tracker-height=272
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=1
#enable-batch-process applicable to DCF only
enable-batch-process=1

[tests]
file-loop=0

[img-save]
enable=1
output-folder-path=output/
save-img-cropped-obj=0
save-img-full-frame=1
frame-to-skip-rules-path=capture_time_rules.csv
second-to-skip-interval=10
min-confidence=0.5
max-confidence=0.9
min-box-width=5
min-box-height=5

I used this config file: tmp.txt (5.2 KB)
After enabling the tracker in the config, the black saved-image issue was gone. Please check part by part on your side to find which change causes the issue.
[tracker]
enable=1

Thank you @Amycao for your reply. After using your config, I hit the problem below:

0:00:02.669697196 18246 0x560b387ee370 WARN          nvvideoconvert gstnvvideoconvert.c:3000:gst_nvvideoconvert_transform:<sink_sub_bin_transform3> error: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
0:00:02.669739969 18246 0x560b387ee370 WARN          nvvideoconvert gstnvvideoconvert.c:3000:gst_nvvideoconvert_transform:<sink_sub_bin_transform3> error: surface-gpu-id=1,sink_sub_bin_transform3-gpu-id=0
0:00:02.669817478 18246 0x560b387ee370 ERROR         nvvideoconvert gstnvvideoconvert.c:3387:gst_nvvideoconvert_transform: buffer transform failed
ERROR from sink_sub_bin_transform3: Memory Compatibility Error:Input surface gpu-id doesnt match with configured gpu-id for element, please allocate input using unified memory, or use same gpu-ids OR, if same gpu-ids are used ensure appropriate Cuda memories are used
Debug info: gstnvvideoconvert.c(3000): gst_nvvideoconvert_transform (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin3/Gstnvvideoconvert:sink_sub_bin_transform3:
surface-gpu-id=1,sink_sub_bin_transform3-gpu-id=0

So I changed to

nvbuf-memory-type=3, which solved that error.

But I still cannot save the image; only a black image is saved. Also, since I only have 2 GPUs, I set

gpu-id=1 instead of gpu-id=2.

This problem also happens when I make the same change in deepstream-transfer-learning-test; it also does not work when I change it like in your config.
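
To narrow this down, I am adding a quick debug print in the probe to see which GPU and memory type the surface actually carries before it reaches the encoder. Sketch only; the fields are the ones exposed by NvBufSurface in nvbufsurface.h.

/* Debug sketch: log where the batched surface actually lives. */
GstMapInfo inmap = GST_MAP_INFO_INIT;
if (gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  g_print ("surface gpuId=%d memType=%d batchSize=%u numFilled=%u\n",
      ip_surf->gpuId, (int) ip_surf->memType,
      ip_surf->batchSize, ip_surf->numFilled);
  gst_buffer_unmap (buf, &inmap);
}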

Update
This is my config for deepstream-transfer-learning-test

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=1
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;EventTopic
topic=EventTopic
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt



[osd]
enable=0
gpu-id=1
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1


# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_ds_transfer_learning.txt

[tracker]
enable=1
# For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=1
#enable-batch-process and enable-past-frame applicable to DCF only
enable-batch-process=1
enable-past-frame=0
display-tracking-id=1

[tests]
file-loop=0

With this setup, the image is saved as a black image, just like in deepstream-test5. I kept everything in the source code as in the original, so I guess this is not caused by the config or the source code; it may come from a GPU problem. Am I right?

Yes, you need to change the GPU ID accordingly, since I have 3 GPUs in my system.

Just to confirm: you only changed the GPU ID to the other GPU in the configuration, left everything else untouched, and that causes the issue, right? But your GPUs are two RTX 3080s; the same GPU model giving a different result does not make sense.

Hi @Amycao,
I am not sure what happened, but I have already checked everything in my setup. The error only happens when I change the GPU ID, specifically from GPU 0 to GPU 1. I also checked GPU usage with both GPUs; the pipeline is running after the change, but saving images still does not work when I use the second GPU. If you need any more information, I will update here for you.