DeepStream-test5 error when saving images

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 7.2.2-1+cuda11.1
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type( questions, new requirements, bugs): bugs

Using deepstream-test5, I can save images by using some APIs from deepstream-transfer-learning. However, I only receive a black image. Checking the file size, I saw that the image is only 34 kB, while with deepstream-transfer-learning it is up to 134 kB. I added the save_image API below to deepstream-test5 and call it in the

bbox_generated_probe_after_analytics probe

gboolean save_image (gchar *path, NvBufSurface *ip_surf, NvDsObjectMeta *obj_meta,
    NvDsFrameMeta *frame_meta, unsigned obj_counter)
{
  NvDsObjEncUsrArgs userData = { 0 };
  if (strlen (path) >= sizeof (userData.fileNameImg)) {
    g_print ("Image save path exceeds buffer size\n");
    return FALSE;
  }
  userData.saveImg = TRUE;
  userData.attachUsrMeta = FALSE;
  g_stpcpy (userData.fileNameImg, path);
  userData.objNum = obj_counter;
  init_image_save_library_on_first_time (g_img_meta_consumer);
  nvds_obj_enc_process (g_img_meta_consumer->obj_ctx_handle_,
      &userData, ip_surf, obj_meta, frame_meta);
  return TRUE;
}

Update

Update 2:

  • After creating the probe img_save_buf_prob in the create_common_elements function in deepstream-app.c, I ran the two samples, deepstream-test5 and deepstream-transfer-learning, but only deepstream-transfer-learning returns images for me; deepstream-test5 returns only black images. Can you help me figure out what is different between these two samples? As far as I know, they both use deepstream-app to create the pipeline.

Hi,
For a quick run, I just referred to the image meta test sample, which saves the cropped objects to file; the transfer-learning sample has other logic. Please see the attached files; they work on my side. deepstream_test5_app_main.c (52.9 KB) Makefile (2.7 KB)

Hi amycao,
Thank you for your reply.
I have tried your file, but it is also not working on my side. I think my problem can be explained by the questions below; please help me clarify them.

  • Can I ask where you put the nvds_obj_enc_create_context() API? In my code I added a new NvDsObjEncCtxHandle obj_ctx_handle; member to the _AppCtx struct and then call the API in create_pipeline like this:
  appCtx->all_bbox_generated_cb = all_bbox_generated_cb;
  appCtx->bbox_generated_post_analytics_cb = bbox_generated_post_analytics_cb;
  appCtx->overlay_graphics_cb = overlay_graphics_cb;

  //TODO
  NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context();
  if (!obj_ctx_handle)
  {
    NVGSTDS_ERR_MSG_V("Failed to create object encoder context");
    goto done;
  }
  appCtx->obj_ctx_handle = obj_ctx_handle;
  //DONE

Next, I created a probe in create_processing_instance at line 844:

  NVGSTDS_BIN_ADD_GHOST_PAD (instance_bin->bin, last_elem, "sink");
  if (config->osd_config.enable) {
    NVGSTDS_ELEM_ADD_PROBE (instance_bin->all_bbox_buffer_probe_id,
        instance_bin->osd_bin.nvosd, "sink",
        gie_processing_done_buf_prob, GST_PAD_PROBE_TYPE_BUFFER, instance_bin);
  } else {
    NVGSTDS_ELEM_ADD_PROBE (instance_bin->all_bbox_buffer_probe_id,
        instance_bin->sink_bin.bin, "sink",
        gie_processing_done_buf_prob, GST_PAD_PROBE_TYPE_BUFFER, instance_bin);
  }
  if (config->image_save_config.enable) {
    NVGSTDS_ELEM_ADD_PROBE(appCtx->pipeline.img_save_buffer_probe_id,
                           instance_bin->sink_bin.bin, "sink",
                           img_save_buf_prob, 
                           GST_PAD_PROBE_TYPE_BUFFER,appCtx);
  }

Then I ran with your file, and this is the image result I got.

  • This is my pipeline when I run your file; I think you can check from this too.
    [I updated the pipeline in a comment below]

Update

  • This is my config file; the problem could also come from here.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=4
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=4
gpu-id=1
nvbuf-memory-type=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://xxx:xxx@xxx.xxx.xxx.xxx:xxxx/stream1
num-sources=1
gpu-id=1
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=1
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;EventTopic
topic=EventTopic
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=0
gpu-id=1
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=1
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
config-file=test5_config_file.txt
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
#config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=0
tracker-width=480
tracker-height=272
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=1
#enable-batch-process applicable to DCF only
enable-batch-process=1

[tests]
file-loop=0

[img-save]
enable=1
output-folder-path=./output/
save-img-cropped-obj=0
save-img-full-frame=1
frame-to-skip-rules-path=capture_time_rules.csv
second-to-skip-interval=600
min-confidence=0.2
max-confidence=0.9
min-box-width=5
min-box-height=5

The camera uses H.264 encoding.

  • I also ran deepstream-transfer-learning, and this is the image result I got:
    13_0_93_Bicycle_165x220
    As you can see, the file names from deepstream-test5 don't contain the object name, while those from deepstream-transfer-learning do. So my guess for deepstream-test5 is that the object is not detected yet, or detection fails because the frame is not complete.
  • This is my pipeline graph when running deepstream-transfer-learning.
    [I will update the graph once I convert it to an image file]

Hi,
Sorry for the missing file: deepstream_app.c (50.4 KB)
Please remove your other changes and use just these two files to run. The Makefile should be fine, since you can already run the app. Please let me know if you hit any issues.

Hi amycao,
Thank you a lot for your reply. After running with your code, I found out that the problem does not come from the code; it comes from which GPU is selected to run on. My computer has two RTX 3080 GPUs. deepstream-test5 gives me a normal image when I use the default GPU 0, but when I select the second GPU it gives me a black image again. Can I ask whether there is a problem with my GPU or with my config file itself? My config file for deepstream-test5 is the same one I posted above.
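For what it's worth, one thing that stands out for a two-GPU machine is the device placement in the config: every component is pinned to gpu-id=1, while the prebuilt engine in the samples tree is named with a *_gpu0_* suffix. A cheap experiment (a guess, not a confirmed fix) is to keep every stage on one device; a hypothetical fragment showing the two variants:

```
# Variant A - run everything on GPU 0, matching the shipped *_gpu0_* engine:
[streammux]
gpu-id=0
[primary-gie]
gpu-id=0
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine

# Variant B - keep gpu-id=1 throughout, and let the app rebuild the engine
# on that device (if model-engine-file points at a path that does not exist,
# deepstream-app regenerates the engine from config-file on first run).
```

Either way, having streammux, the GIE, and the sinks agree on one gpu-id rules out cross-device buffer handling as a variable.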

We can reproduce the issue and are checking internally; we will give feedback when there is progress.

Please let me know when you have an answer. Thank you.