Failed to run deepstream-segmentation-analytics in deepstream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier AGX
• DeepStream Version 5.1.0
• JetPack Version (valid for Jetson only) 4.5-b129
• TensorRT Version 7.1.3-1
• Issue Type (questions, new requirements, bugs) bugs

I trained a Mask R-CNN model, deployed it on DeepStream, and am trying to output separate black-and-white masks like the pictures below.
(attached images: 0001, 0001 (1))

I’m looking into deepstream-segmentation-analytics but ran into a problem.

I set the image path in usr_input.txt as below and then copied input/0599.jpg to image/ for testing.

batch_size=2
width=512
height=512
stream0=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-analytics/image
stream1=images1
stream2=images2
pro_per_sec=40
no_streams=1
production=1

When I run ./deepstream-segmentation-analytics -c dstest_segmentation_config_industrial.txt -i usr_input.txt, it says it failed to load the config file.

Get CPU profile_start()
Get the line: batch_size=2
Get the line: width=512
Get the line: height=512
Get the line: stream0=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-analytics/image
stream0
Get the line: stream1=images1
stream1
Get the line: stream2=images2
stream2
Get the line: pro_per_sec=40
Get the line: no_streams=1
Get the line: production=1
batchSize = 2, width = 512, height = 512
no_streams = 1, pro_per_sec = 40
production = 1
Get the batchSize = 2
Get the num_sources = 1
Get the infer_config_file  = dstest_segmentation_config_industrial.txt
Get the MUXER_OUTPUT_WIDTH  = 512
Get the MUXER_OUTPUT_HEIGHT  = 512
Get CPU profile_end()
For frame = 1, CPU time accumulated 40.0066

Loading the image file: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-analytics/image/0599.jpg
Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1260>: failed

Now playing: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-analytics/image/0599.jpg,
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:00.235565280 15770   0x5597210930 WARN                 nvinfer gstnvinfer.cpp:769:gst_nvinfer_start:<primary-nvinference-engine> error: Configuration file parsing failed
0:00:00.235627680 15770   0x5597210930 WARN                 nvinfer gstnvinfer.cpp:769:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: jpg
Running...
ERROR from element primary-nvinference-engine: Configuration file parsing failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(769): gst_nvinfer_start (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: jpg
Returned, stopping playback
Deleting pifile_outpeline

Move the file: out_rgba_0599.jpg into the mask directory
Delete the file: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-analytics/image/0599.jpg

Any assistance you can provide would be greatly appreciated.

Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1260>: failed

Does the file dstest_segmentation_config_industrial.txt exist in the directory where you run the app?

ERROR from element primary-nvinference-engine: Configuration file parsing failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(769): gst_nvinfer_start (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: jpg

It’s weird that the config file path is jpg, while in your command it’s dstest_segmentation_config_industrial.txt.

Yes, it exists. I didn’t make any changes to the directory structure. Please find the config file and input file below.
dstest_segmentation_config_industrial.txt (3.7 KB)
usr_input.txt (214 Bytes)

Recommend you use builtin segmentation sample.
deepstream-segmentation-test

I tried deepstream-segmentation-test but also had an issue. One of your colleagues recommended deepstream-segmentation-analytics… So which one can help me output the binary mask (black-and-white mask) from a Mask R-CNN model?

What error did you meet with deepstream-segmentation-test?

Sorry, I mixed them up. It was deepstream-mrcnn-test where I had an issue getting the inference masks, and that issue still remains…
I’m OK with deepstream-segmentation-test, but my questions are:

  1. The output masks have 4 colors in deepstream-segmentation-test. How could I change the colors to white masks on a black background?
  2. How could I save the content on the display to image files?
  3. The Mask R-CNN model has 3 output layers but the segmentation model in deepstream-segmentation-test only has 2. Is it the same way to output Mask R-CNN masks?
    Thanks!
  • The output masks have 4 colors in deepstream-segmentation-test. How could I change the colors to white masks on a black background?

    [amycao] nvsegvisual overlays the mask color pixel by pixel. You can add a probe on the nvsegvisual src pad: the segmentation model output, NvDsInferSegmentationMeta, is stored as NvDsUserMeta in the frame_user_meta_list of the corresponding frame_meta (or the object_user_meta_list of the corresponding object) with meta_type set to NVDSINFER_SEGMENTATION_META. Based on the class_map field of NvDsInferSegmentationMeta, you can change the class mask color pixel by pixel as you want.

  • How could I save the content on the display to image files?

    [amycao] You can refer to sources/apps/apps-common/src/deepstream_sink_bin.c, function create_encode_file_bin, for how to save the output to a file.

  • The Mask R-CNN model has 3 output layers but the segmentation model in deepstream-segmentation-test only has 2. Is it the same way to output Mask R-CNN masks?

    [amycao] For Mask R-CNN output parsing, you can refer to the function NvDsInferParseCustomMrcnnTLTV2 in nvdsinfer_custombboxparser_tao.cpp, in the NVIDIA-AI-IOT/deepstream_tao_apps repository on GitHub.
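On the save-to-file point: in the deepstream-app reference application, create_encode_file_bin is driven by a [sink] config group like the one below. This is an illustrative deepstream-app-style fragment, not a drop-in for deepstream-segmentation-test (which has no such config file), but it shows what that function builds: encode the rendered frames and write them into a container file.

```
[sink0]
enable=1
# type=3 selects encoded file output (built by create_encode_file_bin)
type=3
# container: 1=mp4, 2=mkv
container=1
# codec: 1=h264, 2=h265
codec=1
output-file=./segmentation_out.mp4
sync=0
```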

Is there any example for this part?

We do not have this sample.

Can I directly use NvDsInferParseCustomMrcnnTLTV2 in my config file by adding:

parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLTV2
custom-lib-path=post_processor/libnvds_infercustomparser_tlt.so

How could I access the data parsed by NvDsInferParseCustomMrcnnTLTV2?