Running TAO-trained UNet model in DeepStream

I am trying to run a UNet segmentation model. I have an .etlt file, which I use to run my app. When I pass an image as the argument, the program runs fine and creates an engine file as well; at the end it prints "handling EOS for source-0" and then "end of stream". There is no error anywhere, but the output image is not saved in the output directory. If I pass a video instead of an image, it fails with an error about being out of memory or failing drivers.

Can you provide the command line to reproduce this issue? And what JetPack version is in use? Please provide the complete information applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed for reproducing.)
• Requirement details (This is for new requirements. Include the module name, for which plugin or sample application, and the function description.)
• The pipeline being used

Hardware Platform: Xavier
DeepStream Version: 6.2
JetPack Version: NVIDIA Jetson Xavier NX Developer Kit - JetPack 5.1.1 [L4T 35.3.1]
TensorRT Version: shown in the images below



The command I ran:
python3 deepstream_segmentation.py dstest_segmentation_config_industrial.txt te.jpg output

Output:

Using the TAO UNet model exported as .etlt, running the above command produced an engine file.

DeepStream app I am running: deepstream_python_apps/apps/deepstream-segmentation at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Modified Config File:

[property]
gpu-id=0
#int8-calib-file=cal.bin
labelfile-path=labels.txt
tlt-encoded-model=model_isbi.etlt
model-engine-file=model_isbi.etlt_b1_gpu0_fp32.engine
uff-input-order=0
uff-input-blob-name=input_1
batch-size=1

network-mode=0
interval=0
gie-unique-id=1
#parse-bbox-func-name=NvDsInferParseCustomSSD
#custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0
net-scale-factor=0.00784313725490196
offsets=127.5
infer-dims=1;320;320
tlt-model-key=nvidia_tlt
network-type=100
num-detected-classes=2
model-color-format=2
maintain-aspect-ratio=0
output-tensor-meta=1
segmentation-threshold=0.0
output-blob-names=argmax_1
segmentation-output-order=1

[class-attrs-all]
pre-cluster-threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800

The issue is that no output frames are written.

Please reply as soon as you can!

1. This demo does not support video; it only supports JPEG images. If you want video support, you need to customize it yourself.
2. This demo does not use GStreamer to save the file. You can add some logging to check whether seg_src_pad_buffer_probe is being called normally (see the sketch below).
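For point 2, a minimal sketch of such a debug check, assuming the probe name and pyds calls used by the deepstream-segmentation sample (verify the names against your local copy of deepstream_segmentation.py):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def seg_src_pad_buffer_probe(pad, info, u_data):
    # If this line never prints, the probe is not being called at all.
    print("seg_src_pad_buffer_probe called")

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Log whether any user meta (e.g. segmentation meta) is attached to this frame.
        print("frame %d: user meta attached: %s"
              % (frame_meta.frame_num, frame_meta.frame_user_meta_list is not None))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK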

What is the use of the output folder argument mentioned in the README then? deepstream_segmentation.py has OpenCV code in it to save frames; does that not work either? Is there any other git repo that uses the segmentation model for video processing?

Sorry, I meant that it uses OpenCV to save the picture, not the GStreamer filesink plugin (the relevant part of the probe is sketched below). Could you first run our demo with our model and config file to check whether it can save the JPEG picture?
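For reference, the saving path inside the sample's seg_src_pad_buffer_probe looks roughly like the sketch below (based on deepstream_segmentation.py; folder_name, frame_number and map_mask_as_display_bgr are the sample's own names, so check them against your copy):

import numpy as np
import cv2
import pyds

def save_segmentation_mask(frame_meta, folder_name, frame_number):
    # Mirrors the per-frame save logic in the sample's seg_src_pad_buffer_probe.
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta and user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
            seg_meta = pyds.NvDsInferSegmentationMeta.cast(user_meta.user_meta_data)
            # Per-pixel class-index mask, converted to a contiguous numpy array
            masks = np.array(pyds.get_segmentation_masks(seg_meta), copy=True, order='C')
            # Color each class for display; map_mask_as_display_bgr() is defined in the sample
            frame_image = map_mask_as_display_bgr(masks)
            # The only place a file is written: OpenCV imwrite into the output folder
            cv2.imwrite(folder_name + "/" + str(frame_number) + ".jpg", frame_image)
        try:
            l_user = l_user.next
        except StopIteration:
            break

If no NVDSINFER_SEGMENTATION_META is attached to the frame, this loop never reaches cv2.imwrite and nothing is written, which would match the symptom of an empty output folder.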

So it should save the segmented image, right?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Yes. I ran our demo with the command below, and it generates 0.jpg in the output path.

sudo python3 deepstream_segmentation.py dstest_segmentation_config_industrial.txt ../../../../samples/streams/sample_industrial.jpg ./output

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.