Deepstream-tao-app segmentation failed on DeepStream 6.4

Hello.

I recently upgraded DeepStream from 6.3 to 6.4. When I run citysemsegformer with deepstream-tao-app, the output video is a flickering screen. I tried peopleSegNet as well, with the same result. deepstream-tao-app worked perfectly on DS 6.3, and the deepstream-test1 sample runs fine after upgrading to DS 6.4, so I believe the problem lies in my installation of deepstream-tao-app rather than in DeepStream itself.

./apps/tao_segmentation/ds-tao-segmentation -c configs/nvinfer/citysemsegformer_tao/pgie_citysemsegformer_tao_config.txt -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 
Request sink_0 pad from streammux
batchSize 1...
model_width:1820, model_height:1024
Now playing: configs/nvinfer/citysemsegformer_tao/pgie_citysemsegformer_tao_config.txt
0:00:05.333155630 52780 0x5571afe39ef0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/vgpu/deepstream_tao_apps/models/citysemsegformer/citysemsegformer.etlt_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x1024x1820     
1   OUTPUT kINT32 output          1024x1820x1     

0:00:05.373524767 52780 0x5571afe39ef0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/vgpu/deepstream_tao_apps/models/citysemsegformer/citysemsegformer.etlt_b1_gpu0_fp16.engine
0:00:05.396400696 52780 0x5571afe39ef0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:configs/nvinfer/citysemsegformer_tao/pgie_citysemsegformer_tao_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...

**PERF:  FPS 0 (Avg)	
Wed Jan  3 14:45:10 2024
**PERF:  0.00(0.00)	
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
###Decodebin pick nvidia decoder plugin.
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)30/1, width=(int)1820, height=(int)1024
Wed Jan  3 14:45:15 2024
**PERF:  3.87(3.73)	
Wed Jan  3 14:45:20 2024
**PERF:  4.22(3.97)	
Wed Jan  3 14:45:25 2024
**PERF:  4.19(4.04)	
Wed Jan  3 14:45:30 2024
**PERF:  4.19(4.08)	
Wed Jan  3 14:45:35 2024
**PERF:  4.17(4.11)	
Wed Jan  3 14:45:40 2024
**PERF:  4.10(4.12)	
Wed Jan  3 14:45:45 2024
**PERF:  4.04(4.10)	
Wed Jan  3 14:45:50 2024
**PERF:  3.51(4.01)	
Wed Jan  3 14:45:55 2024
**PERF:  3.51(3.97)	
Wed Jan  3 14:46:00 2024
**PERF:  3.52(3.93)	
Wed Jan  3 14:46:05 2024
**PERF:  3.51(3.88)	
Wed Jan  3 14:46:10 2024
**PERF:  3.54(3.86)	
Wed Jan  3 14:46:15 2024
**PERF:  3.52(3.82)	
Wed Jan  3 14:46:20 2024
**PERF:  3.53(3.81)	
Wed Jan  3 14:46:25 2024
**PERF:  3.71(3.81)	
Wed Jan  3 14:46:30 2024
**PERF:  4.16(3.83)	
Wed Jan  3 14:46:35 2024
**PERF:  3.52(3.81)	
Wed Jan  3 14:46:40 2024
**PERF:  3.53(3.79)	
Wed Jan  3 14:46:45 2024
**PERF:  3.57(3.78)	

**PERF:  FPS 0 (Avg)	
Wed Jan  3 14:46:50 2024
**PERF:  3.97(3.79)	
Wed Jan  3 14:46:55 2024
**PERF:  4.04(3.80)	
Wed Jan  3 14:47:00 2024
**PERF:  4.09(3.81)	
Wed Jan  3 14:47:05 2024
**PERF:  4.12(3.83)	
Wed Jan  3 14:47:10 2024
**PERF:  4.19(3.84)	
Wed Jan  3 14:47:15 2024
**PERF:  4.14(3.86)	
Wed Jan  3 14:47:20 2024
**PERF:  4.07(3.86)	
Wed Jan  3 14:47:25 2024
**PERF:  4.09(3.87)	
Wed Jan  3 14:47:30 2024
**PERF:  3.97(3.87)	
Wed Jan  3 14:47:35 2024
**PERF:  4.14(3.88)	
Wed Jan  3 14:47:40 2024
**PERF:  4.13(3.89)	
Wed Jan  3 14:47:45 2024
**PERF:  4.12(3.90)	
Wed Jan  3 14:47:50 2024
**PERF:  4.12(3.91)	
Wed Jan  3 14:47:55 2024
**PERF:  4.21(3.92)	
Wed Jan  3 14:48:00 2024
**PERF:  4.17(3.92)	
Wed Jan  3 14:48:05 2024
**PERF:  3.95(3.93)	
Wed Jan  3 14:48:10 2024
**PERF:  3.66(3.92)	
Wed Jan  3 14:48:15 2024
**PERF:  4.03(3.92)	
Wed Jan  3 14:48:20 2024
**PERF:  4.09(3.92)	
Wed Jan  3 14:48:25 2024
**PERF:  3.83(3.92)	

**PERF:  FPS 0 (Avg)	
Wed Jan  3 14:48:30 2024
**PERF:  4.13(3.93)	
Wed Jan  3 14:48:35 2024
**PERF:  4.06(3.93)	
Wed Jan  3 14:48:40 2024
**PERF:  4.17(3.94)	
Wed Jan  3 14:48:45 2024
**PERF:  4.05(3.94)	
Wed Jan  3 14:48:50 2024
**PERF:  3.95(3.94)	
Wed Jan  3 14:48:55 2024
**PERF:  4.16(3.94)	
Wed Jan  3 14:49:00 2024
**PERF:  3.96(3.94)	
Wed Jan  3 14:49:05 2024
**PERF:  3.82(3.94)	
Wed Jan  3 14:49:10 2024
**PERF:  3.51(3.93)	
Wed Jan  3 14:49:15 2024
**PERF:  3.98(3.93)	
Wed Jan  3 14:49:20 2024
**PERF:  4.01(3.94)	
Wed Jan  3 14:49:25 2024
**PERF:  3.89(3.93)	
Wed Jan  3 14:49:30 2024
**PERF:  3.51(3.93)	
Wed Jan  3 14:49:35 2024
**PERF:  4.05(3.93)	
Wed Jan  3 14:49:40 2024
**PERF:  3.96(3.93)	
Wed Jan  3 14:49:45 2024
**PERF:  3.91(3.93)	
Wed Jan  3 14:49:50 2024
**PERF:  3.72(3.92)	
Wed Jan  3 14:49:55 2024
**PERF:  3.51(3.92)	
Wed Jan  3 14:50:00 2024
**PERF:  3.44(3.91)	
Wed Jan  3 14:50:05 2024
**PERF:  4.06(3.91)	

**PERF:  FPS 0 (Avg)	
Wed Jan  3 14:50:10 2024
**PERF:  4.15(3.92)	
Wed Jan  3 14:50:15 2024
**PERF:  4.17(3.92)	
Wed Jan  3 14:50:20 2024
**PERF:  4.10(3.92)	
Wed Jan  3 14:50:25 2024
**PERF:  4.16(3.93)	
Wed Jan  3 14:50:30 2024
**PERF:  4.03(3.93)	
Wed Jan  3 14:50:35 2024
**PERF:  3.80(3.93)	
Wed Jan  3 14:50:40 2024
**PERF:  3.87(3.93)	
Wed Jan  3 14:50:45 2024
**PERF:  3.97(3.93)	
Wed Jan  3 14:50:50 2024
**PERF:  3.85(3.93)	
Wed Jan  3 14:50:55 2024
**PERF:  3.84(3.92)	
Wed Jan  3 14:51:00 2024
**PERF:  3.83(3.92)	
Wed Jan  3 14:51:05 2024
**PERF:  3.96(3.92)	
Wed Jan  3 14:51:10 2024
**PERF:  4.13(3.93)	
Wed Jan  3 14:51:15 2024
**PERF:  4.18(3.93)	
nvstreammux: Successfully handled EOS for source_id=0
End of stream
Returned, stopping playback
Deleting pipeline

[Output video attached]

• Hardware Platform: GPU
• DeepStream Version: 6.4
• TensorRT Version: 8.6.1.6-1
• NVIDIA GPU Driver Version: 535.54.03
• GPU: Quadro GV100
• CUDA Version: 12.2

Any help would be appreciated, thanks.

Regards,
Shojin

As a workaround, you can try setting gpu-on to FALSE in deepstream_seg_app.c for now:
g_object_set (G_OBJECT (segvisual), "gpu-on", FALSE, NULL);
We will analyze this issue as soon as possible.
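For context, here is a minimal sketch of where such a change would typically sit, assuming the app creates an nvsegvisual element (as deepstream_seg_app.c does) and disables its GPU rendering path right after creation. The variable and element names are illustrative, not a copy of the actual source:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* nvsegvisual overlays the segmentation masks produced by nvinfer. */
  GstElement *segvisual = gst_element_factory_make ("nvsegvisual", "nvsegvisual");
  if (!segvisual) {
    g_printerr ("Failed to create nvsegvisual element\n");
    return -1;
  }

  /* Workaround suggested above: turn off the GPU rendering path so the
   * masks are drawn on the CPU instead. */
  g_object_set (G_OBJECT (segvisual), "gpu-on", FALSE, NULL);

  /* ... in the real application the element is then added to the pipeline
   * and linked between the inference element and the sink ... */

  gst_object_unref (segvisual);
  return 0;
}

After editing deepstream_seg_app.c, rebuild the app (make in apps/tao_segmentation) and rerun the same ds-tao-segmentation command.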


Hi. Thank you for your response. I have downgraded DeepStream to 6.3 because I needed to use it a few days ago, so I can't try DS 6.4 right now.

I have finally tried it. It’s working now. Thank you very much.
