Want a segmentation model with 25 fps on Jetson Nano

Hardware Platform (Jetson / GPU) = Jetson Nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.4
TensorRT Version =
Python version = 3.6.9

I want a segmentation model that can detect and segment vehicles with real-time performance (25-30 fps) and run on a Jetson Nano with DeepStream.

Which sample are you referring to? What is the source's fps? Please refer to the topic for performance improvement.

I tried the standard Python DeepStream segmentation app, but I am getting a very low fps of about 1.

So I want a segmentation model that can do vehicle segmentation in real time (20 fps) on a Jetson Nano with the hardware specification provided above.

did you try the suggestions in this link?

Yes, I tried, but the results were the same.
The standard Python DeepStream segmentation sample should at least run at real-time fps, but it does not, which is why I am asking for a new model.

  1. There is no fps printing, so how do you know the fps is 1?
  2. As you know, DeepStream uses TensorRT to do inference. Could you share the result of the following command line? It will test TensorRT inference.
cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-segmentation
 /usr/src/tensorrt/bin/trtexec --uff=../../../../samples/models/Segmentation/semantic/unetres18_v4_pruned0.65_800_data.uff --uffInput=data,3,512,512 --output=final_conv/BiasAdd \
    --saveEngine=1.engine  --outputIOFormats=fp32:chw --buildOnly --useCudaGraph
 /usr/src/tensorrt/bin/trtexec --loadEngine=1.engine --fp16
    sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
    sink.set_property("sync", 0)

Please add the above code. "sync = 0" means the sink plays as fast as possible instead of throttling to the stream's clock. Please refer to deepstream_test_3.py for FPS measurement. Here is the simplified code: python-fps.txt (521 Bytes)
I tested on Xavier NX with DS6.0 after modifying the code. If saving JPG, the fps is about 15 (save-jpg.log, 2.2 KB); if not saving, the fps is about 40 (no-save-jpg.log, 2.2 KB).
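The FPS measurement referred to above can be approximated with a minimal pure-Python counter. This is a simplified stand-in for the PERF_DATA helper in deepstream_python_apps; the class and method names below are illustrative, not the actual API:

```python
import time

class SimpleFPS:
    """Minimal FPS counter: call update() once per frame,
    read get_fps() periodically to print the average rate."""

    def __init__(self):
        self.frame_count = 0
        self.start = time.time()

    def update(self):
        # Call this once per frame, e.g. from a buffer probe.
        self.frame_count += 1

    def get_fps(self):
        # Average fps since the last call, then reset the window.
        elapsed = time.time() - self.start
        fps = self.frame_count / elapsed if elapsed > 0 else 0.0
        self.frame_count = 0
        self.start = time.time()
        return fps
```

In a DeepStream app, update() would be called from a pad probe on the sink element, and get_fps() printed from a periodic GLib timeout, which is roughly what deepstream_test_3.py does with PERF_DATA.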

Thank you for your support. I will try and let you know

The output of the above command is:
test1.txt (6.5 KB)

And for FPS measurement, I referred to the deepstream_test_3 sample file.

Thanks for sharing! Please add this code and the fps printing, then please share the test log.

I added the fps printing to the code, but I am getting this error:

Traceback (most recent call last):
  File "deepstream_segmentation.py", line 36, in <module>
    from common.FPS import PERF_DATA
ImportError: cannot import name 'PERF_DATA'

I guess, since I am using DeepStream 6.0.1, I downloaded the deepstream_python_apps folder of that version (deepstream_python_apps-1.1.1.zip), not the latest one. Is that a problem?

Here is the source code; you can copy it.

Thank you for sharing.
Please check my code output
codeoutput.txt (3.0 KB)
And this is my modified script

code.txt (10.7 KB)

cus.py (10.7 KB)
bb.log (2.2 KB)

Noticing you are on version 1.1.1, please use nveglglessink. Please refer to cus.py and cus.py's test log on Jetson NX.

Thank you for your fast reply and effort; it is working at 10 fps now. But in your log it is working at 30 fps. Is that device a Jetson Nano? If yes, why is mine only running at 10 fps?

Also, the video is streaming with the segmented mask; I think that is also hurting performance. How do I avoid showing that?

My device is Xavier NX. As you know, DeepStream leverages low-level TensorRT for inference. From the test1.txt you shared, the fps when using TensorRT for inference is also about 10 (Throughput: 9.99167 qps). It is a performance limitation of the Jetson Nano.
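As a sanity check, that throughput figure can be pulled out of a trtexec log programmatically. A small sketch, assuming trtexec's usual "Throughput: N qps" summary line as seen in the shared log:

```python
import re

def parse_trtexec_qps(log_text):
    """Return the 'Throughput: N qps' value from a trtexec log, or None."""
    m = re.search(r"Throughput:\s*([\d.]+)\s*qps", log_text)
    return float(m.group(1)) if m else None

# Example line in the style of the shared test1.txt:
print(parse_trtexec_qps("[I] Throughput: 9.99167 qps"))  # -> 9.99167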

For better fps, you can use fakesink instead, which will not show the video. Here is the sample code:
sink = Gst.ElementFactory.make("fakesink", "fakesink")


Okay Thank you for your help.

I have tried all of the following so far, found by searching different forum threads:
#sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
#sink = Gst.ElementFactory.make("fakesink", "fakesink")
#sink = Gst.ElementFactory.make("nvoverlaysink", "nvvideo-renderer")

But all of these give me this error:

root@ubuntu:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-segmentation# rm -r frame/
root@ubuntu:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-segmentation# python3 deepstream_segmentation.py dstest_segmentation_config_semantic.txt ../../../../samples/streams/sample_720p.mjpeg frame
Creating Pipeline

Creating Source

Creating jpegParser

Creating Decoder

Creating EGLSink

Playing file ../../../../samples/streams/sample_720p.mjpeg
WARNING: Overriding infer-config batch-size 2 with number of sources 1

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
1 : ../../../../samples/streams/sample_720p.mjpeg
Starting pipeline

0:00:14.009253123 5774 0x82f3470 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Segmentation/semantic/unetres18_v4_pruned0.65_800_data.uff_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x512x512
1 OUTPUT kFLOAT final_conv/BiasAdd 4x512x512

0:00:14.010585231 5774 0x82f3470 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Segmentation/semantic/unetres18_v4_pruned0.65_800_data.uff_b1_gpu0_fp32.engine
0:00:14.138105905 5774 0x82f3470 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest_segmentation_config_semantic.txt sucessfully

**PERF: {'stream0': 0.0}

NvMMLiteOpen : Block : BlockType = 277
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 277
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)512, height=(int)512

**PERF: {'stream0': 0.0}

0:00:19.933116000 5774 0x7e8df20 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:19.933215471 5774 0x7e8df20 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)

Please uncomment this line.

Yes, I tried all three options separately without the comment mark; I left the # marks in just for your reference.