Is there any way to make deepstream_tlt_apps faster?

Hello, I ran the source files from the following link on a TX2. It takes more than 5 seconds to run the program; can it be made faster? My goal is to receive and process real-time raw data, but the example programs are too slow. Or is it only slow the first time it runs?

I am afraid you are talking about the fps of the demo apps in GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream), right? Which network did you run?
What do you mean by “5 seconds”? Do you mean 5 fps?

Detection works fine once the program is actually running. However, it takes 5 to 10 seconds to load the program (from the Linux shell). Since my program will keep running in an infinite loop, is this startup delay a problem? Thank you.

Can you give more details about how you load the program (through the Linux shell)?

Yes, I just run:
./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

or ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i <sample jpeg file> -d

It works well, but it takes a while to load, so I was wondering whether this is a problem or normal behavior. Thank you.

OK, I am afraid you are talking about the time cost of TensorRT engine generation.
In pgie_frcnn_tlt_config.txt, if you do not set model-engine-file, the app will generate the TRT engine every time it starts.
To avoid this, set model-engine-file in the spec. You can find the path of the generated TRT engine in the log.

model-engine-file=

Do not forget to comment out the lines below:

tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=

Then, the app will directly load the TRT engine next time.

See more in https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html#id8

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.

model-engine-file=
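
Putting it together, a minimal sketch of how the relevant [property] section of pgie_frcnn_tlt_config.txt might look after the change (the engine file name below is a hypothetical example; copy the exact path printed in your own log):

[property]
# Hypothetical engine path; use the one your log prints after the first run
model-engine-file=./frcnn_kitti.etlt_b1_gpu0_fp16.engine
# Commented out so the app skips engine generation on startup
#tlt-encoded-model=./frcnn_kitti.etlt
#tlt-model-key=<your model key>

With this set, subsequent runs only deserialize the engine instead of rebuilding it, so the 5 to 10 second startup delay should mostly disappear.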


Thank you. One extra question: right now I can only feed JPG or H.264 input to the sample code, but if I modify the GStreamer code, can raw or PNG files be used with the DeepStream code? Thank you.

@mchi
Could you help with the question below? Thanks.

Right now I can only feed JPG or H.264 input to the sample code, but if I modify the GStreamer code, can raw or PNG files be used with the DeepStream code?

For PNG, here is a reference:

$ gst-launch-1.0 multifilesrc location="img.%04d.png" index=0 caps="image/png,framerate=(fraction)12/1" ! \
  pngdec ! videoconvert ! video/x-raw,format=BGRx ! nvvideoconvert ! ...
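
If it helps, here is one possible way the trailing "..." could continue into the inference stage on a Jetson (a sketch only; the nvstreammux resolution and the nvinfer config path are placeholders to adjust for your model):

$ gst-launch-1.0 multifilesrc location="img.%04d.png" index=0 caps="image/png,framerate=(fraction)12/1" ! \
  pngdec ! videoconvert ! video/x-raw,format=BGRx ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=pgie_frcnn_tlt_config.txt ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvegltransform ! nveglglessink

nvstreammux is what produces the batched metadata that nvinfer expects, which is why it sits between the converter and the inference element.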

For raw data, you need to use filesrc to read the data, like below:

$ gst-launch-1.0 -v filesrc location=size_1920x1080.yuv ! \
  videoparse width=1920 height=1080 framerate=25/1 format=GST_VIDEO_FORMAT_Y42B ! \
  videoconvert ! nvvideoconvert ! ...
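
The same continuation as in the PNG sketch above (nvvideoconvert into nvstreammux, then nvinfer) applies here; only the source side changes. One caveat: a headerless raw file carries no geometry information, so the width, height, framerate, and format given to videoparse must exactly match how the file was captured, or the frames will come out scrambled.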
