Too much lag when using pruned TrafficCam with DeepStream-Python-App (python bindings)

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version

• Issue Type( questions, new requirements, bugs)
I want to use the Python apps that come with the DeepStream Python bindings to run TrafficCam (.etlt) from TAO.
I am using the deepstream-test3 app that came with the python bindings.
In the video demonstration below, I first run the python file with the default pgie config file. In the next run, I change the config file to the ‘trafficcam’ config from TAO.
Just by changing this, DeepStream takes much longer to start, and after starting it lags by more than 10-15 seconds.

Another thing is: how can I generate and save an engine file for FP16? By default, TAO only gave me an INT8 file, which isn’t supported on the Nano.

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Screen record attached:

I have also tried running TrafficCam directly with deepstream-app from the terminal. It still lags with both inputs (RTSP and MP4) and gives less than 1 fps (terminal output below):

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

You can deploy TLT/TAO models in DeepStream via GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, which is mentioned in the TAO user guide: YOLOv4 — TAO Toolkit 3.21.11 documentation

Usually when you deploy in DeepStream, there is a config file you can set. For example, deepstream_tao_apps/pgie_yolov4_tiny_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. On Nano, it will switch to FP16 mode automatically. (The log “trying FP16 mode” also confirms this.)
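A minimal sketch of the relevant nvinfer properties (the paths and the model key below are placeholders, not the actual TrafficCam values):

```
[property]
tlt-encoded-model=/path/to/trafficcam.etlt
tlt-model-key=<your model key>
# 0=FP32, 1=INT8, 2=FP16 -- Nano has no INT8 support, so without a valid
# INT8 calibration file nvinfer falls back to FP16; you can also request
# FP16 explicitly:
network-mode=2
```

On the first run, nvinfer builds a TensorRT engine at this precision and serializes it to disk, so FP16 does not require a separate export step from TAO.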

Yes, ds-tao-detection can be used, but I want to use the python bindings specifically, and a .etlt model should run fine with them. TrafficCam is also not available in ds-tao-detection.

Secondly, I think DeepStream generates the engine file every time I run the python file, which is time consuming. Is it possible to generate the engine file for TrafficCam just once? That would save time at every startup.

Please share your config file when you run python bindings.

Actually, if you have already run inference once and then set “model-engine-file=xxx”, inference will run with that engine directly. For example, see line 29 of deepstream_tao_apps/pgie_yolov4_tiny_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
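For illustration (the path and the serialized-engine file name below are assumptions based on the naming nvinfer typically uses when it writes the engine out, not your actual files):

```
[property]
tlt-encoded-model=/path/to/trafficcam.etlt
# Point this at the engine serialized during the first run, typically named
# <model>.etlt_b<batch-size>_gpu0_<precision>.engine, so later runs skip
# the rebuild step entirely
model-engine-file=/path/to/trafficcam.etlt_b1_gpu0_fp16.engine
```

If the batch size or precision in the file name does not match what the pipeline actually requests, nvinfer will ignore the engine and rebuild it, so the startup delay would remain.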

(Config file excerpt; only its comment lines came through: “## 0=FP32, 1=INT8, 2=FP16 mode”, the dbscan clustering mode params, the NMS clustering mode params, and the per-class configurations.)
Setting the engine file to xxx doesn’t do anything. It still takes 1 min 50 secs to start up.

I found out that setting interval=1 removes the lag for a single stream. But for 4 RTSP streams, I have to set interval=5 to make the lag reasonable.
Is this normal?
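For a rough sanity check: interval=N makes nvinfer skip N frames between inferences, i.e. it infers on 1 of every N+1 frames per stream, so the inference load scales as streams × fps / (N+1). A back-of-envelope sketch (the 30 fps source rate here is an assumption, not taken from your streams):

```python
def inference_load(streams: int, fps: float, interval: int) -> float:
    """Frames per second that nvinfer must actually process.

    interval=N means nvinfer skips N frames between inferences,
    i.e. it runs on 1 of every N+1 frames per stream.
    """
    return streams * fps / (interval + 1)

# 1 stream at 30 fps with interval=1 -> 15 inferences/sec
print(inference_load(1, 30, 1))
# 4 streams at 30 fps with interval=5 -> 20 inferences/sec
print(inference_load(4, 30, 5))
```

So interval=5 with four streams keeps the inference rate in the same ballpark as one stream with interval=1, which is consistent with what you observed: on a Nano the GPU simply cannot infer every frame of every stream in real time.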

Please share all the config files. model-engine-file should also be set in the other config file if you want to load the existing engine every time.
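For example, in the top-level deepstream-app config (section and property names as in the standard deepstream-app samples; the paths are placeholders):

```
[primary-gie]
enable=1
# Must match the engine referenced in the nvinfer pgie config,
# otherwise deepstream-app rebuilds the engine at every startup
model-engine-file=/path/to/trafficcam.etlt_b1_gpu0_fp16.engine
config-file=/path/to/pgie_trafficcam_config.txt
```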

It is available in the DeepStream SDK by default.

Please refer to the two config files below.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.