Too much lag when using pruned TrafficCamNet with the DeepStream Python apps (Python bindings)

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1.6

• Issue Type (questions, new requirements, bugs)
I want to use the Python apps that come with the DeepStream Python bindings to run the TrafficCamNet (.etlt) model from TAO.
I am using the deepstream-test3 app that comes with the Python bindings.
In the following video demonstration, I first run the Python file with the default config file for the pgie. In the next run, I change the config file to the TrafficCamNet config from TAO.
Just by changing this, DeepStream takes too long to start, and after starting it lags by more than 10-15 seconds.

Another thing is: how can I generate and save an engine file for FP16? By default, TAO only gave me an INT8 calibration file, which isn't compatible with the Nano.

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Screen recording attached:

I have also tried running TrafficCamNet directly with deepstream-app from the terminal. It still lags on both RTSP and MP4 sources and gives less than 1 FPS (terminal output below):

• Requirement details (This is for new requirements. Include the module name, for which plugin or which sample application, and the function description.)

You can deploy TLT/TAO models in DeepStream via GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps that demonstrate how to deploy models trained with TAO on DeepStream), which is mentioned in the TAO user guide (YOLOv4 — TAO Toolkit 3.21.11 documentation).

Usually when you deploy in DeepStream there is a config file you can set; for example, deepstream_tao_apps/pgie_yolov4_tiny_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. On Nano it will switch to FP16 mode automatically (the log message "trying FP16 mode" also confirms this).
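If you want to generate and save an FP16 engine file yourself instead of relying on the automatic fallback, tao-converter can build one offline. A minimal sketch, assuming the public model key tlt_encode, the stock sample paths, and batch size 3 to match the config further down; adjust paths to your install:

# TensorRT engines are device-specific, so run this on the Nano itself
./tao-converter \
    -k tlt_encode \
    -d 3,544,960 \
    -o output_bbox/BiasAdd,output_cov/Sigmoid \
    -t fp16 \
    -m 3 \
    -e trafficcamnet_pruned_fp16.engine \
    /opt/nvidia/deepstream/deepstream-6.0/samples/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt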

Yes, ds-tao-detection can be used, but I want to use the Python bindings specifically, and an .etlt model should run fine in them. TrafficCamNet is also not available in ds-tao-detection.

Secondly, I think DeepStream generates the engine file every time I run the Python file, which is time consuming. How can I generate the engine file for TrafficCamNet just once, so that it saves time at every startup?

Please share your config file when you run the Python bindings.

Actually, if you have already run inference once and set "model-engine-file=xxx", inference will run with it directly. For example, see line 29 of deepstream_tao_apps/pgie_yolov4_tiny_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
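On the first run, nvinfer serializes the engine next to the .etlt model and prints the file name in the startup log; the name usually follows the <model>_b<batch>_gpu<id>_<precision>.engine pattern. Assuming that convention holds here, the xxx placeholder would be replaced with something like:

# hypothetical path; check the exact name printed in the nvinfer startup log
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b3_gpu0_fp16.engine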

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.0/samples/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
labelfile-path=labels_trafficnet.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/tao_pretrained_models/trafficcamnet/trafficnet_int8.txt
model-engine-file=xxx
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=3
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
cluster-mode=2

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3
#eps=0.7

#Use the config params below for NMS clustering mode
[class-attrs-all]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.2

## Per class configurations
[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

Changing the engine file to xxx doesn't do anything. It still takes 1 min 50 sec to start up.

I found that setting interval=1 removes the lag for a single stream, but for 4 RTSP streams I have to set interval=5 to make the lag reasonable.
Is this normal?
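For reference, interval on nvinfer is the number of consecutive batches skipped between inference runs, so interval=1 halves the per-stream inference load; detections are only refreshed on inferred frames unless a tracker fills the gaps. Besides the config file, it can also be set from the Python app; a minimal sketch, assuming an nvinfer element created in code the way deepstream-test3 does:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# primary inference element, created the same way deepstream-test3 does
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary_trafficcamnet.txt")

# run inference only on every second batch; intermediate frames keep
# the last detections unless a tracker interpolates them
pgie.set_property("interval", 1)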

Please share all of the config files. The model-engine-file should be set in the other config file as well if you want to load the existing engine every time.

It is available in the DeepStream SDK by default.

Please refer to the two config files below.
/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/config_infer_primary_trafficcamnet.txt

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/deepstream_app_source1_trafficcamnet.txt
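In the second file, the engine is referenced from the [primary-gie] group of the deepstream-app config; a sketch of the relevant lines, with an illustrative engine path (the stock file points into the samples/models tree):

[primary-gie]
enable=1
gpu-id=0
# illustrative path; use the engine file serialized on your first run
model-engine-file=../../models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine
batch-size=1
config-file=config_infer_primary_trafficcamnet.txt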
