Integrating a TAO model with a Python DeepStream pipeline

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.2 (docker image)
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) 525.85.12

Hello, I have a pipeline as follows:
4-class detector → tracker → vehicle color classifier → vehicle make classifier
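
Roughly, the chain is built in Python the same way as the deepstream-test2 sample, just with one extra secondary classifier. A minimal sketch for context (element names and config file names below are placeholders, not my exact code; source, sink and tracker properties are omitted):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

# Primary 4-class detector, tracker, and the two secondary classifiers
pgie = Gst.ElementFactory.make("nvinfer", "primary-detector")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
sgie1 = Gst.ElementFactory.make("nvinfer", "vehicle-color-classifier")
sgie2 = Gst.ElementFactory.make("nvinfer", "vehicle-make-classifier")

pgie.set_property("config-file-path", "dstest2_pgie_config.txt")
sgie1.set_property("config-file-path", "dstest2_sgie1_config.txt")
# This is the config that gets swapped between dstest2_sgie2_config.txt
# and sgie2_config_vehiclemake.txt
sgie2.set_property("config-file-path", "sgie2_config_vehiclemake.txt")

for elem in (pgie, tracker, sgie1, sgie2):
    pipeline.add(elem)

pgie.link(tracker)
tracker.link(sgie1)
sgie1.link(sgie2)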

The problem lies with the “vehicle make classifier”.
The dstest2_sgie2_config.txt (from deepstream-test2) works fine and gives expected results, but when I use the config written below, the engine doesn’t run any inference. There are no errors shown either.

The objective of this exercise is to understand how models from NGC can be integrated into my DS pipeline.

sgie2_config_vehiclemake.txt (This is the config which is causing the problems)
[property]
gpu-id=0
net-scale-factor=1
offsets=103.939;116.779;123.68
tlt-model-key=tlt_encode
tlt-encoded-model=/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/resnet18_vehiclemakenet_pruned.etlt
labelfile-path=/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/labels.txt
int8-calib-file=/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/vehiclemakenet_int8.txt
uff-input-order=1
infer-dims=3;224;224;
uff-input-blob-name=input_1
batch-size=1
network-mode=0
network-type=1
num-detected-classes=20
model-color-format=1
process-mode=2
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
output-blob-names=predictions/Softmax
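
For reference, net-scale-factor and offsets define nvinfer’s input preprocessing as y = net-scale-factor * (x - offset) applied per channel, so with the values above the input is just BGR mean-subtracted. A small sketch of the equivalent operation (the array names are illustrative only):

import numpy as np

# nvinfer preprocessing: y = net-scale-factor * (x - offset), per channel.
# With net-scale-factor=1 these offsets are plain BGR mean subtraction.
NET_SCALE_FACTOR = 1.0
OFFSETS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(bgr_chw):
    # bgr_chw: float32 array of shape (3, 224, 224), channels in BGR order
    return NET_SCALE_FACTOR * (bgr_chw - OFFSETS[:, None, None])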

How do you run it? I used your config and the pruned vehiclemakenet model, and they work well in my Docker container.

Did you modify anything? Can you share your code and upload the log after running export GST_DEBUG=3?

Thanks.

I use the same code to test both config files, i.e. sgie2_config_vehiclemake.txt and dstest2_sgie2_config.txt (from deepstream-test2).

“log-works.txt” is from the run with dstest2_sgie2_config.txt (from deepstream-test2).
“log-no-works.txt” is from the run with sgie2_config_vehiclemake.txt.

log-no-works.txt (3.3 KB)
log-works.txt (3.0 KB)

Here’s my code:
deepstream_code.py (10.8 KB)

Here’s the processed video:


I used sgie2_config_vehiclemake.txt to process this video. As you can see, the vehicle make classifier is not running.
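
One way to confirm whether the vehicle make classifier is attaching results at all is to walk the classifier metadata in a pad probe (for example on the OSD sink pad), following the deepstream-test2 pattern. A minimal sketch; gie-unique-id 3 matches the config above, everything else is illustrative:

import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    # unique_component_id identifies which SGIE produced the label;
                    # 3 would be the vehicle make classifier per gie-unique-id above
                    print(cls_meta.unique_component_id, label_info.result_label)
                    l_label = l_label.next
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

If nothing with unique_component_id 3 ever shows up, the SGIE never produced labels, which points at the model or config rather than the probe code.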

The pruned vehiclemakenet model ships with a common INT8 calibration cache for GPU and DLA.
You can try changing the value of network-mode from 0 to 1:

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1

Refer to the documentation for more information.

“network-mode=1” didn’t solve the problem. Is there anything else that I can try?

Considering the pipeline you use, can you try deepstream-app first?

I think the command line below does the same thing as your code.

$ sudo deepstream-app -c deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt

This is the document.

If the command line works fine, then the problem is likely in your configuration file.

The problem may be with the car make labels.txt.

Try modifying the labels.txt to look like the line below:

acura;audi;bmw;chevrolet;chrysler;dodge;ford;gmc;honda;hyundai;infiniti;jeep;kia;lexus;mazda;mercedes;nissan;subaru;toyota;volkswagen
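
If the labels.txt that ships on NGC has one label per line, a quick way to rewrite it into the single-line, semicolon-separated form that DeepStream expects for classifiers (a minimal sketch; the path is the one from the config above):

# Rewrite a one-label-per-line labels file into the single-line,
# semicolon-separated format DeepStream expects for classifier labels.
path = "/root/ngc_assets/vehiclemakenet_vpruned_v1.0.1/labels.txt"

with open(path) as f:
    labels = [line.strip() for line in f if line.strip()]

with open(path, "w") as f:
    f.write(";".join(labels) + "\n")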

I suggest keeping the other parameters in the config file the same as the sample, to make debugging easier.


I love you! This solved the problem! Thanks!
