Deepstream Python Transfer Learning on Jetson AGX

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson AGX
• DeepStream 5.1
• JetPack Version 4.6
• TensorRT Version 8.0.1
• Issue Type: Question

Hi All!

I have trained a resnet18 model using the latest TAO Toolkit, and now I want to run it in a DeepStream Python application. I don’t know how to fit these files into the sample application’s config file.

‘calibration.bin’
‘calibration.tensor’
‘labels.txt’
‘nvinfer_config.txt’
‘resnet18_detector.etlt’
‘resnet18_detector.trt’
‘resnet18_detector.trt.int8’

Any help would be appreciated.

Did you export the deepstream config when exporting the model (option --gen_ds_config)? You can refer to the config example in deepstream_python_apps/apps/deepstream-test1/dstest1_pgie_config.txt and merge the exported deepstream config into it.
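If it helps, the merge can be done mechanically. This is a minimal sketch (the `merge_configs` helper is hypothetical, not part of DeepStream) that overrides keys inside the [property] section of a sample nvinfer config with the key=value lines from the TAO-exported file:

```python
# Hypothetical helper: fold the key=value lines from a TAO-exported
# config into the [property] section of a DeepStream nvinfer config.
# TAO-exported values win on conflicts; other sections are untouched.

def merge_configs(ds_config: str, tao_export: str) -> str:
    # Parse the exported file into a dict of key -> value.
    tao_pairs = dict(
        line.split("=", 1) for line in tao_export.splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    )
    out, in_property = [], False
    for line in ds_config.splitlines():
        stripped = line.strip()
        if stripped.startswith("["):
            if in_property:
                # Leaving [property]: emit any TAO keys not seen yet.
                out.extend(f"{k}={v}" for k, v in tao_pairs.items())
            in_property = stripped == "[property]"
        elif in_property and "=" in stripped and not stripped.startswith("#"):
            key = stripped.split("=", 1)[0]
            if key in tao_pairs:
                line = f"{key}={tao_pairs.pop(key)}"  # override in place
        out.append(line)
    if in_property:  # [property] was the last section in the file
        out.extend(f"{k}={v}" for k, v in tao_pairs.items())
    return "\n".join(out)
```

Running the result through a diff against the original sample config makes it easy to spot which keys the export actually changed.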

Yes, I exported the ds config file in the TAO notebook. I also combined the exported config file with the deepstream config file and ran the application, but got the following error.

Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Creating transform
Creating EGLSink

Atleast one of the sources is live
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing...
1 : rtsp://admin:abc12345@10.50.15.182
Starting pipeline
Using winsys: x11 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:00.295563073  6480     0x34c61e40 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:01.428713086  6480     0x34c61e40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
0:00:01.428845764  6480     0x34c61e40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 1]: build backend context failed
0:00:01.428876326  6480     0x34c61e40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 1]: generate backend failed, check config file settings
0:00:01.428917288  6480     0x34c61e40 WARN                 nvinfer gstnvinfer.cpp:812:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:01.428947146  6480     0x34c61e40 WARN                 nvinfer gstnvinfer.cpp:812:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(812): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

This is what my deepstream config file looks like. I don’t know what the problem is.

[property]
net-scale-factor=0.00392156862745098
offsets=0.0;0.0;0.0
infer-dims=3;384;1248
tlt-model-key=tlt_encode
network-type=0
num-detected-classes=3
uff-input-order=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
uff-input-blob-name=input_1
model-color-format=0
maintain-aspect-ratio=0
gpu-id=0
model-file=/opt/nvidia/deepstream/deepstream-5.1/sources/python/apps/deepstream-imagedata-multistream-tlt/tlt_files/resnet18_detector.etlt
labelfile-path=/opt/nvidia/deepstream/deepstream-5.1/sources/python/apps/deepstream-imagedata-multistream-tlt/tlt_files/labels.txt
batch-size=1
process-mode=1
network-mode=1
interval=0
gie-unique-id=1
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3 = None(No clustering)
cluster-mode=1

[class-attrs-all]
threshold=0.2
eps=0.7
minBoxes=1

These are the contents of the exported config file from the TAO notebook.

net-scale-factor=0.00392156862745098
offsets=0.0;0.0;0.0
infer-dims=3;384;1248
tlt-model-key=tlt_encode
network-type=0
num-detected-classes=3
uff-input-order=0
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
uff-input-blob-name=input_1
model-color-format=0
maintain-aspect-ratio=0

This is the original deepstream config file.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3 = None(No clustering)
cluster-mode=1

[class-attrs-all]
threshold=0.2
eps=0.7
minBoxes=1

You can refer to https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/deepstream_app_tao_configs/config_infer_primary_dashcamnet.txt#L27

These keys are needed in the config file:

tlt-model-key=tlt_encode
tlt-encoded-model=../../models/tao_pretrained_models/dashcamnet/resnet18_dashcamnet_pruned.etlt
labelfile-path=labels_dashcamnet.txt
int8-calib-file=../../models/tao_pretrained_models/dashcamnet/dashcamnet_int8.txt
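Applied to the config posted above, that means replacing the model-file line with tlt-encoded-model, since an .etlt file is a TLT/TAO-encoded model rather than a Caffe model file. Because network-mode=1 selects INT8, int8-calib-file should also point at the exported calibration file. Assuming the tlt_files paths from the original post, the relevant lines would look something like:

tlt-model-key=tlt_encode
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-5.1/sources/python/apps/deepstream-imagedata-multistream-tlt/tlt_files/resnet18_detector.etlt
labelfile-path=/opt/nvidia/deepstream/deepstream-5.1/sources/python/apps/deepstream-imagedata-multistream-tlt/tlt_files/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-5.1/sources/python/apps/deepstream-imagedata-multistream-tlt/tlt_files/calibration.bin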

It worked. Thanks a bunch!
