Unable to add multiple streams for inference

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : NVIDIA GeForce GTX 1650 dGPU
• DeepStream Version : 6.1
• TensorRT Version : 8.2.5.1 GA
• NVIDIA GPU Driver Version (valid for GPU only) : 510.85.02
• Issue Type( questions, new requirements, bugs) : Question

I’m creating a video analytics application using DeepStream and Flask. A single source works fine; however, when I add more than one source, I get the following errors. My application is similar to the deepstream-nvdsanalytics sample provided in deepstream_python_apps, which works perfectly with multiple streams. The only change is that I’m using the PeopleNet model from NVIDIA NGC.

The error output that I’m getting:

0:00:01.534761859 10577 0x7f64bd438160 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:01.534782446 10577 0x7f64bd438160 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/home/forgottenlight/DeepStream/deepstream-peoplenet-test/data/pgies/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine failed to match config params, trying rebuild
0:00:01.535565262 10577 0x7f64bd438160 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
RTSP server offline
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:179 Uff input blob name is empty
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:02.368260021 10577 0x7f64bd438160 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
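The first warning is the key symptom: the serialized engine was built for batch size 1 (the _b1_ in its filename), but two sources mean nvinfer requests batch 2, so it discards the engine and tries to rebuild, and the rebuild then fails for a separate reason (the missing UFF input blob name). A minimal sketch of the engine-file naming convention DeepStream uses when it serializes a rebuilt engine (the helper function here is hypothetical, written only to illustrate the pattern):

```python
# Hypothetical helper illustrating DeepStream's generated-engine naming
# convention: <model>_b<batch-size>_gpu<gpu-id>_<precision>.engine.
# When the requested batch size (driven by the number of sources) exceeds
# the batch size the engine was serialized with, nvinfer rebuilds it.
def expected_engine_name(model: str, batch_size: int, gpu_id: int = 0,
                         precision: str = "int8") -> str:
    return f"{model}_b{batch_size}_gpu{gpu_id}_{precision}.engine"

# One source: matches the engine referenced in the config below.
print(expected_engine_name("resnet34_peoplenet_int8.etlt", 1))
# Two sources: nvinfer looks for a _b2_ engine and rebuilds if it is absent.
print(expected_engine_name("resnet34_peoplenet_int8.etlt", 2))
```

This is why pinning model-engine-file to a _b1_ engine while feeding two streams triggers the "maxBatchSize 1 whereas 2 has been requested" warning.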

The following is my PGIE config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=../../data/pgies/resnet34_peoplenet_int8.etlt
labelfile-path=../../data/labels/labels_peoplenet.txt
model-engine-file=../../data/pgies/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
int8-calib-file=../../data/pgies/resnet34_peoplenet_int8.txt
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Output of nvidia-smi:

Mon Sep  5 13:38:38 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.85.02    Driver Version: 510.85.02    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   43C    P8     2W /  N/A |    426MiB /  4096MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A       953      G   /usr/lib/xorg/Xorg                 45MiB |
|    0   N/A  N/A      1534      G   /usr/lib/xorg/Xorg                117MiB |
|    0   N/A  N/A      1706      G   /usr/bin/gnome-shell               52MiB |
|    0   N/A  N/A      3977      G   /usr/lib/firefox/firefox          178MiB |
|    0   N/A  N/A      4556      G   ...RendererForSitePerProcess       20MiB |
+-----------------------------------------------------------------------------+

From the error “Uff input blob name is empty”, you did not set uff-input-blob-name. Please refer to this sample: deepstream_tao_apps/pgie_peopleSegNet_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

Thanks for your reply. I was able to solve the problem by following this sample:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/deepstream_app_tao_configs/config_infer_primary_peoplenet.txt
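For anyone hitting the same errors, a sketch of the relevant additions from that reference config (blob names and input dims are taken from the linked PeopleNet sample; verify them against the sample for your DeepStream and model version):

```ini
[property]
# Set by the PeopleNet sample; missing in my original config, which caused
# "Uff input blob name is empty" during the engine rebuild.
uff-input-blob-name=input_1
uff-input-dims=3;544;960;0
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
# Match the number of sources (and the nvstreammux batch-size).
batch-size=2
# Remove or update model-engine-file so a new _b2_ engine is built instead
# of loading the stale _b1_ engine.
```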

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.