PGIE not running with nvdspreprocess in a DeepStream 6.0 application

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)

• DeepStream Version (6.0)

• TensorRT Version (8.0.1-1+cuda11.3)

• NVIDIA GPU Driver Version (470.57.02)

• Issue Type (bugs)

• I have based my application on deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-preprocess-test/

When the Gst pipeline is set to PLAYING, the PGIE does not get loaded.
This is the console output from my application:

no begin time for video; this should be enforced somewhere
gpu_memory_info:---> {'total': 4294967296, 'free': 4220125184, 'used': 74842112}
Creating Pipeline

Creating source_bin cam_id: 7, intended_fps: 10
Creating source bin
source-bin-07
Creating H264 Encoder
Creating H264 rtppay
Unknown or legacy key specified 'input-tensor-meta' for group [property]
PREPROCESS LINKED
demux source 7

Starting pipeline
<Gst.Pipeline object at 0x7f043657f6c0 (GstPipeline at 0x7f0430248140)>
YYY
0:00:00.974496830 24871 0x7f0430a126f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 12]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 12]: deserialized trt engine from :/opt/pm/vast/bpnet-platform/nvast/ds_vast_pipeline/pm_multiobject_monitoring_beta_02jan23.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x544x960
1   OUTPUT kINT32 BatchedNMS      1
2   OUTPUT kFLOAT BatchedNMS_1    200x4
3   OUTPUT kFLOAT BatchedNMS_2    200
4   OUTPUT kFLOAT BatchedNMS_3    200

0:00:00.974551913 24871 0x7f0430a126f0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 12]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 12]: Use deserialized engine model: /opt/pm/vast/bpnet-platform/nvast/ds_vast_pipeline/pm_multiobject_monitoring_beta_02jan23.etlt_b1_gpu0_int8.engine
0:00:00.975517466 24871 0x7f0430a126f0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 12]: Load new model:ds_vast_pipeline/pm_multiobject_monitoring_beta_02jan23.conf sucessfully
terminate called after throwing an instance of 'std::invalid_argument'
  what():  stof

This is my preprocess config file:
preprocess_config.conf (3.5 KB)

Following is the PGIE config file:
pm_multiobject_monitoring_beta_02jan23.conf (5.8 KB)

Please narrow down this issue with the following steps:

  1. Can you use gdb to debug? Please share the call stack (see the example after this list).
  2. If you use the default config_preprocess.txt with the new nvinfer configuration file, will the app output the error?
  3. Can you use deepstream-test1 to test nvinfer’s configuration file? I wonder which configuration caused the error.
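For step 1, a typical gdb session for a Python DeepStream app looks like this (the script name is a placeholder; gdb stops on the SIGABRT raised by the uncaught exception, and bt then prints the call stack):

gdb --args python3 deepstream_app.py
(gdb) run
(gdb) bt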

I did a standalone test with the nvinfer configuration file; the application runs fine. I will run the rest of the tests and share the results here.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks.

Yes, I have run tests to confirm it is not an nvinfer issue: running the app without nvdspreprocess in the pipeline, and testing the application with a different nvinfer configuration.
The issue appears with the addition of nvdspreprocess to the pipeline. I checked the GPU usage to confirm that the pipeline is not created when the Gst state is set to PLAYING.
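For reference, the relevant portion of the pipeline follows the deepstream-preprocess-test sample; a minimal sketch (config paths are placeholders, source bins and the downstream sink are omitted):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess-plugin")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")

# nvdspreprocess scales/crops the ROIs and attaches the input tensor as metadata
preprocess.set_property("config-file", "preprocess_config.conf")
pgie.set_property("config-file-path", "pgie_config.conf")
# tell nvinfer to consume the tensor prepared by nvdspreprocess
pgie.set_property("input-tensor-meta", True)

for elem in (streammux, preprocess, pgie):
    pipeline.add(elem)
streammux.link(preprocess)
preprocess.link(pgie)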

Thanks for the update. The DeepStream sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test runs fine; please compare your preprocess_config.conf with the config_preprocess.txt in deepstream-preprocess-test.

I could not find an issue with the preprocess config file when compared with the one in the sample application.

I have confirmed that the above error message appears in the console output when the Gst state is set to PLAYING. I suppose something goes wrong in the pipeline before nvinfer gets loaded into GPU memory. Can we debug why this error gets raised?

As shown in the code, I replaced config_preprocess.txt with preprocess_config.conf; there were no other modifications. After testing, there is a configuration parsing error. Please compare the configuration again; here is the error log:
log-20230727.txt (2.7 KB)
Can you help reproduce that “std::invalid_argument” issue based on this DeepStream sample?

As you asked, I have replaced config_preprocess.txt with preprocess_config.conf and tested the sample. I am getting the same error.

There are no changes to the PGIE config.
I am attaching the modified preprocess config (do not mind the file name change)
preprocess_test.conf (3.5 KB)
This is the console output when running the sample:

Creating Pipeline

Creating streamux

Creating source_bin  0

Creating source bin
source-bin-00
Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Creating H264 Encoder
Creating H264 rtppay
Adding elements to Pipeline


 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***


Starting pipeline

0:00:00.150994145 12876      0x25f1d30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.684605940 12876      0x25f1d30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:00.684661058 12876      0x25f1d30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.685014752 12876      0x25f1d30 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
terminate called after throwing an instance of 'std::invalid_argument'
  what():  stof
Aborted (core dumped)

I have found that not adding pixel-normalization-factor=0.003921568 under [user-configs] in the preprocess config raises the above exception.
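For anyone hitting the same crash: std::stof throws std::invalid_argument when the string it is given cannot be parsed as a number, which is consistent with the custom preprocess library reading a user-config value that is absent (my reading of the symptom, not confirmed against the source). The section that fixed it, as in the sample config (0.003921568 is 1/255):

[user-configs]
pixel-normalization-factor=0.003921568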

My application is running now, but I am getting detections on the full frame rather than on the ROI, and I am getting the following in my console output:

Using offsets : 103.939003,116.778999,123.680000
Error: gst-resource-error-quark: Failed to set tensor buffer pool to active (1): gstnvdspreprocess.cpp(640): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:preprocess-plugin
^CTraceback (most recent call last):

Could you share the whole log?

When I ran the application again (no changes), I did not get this message in the console. This is the console output:
log20230729.txt (2.2 KB)
This is the whole console output from when I got the above message:
log20230728.txt (2.0 KB)

Why are there two different logs?
From the logs, you are testing your own model. Please correct preprocess_test.conf: tensor-name=input_1 should be tensor-name=Input (see the snippet below).
Please also check why there is an “Unknown or legacy key specified ‘input-tensor-meta’ for group [property]” error.
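The tensor-name under [property] in the nvdspreprocess config must match the model’s input layer: the engine info printed earlier in this thread shows the custom model’s input layer is named Input (3x544x960), while input_1 is the input of the sample’s resnet10 model. A sketch of the relevant line, with the rest of the group assumed unchanged:

[property]
tensor-name=Input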

Apologies for the late reply. I have uploaded the log for my application, hence my model. The “Unknown or legacy key specified ‘input-tensor-meta’ for group [property]” warning is because of input-tensor-meta=1 set in the PGIE config. Following is a debug-level log:
log20230804.txt (5.3 KB)
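A side note on that warning: it suggests the DeepStream 6.0 nvinfer config-file parser does not recognize the input-tensor-meta key, so it is likely being ignored there. The deepstream-preprocess-test Python sample enables it as an element property instead; a one-line sketch, assuming pgie is the nvinfer element:

pgie.set_property("input-tensor-meta", True)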

From the new log, there is no error. Can the application run now?
If it still fails, could you share more logs? Please run export GST_DEBUG=6 first to raise GStreamer’s log level, then run again; you can redirect the logs to a file (see the example below).
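For example (the script name is a placeholder; GStreamer writes its debug output to stderr, hence the 2>&1):

export GST_DEBUG=6
python3 deepstream_app.py > out.log 2>&1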

Here is the log:
out.log (6.0 MB). The application is running now, but the model is running inference outside the ROI we have configured for the source in the preprocess config.

If only using one video, please set as follows:

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-0=0;10;10;10;10;0;10;10;0;0;10;10;

The source id for the video is 7; I have confirmed this. Still, I tried the modification you suggested, but it is not working.
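If the stream really reaches nvdspreprocess with source id 7, then presumably both keys in the group need to carry that id; a sketch (ROI values are the placeholder ones from the suggestion above):

[group-0]
src-ids=7
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-7=0;10;10;10;10;0;10;10;0;0;10;10;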

Is there a green ROI rect drawn on the video? Do you mean there are bboxes outside the ROIs?
nvdspreprocess is open source. You can add a log in gst_nvdspreprocess_on_frame of /opt/nvidia/deepstream/deepstream-6.2/sources/gst-plugins/gst-nvdspreprocess/gstnvdspreprocess.cpp to check the rect’s width and height used for inference.

Bboxes outside the ROI; we are not drawing the ROI on the video frame. Also, I am running this experiment on DeepStream 6.0, so is the suggestion to add the log (which references a deepstream-6.2 path) still relevant?