No detections in DeepStream app with nvdspreprocess plugin

• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version: 11.7
• Issue Type: bug
• How to reproduce the issue: Replace the dstest1_pgie_config.txt and config_preprocess.txt in https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/v1.1.4/apps/deepstream-preprocess-test/ with the files shared here and run the application

I am trying to add nvdspreprocess to my DeepStream application. As a first step, I am testing my preprocess and nvinfer config files in the deepstream-preprocess-test app (linked above), replacing the original config files. The application runs without errors, but the object meta list in pgie_src_pad_buffer_probe is empty.
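For reference, the probe follows the standard pyds pattern from the sample app; a minimal sketch of what it checks (not my exact code, names follow the sample) is below:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # retrieve the batch metadata attached to the buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # num_obj_meta stays 0 and obj_meta_list stays empty in my case
        print("frame", frame_meta.frame_num, "objects:", frame_meta.num_obj_meta)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print("  class", obj_meta.class_id, "confidence", obj_meta.confidence)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK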

I did standalone testing of the preprocess config without changing the nvinfer config, and we do get detections then; it is only when I also replace the nvinfer config that the detections stop.

The above-mentioned nvinfer config runs fine in my own application without nvdspreprocess in the pipeline, so there are no issues with the actual model or engine files being used.

Following are the config files:
dstest1_pgie_config.txt (4.2 KB)
config_preprocess.txt (1.9 KB)

Do you mean that there is no problem running our demo without any changes, but replacing the config files with yours causes the problem?

Yes, the pgie config file is causing the issue, but I cannot figure out why. Kindly go through the PGIE config and help me figure it out.

Perhaps your model itself produces no output. You can add some logging to our open-source code to check that yourself first.

/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp
extern "C"
bool NvDsInferParseCustomBatchedNMSTLT (
         std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
         NvDsInferNetworkInfo  const &networkInfo,
         NvDsInferParseDetectionParams const &detectionParams,
         std::vector<NvDsInferObjectDetectionInfo> &objectList) {
...
+      std::cout << "get object!!!" << std::endl;  // add this log (needs #include <iostream> if not already present)
        objectList.push_back(object);
...
}

Why would that be the case? I ask this because the model runs and gives output when nvdspreprocess is not in the pipeline.
Let me make the above changes and see whether this is indeed the case.

I have tried this; nothing gets printed, so we can confirm that the model itself has no output.
The preprocess config file is not the issue, because the default config (using the Caffe model) gives output, but why does my model have no output?
Is this something to do with the input tensor meta that is going into the model?

  • run the demo directly: OK
  • run the demo with your model and pgie config file: OK
  • run the demo with your model, pgie config file and preprocess config file: no output

The above is the result you described, right?
You can try setting gie-unique-id to 88 in your pgie config file. The logging method above is only meant to help you locate the root cause of the problem.
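For example, something like this (hypothetical excerpts; everything else in both files stays as it is):

# dstest1_pgie_config.txt
[property]
gie-unique-id=88

# config_preprocess.txt -- the preprocess tensor must target the same id
[property]
target-unique-ids=88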

You are right about those points.
I tried changing gie-unique-id to 88 in the PGIE config and also changed target-unique-ids to 88 in the preprocess config. Still no output; I don't understand how changing the id would help.
Can you replicate the issue on your end and help me debug? Thanks!

Yes. You can attach your model and label file. I’ll run that with our test3.

1. How did you run our demo when you did the standalone testing for the preprocess config without changing the nvinfer and got detections?

2. Could you check the parameters of your own model, like network-input-order, network-color-format, and pixel-normalization-factor? (See the illustrative excerpt below.)
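These all live in the preprocess config and have to describe your model's input exactly. An illustrative excerpt (the values here are only examples, not the ones your model needs):

# config_preprocess.txt
[property]
network-input-order=0             # 0=NCHW, 1=NHWC
network-input-shape=8;3;544;960   # batch;channel;height;width for NCHW
network-color-format=0            # 0=RGB, 1=BGR, 2=GRAY
tensor-name=input_1               # must match the model's input layer name

[user-configs]
pixel-normalization-factor=0.003921568   # scaling applied to raw pixel values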

I just ran our deepstream-preprocess-test demo with your model and config files, and it does not work properly. But when I change the pixel-normalization-factor to 1.0, the obj_list is not NULL.
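Concretely, the change that makes your model produce output on my side is just this one value (excerpt only; the correct value depends on how your model was trained):

# config_preprocess.txt
[user-configs]
pixel-normalization-factor=1.0   # model appears to expect raw 0-255 pixel values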

I made some changes to the model input parameters in the preprocess config so that it works with the Caffe model. I am uploading a working preprocess config that will run with the Caffe model.
I changed tensor-name, removed offsets and updated pixel-normalization-factor in [user-configs], and updated network-color-format, network-input-shape, processing-width and processing-height.

config_preprocess.txt (1.9 KB)

I mean you should set the parameters to match your own model. For example, when I change the pixel-normalization-factor to 1.0 with your model, the obj_list is not NULL.
So could you check the parameters related to your model again?


The wrong value for pixel-normalization-factor caused the issue.
